(snip on poll results)
Without many assumptions, the uncertainty in a raw count, such as
the number of poll respondents, is the square root of that number.
But we were given neither a count nor the number of respondents, so that
rule wouldn't be very useful here even if it were applicable.
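For what it's worth, here is that rule of thumb as a minimal Python
sketch, assuming a Poisson-distributed count; the 400 is a hypothetical
figure, not anything we were told about the poll:

    import math

    count = 400                  # hypothetical raw count of respondents
    sigma = math.sqrt(count)     # Poisson-style uncertainty: sqrt(N)
    print(sigma, sigma / count)  # 20.0 respondents, i.e. 5% relative error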
The number we were given is a percentage. If p is the fraction of
the population for which something is true, the standard error when
estimating p from a sample of size n is sqrt(p*(1-p)/n); the
conventional 95% margin of error is about twice that, which at the
worst case p = 0.5 comes to roughly 1/sqrt(n). IIRC, plugging the
sample proportion into this equation in place of p makes it an
under-estimate, but that's less of a problem for sufficiently large n.
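A short Python sketch of that formula, with p and n as hypothetical
values since we were given neither:

    import math

    def std_error(p, n):
        # Standard error of a sample proportion: sqrt(p*(1-p)/n)
        return math.sqrt(p * (1.0 - p) / n)

    # Hypothetical values: worst case p = 0.5, sample of n = 400.
    p, n = 0.5, 400
    se = std_error(p, n)
    print(se, 2 * se)  # 0.025 and 0.05: about a 5% margin at 95% confidence

    # Plugging the sample proportion p_hat in for p underestimates on
    # average, since E[p_hat*(1-p_hat)] = p*(1-p)*(1 - 1/n).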
But we weren't given the sample size, either.
A 5% margin, 1 in 20, would be typical for a sample of 400 people.
It then takes 625 people to get 4%, about 1100 to get 3%, and 2500
to get 2%. Since the required sample grows as the inverse square of
the margin, and assuming cost is linear in the number of people
polled, the cost climbs quickly from there.
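The same arithmetic in Python, using the worst-case rule of thumb
n ~= 1/margin**2 (which is where the figures above come from):

    # Sample size needed for a given 95% margin of error at p = 0.5.
    for margin in (0.05, 0.04, 0.03, 0.02):
        n = 1.0 / margin ** 2
        print(f"{margin:.0%} margin -> about {n:.0f} people")
    # 5% -> 400, 4% -> 625, 3% -> 1111, 2% -> 2500.
    # Halving the margin quadruples the sample, and hence the cost.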
The testing methodology involves searching for references to a given
programming language on a variety of web sites. The cost per data point
is small, so the number of samples taken should be quite large.
I'd suspect that the errors in their results are dominated by systematic
error associated with their sampling technique, rather than statistical
error due to sample size.
As far as I know, we have no information from which to estimate those
systematic errors.