Finding size of Variable


Grant Edwards

A mathematician, a chemist, and a physicist are arguing about the nature
of prime numbers. The chemist says, "All odd numbers are prime. Look, I
can prove it. Three is prime. Five is prime. Seven is prime". The
mathematician says, "That's nonsense. Nine is not prime". The
physicist looks at him and says, "Hmmmm, you may be right, but eleven
is prime, and thirteen is prime. It appears that within the limits of
experimental error, all odd numbers are indeed prime!"

Assuming spherical odd numbers in a vacuum on a frictionless surface,
of course.
 

Roy Smith

Steven D'Aprano said:
They ask a computer programmer to adjudicate who is right, so he writes a
program to print out all the primes:

1 is prime
1 is prime
1 is prime
1 is prime
1 is prime
...

So, a mathematician, a biologist, and a physicist are watching a house.
The physicist says, "It appears to be empty". Sometime later, a man and
a woman go into the house. Shortly after that, the man and the woman
come back out, with a child. The biologist says, "They must have
reproduced". The mathematician says, "If one more person goes into the
house, it'll be empty again".
 

Oscar Benjamin

No idea how you came up with that, but I see an error in my
stated algorithm, which surely does penalize it. The slope isn't
2, but 2x. So the line should have been

guess += err/(2*guess)

Ah, now I understand. This is the same algorithm that Chris posted,
which in turn is the same as the one that I had previously posted: all
three are just using Newton's method.

Newton's method uses the slope of f(x) to find a root, i.e. an
x0 such that f(x0) = 0. We start with a guess x[n]. We then find the
value of the function f(x[n]) and the slope fp(x[n]), and extrapolating
to the point where the tangent line hits the x-axis we find an improved
estimate:

x[n+1] = x[n] - f(x[n]) / fp(x[n])

In our case f(x) = x**2 - y and so fp(x) = 2*x which gives

x[n+1] = x[n] - (x[n]**2 - y) / (2*x[n])

and after rearranging we can express this as

x[n+1] = (x[n] + y/x[n]) / 2.

So my loop

while x ** 2 - y > x * eps:
    x = (x + y/x) / 2

and Chris' loop:

while abs(guess1-guess2) > epsilon:
    guess1 = n/guess2
    guess2 = (guess1 + guess2)/2

and now your loop

while err > 1e-10:
    err = n - guess * guess
    guess += err/(2 * guess)

are all performing the same basic algorithm. The only significant
difference is that mine tests for a relative error condition whereas
the other two test for absolute error. This means that mine will still
converge to an answer with the correct precision even when the root is
large, e.g. sqrt(1e100). The other two are susceptible to infinite
loops in this case.
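To see that failure mode concretely, here is a minimal plain-float
sketch (the function and names are mine, not the Decimal code below)
comparing an absolute stopping test with one scaled by the current
guess:

# Near sqrt(1e100) ~ 1e50 adjacent doubles are ~1.6e34 apart, so an
# absolute tolerance of 1e-10 can never be met and the loop would spin
# forever; a tolerance scaled by the current guess still terminates.
def heron(n, tol=1e-10, relative=False, max_iter=500):
    guess1, guess2 = n, n / 2.0
    for i in range(max_iter):
        limit = tol * guess2 if relative else tol
        if abs(guess1 - guess2) <= limit:
            return guess2, i
        guess1 = n / guess2
        guess2 = (guess1 + guess2) / 2.0
    return guess2, max_iter              # gave up: test never satisfied

print(heron(1e100, relative=True))   # converges after ~170 iterations
print(heron(1e100, relative=False))  # (..., 500): absolute test never passes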
Now if you stop the loop after 3 iterations (or use some other
approach to get a low-precision estimate), then you can calculate

scale = 1/(2*estimate)

and then for the remaining iterations,

guess += err * scale

Fair enough. That's a reasonable suggestion for improvement. This is
often known as the "modified Newton's method". For lack of a better
reference see here:
http://math.stackexchange.com/questions/645088/modified-newtons-method

Using an approximate slope to avoid division can be a significant
optimisation. This is especially true when using the multidimensional
generalisation of Newton's method, since in that case the division is
replaced with solving a system of linear equations (vastly more
expensive). However, as noted in that stackexchange answer, using an
approximate slope prevents quadratic convergence, so in the asymptotic
case where we want large numbers of digits it can be much slower.
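As a rough sketch of the multidimensional case (assuming NumPy and
SciPy are available; the 2x2 example system is mine, not from the
thread): the division by fp(x) becomes a linear solve against the
Jacobian, and freezing the Jacobian lets you LU-factor it once and
reuse the cheap triangular solves, but convergence drops from
quadratic to linear.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0,  # circle of radius 2
                     x - y])             # line x = y

def jacobian(v):
    x, y = v
    return np.array([[2*x, 2*y],
                     [1.0, -1.0]])

v = np.array([1.5, 1.5])            # start near the root (sqrt(2), sqrt(2))
lu = lu_factor(jacobian(v))         # factor once, at the initial guess
for i in range(50):
    step = lu_solve(lu, f(v))       # reuse the factorization every step
    v = v - step
    if np.linalg.norm(step) < 1e-12 * np.linalg.norm(v):
        break
print(i, v)                         # ~10 linear-rate iterations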

Actually it's worse than that, since we are guaranteed to converge to
an incorrect answer. The update step x = (x + y/x) / 2 converges
when x^2 = y, but we replace it with x = (x + y/x0)/2 where x0 is near
the true root. This converges when x == (x + y/x0)/2, i.e. when
x = y/x0, which is not the correct answer (and could be computed without
a loop in any case). It's possible to fix that by alternating between
steps that improve our estimate of the slope and steps that use that
estimate for refinement, but I don't think that the advantage of
avoiding division outweighs the loss of quadratic convergence once we
get beyond tens of digits.
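A quick float sketch of that wrong fixed point (the numbers are chosen
purely for illustration):

# Freeze the slope at x0 and iterate x = (x + y/x0)/2: the iteration
# settles at y/x0, not at sqrt(y).
y = 2.0
x0 = 1.4                 # low-precision estimate of sqrt(2) = 1.41421...
x = x0
for _ in range(60):
    x = (x + y / x0) / 2
print(x)                 # 1.4285714285714286...
print(y / x0)            # the same: the loop just computes y/x0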

You can compare them yourself with this code:

from decimal import Decimal as D, localcontext

def sqrt1(y, prec=1000):
    '''Solve x**2 = y'''
    assert y > 0
    eps = D(10) ** -(prec + 5)       # target relative error
    x = D(y)                         # initial guess
    with localcontext() as ctx:
        ctx.prec = prec + 10         # work with guard digits
        while abs(x ** 2 - y) > x * eps:
            x = (x + y/x) / 2        # full Newton step
    return x

def sqrt2(y, prec=1000):
    '''Solve x**2 = y'''
    assert y > 0
    eps = D(10) ** -(prec + 5)
    x = D(y)
    with localcontext() as ctx:
        ctx.prec = prec + 10
        for n in range(7):           # warmup: full Newton steps
            x = (x + y/x) / 2
        x0r = 1 / x                  # precompute 1/x0 once (frozen slope)
        while abs(x ** 2 - y) > x * eps:
            err = x**2 - y           # not used below
            err2 = x - x0r * y       # not used below
            x = (x + x0r * y) / 2    # frozen-slope step: no division
    return x

sqrt2(2) goes into an infinite loop at an error level of ~1e-100 (the
exact point where this happens depends on how many warmup iterations
it uses).
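For a quick sanity check of sqrt1 (sqrt2(2) would hang here, as just
described), something like this works once the decimal import above is
in place:

from decimal import Decimal as D, getcontext
getcontext().prec = 60
print(sqrt1(2, prec=50))   # 1.41421356237309504880...
print(D(2).sqrt())         # Decimal's builtin, agrees to ~50+ digits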

[snip]
Well considering you did not special-case the divide by 2, I'm not
surprised it's slower.

Multiplying by D('0.5') instead of dividing by 2 makes no measurable
difference to the timings. I don't know whether that's because it's
already special-cased by Decimal or just that the performance costs
don't match up in the same way for the multiprecision algorithms (when
the divisor is a single-digit number).
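If you want to reproduce the comparison, a rough micro-benchmark along
these lines works (timings will vary by machine and Python version):

import timeit
from decimal import Decimal as D, getcontext

getcontext().prec = 1000
x = D(2).sqrt()                    # a 1000-digit operand
half = D('0.5')
print(timeit.timeit(lambda: x / 2, number=100_000))
print(timeit.timeit(lambda: x * half, number=100_000))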


Oscar
 

Chris Angelico

Oscar Benjamin said:
So my loop

while x ** 2 - y > x * eps:
    x = (x + y/x) / 2

and Chris' loop:

while abs(guess1-guess2) > epsilon:
    guess1 = n/guess2
    guess2 = (guess1 + guess2)/2

and now your loop

while err > 1e-10:
    err = n - guess * guess
    guess += err/(2 * guess)

are all performing the same basic algorithm. The only significant
difference is that mine tests for a relative error condition whereas
the other two test for absolute error. This means that mine will still
converge to an answer with the correct precision even when the root is
large, e.g. sqrt(1e100). The other two are susceptible to infinite
loops in this case.

This is one place where I find the REXX 'numeric fuzz' setting to be a
useful trick. The definition is that, whenever two numbers are
compared for equality, the 'numeric digits' setting is effectively
reduced by the fuzz for that comparison. So let's say we're
calculating to a precision of 100 digits ('numeric digits 100'). You
could then code the loop as "do until guess1=guess2", with a numeric
fuzz of, say, 2. That'll give you 98 digits of accuracy, *regardless
of scale*. It's like setting epsilon to "whatever would be 1e-98 if we
were working with numbers between 0 and 1", more or less. A sliding
epsilon.
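Python's Decimal has no direct equivalent of 'numeric fuzz', but the
effect is easy to emulate; a sketch (the helper name is mine): round
both guesses to (digits - fuzz) significant figures and compare, so
the tolerance slides with the scale of the numbers.

from decimal import Decimal, localcontext

def fuzzy_equal(a, b, digits=100, fuzz=2):
    """True if a and b agree when rounded to (digits - fuzz) precision."""
    with localcontext() as ctx:
        ctx.prec = digits - fuzz
        return +Decimal(a) == +Decimal(b)  # unary + rounds to ctx.prec

# e.g.:  while not fuzzy_equal(guess1, guess2): ... keep iterating ...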

ChrisA
 
