The old round-off problem?


sam

Hello all, I am taking a class in scientific programming at the local
college. My problem is that the following difference produces round-off
errors as the value of x increases. For x >= 19 the difference goes to
zero. I understand the problem, but am curious as to whether there
exists a solution. I have tried various graphing programs, and they all
exhibit this problem.

Thanks in advance, Sam Schulenburg
f(x) = cosh^2(x) - sinh^2(x) = 1
print "x= %2d Sinh^2(x) = %20.3f f(x) = %2.10f
"%(x,pow(cosh(x),2),pow(cosh(x),2)- pow(sinh(x),2))


x= 0 Sinh^2(x) = 1.000 f(x) = 1.0000000000
x= 1 Sinh^2(x) = 2.381 f(x) = 1.0000000000
x= 2 Sinh^2(x) = 14.154 f(x) = 1.0000000000
x= 3 Sinh^2(x) = 101.358 f(x) = 1.0000000000
x= 4 Sinh^2(x) = 745.740 f(x) = 1.0000000000
x= 5 Sinh^2(x) = 5507.116 f(x) = 1.0000000000
x= 6 Sinh^2(x) = 40689.198 f(x) = 1.0000000000
x= 7 Sinh^2(x) = 300651.571 f(x) = 0.9999999999
x= 8 Sinh^2(x) = 2221528.130 f(x) = 1.0000000000
x= 9 Sinh^2(x) = 16414992.784 f(x) = 1.0000000037
x= 10 Sinh^2(x) = 121291299.352 f(x) = 1.0000000298
x= 11 Sinh^2(x) = 896228212.033 f(x) = 0.9999998808
x= 12 Sinh^2(x) = 6622280532.961 f(x) = 1.0000019073
x= 13 Sinh^2(x) = 48932402357.710 f(x) = 0.9999923706
x= 14 Sinh^2(x) = 361564266073.369 f(x) = 0.9999389648
x= 15 Sinh^2(x) = 2671618645381.616 f(x) = 1.0000000000
x= 16 Sinh^2(x) = 19740740045670.668 f(x) = 0.9921875000
x= 17 Sinh^2(x) = 145865435631864.219 f(x) = 1.0000000000
x= 18 Sinh^2(x) = 1077807886778799.250 f(x) = 1.0000000000
x= 19 Sinh^2(x) = 7963982939278438.000 f(x) = 0.0000000000
 

Paul Rubin

sam said:
Hello all, I am taking a class in scientific programming at the local
college. My problem is that the following difference produces round-off
errors as the value of x increases. For x >= 19 the difference goes to
zero. I understand the problem, but am curious as to whether there
exists a solution. [f(x) = cosh^2(x) - sinh^2(x) = 1]

The usual way is to notice that the difference goes to zero because
the lowest order terms of the Taylor series for cosh^2 and sinh^2 are
equal and cancel each other out. The solution is to write down the
series for (cosh^2(x) - sinh^2(x)) and add up a few non-cancelled terms.
All these series should converge very fast.
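For instance, the cancellation can be carried out symbolically in Python with sympy (my choice of tool for illustration; Paul didn't name one):

# sympy is my own pick for illustrating Paul's suggestion.
from sympy import symbols, series, cosh, sinh

x = symbols('x')

# Expanding the *difference* as a single series cancels the equal
# low-order terms symbolically, so only the constant term survives.
print(series(cosh(x)**2 - sinh(x)**2, x, 0, 8))   # -> 1 + O(x**8)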
 

David Treadwell

sam said:
Hello all, I am taking a class in scientific programming at the local
college. My problem is that the following difference produces round-off
errors as the value of x increases. For x >= 19 the difference goes to
zero. I understand the problem, but am curious as to whether there
exists a solution. [f(x) = cosh^2(x) - sinh^2(x) = 1]

The usual way is to notice that the difference goes to zero because
the lowest order terms of the Taylor series for cosh^2 and sinh^2 are
equal and cancel each other out. The solution is to write down the
series for (cosh^2(x) - sinh^2(x)) and add up a few non-cancelled terms.
All these series should converge very fast.

Here's my analysis:

First, keep in mind that the range of x over which a truncated Taylor
expansion of sinh() and cosh() stays accurate is limited. Then...

First method: write down the series for (cosh^2(x) - sinh^2(x)) directly, as Paul suggested.

c2 = taylor(cosh(x)^2,x,8)

s2 = taylor(sinh(x)^2,x,8)

The first method gives the expected answer:

eval(c2-s2) ==> 1

The second method, which is more akin to what Python is doing (unless
there is an explicit 'sinh**2(x)' function), is different: write down
the series for cosh(x) and square it; write the series for sinh(x),
square that, and subtract. Comparing the two (I just did this in
Eigenmath), higher-order terms remain for the second method. I don't
know how many terms my cosh() and sinh() functions use, but if I
evaluate (again, in Eigenmath)

c2 = (taylor(cosh(x),x,8))^2, which gives terms up to and including
(x**8)**2 = x**16, and

s2 = (taylor(sinh(x),x,9))^2, which gives terms up to and including
x**18,

then subtract the two, the result is

eval(c2 - s2) ==> 1 - 1/1814400 * x**10 - ... higher terms.
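(For anyone without Eigenmath: the same experiment can be reproduced in Python with sympy. The calls below are my translation of the Eigenmath ones above.)

# sympy rendering of the Eigenmath experiment above (my translation).
from sympy import symbols, series, cosh, sinh, expand

x = symbols('x')

# Method 1: expand the difference as one series -- exactly 1.
m1 = series(cosh(x)**2 - sinh(x)**2, x, 0, 10).removeO()

# Method 2: truncate each series first, then square and subtract.
c2 = series(cosh(x), x, 0, 9).removeO()**2    # terms up to x**16
s2 = series(sinh(x), x, 0, 10).removeO()**2   # terms up to x**18
m2 = expand(c2 - s2)

print(m1)   # 1
print(m2)   # 1 - x**10/1814400 - ... uncancelled high-order leftovers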

This raises the question, "What is the algorithm used by my Python?"

And the ever-important question, "Does it matter in my final
result?" This matters because I do not, in general, trust the result
of subtracting two large, nearly equal numbers.

:--David T.
 

Dennis Lee Bieber

sam said:
Hello all, I am taking a class in scientific programming at the local
college. My problem is that the following difference produces round-off
errors as the value of x increases. For x >= 19 the difference goes to
zero. I understand the problem, but am curious as to whether there
exists a solution. I have tried various graphing programs, and they all
exhibit this problem.
The only real solution is to implement an arbitrary-precision floating
point library (and all math operations on it).
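Python's standard decimal module is one ready-made library of that kind; here is a quick sketch (my illustration, not part of the original post, with the precision picked arbitrarily):

# Illustration only: arbitrary-precision hyperbolics via the stdlib
# decimal module, at a precision I chose arbitrarily.
from decimal import Decimal, getcontext

getcontext().prec = 50   # work with 50 significant digits

def cosh_d(x):
    e = x.exp()
    return (e + 1/e) / 2

def sinh_d(x):
    e = x.exp()
    return (e - 1/e) / 2

x = Decimal(19)
# The subtraction still cancels ~16 digits, but ~34 good ones remain.
print(cosh_d(x)**2 - sinh_d(x)**2)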

Traditional floating point (Python uses double precision) is
typically capable of only about 15 significant digits. Any operation
that spans more than 15 digits essentially loses the "right
end"... That is, while two individual floats may each have 15
significant digits:

12345678912345.0
and
0.12345678912345

adding them requires the equivalent of shifting the second so the
decimal points align:

12345678912345.0|
             0.1|2345678912345

the | shows how much of each number is now significant; anything to the
right is lost. If the shift is even greater, the smaller value
"vanishes" from the equation.

It is nearly a science in itself to formulate floating point
computations so that the smaller terms are summed first, to minimize
this loss:

100000000000000
+ .6
+ .6
= 100000000000000

but
.6
+ .6 (=> 1.2)
+ 100000000000000
= 100000000000001
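The same effect can be shown exactly with Python's binary doubles at 1e16, where adjacent representable values are 2.0 apart:

>>> (1e16 + 1.0) + 1.0    # each 1.0 is only half a ulp and rounds away
1e+16
>>> 1e16 + (1.0 + 1.0)    # summing the small terms first preserves them
1.0000000000000002e+16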


 

sam

David, I beg, I beg: can you answer the question?

Also thanks for the information on using the Taylor series.

Sam Schulenburg
 

David Treadwell

I wish I knew!

So I asked Google. Here's what I learned:

Most implementations are based on, or are similar to, the
implementation in the fdlibm package.

sinh(x) and cosh(x) are both based on exp(x). See
http://www.netlib.org/cgi-bin/netlibfiles.pl?filename=/fdlibm/e_sinh.c

exp(x) is implemented by:

1. reducing x into the range |r| <= 0.5 * ln(2), such that x = k * ln(2) + r
2. approximating exp(r) with a fifth-order polynomial,
3. re-scaling by multiplying by 2^k: exp(x) = 2^k * exp(r)

sinh(x) is mathematically ( exp(x) - exp(-x) )/2 but it's not
calculated directly that way.
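Here is a rough sketch of that exp() scheme in Python (purely illustrative; fdlibm's real polynomial coefficients are specially tuned, unlike the plain Taylor terms below):

import math

LN2 = math.log(2.0)

def exp_sketch(x):
    # Illustrative only -- these are not fdlibm's actual coefficients.
    # 1. range reduction: x = k*ln(2) + r, with |r| <= 0.5*ln(2)
    k = int(round(x / LN2))
    r = x - k * LN2
    # 2. approximate exp(r) with a low-order polynomial (plain Taylor here)
    p = 1.0 + r + r**2/2 + r**3/6 + r**4/24 + r**5/120
    # 3. rescale: exp(x) = 2**k * exp(r)
    return math.ldexp(p, k)

exp_sketch(1.0) agrees with math.exp(1.0) to about six digits; fdlibm's tuned coefficients recover nearly all 16.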

My brain hurts at this point; it's late. I'll have another go at
hunting down the errors tomorrow. In the interim, does anybody out
there already know the answer?

:--David
 

sam

David said:
exp(x) is implemented by:

1. reducing x into the range |r| <= 0.5 * ln(2), such that x = k * ln(2) + r
2. approximating exp(r) with a fifth-order polynomial,
3. re-scaling by multiplying by 2^k: exp(x) = 2^k * exp(r)

sinh(x) is mathematically ( exp(x) - exp(-x) )/2 but it's not
calculated directly that way.

My brain hurts at this point; it's late. I'll have another go at
hunting down the errors tomorrow. In the interim, does anybody out
there already know the answer?

:--David
I tried the exp(x) equivalents of sinh(x) and cosh(x), with the same
results. I think I will write my own Taylor(f(x),x,n) function (or
steal one) in both C++ and Python.
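For what it's worth, a minimal sketch of such a routine in Python (my own illustration, not code from this thread; the series converge for all x but need more terms as |x| grows):

# My own sketch of series-based sinh/cosh, not code from this thread.
from math import factorial

def sinh_taylor(x, n=40):
    # sinh(x) = x + x**3/3! + x**5/5! + ...  (n=40 is plenty for |x| <= 20)
    x = float(x)  # ensure true division even on Python 2
    return sum(x**(2*k + 1) / factorial(2*k + 1) for k in range(n))

def cosh_taylor(x, n=40):
    # cosh(x) = 1 + x**2/2! + x**4/4! + ...
    x = float(x)
    return sum(x**(2*k) / factorial(2*k) for k in range(n))

Beware, though: evaluating each series separately and then subtracting runs into exactly the same cancellation; the win comes from cancelling the terms symbolically first, as Paul suggested.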
 
