Floating-point glitches with the math module. Bug? Or am I missing something?

Chris Metzler

I'm getting some extremely odd results using the trig functions
in the math module. I don't know whether there's a bug here, or
(more likely) I'm just missing something. They kinda look like
truncation/roundoff problems, but they're occurring at larger
values of the floating-point exponent than I'd expect them to.

The script below illustrates the kind of thing I'm seeing. Do
other people see the same thing? Am I missing something here?

Thanks,

-c



import math

x = 0.90455926850657064620
y = 0.90463240818925083619
z = -2.00200807043415807129e-08


cos_gamma1 = math.cos(x - y) - math.sin(x)*math.sin(y)*(1.0 - math.cos(z))
gamma1 = math.acos(cos_gamma1)

# note that with this formula, as long as x and y are between 0 and pi,
# it's mathematically impossible for gamma1 to be smaller than | x - y |,
# since (1-cos(z)) will range from 0 to 2, and the sines will be positive
# as long as x and y are between 0 and pi. So I'm always subtracting
# a positive number off of cos(x-y). So that means the expression
# gives a number smaller than cos(x-y); the arccos should then give an
# angle larger than | x - y |.
# So gamma1 should be greater than | x - y |. And it is.

print "This should be positive: ", gamma1 - math.fabs(x-y)

# I get 1.20693498327e-12

# But now let's rearrange the equation.
cos_gamma2 = math.cos(x)*math.cos(y) + math.sin(x)*math.sin(y)*math.cos(z)
gamma2 = math.acos(cos_gamma2)

# This is the SAME EQUATION as above -- simply substitute the
# trigonometric identity cos(x-y) = cos(x)*cos(y) + sin(x)*sin(y)
# into the first equation.
# So, since it's still the same equation, we should get the same
# result -- and in particular, gamma2 should be greater than | x - y |.
# It isn't.

print "This should be positive: ", gamma2 - math.fabs(x-y)

# I get -3.11013937044e-13

# Note that we don't need anything as weird as my data above
# to generate weirdness.

x = 0.999999
y = 0.999997
z = 1e-08

cos_gamma3 = math.cos(x - y) - math.sin(x)*math.sin(y)*(1.0 - math.cos(z))
gamma3 = math.acos(cos_gamma3)

# Again, just from the functional form, gamma3 should be larger
# than | x - y |. However . . .

print "This should be positive: ", gamma3 - math.fabs(x-y)

# it isn't. I get -2.21217876213e-11.
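One concrete source of the weirdness is the subexpression 1.0 - math.cos(z).
For |z| around 1e-8, cos(z) sits within half an ULP of 1.0, so the subtraction
retains essentially nothing of the true value z*z/2; for z = 1e-08 it is
exactly 0.0, and the whole correction term silently vanishes. A minimal sketch
of the cancellation, comparing against the Taylor value and the standard
half-angle rewrite 1 - cos(z) == 2*sin(z/2)**2 (an illustration, not code from
the original post):

import math

z = 1e-08
naive  = 1.0 - math.cos(z)         # catastrophic cancellation
taylor = z*z/2.0                   # true value for tiny z, about 5e-17
stable = 2.0*math.sin(z/2.0)**2    # half-angle identity, accurate for all z

print "naive:  %.17g" % naive      # 0 on IEEE-754 doubles
print "taylor: %.17g" % taylor     # about 5e-17
print "stable: %.17g" % stable     # about 5e-17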
 

Chris Metzler

I'm getting some extremely odd results using the trig functions
in the math module.

Upon further investigation, it looks like the series used to
compute trig functions in the math module are truncated at a
surprisingly small number of terms:

>>> import math
>>> math.acos(math.cos(2e-8))
2.10734242554e-08
>>> math.acos(math.cos(1e-8))
0.0

5% error at 2e-8 is awful; and anything below 1.1e-8 is 0,
pretty much. I can't find a module in the Library Reference
that provides higher-precision functions. Is there one
around?

Thanks,

-c
 

Jeff Epler

If it's any consolation, I get similar problems in C. Python's math
module is a thin wrapper around your platform's C math library, so
Python and C behave identically here.

$ cat > metzler.c <<EOF
#include <math.h>
#include <stdio.h>

double x, y, z;
double cos_gamma3, gamma3;

int main(void) {
    x = 0.999999;
    y = 0.999997;
    z = 1e-08;

    cos_gamma3 = cos(x - y) - sin(x)*sin(y)*(1.0 - cos(z));
    gamma3 = acos(cos_gamma3);

    printf("This should be positive: %g\n", gamma3 - fabs(x-y));
    return 0;
}
EOF
$ gcc metzler.c -lm -o metzler
$ ./metzler
This should be positive: -2.21218e-11

acos(cos(d)) for small d gives fairly large errors on my system:

(1.2999928800133351e-06, 1.2999999999818712e-06)

a half dozen correct digits isn't great, but it's all my platform's trig
functions seem to give me. And you're trying to make a silk purse out
of a sow's ear...

Incidentally, the second part of your expression goes to 0 here, with
the given values:

>>> math.sin(x)*math.sin(y)*(1.0 - math.cos(z)) == 0.0
True

Jeff

 

Chris Metzler

Jeff Epler said:
Incidentally, the second part of your expression goes to 0 here, with
the given values:

Yeah, I noticed this later (it's sorta implicit in the content of my
other, followup post).

This kinda sucks because I need to do some precision surveying calcs
-- computing lat/lons for a variety of points separated by known
distances and angles from a point of known lat/lon. If I do everything
in spherical trig, I run into this issue (that whole dividing-by-earth
-radius thing). But if I do things using planar approximations, for
most of the points I get lat/lons that are too far off from where they
should be.

-c
 

John Roth

Chris Metzler said:
Yeah, I noticed this later (it's sorta implicit in the content of my
other, followup post).

This kinda sucks because I need to do some precision surveying calcs
-- computing lat/lons for a variety of points separated by known
distances and angles from a point of known lat/lon. If I do everything
in spherical trig, I run into this issue (that whole dividing-by-earth
-radius thing). But if I do things using planar approximations, for
most of the points I get lat/lons that are too far off from where they
should be.

As Jeff points out, Python is at the mercy of the platform's
C library - it does not have its own numerical computation
library. It simply wraps the C library functions.

You might want to look at one of the math packages.
SciPy comes highly recommended; it might do what
you need.

John Roth
 

Robert Kern

John Roth wrote:

[snip]
As Jeff points out, Python is at the mercy of the platform's
C library - it does not have its own numerical computation
library. It simply wraps the C library functions.

You might want to look at one of the math packages.
SciPy comes highly recommended; it might do what
you need.

SciPy does wrap the cephes library, which has (presumably better)
inverse trig functions, but it does not, unfortunately, wrap those
particular functions, since Numeric provides them. Numeric just
wraps the C library, too.

--
Robert Kern

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
 

Paul Rubin

Chris Metzler said:
>>> math.acos(math.cos(2e-8))
2.10734242554e-08

5% error at 2e-8 is awful; and anything below 1.1e-8 is 0, pretty much.

I don't know if you can expect much better. For small x, cos(x) is
approx. 1-x^2/2, so cos(2e-8) is about 1-(2e-16), which is pushing the
precision of an IEEE double. It's not a library error; you're
reaching the limits of what the machine arithmetic can do.
I can't find a module in the Library Reference that provides
higher-precision functions. Is there one around?

I guess you could use gmpy and some infinite series, or there may be a
way to recompile Python to use 80-bit extended floats if your hardware
supports them (the x86 does). However, the best approach is to
arrange your calculation to not lose precision so easily. Any
numerical analysis book spends a lot of time discussing how to do that.
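Following up on that advice for this particular formula: one textbook
rearrangement is to replace every 1 - cos(t) by 2*sin(t/2)**2 before
inverting, which turns the expression into the classic haversine form, where
every term stays small and positive and nothing nearly-equal is subtracted.
A sketch (the name gamma_stable is just for illustration; the identity it
uses is algebraically equivalent to the formula in Chris's script):

import math

def gamma_stable(x, y, z):
    # Haversine form: sin(g/2)**2 = sin((x-y)/2)**2 + sin(x)*sin(y)*sin(z/2)**2.
    # Algebraically identical to cos(g) = cos(x-y) - sin(x)*sin(y)*(1 - cos(z)),
    # but free of the 1 - cos(z) cancellation.
    h = math.sin((x - y)/2.0)**2 + math.sin(x)*math.sin(y)*math.sin(z/2.0)**2
    return 2.0*math.asin(math.sqrt(h))

x = 0.90455926850657064620
y = 0.90463240818925083619
z = -2.00200807043415807129e-08

print "This should be positive: ", gamma_stable(x, y, z) - math.fabs(x - y)
# prints a small positive number, as the geometry requires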
 

Tim Peters

[Jeff Epler]
...
acos(cos(d)) for small d gives fairly large errors on my system:
a half dozen correct digits isn't great, but it's all my platform's trig
functions seem to give me.

This is an instance of a general phenomenon. Compute
cos(acos(cos(d))), and compare it to cos(d): on my box they are
exactly the same result, even though my box's acos(cos(d)) is yet
another value that differs from d. To machine precision, d and
acos(cos(d)) have equal claim to being "the correct" inverse of
cos(d).

This isn't the fault of sloppy trig functions, it's an unavoidable
consequence of using finite precision arithmetic to evaluate a
function *near* a local minimum or local maximum. cos() has a local
maximum at 0, and d is near 0. For any reasonable function f
evaluated at point a such that f(a) is a local min or max, the first
derivative of f at a is 0. Therefore f(a+h) ~= f(a) + h**2*f''(a)/2
(the first-order term of the Taylor expansion around a vanishes).
That in turn roughly means that if you change any of the bits in the
least significant half of a, it makes no visible difference to the
computed value of f(a).

That's why any numerical analysis text will tell you that you can't
expect to compute the value at which a function achieves a local min
or max to better than about half the bits of precision in your
floating-point format. Computing inverses near such peaks/valleys
suffers the same problem, for the same reason: while the mathematical
cos() is 1-to-1 near 0, the machine-precision cos() is many-to-1 near
0, because cos() is very flat near 0 (== the first derivative is 0 at
0). And since machine-precision cos() is many-to-1 near 0,
machine-precision acos() near cos(d) for a small |d| can pick many
results e for which cos(e) is exactly equal to cos(d) to machine
precision.

Note that your box gave a different acos(cos(d)) than my box did.
Nevertheless, on my box cos() maps your box's answer and mine to
exactly the same value. There are simply a ton of floats that have
the same machine-precision cosine.
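Tim's many-to-1 point is easy to check directly; in IEEE-754 double
precision, all of these distinct arguments collapse to exactly the same
machine cosine:

import math

# cos(d) rounds to 1.0 for every |d| below about 1.05e-8, so all of
# these distinct arguments share one machine cosine.
for d in (0.0, 1e-9, 5e-9, 1e-8):
    print "cos(%g) == 1.0 : %s" % (d, math.cos(d) == 1.0)
# all four lines print True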
 

Alex Martelli

Chris Metzler said:
Yeah, I noticed this later (it's sorta implicit in the content of my
other, followup post).

This kinda sucks because I need to do some precision surveying calcs
-- computing lat/lons for a variety of points separated by known
distances and angles from a point of known lat/lon. If I do everything
in spherical trig, I run into this issue (that whole dividing-by-earth
-radius thing). But if I do things using planar approximations, for
most of the points I get lat/lons that are too far off from where they
should be.

I wonder if IBM's Accurate Portable Math Library might help you: they
guarantee precision to the ULP for any function they offer, and of
course you could talk to it from Python via ctypes, pyrex, swig, ...
(haven't tried it out myself, though).

APMathlib's download page is at:
<http://www-124.ibm.com/developerworks/downloads/?group_id=28>

If IEEE 754's precision just ain't enough for you, welcome to the
wonderful but slow world of multi-precision arithmetic!-)

GMP (which gmpy wraps) doesn't do transcendentals. MPFR, I believe,
does, but I know of no Python interface for it. Still, check out their
page, it's full of useful links -- <http://www.mpfr.org/> -- worst case,
you can use gmpy and program the transcendentals you need as per
the various well-known series, etc... but you're likely to be able to
find something more direct if you look around a little!


Alex
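For what it's worth, a minimal sketch of the program-the-series-yourself
route Alex mentions. It assumes the gmpy2 module (a descendant of gmpy that
wraps MPFR); only multi-precision add/multiply/divide are used, with cos()
summed from its Taylor series until the sum stops changing:

import gmpy2

gmpy2.get_context().precision = 200   # bits of working precision

def mp_cos(x):
    # cos(x) = 1 - x**2/2! + x**4/4! - x**6/6! + ...
    x = gmpy2.mpfr(x)
    total = term = gmpy2.mpfr(1)
    sq = -x * x
    i = 1
    while True:
        term = term * sq / (i * (i + 1))
        if total + term == total:     # next term no longer changes the sum
            break
        total += term
        i += 2
    return total

print mp_cos("2e-8")   # prints 1 - 2e-16 + O(1e-33) to ~60 significant digits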
 

giacomo boffi

Chris Metzler said:
This kinda sucks because I need to do some precision surveying calcs
-- computing lat/lons for a variety of points separated by known
distances and angles from a point of known lat/lon.

may i suggest asking on sci.geo.cartography? it's full of kind,
smart people waiting for interesting questions...

ciao
 

Bengt Richter

Chris Metzler said:
I'm getting some extremely odd results using the trig functions
in the math module. I don't know whether there's a bug here, or
(more likely) I'm just missing something. They kinda look like
problems I'd see with truncation/roundoff errors, but they're
occurring for larger values of the floating point exponent than
I'd expect this to happen.

The script below illustrates the kind of thing I'm seeing. Do
other people see the same thing? Am I missing something here?

Thanks,

-c



import math

x = 0.90455926850657064620
y = 0.90463240818925083619
z = -2.00200807043415807129e-08


cos_gamma1 = math.cos(x - y) - math.sin(x)*math.sin(y)*(1.0 - math.cos(z))
gamma1 = math.acos(cos_gamma1)

As Tim has pointed out, you are taking an inverse in a very sensitive
region of the inverse function. Not only that: if you want the gamma
angle, acos is ambiguous with respect to sign, since cos(x) == cos(-x).
So I would think your best option is to reformulate, probably in terms
of cartesian components, resolved in such a way that you can get the
angles you want using atan2 as the inverse function (see the sketch
after this post). What do x, y, and z represent in your real problem?

Regards,
Bengt Richter
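Here is the sketch promised above. If x and y are polar angles from a
common pole and z is the azimuthal difference (which is what the form
cos(x)*cos(y) + sin(x)*sin(y)*cos(z) implies), the separation falls out of
cartesian vectors, with atan2 as the well-conditioned inverse; the function
name separation is just for illustration:

import math

def separation(x, y, z):
    # Unit vectors at polar angles x and y, with azimuths 0 and z.
    u = (math.sin(x), 0.0, math.cos(x))
    v = (math.sin(y)*math.cos(z), math.sin(y)*math.sin(z), math.cos(y))
    # |u x v| and u . v give the angle via atan2, which (unlike acos)
    # stays well-conditioned when the angle is tiny.
    cx = u[1]*v[2] - u[2]*v[1]
    cy = u[2]*v[0] - u[0]*v[2]
    cz = u[0]*v[1] - u[1]*v[0]
    cross = math.sqrt(cx*cx + cy*cy + cz*cz)
    dot = u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    return math.atan2(cross, dot)

x, y, z = 0.999999, 0.999997, 1e-08
print "This should be positive: ", separation(x, y, z) - math.fabs(x - y)
# prints a small positive number, unlike the acos-based version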
 

Terry Reedy

Tim Peters said:
This isn't the fault of sloppy trig functions, it's an unavoidable
consequence of using finite precision arithmetic to evaluate a
function *near* a local minimum or local maximum. cos() has a local
maximum at cos(0), and d is near 0. For any reasonable function f
evaluated at point a such that f(a) is a local min or max, the first
derivative of f at a is 0. Therefore f(a+h) ~= f(a) + h**2*f''(a)/2
(the first-order term of the Taylor expansion around a vanishes).
That in turn roughly means that if you change any of the bits in the
least significant half of a, it makes no visible difference to the
computed value of f(a).

To illustrate, here is repr(math.cos(i * 1e-9)) for i in range(50):
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999989
0.99999999999999978
0.99999999999999978
0.99999999999999978
0.99999999999999978
0.99999999999999978
0.99999999999999967
0.99999999999999967
0.99999999999999967
0.99999999999999967
0.99999999999999956
0.99999999999999956
0.99999999999999956
0.99999999999999956
0.99999999999999944
0.99999999999999944
0.99999999999999944
0.99999999999999933
0.99999999999999933
0.99999999999999933
0.99999999999999922
0.99999999999999922
0.99999999999999922
0.99999999999999911
0.99999999999999911
0.99999999999999911
0.999999999999999
0.999999999999999
0.99999999999999889
0.99999999999999889
0.99999999999999889
0.99999999999999878

The mathematically smooth cos(x) is effectively a stairstep function for
|x|<5e-8. Stairsteps are hard to invert ;-).

Terry J. Reedy
 

Alex Martelli

Terry Reedy said:
To illustrate:

FWIW, these are exactly identical to the results I get from IBM's
Accurate Portable Mathematical Library (APMathLib), claimed to equal
"the exact theoretical values correctly rounded (nearest or even) to the
closest number representable by the IEEE 754 double format" (in other
words, correct to the Unit of Least Precision, ULP). So, for once, it's
apparently not the fault of the underlying C mathematical library.


Alex
 

Alexandre Fayolle

On 20-09-2004, Chris Metzler said:
import math

x = 0.90455926850657064620
y = 0.90463240818925083619
z = -2.00200807043415807129e-08


cos_gamma1 = math.cos(x - y) - math.sin(x)*math.sin(y)*(1.0 - math.cos(z))
gamma1 = math.acos(cos_gamma1)

Here's my attempt.

Firstly, I use the fact that scipy provides a cosm1 function, which
returns cos(x)-1 and which is more precise than computing it directly
for small values of x. Applied to the formula above, it gives
cos(gamma1) - 1 = -2.67470646622e-09.

Then I'd like to use an acosm1 function in order to keep the
precision. Since there is no such function available, I'll just solve
cosm1(g) - cos_gamma_minus_1 = 0, for which I know there is a solution
between 0 and 1. Any solver will do the trick. To keep things simple,
I'll use scipy.optimize.zeros.bisect (not the fastest around, but it
does a good job). The root it finds is gamma1 = 7.31396809867e-05.
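Putting those two steps together, a sketch of the approach (assuming
scipy.special.cosm1 and scipy.optimize.bisect; exact digits will vary
slightly by platform):

import math
from scipy.special import cosm1     # cos(t) - 1, accurate for small t
from scipy.optimize import bisect

x = 0.90455926850657064620
y = 0.90463240818925083619
z = -2.00200807043415807129e-08

# cos(gamma1) - 1 without ever forming 1 - cos(z):
# cos(g) - 1 = cosm1(x - y) + sin(x)*sin(y)*cosm1(z)
cos_gamma_m1 = cosm1(x - y) + math.sin(x)*math.sin(y)*cosm1(z)
print cos_gamma_m1                  # about -2.67470646622e-09

# Invert by solving cosm1(g) == cos_gamma_m1 for g between 0 and 1.
gamma1 = bisect(lambda g: cosm1(g) - cos_gamma_m1, 0.0, 1.0)
print gamma1                        # about 7.31396809867e-05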
 

Paul Rubin

Alexandre Fayolle said:
Firstly, I use the fact that scipy provides a cosm1 function, which
returns cos(x)-1 and which is more precise than computing it directly
for small values of x.

The simple approximation is cos(x)-1 = -x**2/2. A fancier
approximation is cos(x)-1 = -x**2/2 + x**4/24, whose truncation
error is below machine precision for x < 2**-12 or so.
Then I'd like to use an acosm1 function in order to keep the precision.
Since there is no such function available, I'll just solve
cosm1(x)-cos_gamma_minus_1=0 for which I know there is a solution

acosm1(x) = sqrt(-2*x) for the simple approximation. Use quadratic
formula to solve the fancier one.
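A sketch of that recipe. Solving -u/2 + u**2/24 = c for u = x**2 by the
quadratic formula and taking the small root gives acosm1; the
product-of-roots form below is an extra twist (not in Paul's post) that
avoids reintroducing cancellation, since the textbook root
6 - sqrt(36 + 24*c) would itself subtract nearly-equal numbers for tiny |c|:

import math

def cosm1_approx(x):
    # cos(x) - 1 ~= -x**2/2 + x**4/24, good to machine precision for |x| < 2**-12
    return -x*x/2.0 + x**4/24.0

def acosm1_approx(c):
    # Small root of u**2/24 - u/2 - c = 0, where u = x**2, written in the
    # numerically stable product-of-roots form u_small = -24*c / u_large.
    u = -24.0*c / (6.0 + 12.0*math.sqrt(0.25 + c/6.0))
    return math.sqrt(u)

c = cosm1_approx(7.3139680986e-05)
print "%.17g" % acosm1_approx(c)    # recovers ~7.3139680986e-05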
 

Tim Peters

[Tim sez that when you evaluate a function near a local min or max,
then it's unavoidable that using finite-precision arithmetic makes
a 1-to-1 mathematical function appear to be many-to-1]

[Terry Reedy]
To illustrate:

....

The mathematically smooth cos(x) is effectively a stairstep function for
|x|<5e-8. Stairsteps are hard to invert ;-).

Yup! I'd like to point out too that the new decimal module lowers the
bar for doing numeric *investigation* in two important ways: (1) just
because it's decimal, it's much easier for people to understand what
they're seeing; and, (2) because you can boost precision to anything
you like whenever you like, it's often dead easy to write a function
that delivers excellent accuracy.

I won't say much about the following, but hope it illustrates both
points for those who run it and stare at it. The decimal cosine
routine is about as naive as you can imagine, but by temporarily
boosting the working precision to twice the caller's precision, then
rounding back at the end, it delivers high-quality results. Until the
input argument gets "too big". Then it's a disaster. So I hope this
also illustrates that decimal isn't a magical cure either.

"""
import decimal, math

def deccos(x):
origprec = decimal.getcontext().prec
decimal.getcontext().prec = 2 * origprec
result = term = decimal.Decimal(1)
# Compute 1 - x**2/2! + x**4/4! - x**6/6! ...
i = 1
sq = -x*x
while True:
term *= sq
term /= i*(i+1)
i += 2
newresult = result + term
if result == newresult:
break
result = newresult
decimal.getcontext().prec = origprec
return result + 0 # round to orig precision

decimal.getcontext().prec = 20
base = decimal.Decimal("1e-9")

for i in range(50):
x = base * i
dcos = deccos(x)
fcos = math.cos(float(x))
print "%6s %22s %19.17f %s" % (x,
dcos,
fcos,
fcos == float(dcos))
"""

One thing to notice is that, working with 20 decimal digits, all these
inputs have distinct computed cos() values. Another is that
math.cos() delivers the same results, after rounding to native binary
fp precision. Another is that while we can tell that's so because the
4th column always says "True", you'd have a hard time guessing that
from staring at the 2nd and 3rd columns; for example, it's probably
not intuitively obvious that the decimal

0.999999999999999838
"rounds to"
0.99999999999999989

when converted to native binary fp and then back to 17-digit decimal.
Getting binary<->decimal conversion errors out of the equation makes
working with the decimal module easier for everyone.

decimal-is-too-important-to-waste-on-money<wink>-ly y'rs - tim
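That "rounds to" claim is easy to check mechanically; "%.17g" forces all 17
significant digits, since a bare repr on newer Pythons prints only the
shortest string that round-trips:

from decimal import Decimal

d = Decimal("0.999999999999999838")
print "%.17g" % float(d)    # 0.99999999999999989 on IEEE-754 doubles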
 
