Float precision and float equality

Mark Dickinson

If you can depend on IEEE 754 semantics, one relatively robust method
is to use the number of representable floats between two numbers. The
main advantage compared to the proposed methods is that it somewhat
automatically takes into account the amplitude of input numbers:

FWIW, there's a function that can be used for this in Lib/test/
test_math.py in Python svn; it's used to check that math.gamma isn't
out by more than 20 ulps (for a selection of test values).

import struct

def to_ulps(x):
    """Convert a non-NaN float x to an integer, in such a way that
    adjacent floats are converted to adjacent integers.  Then
    abs(ulps(x) - ulps(y)) gives the difference in ulps between two
    floats.

    The results from this function will only make sense on platforms
    where C doubles are represented in IEEE 754 binary64 format.

    """
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    if n < 0:
        n = ~(n + 2**63)
    return n
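
A minimal usage sketch of how the ulps count can drive an approximate-equality
test (added here for illustration, not part of test_math.py itself; agree_to_ulps
is just an illustrative name, and the same binary64 assumption applies):

def agree_to_ulps(x, y, max_ulps=20):
    # True when the two non-NaN floats are within max_ulps representable values.
    return abs(to_ulps(x) - to_ulps(y)) <= max_ulps

print(agree_to_ulps(1.0, 1.0 + 2**-52))   # True: adjacent doubles, 1 ulp apart
print(agree_to_ulps(1.0, 1.0001))         # False: roughly 4.5e11 ulps apart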
 
D

dbd

...

You don't understand this at all, do you?

If you have a sine wave with an amplitude less than the truncation
error, it will always be approximately equal to zero.

Numerical maths is about approximations, not symbolic equalities.


Which is the reason 0.5*eps*sin(x) is never distinguishable from 0.
...

A calculated value of 0.5*eps*sin(x) has a truncation error on the
order of eps squared. 0.5*eps and 0.495*eps are readily distinguished
(well, at least for values of eps << 0.01 :).
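
A quick check of that claim (a sketch added here, assuming IEEE 754 doubles and
Python's sys.float_info):

import sys

eps = sys.float_info.epsilon       # machine epsilon for binary64, about 2.22e-16
print(0.5 * eps == 0.495 * eps)    # False: the two products are distinct floats
print(0.5 * eps)                   # tiny, but exactly representable and nonzero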

At least one of us doesn't understand floating point.

Dale B. Dalrymple
 
Carl Banks

A calculated value of 0.5*eps*sin(x) has a truncation error on the
order of eps squared. 0.5*eps and 0.495*eps are readily distinguished
(well, at least for values of eps << 0.01 :).

At least one of us doesn't understand floating point.

You're talking about machine epsilon? I think everyone else here is
talking about a number that is small relative to the expected smallest
scale of the calculation.


Carl Banks
 
dbd

You're talking about machine epsilon?  I think everyone else here is
talking about a number that is small relative to the expected smallest
scale of the calculation.

Carl Banks

When you implement an algorithm supporting floats (per the OP's post),
the expected scale of calculation is the range of floating point
numbers. For floating point numbers the intrinsic truncation error is
proportional to the value represented over the normalized range of the
floating point representation. At absolute values smaller than the
normalized range, the truncation has a fixed value. These are not
necessarily 'machine' characteristics but the characteristics of the
floating point format implemented.
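
A short illustration of that distinction (a sketch added here, not from the
original post; it assumes IEEE 754 binary64 and math.nextafter, available in
Python 3.9+):

import math

for x in (1.0, 1e100, 1e-100):
    # In the normalized range, the gap to the next float scales with the magnitude of x.
    print(x, math.nextafter(x, math.inf) - x)

tiny = 1e-320                      # well inside the subnormal range
# Below the normalized range the gap is a fixed absolute value (5e-324 for binary64).
print(tiny, math.nextafter(tiny, math.inf) - tiny)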

A useful description of floating point issues can be found:

http://dlc.sun.com/pdf/800-7895/800-7895.pdf

Dale B. Dalrymple
 
Carl Banks

When you implement an algorithm supporting floats (per the OP's post),
the expected scale of calculation is the range of floating point
numbers. For floating point numbers the intrinsic truncation error is
proportional to the value represented over the normalized range of the
floating point representation. At absolute values smaller than the
normalized range, the truncation has a fixed value. These are not
necessarily 'machine' characteristics but the characteristics of the
floating point format implemented.

I know, and it's irrelevant, because no one, I don't think, is talking
about a magnitude-specific truncation value either, nor about any other
tomfoolery with the floating point's least significant bits.

A useful description of floating point issues can be found:
[snip]

I'm not reading it because I believe I grasp the situation just fine.
But you are welcome to convince me otherwise. Here's how:

Say I have two numbers, a and b. They are expected to be in the range
(-1000,1000). As far as I'm concerned, if they differ by less than
0.1, they might as well be equal. Therefore my test for "equality"
is:

abs(a-b) < 0.08

Can you give me a case where this test fails?

If a and b are too far out of their expected range, all bets are off,
but feel free to consider arbitrary values of a and b for extra
credit.
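
For what it's worth, a quick numeric check (a sketch added here, not Carl's own
code; it assumes IEEE 754 doubles and math.nextafter from Python 3.9+) of why
roundoff alone cannot defeat that test inside the stated range:

import math

# Near |a| = 1000, adjacent doubles are only ~1.1e-13 apart, vastly smaller than
# the 0.08 tolerance, so roundoff-sized noise cannot push two "equal" values past it.
print(math.nextafter(1000.0, math.inf) - 1000.0)   # about 1.14e-13 on binary64
a, b = 1000.0, 1000.0 + 1e-10                      # a roundoff-scale disagreement
print(abs(a - b) < 0.08)                           # True: still treated as equal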


Carl Banks
 
Raymond Hettinger

[Carl Banks]
That was also my reading of the OP's question.

The suggestion to use round() was along the
lines of performing a quantize or snap-to-grid
operation after each step in the calculation.
That approach parallels the recommendation for how
to use the decimal module for fixed point calculations:
http://docs.python.org/library/decimal.html#decimal-faq
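
A small illustration of the snap-to-grid idea (a sketch added here, not Raymond's
own code), using plain round() for binary floats and Decimal.quantize() for the
fixed-point recipe in the decimal FAQ:

from decimal import Decimal

# Snap the binary-float result onto a 0.01 grid after the operation.
total = round(0.1 + 0.2, 2)
print(total == 0.3)        # True after snapping; 0.1 + 0.2 == 0.3 is False without it

# The analogous fixed-point snap with the decimal module.
cents = Decimal('0.01')
print((Decimal('0.10') + Decimal('0.20')).quantize(cents))   # 0.30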


Raymond
 
dbd

...
A useful description of floating point issues can be found:

[snip]

I'm not reading it because I believe I grasp the situation just fine.
...

Say I have two numbers, a and b.  They are expected to be in the range
(-1000,1000).  As far as I'm concerned, if they differ by less than
0.1, they might as well be equal.
...
Carl Banks

I don't expect Carl to read. I posted the reference for the OP whose
only range specification was "calculations with floats" and "equality
of floats" and who expressed concern about "truncation errors". Those
who can't find "floats" in the original post will find nothing of
interest in the reference.

Dale B. Dalrymple
 
