Python math is off by .000000000000045

J

Jussi Piitulainen

Alec said:
Simple mathematical problem, + and - only:

-60.950000000000045

That's wrong.

Not by much. I'm not an expert, but my guess is that the exact value
is not representable in binary floating point, which most programming
languages use for this. Ah, indeed:

>>> 0.95
0.94999999999999996

Some languages hide the error by printing fewer decimals than they use
internally.
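For example, in Python itself, repr() shows the full stored value while
formatted printing rounds it away (a minimal sketch using the numbers
from the original post):

```python
# The sum from the original post, in ordinary binary floats.
total = 1800.00 - 1041.00 - 555.74 + 530.74 - 794.95
print(repr(total))      # full precision: the error is visible
print(f"{total:.2f}")   # rounded display hides it, as some languages do
```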
Proof
http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
-60.95 aka (-(1219/20))

Is there a reason Python math is only approximated? - Or is this a bug?

There are practical reasons. Do learn about "floating point".

There is a price to pay, but you can have exact rational arithmetic in
Python when you need or want it - I folded the long lines by hand
afterwards:
>>> from fractions import Fraction
>>> (Fraction(180000, 100) - Fraction(104100, 100) - Fraction(55574, 100)
...  + Fraction(53074, 100) - Fraction(79495, 100))
Fraction(-1219, 20)
>>> float(Fraction(-1219, 20))
-60.950000000000003
 
G

Grant Edwards

Simple mathematical problem, + and - only:

-60.950000000000045

That's wrong.

Oh good. We haven't had this thread for several days.
Proof
http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
-60.95 aka (-(1219/20))

Is there a reason Python math is only approximated?

http://docs.python.org/tutorial/floatingpoint.html

Python uses binary floating point with a fixed size (64 bit IEEE-754
on all the platforms I've ever run across). Floating point numbers
are only approximations of real numbers. For every floating point
number there is a corresponding real number, but 0% of real numbers
can be represented exactly by floating point numbers.
- Or is this a bug?

No, it's how floating point works.

If you want something else, then perhaps you should use rationals or
decimals:

http://docs.python.org/library/fractions.html
http://docs.python.org/library/decimal.html
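A minimal sketch of both alternatives, applied to the numbers from this
thread:

```python
from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic: Fraction accepts decimal strings.
f = (Fraction("1800.00") - Fraction("1041.00") - Fraction("555.74")
     + Fraction("530.74") - Fraction("794.95"))
print(f)   # -1219/20, i.e. exactly -60.95

# Exact decimal arithmetic: Decimal keeps base-10 digits exactly.
d = (Decimal("1800.00") - Decimal("1041.00") - Decimal("555.74")
     + Decimal("530.74") - Decimal("794.95"))
print(d)   # -60.95
```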
 
T

Tobiah

For every floating point
number there is a corresponding real number, but 0% of real numbers
can be represented exactly by floating point numbers.

It seems to me that there are a great many real numbers that can be
represented exactly by floating point numbers. The number 1 is an
example.

I suppose that if you divide that count by the infinite count of all
real numbers, you could argue that the result is 0%.
 
T

Tim Wintle

It seems to me that there are a great many real numbers that can be
represented exactly by floating point numbers. The number 1 is an
example.

I suppose that if you divide that count by the infinite count of all
real numbers, you could argue that the result is 0%.

It's not just an argument - it's mathematically correct.

The same can be said for ints representing the natural numbers, or
positive integers.

However, ints can represent 100% of integers within a specific range,
where floats can't represent all real numbers for any range (except for
the empty set) - because there's an infinite number of real numbers
within any non-trivial range.
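That gap between adjacent floats is easy to see from Python (a sketch;
math.nextafter needs Python 3.9+):

```python
import math

# Ints are exact over their whole range, but floats have gaps:
# the next representable double after 1.0 is 1.0 + 2**-52.
gap = math.nextafter(1.0, 2.0) - 1.0
print(gap)   # every real strictly inside that gap has no float of its own
```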


Tim
 
T

Terry Reedy

It seems to me that there are a great many real numbers that can be
represented exactly by floating point numbers. The number 1 is an
example.

Binary floats can represent any integer and any fraction with a
denominator of 2**n, within certain ranges. For decimal floats,
substitute 10**n, or more exactly 2**j * 5**k, since if j < k,
n / (2**j * 5**k) = (n * 2**(k-j)) / 10**k, and similarly if j > k.
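A quick check of that claim with the fractions module (a sketch):

```python
from fractions import Fraction

# Denominator a power of 2: exact in binary floating point.
assert float(Fraction(3, 8)) == 0.375
assert Fraction(0.375) == Fraction(3, 8)   # round-trips exactly

# Denominator with a factor of 5 (1/10): not exact in binary.
assert Fraction(0.1) != Fraction(1, 10)

print("binary floats are exact only for denominators of 2**n")
```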
 
S

Steven D'Aprano


What's your point? I'm afraid my crystal ball is out of order and I have
no idea whether you have a question or are just demonstrating your
mastery of copy and paste from the Python interactive interpreter.
 
D

Devin Jeanpierre

It's not just an argument - it's mathematically correct.

^ this

The floating point numbers are a finite set. Any infinite set, even
the rationals, is too big to have "many" floats relative to the whole,
as in the percentage sense.

----

In fact, any number we can reasonably deal with must have some finite
representation, even if the decimal expansion has an infinite number
of digits. We can work with pi, for example, because there are
algorithms that can enumerate all the digits up to some precision. But
we can't really work with a number for which no algorithm can
enumerate the digits, and for which there are infinitely many digits.
Most (in some sense involving infinities, which is to say, one that is
not really intuitive) of the real numbers cannot in any way or form be
represented in a finite amount of space, so most of them can't be
worked on by computers. They only exist in any sense because it's
convenient to pretend they exist for mathematical purposes, not for
computational purposes.

What this boils down to is to say that, basically by definition, the
set of numbers representable in some finite number of binary digits is
countable (just count up in binary value). But the whole of the real
numbers are uncountable. The hard part is then accepting that some
countable thing is 0% of an uncountable superset. I don't really know
of any "proof" of that latter thing, it's something I've accepted
axiomatically and then worked out backwards from there. But surely
it's obvious, somehow, that the set of finite strings is tiny compared
to the set of infinite strings? If we look at binary strings,
representing numbers, the reals could be encoded as the union of the
two, and by far most of them would be infinite.


Anyway, all that aside, the real numbers are kind of dumb.

-- Devin
 
T

Terry Reedy

What this boils down to is to say that, basically by definition, the
set of numbers representable in some finite number of binary digits is
countable (just count up in binary value). But the whole of the real
numbers are uncountable. The hard part is then accepting that some
countable thing is 0% of an uncountable superset. I don't really know
of any "proof" of that latter thing, it's something I've accepted
axiomatically and then worked out backwards from there.

Informally, if the infinity of counts were some non-zero fraction f of
the reals, then there would, in some sense, be 1/f times as many reals
as counts, so the count could be expanded to count 1/f reals for each
real counted before, and the reals would be countable. But Cantor showed
that the reals are not countable.

But as you said, this is all irrelevant for computing. Since the number
of finite strings is practically finite, so is the number of algorithms.
And even a countable number of algorithms would be a fraction 0, for
instance, of the uncountable predicate functions on 0, 1, 2, ... . So we
do what we actually can that is of interest.
 
J

jmfauth

What's your point? I'm afraid my crystal ball is out of order and I have
no idea whether you have a question or are just demonstrating your
mastery of copy and paste from the Python interactive interpreter.


It should be enough to indicate the right direction
for casual interested readers.
 
J

John Ladasky

Curiosity prompts me to ask...

Those of you who program in other languages regularly: if you visit
comp.lang.java, for example, do people ask this question about
floating-point arithmetic in that forum? Or in comp.lang.perl?

Is there something about Python that exposes the uncomfortable truth
about practical computer arithmetic that these other languages
obscure? For of course, arithmetic is surely no less accurate in
Python than in any other computing language.

I always found it helpful to ask someone who is confused by this issue
to imagine what the binary representation of the number 1/3 would be.

0.011 to three binary digits of precision.
0.0101 to four.
0.01011 to five.
0.010101 to six.
0.0101011 to seven.
0.01010101 to eight.

And so on, forever. So, what if you want to do some calculator-style
math with the number 1/3, that will not require an INFINITE amount of
time? You have to round. Rounding introduces errors. The more
binary digits you use for your numbers, the smaller those errors will
be. But those errors can NEVER reach zero in finite computational
time.
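Those successive approximations can be generated mechanically (a sketch
using the fractions module; round() on a Fraction rounds to the nearest
integer):

```python
from fractions import Fraction

def third_in_binary(bits):
    """1/3 rounded to `bits` binary fractional digits."""
    scaled = round(Fraction(1, 3) * 2**bits)   # nearest integer
    return "0." + format(scaled, f"0{bits}b")

for n in range(3, 9):
    print(third_in_binary(n))   # 0.011, 0.0101, 0.01011, ...
```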

If ALL the numbers you are using in your computations are rational
numbers, you can use Python's rational and/or decimal modules to get
error-free results. Learning to use them is a bit of a specialty.

But for those of us who end up with numbers like e, pi, or the square
root of 2 in our calculations, the compromise of rounding must be
accepted.
 
T

Terry Reedy

I always found it helpful to ask someone who is confused by this issue
to imagine what the binary representation of the number 1/3 would be.

0.011 to three binary digits of precision.
0.0101 to four.
0.01011 to five.
0.010101 to six.
0.0101011 to seven.
0.01010101 to eight.

And so on, forever. So, what if you want to do some calculator-style
math with the number 1/3, that will not require an INFINITE amount of
time? You have to round. Rounding introduces errors. The more
binary digits you use for your numbers, the smaller those errors will
be. But those errors can NEVER reach zero in finite computational
time.

Ditto for 1/3 in decimal:
...
0.33333333 to eight.
If ALL the numbers you are using in your computations are rational
numbers, you can use Python's rational and/or decimal modules to get
error-free results.

Decimal floats are about as error prone as binary floats. One can only
exact represent a subset of rationals of the form n / (2**j * 5**k). For
a fixed number of bits of storage, they are 'lumpier'. For any fixed
precision, the arithmetic issues are the same.

The decimal module decimals have three advantages (sometimes) over floats.

1. Variable precision - but there are multiple-precision floats also
available outside the stdlib.

2. They better imitate calculators - but that is irrelevant or a minus
for scientific calculation.

3. They better follow accounting rules for financial calculation,
including a multiplicity of rounding rules. Some of these are laws that
*must* be followed to avoid nasty consequences. This is the main reason
for being in the stdlib.
Learning to use them is a bit of a specialty.

Definitely true.
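A sketch of point 3, an explicit rounding rule applied with quantize():

```python
from decimal import Decimal, ROUND_HALF_UP

# Round to cents under a stated rule, as accounting rules require.
amount = Decimal("2.675")
cents = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(cents)             # 2.68
print(round(2.675, 2))   # the binary float 2.675 rounds the other way
```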
 
S

Steven D'Aprano

Curiosity prompts me to ask...

Those of you who program in other languages regularly: if you visit
comp.lang.java, for example, do people ask this question about
floating-point arithmetic in that forum? Or in comp.lang.perl?

Yes.

http://stackoverflow.com/questions/588004/is-javascripts-math-broken

And look at the "Linked" sidebar. Obviously StackOverflow users no more
search the internet for the solutions to their problems than do
comp.lang.python posters.


http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error
 
G

Grant Edwards

M

Michael Torrie

One might wonder if the frequency of such questions decreases as the
programming language becomes "lower level" (e.g. C or assembly).

I think that most math use cases in C or assembly are integer-based
only: for example, counting, bit-twiddling, addressing
character cells or pixel coordinates, etc. Maybe when programmers have
to statically declare a variable type in advance, since the common use
cases require only integer, that gets used far more, so experiences with
float happen less often. Some of this could have to do with the fact
that historically floating point required a special library to do
floating point math, and since a lot of people didn't have
floating-point coprocessors back then, most code was integer-only.

Early BASIC interpreters defaulted to floating point for everything, and
implemented all the floating point arithmetic internally with integer
arithmetic, without the help of the x87 processor, but no doubt they did
round the results when printing to the screen. They also did not have
very much precision to begin with. Anyone remember Microsoft's
proprietary floating-point binary format, and the function calls to
convert back and forth between it and the IEEE standard?

Another key thing is that most C programmers don't normally just print
out floating point numbers without a %.2f kind of notation that properly
rounds a number.
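Python's %-formatting does the same rounding-on-display (a sketch):

```python
# What C's printf("%.2f", x) also does: round only when printing.
x = 0.1 + 0.2
print("%.2f" % x)    # display is rounded to two places
print("%.17g" % x)   # enough digits to show what is actually stored
```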

Now, of course, every processor has a floating-point unit, and the C
compilers can generate code that uses it just as easily as integer code.

No matter what language or what floating-point scheme you use,
significant digits are definitely important to understand!
 
E

Ethan Furman

jmfauth said:
It should be enough to indicate the right direction
for casual interested readers.

I'm a casual interested reader and I have no idea what your post is
trying to say.

~Ethan~
 
M

Michael Torrie

I'm a casual interested reader and I have no idea what your post is
trying to say.

He's simply showing you the hex representation of the floating-point
number's binary encoding. As you can clearly see in the case of 1.1,
there is no finite sequence of bits that can store it exactly; you end
up with repeating digits. Just as 1/3 is a repeating sequence when
written in base-10 fractions (x1/10 + x2/100 + x3/1000, etc.), the
base-10 numbers 1.1 and 0.2, and many others that have exact base-10
representations, end up as repeating sequences in base-2 fractions.
This should help you understand why simple things like x/y*y don't
quite get you back to x.
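The repeating pattern is visible directly with float.hex() (a sketch of
what I take the earlier hex post to have shown):

```python
# 1.1 repeats forever in binary, so its 53-bit double is cut short:
print((1.1).hex())   # 0x1.199999999999ap+0 -- note the repeating 9s
# 0.5 is a power of two, so it is exact:
print((0.5).hex())   # 0x1.0000000000000p-1
```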
 
E

Ethan Furman

Michael said:
He's simply showing you the hex representation of the floating-point
number's binary encoding. As you can clearly see in the case of 1.1,
there is no finite sequence of bits that can store it exactly; you end
up with repeating digits.

Thanks for the explanation.
This should help you understand why simple things like x/y*y don't
quite get you back to x.

I already understood that. I just didn't understand what point he was
trying to make since he gave no explanation.

~Ethan~
 
