4 hundred quadrillionth?

seanm.py

The explanation in my introductory Python book is not very
satisfying, and I am hoping someone can explain the following to me:
>>> 4 / 5.0
0.80000000000000004

4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end?
It bothers me.
 
Carl Banks

The explanation in my introductory Python book is not very
satisfying, and I am hoping someone can explain the following to me:


>>> 4 / 5.0
0.80000000000000004

4 / 5.0 is 0.8. No more, no less.

That would depend on how you define the numbers and division.

What you say is correct for real numbers and field division. It's not
true for the types of numbers Python uses, which are not real numbers.

Python numbers are floating point numbers, defined (approximately) by
IEEE 754, and they behave similarly to, but not exactly the same as, real
numbers. There will always be small round-off errors, and there is
nothing you can do about it except to understand it.

It bothers me.

Oh well.

You can try Rational numbers if you want, I think they were added in
Python 2.6. But if you're not careful the denominators can get
ridiculously large.
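A minimal sketch of the rational-number approach Carl describes, using the `fractions` module (added in 2.6; shown here in Python 3 syntax):

```python
from fractions import Fraction

a = Fraction(4, 5)   # exactly 4/5 -- no rounding anywhere
print(a)             # 4/5
print(float(a))      # 0.8 (converted back to the nearest binary float)
print(a * 5)         # 4 -- rational arithmetic is exact
```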


Carl Banks
 
Chris Rebert

That would depend on how you define the numbers and division.

What you say is correct for real numbers and field division.  It's not
true for the types of numbers Python uses, which are not real numbers.

Python numbers are floating point numbers, defined (approximately) by
IEEE 754, and they behave similarly to, but not exactly the same as, real
numbers.  There will always be small round-off errors, and there is
nothing you can do about it except to understand it.



Oh well.

You can try Rational numbers if you want, I think they were added in
Python 2.6.  But if you're not careful the divisors can get
ridiculously large.

The `decimal` module's Decimal type is also an option to consider:

Python 2.6.2 (r262:71600, May 14 2009, 16:34:51)
>>> from decimal import Decimal
>>> Decimal(4) / Decimal(5)
Decimal('0.8')

Cheers,
Chris
 
norseman

The explanation in my introductory Python book is not very
satisfying, and I am hoping someone can explain the following to me:

>>> 4 / 5.0
0.80000000000000004

4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end?
It bothers me.
======================================

Machine architecture, the actual implementation of the logic on the chip,
and what the compiler maker did all add up to create rounding errors. I
have read that Python, if left to its own devices, will output everything
it computed. I guess the idea is to show
1) Python's accuracy and
2) what was left over
so the picky people can have something to gnaw on.

Astrophysicists, astronomers, and the like may want that.
If you work much in finite math, you may want to test the combination to
see if it will allow the accuracy you need. Or do you need to change
machines?

Beyond that - just fix the money at 2, gas pumps at 3 and the
sine/cosine at 8 and let it ride. :)


Steve
 
Carl Banks

Beyond that - just fix the money at 2, gas pumps at 3 and the
sine/cosine at 8 and let it ride. :)


Or just use print:

>>> print 4/5.0
0.8

Since the interactive prompt is usually used by programmers who are
inspecting values, it makes a little more sense to print enough digits
to give an unambiguous representation.


Carl Banks
 
R. David Murray

Gary Herron said:
+1 as QOTW

And just to add one bit of clarity: This problem has nothing to do with
the OP's division of 4 by 5.0, but rather that the value of 0.8 itself
cannot be represented exactly in IEEE 754. Just try

>>> repr(0.8)
'0.80000000000000004'

Python 3.1b1+ (py3k:72432, May 7 2009, 13:51:24)
[GCC 4.1.2 (Gentoo 4.1.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.8
0.8

In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
point algorithm for Python so that the shortest repr that will round
trip correctly is what is used as the floating point repr....

--David
 
Gary Herron

R. David Murray said:
Gary Herron said:
+1 as QOTW

And just to add one bit of clarity: This problem has nothing to do with
the OP's division of 4 by 5.0, but rather that the value of 0.8 itself
cannot be represented exactly in IEEE 754. Just try

>>> repr(0.8)
'0.80000000000000004'

Python 3.1b1+ (py3k:72432, May 7 2009, 13:51:24)
[GCC 4.1.2 (Gentoo 4.1.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.8
0.8

In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
point algorithm for Python so that the shortest repr that will round
trip correctly is what is used as the floating point repr....

--David

Which won't change the fact that 0.8 and lots of other favorite floats
are still not representable exactly, but it will hide this fact from
most newbies. One of the nicer results of this will be that these
(almost) weekly questions and discussions will become a thing of the
past.

With a sigh of relief,
Gary Herron
 
AggieDan04

======================================

Machine architecture, the actual implementation of the logic on the chip,
and what the compiler maker did all add up to create rounding errors. I
have read that Python, if left to its own devices, will output everything
it computed. I guess the idea is to show
        1) Python's accuracy and
        2) what was left over
so the picky people can have something to gnaw on.

If you want to be picky, the exact value is
0.8000000000000000444089209850062616169452667236328125 (i.e.,
3602879701896397/2**52). Python's repr function rounds numbers to 17
significant digits. This is the minimum that ensures that
float(repr(x)) == x for all x (using IEEE 754 double precision).
Astrophysicists, astronomers, and the like may want that.
If you work much in finite math, you may want to test the combination to
see if it will allow the accuracy you need. Or do you need to change
machines?

The error in this example is roughly equivalent to the width of a red
blood cell compared to the distance between Earth and the sun. There
are very few applications that need more accuracy than that.
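Both of AggieDan04's figures can be checked in a modern Python, where `Decimal` accepts a float directly (since 2.7) and the old repr rule corresponds to the `'.17g'` format:

```python
from decimal import Decimal

# The exact binary value stored for the literal 0.8:
print(Decimal(0.8))
# 0.8000000000000000444089209850062616169452667236328125

# The old 17-significant-digit repr rule:
print(format(0.8, '.17g'))   # 0.80000000000000004
```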
 
AggieDan04

[...]

The `decimal` module's Decimal type is also an option to consider:

Python 2.6.2 (r262:71600, May 14 2009, 16:34:51)
Decimal("1.999999999999999999999999999")

Decimal isn't a panacea for floating-point rounding errors. It also
has the disadvantage of being much slower.

It is useful for financial applications, in which an exact value for
0.01 actually means something.
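A small illustration of both points: `Decimal` makes decimal literals like 0.01 or 0.1 exact, but it still has to round any result that isn't a finite decimal:

```python
from decimal import Decimal

# Exact where binary floats fail:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True
print(0.1 + 0.2 == 0.3)                                    # False

# But not a panacea: non-terminating results still get rounded
# (default context precision is 28 significant digits).
print(Decimal(1) / Decimal(3) * 3)   # 0.9999999999999999999999999999
```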
 
Andre Engels

The explanation in my introductory Python book is not very
satisfying, and I am hoping someone can explain the following to me:

>>> 4 / 5.0
0.80000000000000004

4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end?
It bothers me.

Well, how much would 1 / 3.0 be? Maybe 0.3333333333... with a certain
(large) number of threes? And if you multiply that by 3, will it be
1.0 again? No, because you cannot represent 1/3.0 as a precise decimal
fraction.

Internally, what is used are not decimal but binary fractions. And as
a binary fraction, 4/5.0 is just as impossible to represent as 1/3.0
is (1/3.0 = 0.0101010101... and 4/5.0 = 0.110011001100... to be
exact). So 4 / 5.0 gives you the binary fraction of a certain
precision that is closest to 0.8. And apparently that is close to
0.80000000000000004.
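Andre's binary expansions are easy to reproduce by long division in base 2. A small sketch (`binary_digits` is a hypothetical helper, not anything from the thread):

```python
from fractions import Fraction

def binary_digits(num, den, places):
    """First `places` binary digits of num/den after the binary point."""
    out = []
    for _ in range(places):
        num *= 2
        out.append(str(num // den))   # next binary digit
        num %= den                    # remainder carries to the next step
    return ''.join(out)

print(binary_digits(1, 3, 12))   # 010101010101 -- 1/3 repeats forever
print(binary_digits(4, 5, 12))   # 110011001100 -- so does 4/5

# The exact value Python actually stores for the literal 0.8:
print(Fraction(0.8))             # 3602879701896397/4503599627370496
```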
 
rustom

The error in this example is roughly equivalent to the width of a red
blood cell compared to the distance between Earth and the sun.  There
are very few applications that need more accuracy than that.

For a mathematician there are no inexact numbers; for a physicist, no
exact ones.
Our education system is on the math side; reality, it seems, is on the
other.
 
Steven D'Aprano

Which won't change the fact that 0.8 and lots of other favorite floats
are still not representable exactly, but it will hide this fact from
most newbies. One of the nicer results of this will be that these
(almost) weekly questions and discussions will become a thing of the
past.

With a sigh of relief,

Yay! We now will have lots of subtle floating point bugs that people
can't see! Ignorance is bliss and what you don't know about floating
point can't hurt you!
>>> 0.2 + 0.1 == 0.3
False
 
Steven D'Aprano

The error in this example is roughly equivalent to the width of a red
blood cell compared to the distance between Earth and the sun. There
are very few applications that need more accuracy than that.

Which is fine if the error *remains* that small, but the problem is that
errors usually increase and rarely cancel.
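A tiny demonstration of that accumulation: summing 0.1 ten times drifts away from 1.0, even though each individual rounding error is minuscule:

```python
total = 0.0
for _ in range(10):
    total += 0.1   # each addition rounds to the nearest binary float

print(total == 1.0)       # False -- the per-step errors added up
print(abs(total - 1.0))   # tiny, but no longer zero
```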
 
Mark Dickinson

Yay! We now will have lots of subtle floating point bugs that people
can't see! Ignorance is bliss and what you don't know about floating
point can't hurt you!

Why do you think this change will give rise to 'lots of subtle
floating point bugs'? The new repr is still faithful, in the
sense that if x != y then repr(x) != repr(y). Personally, I'm
not sure that the new repr is likely to do anything for
floating-point confusion either way.

What's gone in 3.1 is the capricious nature of the old "produce
17 significant digits and then remove all trailing zeros" rule
for repr.

For a specific example of the randomness of the current repr
rule, choose a random decimal in [0.5, 1.0) with at
most 12 places (say) after the decimal point; for example, 0.567819.
Now type that number into Python at the interpreter prompt.
Then there's approximately a 9% chance (where the number 0.09 comes
from computing 2.**53/10.**17) that you'll see the number you
typed in (possibly with trailing zeros removed if you typed something
like 0.846100), and a 91% chance that you'll get the full 17
significant digits.

Try the same experiment for random decimals in the
interval [1.0, 2.0) and there's about a 45% chance you'll get
back what you typed in, and a 55% chance you'll get 17 sig. digs.

With the new repr, a float that can be specified with 15 significant
decimal digits or fewer will always use those digits for its repr.
It's not a panacea, but I don't see how it's worse than the old
repr.
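Mark's contrast between the two repr rules can be sketched like this (the `'.17g'` format approximates the old 17-significant-digit rule, modulo trailing-zero stripping; `old_style` is a hypothetical name):

```python
def old_style(x):
    # The pre-3.1 rule: 17 significant digits, trailing zeros then stripped.
    return format(x, '.17g')

x = 0.567819               # Mark's example: a short decimal in [0.5, 1.0)
print(old_style(x))        # usually all 17 digits spill out
print(repr(x))             # 0.567819 -- shortest string that round-trips

# Both representations round-trip exactly:
assert float(old_style(x)) == x == float(repr(x))
```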

Mark
 
Steven D'Aprano

Why do you think this change will give rise to 'lots of subtle floating
point bugs'? The new repr is still faithful, in the sense that if x !=
y then repr(x) != repr(y). Personally, I'm not sure that the new repr
is likely to do anything for floating-point confusion either way.

I'm sorry, did I forget a wink? Apparently I did :)

I don't think this change will *cause* bugs. However, it *may* (and I
emphasize the may, because it hasn't been around long enough to see the
effect) allow newbies to remain in blissful ignorance of floating point
issues longer than they should.

Today, the first time you call repr(0.8) you should notice that the float
you have is *not quite* the number you thought you had, which alerts you
to the fact that floats aren't the real numbers you learned about in
school. From Python 3.1, that reality will be hidden just a little bit
longer.


[...]
With the new repr, a float that can be specified with 15 significant
decimal digits or fewer will always use those digits for its repr. It's
not a panacea, but I don't see how it's worse than the old repr.

It's only worse in the sense that ignorance isn't really bliss, and this
change will allow programmers to remain ignorant a little longer. I
expect that instead of obviously wet-behind-the-ears newbies asking "I'm
trying to create a float 0.1, but can't, does Python have a bug?" we'll
start seeing not-such-newbies asking "I've written a function that
sometimes misbehaves, and after heroic effort to debug it, I discovered
that Python has a bug in simple arithmetic, 0.2 + 0.1 != 0.3".

I don't think this will be *worse* than the current behaviour, only bad
in a different way.
 
Lawrence D'Oliveiro

Christian said:
Welcome to IEEE 754 floating point land! :)

It used to be worse in the days before IEEE 754 became widespread. Anybody
remember a certain Prof William Kahan from Berkeley, and the foreword he
wrote to the Apple Numerics Manual, 2nd Edition, published in 1988? It's
such a classic piece that I think it should be posted somewhere...
 
Lawrence D'Oliveiro

For a mathematician there are no inexact numbers; for a physicist, no
exact ones.

On the contrary, mathematicians have worked out a precise theory of
inexactness.

As for exactitude in physics, Gregory Chaitin among others has been trying
to rework physics to get rid of real numbers altogether.
 
