3.2*3 is 9.6 ... or maybe it isn't?


Bojan Sudarevic

Hi,

I'm a PHP developer and entirely new to Python. I installed it (version
2.5.2, from Debian repos) today on the persuasion of a friend, who is a
Python addict.

The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.

Here it is:

>>> 3.2*3
9.6000000000000014

So I became curious...

19.200000000000003

... and so on ...

After that I tried the Windows version (3.1rc2), and...

>>> 3.2*3
9.600000000000001

I wasn't particularly good in math in school and university, but I'm
pretty sure that 3.2*3 is 9.6.

Cheers,
Bojan
 

Mark Dickinson

Bojan Sudarevic said:

Hi,

I'm a PHP developer and entirely new to Python. I installed it (version
2.5.2, from Debian repos) today on the persuasion of a friend, who is a
Python addict.

The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.

[examples snipped]

Hi Bojan,

This is a FAQ. Take a look at:

http://docs.python.org/tutorial/floatingpoint.html

and let us know whether that explains things to your
satisfaction.

Mark
 

Tomasz Zieliński

I wasn't particularly good in math in school and university, but I'm
pretty sure that 3.2*3 is 9.6.

It's not math, it's the floating-point representation of numbers and its
limited accuracy.
Type 9.6 and you'll get 9.5999999999999996.
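
Asking for more digits shows what actually got stored (a quick illustration, not something from Tomasz's post):

# The literal 3.2 is silently replaced by the nearest binary fraction;
# printing with extra digits makes that visible.
print('%.20f' % 3.2)         # 3.20000000000000017764 (approximately)
print('%.20f' % (3.2 * 3))   # 9.60000000000000142109 (approximately)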
 

Emile van Sebille

On 6/25/2009 11:04 AM Bojan Sudarevic said...

9.600000000000001

I wasn't particularly good in math in school and university, but I'm
pretty sure that 3.2*3 is 9.6.

Yes -- in this world. But in the inner workings of computers, 3.2 isn't
exactly representable in binary. This is a FAQ.

ActivePython 2.6.2.2 (ActiveState Software Inc.) based on
Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.

Emile
 

Paul Rudin

Bojan Sudarevic said:
Hi,

I'm a PHP developer and entirely new to Python. I installed it (version
2.5.2, from Debian repos) today on the persuasion of a friend, who is a
Python addict.

The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.

Here it is:

9.6000000000000014

So I became curious...

19.200000000000003
... and so on ...

After that I tried the Windows version (3.1rc2), and...

9.600000000000001

I wasn't particularly good in math in school and university, but I'm
pretty sure that 3.2*3 is 9.6.

This almost certainly has nothing to do with Python per se, but with the
floating-point implementation of your hardware. Floating-point
arithmetic on computers is not accurate to arbitrary precision. If you
want such precision, use a library that supports it or make your own
translations to and from appropriate integer sums (but it's going to be
slower).
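
For example, the integer-translation approach might look like this for values with one decimal place (a minimal sketch, numbers my own):

# Keep quantities as integer tenths (or cents, etc.) and convert only for display.
a = 32                               # 3.2 stored as 32 tenths
total = a * 3                        # 96 tenths -- integer arithmetic is exact
print(total == 96)                   # True
print('%d.%d' % divmod(total, 10))   # prints 9.6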
 

Mark Dickinson

The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.

It looks like 3.2*3 == 9.6 comes out false in PHP too, by the way (not
that I know any PHP, so I could well be missing
something...)


bernoulli:py3k dickinsm$ php -a
Interactive mode enabled

<?
$a = 3.2*3;
$b = 9.6;
var_dump($a);
float(9.6)
var_dump($b);
float(9.6)
var_dump($a == $b);
bool(false)


Mark
 

Michael Torrie

Bojan said:
The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.

Here it is:

9.6000000000000014

I'm surprised how often people encounter this and wonder about it. Since I
began programming back in the day using C, this is just something I grew
up with (and grudgingly accepted).

I guess PHP artificially rounds the results or something to make it seem
like it's doing accurate calculations, which is a bit surprising to me.
We all know that IEEE floating point is a horribly inaccurate
representation, but I guess I'd rather have my language not hide that
fact from me. Maybe PHP is using BCD or something under the hood (slow
but accurate).

If you want accurate math, check out other types like what is in the
decimal module:
9.6
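
A minimal sketch of the sort of session that produces that 9.6 (the input line here is a reconstruction, not Michael's original):

from decimal import Decimal

# Decimal stores base-10 digits, so the literal 3.2 round-trips exactly.
print(Decimal('3.2') * 3)   # 9.6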
 

Robert Kern

If you want accurate math, check out other types like what is in the
decimal module:

9.6

I wish people would stop representing decimal floating point arithmetic as "more
accurate" than binary floating point arithmetic. It isn't. Decimal floating
point arithmetic does have an extremely useful niche: where the inputs have
finite decimal representations and either the only operations are addition,
subtraction and multiplication (e.g. many accounting problems) OR there are
conventional rounding modes to follow (e.g. most of the other accounting problems).

In the former case, you can claim that decimal floating point is more accurate
*for those problems*. But as soon as you have a division operation, decimal
floating point has the same accuracy problems as binary floating point.
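
To make that concrete (numbers my own, default 28-digit context):

from decimal import Decimal

# Exact decimal inputs stay exact under addition, subtraction and multiplication:
print(Decimal('3.2') * 3)                 # 9.6
# ...but division has to round, and that rounding error then propagates:
print(Decimal(1) / Decimal(3) * 3)        # 0.9999... (28 nines), not 1
print(Decimal(1) / Decimal(3) * 3 == 1)   # False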

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 

Mark Dickinson

I guess PHP artificially rounds the results or something to make it seem
like it's doing accurate calculations, which is a bit surprising to me.

After a bit of experimentation on my machine, it *looks* as though PHP
is using the usual hardware floats internally (no big surprise there),
but implicit conversions to string use 14 significant digits. If
Python's repr used '%.14g' internally instead of '%.17g' then we'd see
pretty much the same thing in Python.
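
You can see the difference those digit counts make with ordinary %-formatting (just an illustration):

x = 3.2 * 3
print('%.17g' % x)   # 9.6000000000000014 -- enough digits to expose the error
print('%.14g' % x)   # 9.6                -- the error rounds away
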
We all know that IEEE floating point is a horribly inaccurate
representation [...]

That's a bit extreme! Care to elaborate?

[...] but I guess I'd rather have my language not hide that
fact from me.  Maybe PHP is using BCD or something under the hood (slow
but accurate).

If you want accurate math, check out other types like what is in the
decimal module:

As Robert Kern already said, there really isn't any sense in which
decimal
floating-point is any more accurate than binary floating-point, except
that---somewhat tautologically---it's better at representing decimal
values exactly.

The converse isn't true, though, from a numerical perspective: there
are some interesting examples of bad things that can happen with
decimal floating-point but not with binary. For example, given any
two Python floats a and b with a <= b, and assuming IEEE 754 arithmetic
with default rounding, it's always true that a <= (a+b)/2 <= b, provided
that a+b doesn't overflow. Not so for decimal floating-point:
Decimal('7.12345')
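
Here is one way to reproduce the failure Mark describes, using a three-digit context and numbers of my own rather than his Decimal('7.12345'):

from decimal import Decimal, getcontext

getcontext().prec = 3      # three significant digits, to make the effect easy to see

a = Decimal('0.516')
b = Decimal('0.518')
mid = (a + b) / 2          # a + b rounds to 1.03, and 1.03 / 2 = 0.515
print(mid)                 # 0.515
print(a <= mid <= b)       # False: the "midpoint" falls below a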

Similarly, sqrt(x*x) == x is always true for a positive IEEE 754
double x (again
assuming the default roundTiesToEven rounding mode, and assuming that
x*x neither overflows nor underflows). But this property fails for
IEEE 754-compliant decimal floating-point.
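
And a sketch of the square-root property failing in decimal (again a small context, numbers my own):

import math
from decimal import Decimal, getcontext

x = 3.17
print(math.sqrt(x * x) == x)   # True for IEEE 754 binary doubles

getcontext().prec = 3
d = Decimal('3.17')
print((d * d).sqrt())          # d*d rounds to 10.0, whose square root is 3.16
print((d * d).sqrt() == d)     # False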

Mark
 

Robert Kern

Well, we don't actually have an arbitrary-precision, huge exponent
version of binary floating point.

You may not. I do.

http://code.google.com/p/mpmath/
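
For the curious, a quick taste (assuming mpmath is installed; mp.dps sets the working precision in decimal digits):

from mpmath import mp, mpf, sqrt

mp.dps = 50            # carry roughly 50 significant decimal digits
print(mpf(1) / 3)      # 0.3333... to about 50 places
print(sqrt(2))         # 1.4142... to about 50 places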

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 

Mensanator

Well, we don't actually have an arbitrary-precision, huge exponent
version of binary floating point.  In that sense the Decimal floating
point beats it.  Not that it would be too hard to have such a floating
point in Python (long for mantissa, int for exponent, ...), but we don't
in fact have such a module in place.

We have the gmpy module, which can do arbitrary-precision floats.
 

Robert Kern

Well, we don't actually have an arbitrary-precision, huge exponent
version of binary floating point. In that sense the Decimal floating
point beats it.

And while that's true, to a point, that isn't what Michael or the many others
are referring to when they claim that decimal is more accurate (without any
qualifiers). They are misunderstanding the causes and limitations of the example
"3.2 * 3 == 9.6". You can see a great example of this in the comparison between
the new Cobra language and Python:

http://cobra-language.com/docs/python/

In that case, they have a fixed-precision decimal float from the underlying .NET
runtime, but they still claim that it gives more accurate arithmetic. While
you may make (completely correct) claims that decimal.Decimal can be more
accurate because of its arbitrary precision capabilities, this is not the claim
others are making or the one I am arguing against.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 

Ulrich Eckhardt

Robert said:
I wish people would stop representing decimal floating point arithmetic as
"more accurate" than binary floating point arithmetic.

Those that failed, learned. You only see those that haven't learnt yet.

Dialog between two teachers:
T1: Oh, those pupils, I told them a hundred times! When will they learn?
T2: They did, but there's always new pupils.

TGIF

Uli
(wave and smile)
 

Steven D'Aprano

If you want accurate math, check out other types like what is in the
decimal module:

9.6

Not so. Decimal suffers from the exact same problem, just with different
numbers:
False

Some numbers can't be represented exactly in base 2, and some numbers
can't be represented exactly in base 10.
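
For instance (numbers my own, not the ones from Steven's snipped example):

from decimal import Decimal

# 0.1 (like 3.2) is exact in base 10 but not in base 2:
print(Decimal('0.1') * 3 == Decimal('0.3'))   # True
print(0.1 * 3 == 0.3)                         # False
# 1/3, on the other hand, is exact in neither base (see the division example above).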
 

Steven D'Aprano

We all know that IEEE floating point is a horribly inaccurate
representation [...]

That's a bit extreme! Care to elaborate?

Well, 0.1 requires an infinite number of binary places, and IEEE floats
only have a maximum of 53 or so, so that implies that floats are
infinitely inaccurate...

*wink*
 

pdpi

Not so. Decimal suffers from the exact same problem, just with different
numbers:


False

Some numbers can't be represented exactly in base 2, and some numbers
can't be represented exactly in base 10.

But since 10 = 2 * 5, all numbers that can be finitely represented in
binary can be represented finitely in decimal as well, with the exact
same number of places for the fractional part (and no more digits
than the binary representation in the integer part).
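
A quick check (1/2**n is 5**n/10**n, so n binary places never need more than n decimal places):

for n in range(1, 6):
    print(repr(1 / 2.0 ** n))   # 0.5, 0.25, 0.125, 0.0625, 0.03125 -- all exact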
 

Andre Engels

OK, so base 30 is the obvious choice, digits and letters, and 1/N works
for n in range(1, 7) + range(8, 11).  Gödel numbers, anyone? :)

To get even more working, use real rational numbers: p/q represented
by the pair of numbers (p,q) with p,q natural numbers. Then 1/N works
for every N, and up to any desired precision.
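
In Python that's the fractions module (new in 2.6), for example:

from fractions import Fraction

# Exact rational arithmetic: nothing is rounded in +, -, * or /.
print(Fraction(1, 3) * 3 == 1)                 # True
print(Fraction(16, 5) * 3)                     # 48/5, i.e. exactly 9.6
print(Fraction(16, 5) * 3 == Fraction(48, 5))  # True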
 

Robert Kern

Those that failed, learned. You only see those that haven't learnt yet.

Dialog between two teachers:
T1: Oh those pupils, I told them hundred times! when will they learn?
T2: They did, but there's always new pupils.

Unfortunately, I keep seeing people who claim to be old hands at floating point
making these unlearned remarks. I have no issue with neophytes like the OP
expecting different results and asking questions. It is those who answer them
with an air of authority that need to take a greater responsibility for knowing
what they are talking about. I lament the teachers, not the pupils.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
