Floating point multiplication in Python


xyz

hi all:

As we know, 1.1 * 1.1 is 1.21.
But in Python, I got the following:
1.2100000000000002

Why does Python get the wrong result? Who can tell me where the 0.0000000000000002 comes from?
 

Steven D'Aprano

hi all:

As we know, 1.1 * 1.1 is 1.21.
But in Python, I got the following:

1.2100000000000002


The problem is that 1.1 is a *decimal* number, but computers use *binary*,
and it is impossible to express 1.1 exactly as a binary fraction of finite
length. So when you type 1.1 into nearly all computer languages, what you
actually get is a tiny bit more than 1.1:
'1.1000000000000001'


which is the closest number to 1.1 that is possible using a C double
floating point number.

Normally you don't see it, because Python rounds the result when
printing:
'1.1'

but the difference is there.
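
One way to see the stored value for yourself (my addition, using only the standard library; Decimal(float) needs Python 2.7/3.1 or later, and on recent Pythons repr() itself shows only the shortest string that round-trips, so ask for more digits explicitly):

from decimal import Decimal

print("%.17g" % 1.1)   # 1.1000000000000001  (17 significant digits)
print(Decimal(1.1))    # 1.100000000000000088817841970012523233890533447265625,
                       # the exact value of the C double nearest to 1.1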
 

Thomas Rachel

On 06.09.2011 at 07:57, xyz wrote:
hi all:

As we know, 1.1 * 1.1 is 1.21.
But in Python, I got the following:

1.2100000000000002

Why does Python get the wrong result? Who can tell me where the 0.0000000000000002 comes from?

1.1 does not fit in a binary floating point number. It is approximated -
in binary! - as 1.000110011001100110011 ... (periodically).

Note that in the decimal system we normally use, only fractions whose
denominator contains prime factors other than 2 or 5 have a periodic
(repeating) expansion; in the binary system, only the factor 2 is allowed
if the expansion is to terminate.

Example: 3.453 is not periodic, because it is 3453/1000 and 1000 has
only the factors 2 and 5 (three of each).

1/3 = .3333333... is periodic, because the denominator has the factor 3.
The same applies to 1/6, which has 2 and 3 as factors; the 3 destroys the
non-periodic behaviour.

As said, in the binary system only the factor 2 is allowed.

.5 (10) = 2**-1 = .1 (2)
.25 (10) = 2**-2 = .01 (2)
.75 (10) = their sum = .11 (2)

But .1 (1/10) is more complicated; .2 (1/5) would be as well.
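
To check this "only prime factors of the base" rule with exact fractions, here is a small sketch of mine (not from the original post; it needs Python 3.5+ for math.gcd):

from fractions import Fraction
from math import gcd

def terminates_in_base(decimal_string, base):
    """True if the number has a finite expansion in the given base,
    i.e. its reduced denominator contains no other prime factors."""
    d = Fraction(decimal_string).denominator   # Fraction reduces to lowest terms
    while d > 1:
        g = gcd(d, base)
        if g == 1:
            return False   # a prime factor missing from the base -> periodic
        d //= g
    return True

print(terminates_in_base("3.453", 10))  # True:  3453/1000, and 1000 = 2**3 * 5**3
print(terminates_in_base("3.453", 2))   # False: the factor 5 makes it periodic in binary
print(terminates_in_base("0.1", 2))     # False: 1/10 is periodic in binary
print(terminates_in_base("0.75", 2))    # True:  3/4 = 0.11 in binary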

As the IEEE floating point representation is limited, there is a slight
error value which makes the stored value differ from the intended one.

Look here - each row shows the running approximation a, the error a - 1.1,
the bit of 1.1's binary expansion handled in that step, and the change that
adding it actually made to a:

1 -0.1 1 1
1.0 -0.1 0.0 0.0
1.0 -0.1 0.0 0.0
1.0 -0.1 0.0 0.0
1.0625 -0.0375 0.0625 0.0625
1.09375 -0.00625 0.03125 0.03125
1.09375 -0.00625 0.0 0.0
1.09375 -0.00625 0.0 0.0
1.09765625 -0.00234375 0.00390625 0.00390625
1.099609375 -0.000390625 0.001953125 0.001953125
1.099609375 -0.000390625 0.0 0.0
1.099609375 -0.000390625 0.0 0.0
1.09985351562 -0.000146484375 0.000244140625 0.000244140625
1.09997558594 -2.44140625001e-05 0.0001220703125 0.0001220703125
1.09997558594 -2.44140625001e-05 0.0 0.0
1.09997558594 -2.44140625001e-05 0.0 0.0
1.09999084473 -9.15527343759e-06 1.52587890625e-05 1.52587890625e-05
1.09999847412 -1.52587890634e-06 7.62939453125e-06 7.62939453125e-06
1.09999847412 -1.52587890634e-06 0.0 0.0
1.09999847412 -1.52587890634e-06 0.0 0.0
1.0999994278 -5.72204589933e-07 9.53674316406e-07 9.53674316406e-07
1.09999990463 -9.53674317294e-08 4.76837158203e-07 4.76837158203e-07
1.09999990463 -9.53674317294e-08 0.0 0.0
1.09999990463 -9.53674317294e-08 0.0 0.0
1.09999996424 -3.57627869541e-08 5.96046447754e-08 5.96046447754e-08
1.09999999404 -5.96046456636e-09 2.98023223877e-08 2.98023223877e-08
1.09999999404 -5.96046456636e-09 0.0 0.0
1.09999999404 -5.96046456636e-09 0.0 0.0
1.09999999776 -2.23517426789e-09 3.72529029846e-09 3.72529029846e-09
1.09999999963 -3.72529118664e-10 1.86264514923e-09 1.86264514923e-09
1.09999999963 -3.72529118664e-10 0.0 0.0
1.09999999963 -3.72529118664e-10 0.0 0.0
1.09999999986 -1.3969847501e-10 2.32830643654e-10 2.32830643654e-10
1.09999999998 -2.32831531832e-11 1.16415321827e-10 1.16415321827e-10
1.09999999998 -2.32831531832e-11 0.0 0.0
1.09999999998 -2.32831531832e-11 0.0 0.0
1.09999999999 -8.73123795486e-12 1.45519152284e-11 1.45519152284e-11
1.1 -1.45528034068e-12 7.27595761418e-12 7.27595761418e-12
1.1 -1.45528034068e-12 0.0 0.0
1.1 -1.45528034068e-12 0.0 0.0
1.1 -5.45785638906e-13 9.09494701773e-13 9.09494701773e-13
1.1 -9.10382880193e-14 4.54747350886e-13 4.54747350886e-13
1.1 -9.10382880193e-14 0.0 0.0
1.1 -9.10382880193e-14 0.0 0.0
1.1 -3.41948691585e-14 5.68434188608e-14 5.68434188608e-14
1.1 -5.77315972805e-15 2.84217094304e-14 2.84217094304e-14
1.1 -5.77315972805e-15 0.0 0.0
1.1 -5.77315972805e-15 0.0 0.0
1.1 -2.22044604925e-15 3.5527136788e-15 3.5527136788e-15
1.1 -4.4408920985e-16 1.7763568394e-15 1.7763568394e-15
1.1 -4.4408920985e-16 0.0 0.0
1.1 -4.4408920985e-16 0.0 0.0
1.1 -2.22044604925e-16 2.22044604925e-16 2.22044604925e-16
1.1 0.0 1.11022302463e-16 2.22044604925e-16
1.1 0.0 0.0 0.0
1.1 0.0 0.0 0.0
1.1 0.0 1.38777878078e-17 0.0
1.1 0.0 6.93889390391e-18 0.0

So here we have reached the maximum precision - a doesn't change any more,
although it should. The error is about 1E-16.
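
The script that produced the table above was not quoted; the following is a reconstruction of mine that builds 1.1 bit by bit and prints the same four columns (output formatting may differ slightly from the original):

from fractions import Fraction

target = Fraction(11, 10)    # the exact value 1.1
remainder = target
a = 0.0                      # float approximation, built bit by bit
bit = Fraction(1)            # current place value: 1, 1/2, 1/4, ...

for _ in range(58):
    if bit <= remainder:     # this binary digit of 1.1 is a 1
        remainder -= bit
        value = float(bit)
    else:                    # this binary digit of 1.1 is a 0
        value = 0.0
    old = a
    a += value               # accumulate in ordinary float arithmetic
    print("%.12g %.12g %.12g %.12g" % (a, a - 1.1, value, a - old))
    bit /= 2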

Now if you multiply two values which each carry an error, the error
propagates into the result - PLUS the multiplication can add its own
rounding error of the same order of magnitude.

(a+e) * (a+e) = a*a + 2*a*e + e*e. So your new error term is 2*a*e + e*e
or (2*a + e) * e.
 

Thomas 'PointedEars' Lahn

Thomas Rachel said:
Now if you multiply two values which each carry an error, the error
propagates into the result - PLUS the multiplication can add its own
rounding error of the same order of magnitude.

(a+e) * (a+e) = a*a + 2*a*e + e*e. So your new error term is 2*a*e + e*e
or (2*a + e) * e.

Your explanation about floating-point precision, which I already knew about
but have only scanned here – so it might be flawed as well –,
notwithstanding, it is not clear to me at all what you are trying to prove
there.

Computers (well, perhaps outside of mathematical software) do NOT compute an
equation as humans would do, so the binomial theorem does NOT apply. In an
algorithm of the real implementation,

(a + e) * (a + e)

would be computed as follows:

b := a + e
c := b * b
or
c := b + … + b

[add the value of `b' to the value of `b' (b−1) times, since binary logic
does not support multiplication]

IOW, the error does propagate into the result indeed, but not as you
described. Indeed, thanks to rounding on assignment and multiplication
(i. e., setting register values or IEEE-754 floating-point mantissa and
exponent), the error will be different, probably greater than you compute
here.
 

Steven D'Aprano

Your explanation about floating-point precision, which I already knew
about but have only scanned here – so it might be flawed as well –,
notwithstanding, it is not clear to me at all what you are trying to prove
there.

Computers (well, perhaps outside of mathematical software) do NOT compute
an equation as humans would do, so the binomial theorem does NOT apply.

I think you have misunderstood. The binomial theorem always applies. Any
time you multiply two numbers, both numbers can always be re-written as a
sum of two numbers:

10*5 = (6+4)*(2+3)

So a perfect square can always be re-written in the form where the binomial
theorem applies:

5*5 = (2+3)*(2+3)
25 = 2*2 + 2*3 + 3*2 + 3*3
25 = 4 + 6 + 6 + 9
25 = 25

The binomial theorem is not a part of the algorithm for performing
multiplication. It is part of the analysis of the errors that occur during
multiplication. The actual mechanics of how bits are flipped is irrelevant.

Any floating point number x should be considered as equal to (a+e), where a
is the number actually wanted by the user, and e the error term forced upon
the user by the use of binary floats. (If you're lucky, e=0.) Generally,
both a and e are unknown, but of course their sum is known -- it's just the
float x.

So given a float x, when you square it you get this:

Exact values: a*a = a**2

Float values: x*x = (a+e)(a+e)
= a**2 + 2*a*e + e**2

So the error term has increased from e to (2*a*e + e**2). It is usual to
assume that e**2 is small enough to be lost in rounding, so we have the
error term e increasing to 2*a*e as a fairly simple estimate of the new
error.

In an algorithm of the real implementation,

(a + e) * (a + e)

would be computed as follows:

b := a + e
c := b * b
or
c := b + … + b

[add the value of `b' to the value of `b' (b−1) times, since binary logic
does not support multiplication]

What you probably mean to say is that binary hardware usually implements
multiplication via repeated addition.

http://en.wikipedia.org/wiki/Binary_multiplier

If you don't mean that, I can't imagine what you are trying to say.
Multiplication of numbers exists in any base, binary no less than decimal.

IOW, the error does propagate into the result indeed, but not as you
described. Indeed, thanks to rounding on assignment and multiplication
(i. e., setting register values or IEEE-754 floating-point mantissa and
exponent), the error will be different, probably greater than you compute
here.

There may be other sources of error, particularly when multiplying two
numbers of greatly different magnitudes, but that doesn't invalidate Thomas
Rachel's point (which is only an estimate, of course).

We can see how much error is actually there by using exact arithmetic:

Error in float 1.1:
1/11258999068426240

Error in float 1.1*1.1:
21/112589990684262400

which is slightly more than double e above, and slightly less than our
estimate of 2*a*e = 11/56294995342131200

So we can conclude that, at least for 1.1**2, Python floats are more
accurate than we would expect from a simple application of the binomial
theorem. (For implementations using IEEE doubles.)
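
For reference, one way to reproduce those exact error figures is the standard fractions module (a sketch of mine, not necessarily how the figures above were produced; Fraction(float) needs Python 3.2+, older versions have Fraction.from_float):

from fractions import Fraction

e = Fraction(1.1) - Fraction(11, 10)             # error in the float 1.1
e2 = Fraction(1.1 * 1.1) - Fraction(121, 100)    # error in the float product
estimate = 2 * Fraction(11, 10) * e              # the simple 2*a*e estimate

print(e)          # 1/11258999068426240
print(e2)         # 21/112589990684262400
print(estimate)   # 11/56294995342131200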
 

Dennis Lee Bieber

What you probably mean to say is that binary hardware usually implements
multiplication via repeated addition.

http://en.wikipedia.org/wiki/Binary_multiplier
Technically, as shown by that article, binary multiplication is not what
is commonly thought of as "repeated addition" (the method used on old
adding machines, wherein one hit "+" n times [though one normally entered
a 0 for each place: rather than hitting "+" 20 times for 123 * 20, one
would enter 123, hit "+" 0 times, append a 0 {giving 1230}, then hit "+"
2 times]).

In binary, most of the partial products are the results of left shifts
(factors of 2), so "repeated addition" doesn't really apply... multiple
shift/add operations, on the other hand, do...
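
To make the shift/add idea concrete, here is a small illustrative sketch of mine (plain Python integers; it has nothing to do with the float rounding discussed above, it just shows how a binary multiplier combines shifted partial products):

def shift_add_multiply(x, y):
    """Multiply two non-negative integers the way a simple binary
    multiplier does: one shifted partial product per set bit of y."""
    result = 0
    shift = 0
    while y:
        if y & 1:                    # this bit of y contributes x << shift
            result += x << shift
        y >>= 1
        shift += 1
    return result

print(shift_add_multiply(123, 20))   # 2460 - only two partial products, not 20 additions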
 

Gelonida N

21/112589990684262400

which is slightly more than double e above, and slightly less than our
estimate of 2*a*e = 11/56294995342131200

So we can conclude that, at least for 1.1**2, Python floats are more
accurate than we would expect from a simple application of the binomial
theorem. (For implementations using IEEE doubles.)


The reason why the error is different from 2*a*e is that we encounter two
problems.

The first problem is that x = a + e: e exists because a float has only a
limited number of digits (let's call it N), while a has an infinite number
of non-zero digits in its binary form.

The second problem is that the result of the multiplication is not exactly
(a+e) * (a+e) but a 'rounded' version of it, because the exact product
would require about 2*N digits, whereas only N digits are stored in the
result.

Depending on which way that rounding went (up or down), the actual error
will be a little bigger or smaller than the estimated one.
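
A small sketch of mine (again using the fractions module) that separates the two error sources described above:

from fractions import Fraction

x = Fraction(1.1)                                    # exact value of the stored float 1.1
representation_error = x - Fraction(11, 10)          # first problem: 1.1 is stored inexactly
exact_square = x * x                                 # (a+e)*(a+e) computed exactly
rounding_error = Fraction(1.1 * 1.1) - exact_square  # second problem: the product is rounded again

print(representation_error)   # 1/11258999068426240
print(rounding_error)         # here the product happened to be rounded down, so this is negative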
 

Terry Reedy

So given a float x, when you square it you get this:

Exact values: a*a = a**2

Float values: x*x = (a+e)(a+e)
= a**2 + 2*a*e + e**2

So the error term has increased from e to (2*a*e + e**2). It is usual to
assume that e**2 is small enough to be lost in rounding, so we have the
error term e increasing to 2*a*e as a fairly simple estimate of the new
error.

And the relative error, which is what is often important, increases from
e/a to 2e/a.
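
And a quick check of that relative-error claim with exact fractions (my sketch, not from the thread):

from fractions import Fraction

a = Fraction(11, 10)                                   # the intended value
e = Fraction(1.1) - a                                  # error in the stored float
rel_before = e / a                                     # relative error of 1.1 itself
rel_after = (Fraction(1.1 * 1.1) - a * a) / (a * a)    # relative error of the product

print(float(rel_before))   # about 8.1e-17
print(float(rel_after))    # about 1.5e-16, roughly twice as large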
 
