# To get the accurate value of 1 - 0.999999999999999, how do I implement the Python algorithm?

Discussion in 'Python' started by iMath, Oct 8, 2012.

1. ### iMath (Guest)

To get the accurate value of 1 - 0.999999999999999, how do I implement the Python algorithm?
BTW, Windows's calculator gets the accurate value; does anyone know how to implement that?
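
A minimal illustration of the behaviour being asked about, comparing plain binary floats with the standard-library decimal module:

```python
from decimal import Decimal

# Binary floats cannot represent 0.999999999999999 exactly, so the
# subtraction result is only close to 1e-15, not exactly equal to it.
print(1 - 0.999999999999999)
print((1 - 0.999999999999999) == 1e-15)   # False

# Decimal values built from strings do the subtraction in base 10 exactly.
print(Decimal("1") - Decimal("0.999999999999999"))  # 1E-15
```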
iMath, Oct 8, 2012

2. ### Ulrich Eckhardt (Guest)

Am 08.10.2012 16:07, schrieb iMath:
> To get the accurate value of 1 - 0.999999999999999, how do I implement the Python algorithm?

Algorithms are generally language-agnostic, so what exactly is your question?

> BTW, Windows's calculator gets the accurate value; does anyone know how to implement that?

You should use a library that handles arbitrary-precision floating-point
numbers. Python's built-in floating-point type corresponds to C's double
type, which is typically an IEEE float with limited precision. Just
search the web for one. If you really want to do it yourself, you could
leverage the fact that Python's integer type has a dynamic size, so it
can represent numbers wider than the typical 32 or 64 bits.
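
As a sketch of that do-it-yourself route (the function names here are invented for illustration): decimal strings can be parsed into Python's unbounded ints, scaled by a fixed power of ten, so the subtraction itself is exact integer arithmetic.

```python
def to_scaled_int(s: str, places: int) -> int:
    """Parse a decimal literal like '0.999999999999999' into an integer
    scaled by 10**places, without ever touching binary floats."""
    sign = -1 if s.startswith('-') else 1
    s = s.lstrip('+-')
    whole, _, frac = s.partition('.')
    frac = (frac + '0' * places)[:places]   # pad/truncate to `places` digits
    return sign * (int(whole or '0') * 10 ** places + int(frac or '0'))

def exact_sub(a: str, b: str, places: int = 15) -> str:
    """Exact fixed-point subtraction of two decimal strings."""
    diff = to_scaled_int(a, places) - to_scaled_int(b, places)
    sign = '-' if diff < 0 else ''
    digits = str(abs(diff)).rjust(places + 1, '0')
    return f"{sign}{digits[:-places]}.{digits[-places:]}"

print(exact_sub("1", "0.999999999999999"))  # 0.000000000000001
```

This is essentially what the standard-library decimal module does for you, with proper rounding modes and configurable precision on top.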

BTW: If this is not a homework question, you should ask much more
specifically. My answers are intentionally vague so as not to spoil
the learning effect for you.

Cheers!

Uli
Ulrich Eckhardt, Oct 8, 2012

3. ### Dave Angel (Guest)

On 10/08/2012 10:07 AM, iMath wrote:
> To get the accurate value of 1 - 0.999999999999999, how do I implement the Python algorithm?
> BTW, Windows's calculator gets the accurate value; does anyone know how to implement that?

Windows calculator is an application, not a programming language. Like
all applications, it has to deal with the finite accuracy of the
underlying processor and language, and choose an algorithm that will
present results the way its users expect.

The Pentium chip (and its equivalents from AMD), used by Windows
machines and most others, has about 18 digits of accuracy in its binary
floating point math. However, being binary, the data has to be
converted from decimal to binary (when the user types it in) and binary
to decimal (when displaying it). Either of those conversions may have
quantization errors, and it's up to the program to deal with those or
other inaccuracies.

If you subtract two values, either of which may have quantization
errors, and they are quite close, then the apparent error is
magnified. Out of your 18 digits of internal accuracy, you now have only
a handful of significant digits left.

Therefore many programs that are more concerned with apparent accuracy
will ignore binary floating point and do their work in decimal. That
doesn't eliminate calculation errors, only quantization errors, but it
makes the user think he is getting more accuracy than he really is.

Since that seems to be your goal, I suggest you look into the Decimal
class, located in the stdlib module decimal.

import decimal
a = decimal.Decimal(4.3)
print(a)

5.0999999999999996447286321199499070644378662109375

Note that you still seem to have some "error" since the value 4.3 is a binary float, and has already been quantized. If you want to avoid the binary stuff entirely, try going directly from string to Decimal.

b = decimal.Decimal("5.1")
print(b)

5.1

Back to your original contrived example,

c = decimal.Decimal("1.0")
d = decimal.Decimal("0.999999999999999")
print(c-d)

1E-15

The Decimal class has the disadvantage that it's tons slower on any modern machine I know of, but the advantage that you can specify how much precision you need it to use. It doesn't eliminate errors at all, just one class of them.
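
For example (a sketch): the precision Decimal works at lives in a context object, defaulting to 28 significant digits, and can be raised locally without disturbing the rest of the program.

```python
import decimal

# Default context: 28 significant digits.
print(decimal.Decimal(1) / decimal.Decimal(3))   # 0.3333333333333333333333333333

# Raise the precision for a local block only.
with decimal.localcontext() as ctx:
    ctx.prec = 50
    print(decimal.Decimal(1) / decimal.Decimal(3))  # fifty 3s after the point
```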

e = decimal.Decimal("3.0")
print(c/e)

0.3333333333333333333333333333

That of course is the wrong answer. The "right" answer would never stop printing. We still have a finite number of digits.

print(c/e*e)

0.9999999999999999999999999999

"Fixing" this is subject for another lesson, someday.

--

DaveA
Dave Angel, Oct 8, 2012
4. ### Chris Angelico (Guest)

On Tue, Oct 9, 2012 at 1:48 AM, Dave Angel <> wrote:
> import decimal
> a = decimal.Decimal(4.3)
> print(a)
>
> 5.0999999999999996447286321199499070644378662109375

Ah, the delights of copy-paste

> The Decimal class has the disadvantage that it's tons slower on any modern machine I know of...

Isn't it true, though, that Python 3.3 has a completely new
implementation of decimal that largely removes this disadvantage?

ChrisA
Chris Angelico, Oct 8, 2012
5. ### Dave Angel (Guest)

On 10/08/2012 11:00 AM, Chris Angelico wrote:
> On Tue, Oct 9, 2012 at 1:48 AM, Dave Angel <> wrote:
>> import decimal
>> a = decimal.Decimal(4.3)
>> print(a)
>>
>> 5.0999999999999996447286321199499070644378662109375

> Ah, the delights of copy-paste
>
>> The Decimal class has the disadvantage that it's tons slower on any modern machine I know of...

> Isn't it true, though, that Python 3.3 has a completely new
> implementation of decimal that largely removes this disadvantage?
>
> ChrisA

I wouldn't know, I'm on 3.2. However, I sincerely doubt if it's within
a factor of 100 of the speed of the binary float, at least on
pentium-class machines that do binary float in microcode. A dozen years
or so ago, when the IEEE floating point standard was still being formed,
I tried to argue the committee into including decimal in the standard
(which they did much later). Had it been in the standard then, we MIGHT
have had decimal fp on chip as well as binary. Then again, the standard
was roughly based on the already-existing Intel 8087, so maybe it was
just hopeless.

I guess it's possible that for some operations, the cost of the
byte-code interpreter and function lookup, etc. might reduce the
apparent penalty. Has anybody done any timings?

--

DaveA
Dave Angel, Oct 8, 2012
6. ### Chris Angelico (Guest)

On Tue, Oct 9, 2012 at 2:13 AM, Dave Angel <> wrote:
> On 10/08/2012 11:00 AM, Chris Angelico wrote:
>> On Tue, Oct 9, 2012 at 1:48 AM, Dave Angel <> wrote:
>>> The Decimal class has the disadvantage that it's tons slower on any modern machine I know of...

>> Isn't it true, though, that Python 3.3 has a completely new
>> implementation of decimal that largely removes this disadvantage?
>>
>> ChrisA

>
> I wouldn't know, I'm on 3.2. However, I sincerely doubt if it's within
> a factor of 100 of the speed of the binary float, at least on
> pentium-class machines that do binary float in microcode. A dozen years
> or so ago, when the IEEE floating point standard was still being formed,
> I tried to argue the committee into including decimal in the standard
> (which they did much later). Had it been in the standard then, we MIGHT
> have had decimal fp on chip as well as binary. Then again, the standard
> was roughly based on the already-existing Intel 8087, so maybe it was
> just hopeless.
>
> I guess it's possible that for some operations, the cost of the
> byte-code interpreter and function lookup, etc. might reduce the
> apparent penalty. Has anybody done any timings?

Try this, from python-dev list:

http://mail.python.org/pipermail/python-dev/2012-September/121832.html

It's not as fast as float, but it sure gives a good account of itself.

ChrisA
Chris Angelico, Oct 8, 2012
7. ### Terry Reedy (Guest)

On 10/8/2012 11:13 AM, Dave Angel wrote:

>> Isn't it true, though, that Python 3.3 has a completely new
>> implementation of decimal that largely removes this disadvantage?

> I wouldn't know, I'm on 3.2. However, I sincerely doubt if it's within
> a factor of 100 of the speed of the binary float, at least on

>>> import timeit as tt
>>> tt.repeat("float('1.0')-float('0.9999999999')")
[0.6856039948871151, 0.669049830953858, 0.668688006423692]
>>> tt.repeat("Decimal('1.0')-Decimal('0.9999999999')", "from decimal import Decimal")
[1.3204655578092428, 1.286977575486688, 1.2893188292009938]

>>> tt.repeat("a-b", "a = 1.0; b=0.9999999999")
[0.06100386171601713, 0.044538539999592786, 0.04451548406098027]
>>> tt.repeat("a-b", "from decimal import Decimal as D; a = D('1.0'); b = D('0.9999999999')")
[0.14685526219517442, 0.12909696344064514, 0.12646059371189722]

A factor of 3, as S. Krah, the cdecimal author, claimed.
--
Terry Jan Reedy
Terry Reedy, Oct 9, 2012
8. ### Dave Angel (Guest)

On 10/08/2012 09:45 PM, Terry Reedy wrote:
> On 10/8/2012 11:13 AM, Dave Angel wrote:
>
>>> Isn't it true, though, that Python 3.3 has a completely new
>>> implementation of decimal that largely removes this disadvantage?

>
>> I wouldn't know, I'm on 3.2. However, I sincerely doubt if it's within
>> a factor of 100 of the speed of the binary float, at least on

>
> >>> import timeit as tt
> >>> tt.repeat("float('1.0')-float('0.9999999999')")
> [0.6856039948871151, 0.669049830953858, 0.668688006423692]
> >>> tt.repeat("Decimal('1.0')-Decimal('0.9999999999')", "from decimal import Decimal")
> [1.3204655578092428, 1.286977575486688, 1.2893188292009938]
>
> >>> tt.repeat("a-b", "a = 1.0; b=0.9999999999")
> [0.06100386171601713, 0.044538539999592786, 0.04451548406098027]
> >>> tt.repeat("a-b", "from decimal import Decimal as D; a = D('1.0'); b = D('0.9999999999')")
> [0.14685526219517442, 0.12909696344064514, 0.12646059371189722]
>
> A factor of 3, as S. Krah, the cdecimal author, claimed

I concede the point. But I was "sincere" in my doubt.

What I'm curious about now is 1) how much the various operators vary in
that 3:1 ratio, and 2) how much of that time the overhead portions
consume.

I have to assume that timeit.repeat doesn't count the time spent in its
second argument, right? Because converting a string to a Decimal should
be much faster than converting one to float. But what about the
overhead of eval(), or whatever it uses? Is the "a-b" converted to byte
code just once, or is it recompiled each time through the loop?

I have to admit not spending much time in timeit(); I usually end up
timing things with my own loops. So I'd really like to understand how
it works.
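
For what it's worth, a sketch of how timeit behaves on those points: the setup statement runs once per timing run and its cost is excluded from the reported time, while the timed statement is compiled once and then executed `number` times inside a tight loop.

```python
import timeit

# `setup` executes once per call to repeat(); its cost is not included
# in the measurement. `stmt` is compiled a single time and then run
# `number` times in a loop, so per-iteration compile overhead is zero.
t = timeit.Timer(stmt="a - b", setup="a = 1.0; b = 0.9999999999")
per_loop = min(t.repeat(repeat=3, number=100_000)) / 100_000
print(f"{per_loop * 1e9:.1f} ns per subtraction (including loop overhead)")
```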

--

DaveA
Dave Angel, Oct 9, 2012
9. ### Steven D'Aprano (Guest)

Re: To get the accurate value of 1 - 0.999999999999999, how do I implement the Python algorithm?

On Tue, 09 Oct 2012 02:00:04 +1100, Chris Angelico wrote:

> On Tue, Oct 9, 2012 at 1:48 AM, Dave Angel <> wrote:
>> import decimal
>> a = decimal.Decimal(4.3)
>> print(a)
>>
>> 5.0999999999999996447286321199499070644378662109375

>
> Ah, the delights of copy-paste
>
>> The Decimal class has the disadvantage that it's tons slower on any
>> modern machine I know of...

>
> Isn't it true, though, that Python 3.3 has a completely new
> implementation of decimal that largely removes this disadvantage?

Yes. It's blazingly fast: up to 120 times faster than the pure Python
version, and within an order of magnitude of the speed of binary floats:

[steve@ando ~]$ python3.3 -m timeit -s "x, y = 1001.0, 978.0" "x+y-(x/y)**4"
1000000 loops, best of 3: 0.509 usec per loop

[steve@ando ~]$ python3.3 -m timeit -s "from decimal import Decimal" -s "x, y = Decimal(1001), Decimal(978)" "x+y-(x/y)**4"
100000 loops, best of 3: 3.58 usec per loop

Without hardware support, Decimal will probably never be quite as fast as
binary floats, but it's fast enough for all but the most demanding needs.

--
Steven
Steven D'Aprano, Oct 9, 2012