Python math is off by .000000000000045

Discussion in 'Python' started by Alec Taylor, Feb 22, 2012.

  1. Alec Taylor

    Alec Taylor Guest

    Simple mathematical problem, + and - only:

    >>> 1800.00-1041.00-555.74+530.74-794.95
    -60.950000000000045

    That's wrong.

    Proof
    http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
    -60.95 aka (-(1219/20))

    Is there a reason Python math is only approximated? - Or is this a bug?

    Thanks for all info,

    Alec Taylor
    Alec Taylor, Feb 22, 2012
    #1

  2. nn

    nn Guest

    On Feb 22, 1:13 pm, Alec Taylor <> wrote:
    > Simple mathematical problem, + and - only:
    >
    > >>> 1800.00-1041.00-555.74+530.74-794.95
    > -60.950000000000045
    >
    > That's wrong.
    >
    > Proof
    > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
    > -60.95 aka (-(1219/20))
    >
    > Is there a reason Python math is only approximated? - Or is this a bug?
    >
    > Thanks for all info,
    >
    > Alec Taylor


    I get the right answer if I use the right datatype:

    >>> import decimal
    >>> D = decimal.Decimal
    >>> D('1800.00')-D('1041.00')-D('555.74')+D('530.74')-D('794.95')
    Decimal('-60.95')
    nn, Feb 22, 2012
    #2

  3. Alec Taylor writes:

    > Simple mathematical problem, + and - only:
    >
    > >>> 1800.00-1041.00-555.74+530.74-794.95
    > -60.950000000000045
    >
    > That's wrong.


    Not by much. I'm not an expert, but my guess is that the exact value
    is not representable in binary floating point, which most programming
    languages use for this. Ah, indeed:

    >>> 0.95
    0.94999999999999996

    Some languages hide the error by printing fewer decimals than they use
    internally.
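[Editor's note: the hiding can be reproduced explicitly in Python with string formatting; a minimal sketch, assuming CPython's 64-bit binary floats:]

```python
x = 0.95

# 17 decimal places expose the stored binary approximation
# (the same value the old repr() shows above).
print("%.17f" % x)   # 0.94999999999999996

# Two decimal places round the error away for display.
print("%.2f" % x)    # 0.95
```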

    > Proof
    > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
    > -60.95 aka (-(1219/20))
    >
    > Is there a reason Python math is only approximated? - Or is this a bug?


    There are practical reasons. Do learn about "floating point".

    There is a price to pay, but you can have exact rational arithmetic in
    Python when you need or want it - I folded the long lines by hand
    afterwards:

    >>> from fractions import Fraction
    >>> 1800 - 1041 - Fraction(55574, 100) + Fraction(53074, 100)
    - Fraction(79495, 100)
    Fraction(-1219, 20)
    >>> -1219/20
    -61
    >>> -1219./20
    -60.950000000000003
    >>> float(1800 - 1041 - Fraction(55574, 100) + Fraction(53074, 100)
    - Fraction(79495, 100))
    -60.950000000000003
    Jussi Piitulainen, Feb 22, 2012
    #3
  4. On 2012-02-22, Alec Taylor <> wrote:

    > Simple mathematical problem, + and - only:
    >
    >>>> 1800.00-1041.00-555.74+530.74-794.95
    > -60.950000000000045
    >
    > That's wrong.


    Oh good. We haven't had this thread for several days.

    > Proof
    > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74+530.74-794.95
    > -60.95 aka (-(1219/20))
    >
    > Is there a reason Python math is only approximated?


    http://docs.python.org/tutorial/floatingpoint.html

    Python uses binary floating point with a fixed size (64 bit IEEE-754
    on all the platforms I've ever run across). Floating point numbers
    are only approximations of real numbers. For every floating point
    number there is a corresponding real number, but 0% of real numbers
    can be represented exactly by floating point numbers.

    > - Or is this a bug?


    No, it's how floating point works.

    If you want something else, then perhaps you should use rationals or
    decimals:

    http://docs.python.org/library/fractions.html
    http://docs.python.org/library/decimal.html
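[Editor's note: a side-by-side sketch of the three options on the original expression, with Fraction spelled using integer numerators as in the earlier post:]

```python
from fractions import Fraction
from decimal import Decimal

# Binary floats: inexact, the error from the thread's first post.
f = 1800.00 - 1041.00 - 555.74 + 530.74 - 794.95

# Decimals built from strings: exact decimal arithmetic.
d = (Decimal('1800.00') - Decimal('1041.00') - Decimal('555.74')
     + Decimal('530.74') - Decimal('794.95'))

# Rationals: exact, automatically reduced.
r = (1800 - 1041 - Fraction(55574, 100) + Fraction(53074, 100)
     - Fraction(79495, 100))

print(f)   # -60.950000000000045
print(d)   # -60.95
print(r)   # -1219/20
```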

    --
    Grant Edwards               grant.b.edwards        Yow! What I want to find
                                  at                     out is -- do parrots know
                                  gmail.com              much about Astro-Turf?
    Grant Edwards, Feb 22, 2012
    #4
  5. Tobiah

    Tobiah Guest

    > For every floating point
    > number there is a corresponding real number, but 0% of real numbers
    > can be represented exactly by floating point numbers.


    It seems to me that there are a great many real numbers that can be
    represented exactly by floating point numbers. The number 1 is an
    example.

    I suppose that if you divide that count by the infinite count of all
    real numbers, you could argue that the result is 0%.
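[Editor's note: which decimals make it into a float exactly can be checked directly; Fraction(x) on a float recovers the dyadic rational actually stored:]

```python
from fractions import Fraction

# 1.0 and 0.5 have power-of-two denominators, so they are stored exactly.
assert Fraction(1.0) == 1
assert Fraction(0.5) == Fraction(1, 2)

# 0.1 does not: the stored value is the nearest 53-bit dyadic rational.
assert Fraction(0.1) != Fraction(1, 10)
print(Fraction(0.1))   # 3602879701896397/36028797018963968
```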
    Tobiah, Feb 25, 2012
    #5
  6. Tim Wintle

    Tim Wintle Guest

    On Sat, 2012-02-25 at 09:56 -0800, Tobiah wrote:
    > > For every floating point
    > > number there is a corresponding real number, but 0% of real numbers
    > > can be represented exactly by floating point numbers.
    >
    > It seems to me that there are a great many real numbers that can be
    > represented exactly by floating point numbers. The number 1 is an
    > example.
    >
    > I suppose that if you divide that count by the infinite count of all
    > real numbers, you could argue that the result is 0%.


    It's not just an argument - it's mathematically correct.

    The same can be said for ints representing the natural numbers, or
    positive integers.

    However, ints can represent 100% of integers within a specific range,
    where floats can't represent all real numbers for any range (except for
    the empty set) - because there's an infinite number of real numbers
    within any non-trivial range.
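[Editor's note: the contrast can be made concrete. Python ints are exact at any magnitude, while a 53-bit significand means consecutive integers above 2**53 collapse to the same float:]

```python
big = 2 ** 53

# As ints, big and big + 1 are distinct, exact values.
assert big + 1 != big

# As floats, big + 1 rounds back to big (the spacing between
# adjacent floats at this magnitude is 2), so they collide.
assert float(big + 1) == float(big)
print(float(big + 1))
```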


    Tim
    Tim Wintle, Feb 25, 2012
    #6
  7. Terry Reedy

    Terry Reedy Guest

    On 2/25/2012 12:56 PM, Tobiah wrote:

    > It seems to me that there are a great many real numbers that can be
    > represented exactly by floating point numbers. The number 1 is an
    > example.


    Binary floats can represent any integer and any fraction with a
    denominator of 2**n within certain ranges. For decimal floats,
    substitute 10**n, or more exactly 2**j * 5**k, since if j < k,
    n / (2**j * 5**k) = (n * 2**(k-j)) / 10**k, and similarly if j > k.
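[Editor's note: a small sketch of that test. A reduced fraction is exact as a binary float iff its denominator is a power of 2, and exact as a decimal iff the denominator factors over 2s and 5s only:]

```python
from fractions import Fraction

def exact_in_base(frac, primes):
    """True if frac's reduced denominator factors entirely over primes."""
    d = frac.denominator
    for p in primes:
        while d % p == 0:
            d //= p
    return d == 1

# 1/8 = 1/2**3: exact in binary (and in decimal).
assert exact_in_base(Fraction(1, 8), (2,))
# 1/10 = 1/(2*5): exact in decimal but not in binary.
assert not exact_in_base(Fraction(1, 10), (2,))
assert exact_in_base(Fraction(1, 10), (2, 5))
# 1/3: exact in neither.
assert not exact_in_base(Fraction(1, 3), (2, 5))
```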

    --
    Terry Jan Reedy
    Terry Reedy, Feb 25, 2012
    #7
  8. jmfauth

    jmfauth Guest

    >>> (2.0).hex()
    '0x1.0000000000000p+1'
    >>> (4.0).hex()
    '0x1.0000000000000p+2'
    >>> (1.5).hex()
    '0x1.8000000000000p+0'
    >>> (1.1).hex()
    '0x1.199999999999ap+0'
    >>>


    jmf
    jmfauth, Feb 25, 2012
    #8
  9. On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:

    >>>> (2.0).hex()
    > '0x1.0000000000000p+1'
    >>>> (4.0).hex()
    > '0x1.0000000000000p+2'
    >>>> (1.5).hex()
    > '0x1.8000000000000p+0'
    >>>> (1.1).hex()
    > '0x1.199999999999ap+0'
    >
    > jmf


    What's your point? I'm afraid my crystal ball is out of order and I have
    no idea whether you have a question or are just demonstrating your
    mastery of copy and paste from the Python interactive interpreter.



    --
    Steven
    Steven D'Aprano, Feb 25, 2012
    #9
  10. On Sat, Feb 25, 2012 at 2:08 PM, Tim Wintle <> wrote:
    > > It seems to me that there are a great many real numbers that can be
    > > represented exactly by floating point numbers. The number 1 is an
    > > example.
    > >
    > > I suppose that if you divide that count by the infinite count of all
    > > real numbers, you could argue that the result is 0%.
    >
    > It's not just an argument - it's mathematically correct.


    ^ this

    The floating point numbers are a finite set. Any infinite set, even
    the rationals, is too big to have "many" floats relative to the whole,
    as in the percentage sense.

    ----

    In fact, any number we can reasonably deal with must have some finite
    representation, even if the decimal expansion has an infinite number
    of digits. We can work with pi, for example, because there are
    algorithms that can enumerate all the digits up to some precision. But
    we can't really work with a number for which no algorithm can
    enumerate the digits, and for which there are infinitely many digits.
    Most of the real numbers (in some sense involving infinities, which is
    to say, one that is not really intuitive) cannot in any way or form be
    represented in a finite amount of space, so most of them can't be
    worked on by computers. They only exist in any sense because it's
    convenient to pretend they exist for mathematical purposes, not for
    computational purposes.

    What this boils down to is to say that, basically by definition, the
    set of numbers representable in some finite number of binary digits is
    countable (just count up in binary value). But the whole of the real
    numbers are uncountable. The hard part is then accepting that some
    countable thing is 0% of an uncountable superset. I don't really know
    of any "proof" of that latter thing, it's something I've accepted
    axiomatically and then worked out backwards from there. But surely
    it's obvious, somehow, that the set of finite strings is tiny compared
    to the set of infinite strings? If we look at binary strings,
    representing numbers, the reals could be encoded as the union of the
    two, and by far most of them would be infinite.


    Anyway, all that aside, the real numbers are kind of dumb.

    -- Devin
    Devin Jeanpierre, Feb 26, 2012
    #10
  11. Terry Reedy

    Terry Reedy Guest

    On 2/25/2012 9:49 PM, Devin Jeanpierre wrote:


    > What this boils down to is to say that, basically by definition, the
    > set of numbers representable in some finite number of binary digits is
    > countable (just count up in binary value). But the whole of the real
    > numbers are uncountable. The hard part is then accepting that some
    > countable thing is 0% of an uncountable superset. I don't really know
    > of any "proof" of that latter thing, it's something I've accepted
    > axiomatically and then worked out backwards from there.


    Informally, if the infinity of counts were some non-zero fraction f of
    the reals, then there would, in some sense, be 1/f times as many reals
    as counts, so the count could be expanded to count 1/f reals for each
    real counted before, and the reals would be countable. But Cantor
    showed that the reals are not countable.

    But as you said, this is all irrelevant for computing. Since the number
    of finite strings is practically finite, so is the number of algorithms.
    And even a countable number of algorithms would be a fraction 0, for
    instance, of the uncountable predicate functions on 0, 1, 2, ... . So we
    do what we actually can that is of interest.

    --
    Terry Jan Reedy
    Terry Reedy, Feb 26, 2012
    #11
  12. jmfauth

    jmfauth Guest

    On Feb 25, 23:51, Steven D'Aprano <steve
    > wrote:
    > On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
    > >>>> (2.0).hex()
    > > '0x1.0000000000000p+1'
    > >>>> (4.0).hex()
    > > '0x1.0000000000000p+2'
    > >>>> (1.5).hex()
    > > '0x1.8000000000000p+0'
    > >>>> (1.1).hex()
    > > '0x1.199999999999ap+0'
    > >
    > > jmf
    >
    > What's your point? I'm afraid my crystal ball is out of order and I have
    > no idea whether you have a question or are just demonstrating your
    > mastery of copy and paste from the Python interactive interpreter.

    It should be enough to indicate the right direction
    for casual interested readers.
    jmfauth, Feb 26, 2012
    #12
  13. John Ladasky

    John Ladasky Guest

    Curiosity prompts me to ask...

    Those of you who program in other languages regularly: if you visit
    comp.lang.java, for example, do people ask this question about
    floating-point arithmetic in that forum? Or in comp.lang.perl?

    Is there something about Python that exposes the uncomfortable truth
    about practical computer arithmetic that these other languages
    obscure? For of course, arithmetic is surely no less accurate in
    Python than in any other computing language.

    I always found it helpful to ask someone who is confused by this issue
    to imagine what the binary representation of the number 1/3 would be.

    0.011 to three binary digits of precision:
    0.0101 to four:
    0.01011 to five:
    0.010101 to six:
    0.0101011 to seven:
    0.01010101 to eight:

    And so on, forever. So, what if you want to do some calculator-style
    math with the number 1/3, that will not require an INFINITE amount of
    time? You have to round. Rounding introduces errors. The more
    binary digits you use for your numbers, the smaller those errors will
    be. But those errors can NEVER reach zero in finite computational
    time.

    If ALL the numbers you are using in your computations are rational
    numbers, you can use Python's rational and/or decimal modules to get
    error-free results. Learning to use them is a bit of a specialty.

    But for those of us who end up with numbers like e, pi, or the square
    root of 2 in our calculations, the compromise of rounding must be
    accepted.
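[Editor's note: the rounded expansions listed above can be generated mechanically; a sketch that rounds 1/3 to n binary fraction digits:]

```python
# Round 1/3 to n binary fraction digits, reproducing the list above.
for n in range(3, 9):
    nearest = round(2 ** n / 3)        # best n-bit numerator for 1/3
    bits = bin(nearest)[2:].zfill(n)   # its binary digit string
    print("0." + bits, "=", nearest / 2 ** n)
```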
    John Ladasky, Feb 27, 2012
    #13
  14. Terry Reedy

    Terry Reedy Guest

    On 2/26/2012 7:24 PM, John Ladasky wrote:

    > I always found it helpful to ask someone who is confused by this issue
    > to imagine what the binary representation of the number 1/3 would be.
    >
    > 0.011 to three binary digits of precision:
    > 0.0101 to four:
    > 0.01011 to five:
    > 0.010101 to six:
    > 0.0101011 to seven:
    > 0.01010101 to eight:
    >
    > And so on, forever. So, what if you want to do some calculator-style
    > math with the number 1/3, that will not require an INFINITE amount of
    > time? You have to round. Rounding introduces errors. The more
    > binary digits you use for your numbers, the smaller those errors will
    > be. But those errors can NEVER reach zero in finite computational
    > time.


    Ditto for 1/3 in decimal.
    ....
    0.33333333 to eight

    > If ALL the numbers you are using in your computations are rational
    > numbers, you can use Python's rational and/or decimal modules to get
    > error-free results.


    Decimal floats are about as error prone as binary floats. One can
    exactly represent only a subset of rationals, those of the form
    n / (2**j * 5**k). For a fixed number of bits of storage, they are
    'lumpier'. For any fixed precision, the arithmetic issues are the same.

    The decimal module decimals have three advantages (sometimes) over floats.

    1. Variable precision - but there are multiple-precision floats also
    available outside the stdlib.

    2. They better imitate calculators - but that is irrelevant or a minus
    for scientific calculation.

    3. They better follow accounting rules for financial calculation,
    including a multiplicity of rounding rules. Some of these are laws that
    *must* be followed to avoid nasty consequences. This is the main reason
    for being in the stdlib.
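[Editor's note: a sketch of point 3. The decimal module lets you name the rounding rule per operation, which binary floats don't:]

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

amount = Decimal("2.5")

# "Schoolbook" rounding, often mandated by financial rules: ties go up.
assert amount.quantize(Decimal("1"), rounding=ROUND_HALF_UP) == Decimal("3")

# Banker's rounding, the module's default: ties go to the even digit.
assert amount.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) == Decimal("2")
```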

    > Learning to use them is a bit of a specialty.


    Definitely true.

    --
    Terry Jan Reedy
    Terry Reedy, Feb 27, 2012
    #14
  15. On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote:

    > Curiosity prompts me to ask...
    >
    > Those of you who program in other languages regularly: if you visit
    > comp.lang.java, for example, do people ask this question about
    > floating-point arithmetic in that forum? Or in comp.lang.perl?


    Yes.

    http://stackoverflow.com/questions/588004/is-javascripts-math-broken

    And look at the "Linked" sidebar. Obviously StackOverflow users no more
    search the internet for the solutions to their problems than do
    comp.lang.python posters.


    http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error



    --
    Steven
    Steven D'Aprano, Feb 27, 2012
    #15
  16. On 2012-02-27, Steven D'Aprano <> wrote:
    > On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote:
    >
    >> Curiosity prompts me to ask...
    >>
    >> Those of you who program in other languages regularly: if you visit
    >> comp.lang.java, for example, do people ask this question about
    >> floating-point arithmetic in that forum? Or in comp.lang.perl?

    >
    > Yes.
    >
    > http://stackoverflow.com/questions/588004/is-javascripts-math-broken
    >
    > And look at the "Linked" sidebar. Obviously StackOverflow users no
    > more search the internet for the solutions to their problems than do
    > comp.lang.python posters.
    >
    > http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error


    One might wonder if the frequency of such questions decreases as the
    programming language becomes "lower level" (e.g. C or assembly).

    --
    Grant Edwards               grant.b.edwards        Yow! World War III?
                                  at                     No thanks!
                                  gmail.com
    Grant Edwards, Feb 27, 2012
    #16
  17. On 02/27/2012 08:02 AM, Grant Edwards wrote:
    > On 2012-02-27, Steven D'Aprano <> wrote:
    >> On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote:
    >>
    >>> Curiosity prompts me to ask...
    >>>
    >>> Those of you who program in other languages regularly: if you visit
    >>> comp.lang.java, for example, do people ask this question about
    >>> floating-point arithmetic in that forum? Or in comp.lang.perl?

    >>
    >> Yes.
    >>
    >> http://stackoverflow.com/questions/588004/is-javascripts-math-broken
    >>
    >> And look at the "Linked" sidebar. Obviously StackOverflow users no
    >> more search the internet for the solutions to their problems than do
    >> comp.lang.python posters.
    >>
    >> http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error

    >
    > One might wonder if the frequency of such questions decreases as the
    > programming language becomes "lower level" (e.g. C or assembly).


    I think that most of the math use cases in C or assembly are
    integer-based only: counting, bit-twiddling, addressing character
    cells or pixel coordinates, etc. Maybe when programmers have to
    statically declare a variable type in advance, and the common use
    cases require only integers, integer types get used far more, so
    experiences with float happen less often. Some of this could have to
    do with the fact that historically floating-point math required a
    special library, and since a lot of people didn't have
    floating-point coprocessors back then, most code was integer-only.

    Early BASIC interpreters defaulted to floating point for everything, and
    implemented all the floating point arithmetic internally with integer
    arithmetic, without the help of the x87 processor, but no doubt they did
    round the results when printing to the screen. They also did not have
    very much precision to begin with. Anyone remember Microsoft's
    proprietary floating-point binary format, and the function calls to
    convert back and forth between it and the IEEE standard?

    Another key thing is that most C programmers don't normally just print
    out floating point numbers without a %.2f kind of notation that properly
    rounds a number.

    Now, of course, every processor has a floating-point unit, and the C
    compilers can generate code that uses it just as easily as integer code.

    No matter what language, or what floating point scheme you use,
    significant digits is definitely important to understand!
    Michael Torrie, Feb 27, 2012
    #17
  18. Ethan Furman

    Ethan Furman Guest

    jmfauth wrote:
    > On Feb 25, 23:51, Steven D'Aprano <steve
    > > wrote:
    >> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
    >>>>>> (2.0).hex()
    >>> '0x1.0000000000000p+1'
    >>>>>> (4.0).hex()
    >>> '0x1.0000000000000p+2'
    >>>>>> (1.5).hex()
    >>> '0x1.8000000000000p+0'
    >>>>>> (1.1).hex()
    >>> '0x1.199999999999ap+0'
    >>> jmf

    >> What's your point? I'm afraid my crystal ball is out of order and I have
    >> no idea whether you have a question or are just demonstrating your
    >> mastery of copy and paste from the Python interactive interpreter.

    >
    > It should be enough to indicate the right direction
    > for casual interested readers.


    I'm a casual interested reader and I have no idea what your post is
    trying to say.

    ~Ethan~
    Ethan Furman, Feb 27, 2012
    #18
  19. On 02/27/2012 10:28 AM, Ethan Furman wrote:
    > jmfauth wrote:
    >> On Feb 25, 23:51, Steven D'Aprano <steve
    >> > wrote:
    >>> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
    >>>>>>> (2.0).hex()
    >>>> '0x1.0000000000000p+1'
    >>>>>>> (4.0).hex()
    >>>> '0x1.0000000000000p+2'
    >>>>>>> (1.5).hex()
    >>>> '0x1.8000000000000p+0'
    >>>>>>> (1.1).hex()
    >>>> '0x1.199999999999ap+0'
    >>>> jmf
    >>> What's your point? I'm afraid my crystal ball is out of order and I have
    >>> no idea whether you have a question or are just demonstrating your
    >>> mastery of copy and paste from the Python interactive interpreter.

    >>
    >> It should be enough to indicate the right direction
    >> for casual interested readers.

    >
    > I'm a casual interested reader and I have no idea what your post is
    > trying to say.


    He's simply showing you the hex (binary) representation of the
    floating-point number's binary representation. As you can clearly see
    in the case of 1.1, there is no finite sequence that can store that.
    You end up with repeating numbers. Just as 1/3, when represented in
    base 10 fractions (x1/10 + x2/100 + x3/1000, etc.), is a repeating
    sequence, base 10 numbers like 1.1 or 0.2, and many others that are
    represented by exact base 10 fractions, end up as repeating sequences in
    base 2 fractions. This should help you understand why you get errors:
    something as simple as x/y*y doesn't quite get you back to x.
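[Editor's note: the x/y*y effect is easy to observe; a quick scan for small integer divisors where the round trip fails:]

```python
# For some integers y, the rounding in 1.0/y followed by the rounding
# in (1.0/y)*y leaves a result one ulp away from 1.0.
failures = [y for y in range(1, 100) if (1.0 / y) * y != 1.0]
print(failures)   # e.g. 49 is among them
```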
    Michael Torrie, Feb 28, 2012
    #19
  20. Ethan Furman

    Ethan Furman Guest

    Michael Torrie wrote:
    > He's simply showing you the hex (binary) representation of the
    > floating-point number's binary representation. As you can clearly see
    > in the case of 1.1, there is no finite sequence that can store that.
    > You end up with repeating numbers.


    Thanks for the explanation.

    > This should help you understand why you get errors
    > doing simple things like x/y*y doesn't quite get you back to x.


    I already understood that. I just didn't understand what point he was
    trying to make since he gave no explanation.

    ~Ethan~
    Ethan Furman, Feb 28, 2012
    #20
