Why does the nonsense number appear?


Yu-Xi Lim

Johnny Lee wrote:
Why are there so many nonsense tails? Thanks for your help.

I guess you were expecting 0.039? You first need to understand floating
point numbers:

http://docs.python.org/tut/node16.html

What you see are the effects of representation errors.

The solution is presented here:
http://www.python.org/peps/pep-0327.html

But briefly, it means your code should read:

from decimal import Decimal

t1 = "1130748744"
t2 = "461"
t3 = "1130748744"
t4 = "500"
time1 = t1 + "." + t2   # "1130748744.461"
time2 = t3 + "." + t4   # "1130748744.500"
print time1, time2
print Decimal(time2) - Decimal(time1)   # exactly 0.039
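To see the contrast directly, here is a small sketch (Python 3 syntax, unlike the snippet above) comparing the binary-float subtraction with the exact decimal one, using the timestamp strings quoted above:

```python
from decimal import Decimal

time1 = "1130748744.461"
time2 = "1130748744.500"

# Binary floats: both timestamps are rounded to the nearest double
# before subtracting, so the result is only close to 0.039.
float_diff = float(time2) - float(time1)
print(float_diff != 0.039)  # True: a small representation error remains

# Decimal: the strings are stored exactly, so the subtraction is exact.
print(Decimal(time2) - Decimal(time1))  # 0.039
```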
 

Ben O'Steen

Johnny Lee enlightened us with:

Because of the same reason you can't write 1/3 in decimal:

http://docs.python.org/tut/node16.html

Sybren
--
The problem with the world is stupidity. Not saying there should be a
capital punishment for stupidity, but why don't we just take the
safety labels off of everything and let the problem solve itself?
Frank Zappa


I think the previous poster was asking something different, something
like this:

If the expected result is:

0.039

then why do we get:

0.0389995574951

It appears Yu-Xi Lim beat me to the punch. Using decimal as opposed to
float sorts out this error as floats are not built to handle the size of
number used here.

Ben
 

Sybren Stuvel

Ben O'Steen enlightened us with:
I think that the previous poster was asking something different.

It all boils down to floating-point imprecision.
If the expected result is:

0.039

then why do we get:

0.0389995574951

It's easier to explain in decimals. Just assume you only have memory
to keep three decimals. 12345678910.500 is internally stored as
something like 1.23456789105e10. Strip that to three decimals, and you
have 1.234e10. In that case, t1 - t2 = 1.234e10 - 1.234e10 = 0.
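The three-decimal analogy can be played out with Python's decimal module by shrinking the context precision (a hypothetical illustration; the number and the three-digit precision are taken from the example above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3  # keep only three significant decimal digits

# Rounding each operand to the context first mimics the limited storage;
# unary + applies the context's rounding.
t1 = +Decimal("12345678910.500")  # becomes 1.23E+10
t2 = +Decimal("12345678910.461")  # also becomes 1.23E+10
print(t1 - t2)  # the fractional part is gone entirely: the result is 0
```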
Using decimal as opposed to float sorts out this error as floats are
not built to handle the size of number used here.

They can handle the size just fine. What they can't handle is 1/1000th
precision when using numbers in the order of 1e10.
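Sybren's point can be checked numerically with math.ulp (standard library, Python 3.9+), which gives the gap between a float and its next representable neighbour:

```python
import math  # math.ulp requires Python 3.9+

# Spacing between adjacent doubles near the timestamps (order 1e9):
print(math.ulp(1130748744.5))  # 2**-22, about 2.4e-07
# Spacing near the small value 0.039 itself:
print(math.ulp(0.039))         # 2**-57, about 6.9e-18
# The magnitude fits easily; it is the absolute precision near 1e9
# (about a quarter of a microsecond) that limits the subtraction.
```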

Sybren
 

Ben O'Steen

Sybren said:

They can handle the size just fine. What they can't handle is 1/1000th
precision when using numbers in the order of 1e10.

I used the word 'size' incorrectly; I meant 'length' rather than
numerical value. Sorry for the confusion :)
 

Steve Horsley

Ben said:
I used the word 'size' incorrectly; I meant 'length' rather than
numerical value. Sorry for the confusion :)

Sybren is right. The problem is not the length or the size, it's
the fact that 0.039 cannot be represented exactly in binary, in
just the same way that 1/3 cannot be represented exactly in
decimal. They both give recurring numbers. If you truncate those
recurring numbers to a finite number of digits, you lose
precision. And this shows up when you convert the inaccurate
number from binary into decimal representation where an exact
representation IS possible.
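The recurring-binary claim can be made concrete with the fractions module: converting the stored double back to an exact fraction shows a power-of-two denominator, which can never equal 39/1000 exactly:

```python
from fractions import Fraction

stored = Fraction(0.039)        # the exact value of the nearest double
print(stored)                   # 5620492334958379/144115188075855872
print(stored.denominator == 2**57)   # True: always a power of two
print(stored == Fraction(39, 1000))  # False: 1000 contains a factor 5**3
```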

Steve
 

Dan Bishop

Steve said:
Sybren is right. The problem is not the length or the size, it's
the fact that 0.039 cannot be represented exactly in binary, in
just the same way that 1/3 cannot be represented exactly in
decimal. They both give recurring numbers. If you truncate those
recurring numbers to a finite number of digits, you lose
precision. And this shows up when you convert the inaccurate
number from binary into decimal representation where an exact
representation IS possible.

That's A source of error, but it's only part of the story. The
double-precision binary representation of 0.039 is 5620492334958379 *
2**(-57), which is in error by 1/18014398509481984000. By contrast,
Johnny Lee's answer is in error by 9/262144000, which is more than 618
billion times the error of simply representing 0.039 in floating point
-- a loss of 39 bits.

The problem here is catastrophic cancellation.

1130748744.500 ~= 4742703982051328 * 2**(-22)
1130748744.461 ~= 4742703981887750 * 2**(-22)

Subtracting gives 163578 * 2**(-22), which has only 18 significant bits.
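Dan's figures can be verified directly (a sketch; exact equality holds because the spacing between doubles at this magnitude is 2**-22, so both literals round to exactly the mantissas given):

```python
# The two timestamps as Dan writes them, mantissa * 2**-22:
assert 1130748744.500 == 4742703982051328 * 2**-22  # exactly representable
assert 1130748744.461 == 4742703981887750 * 2**-22  # nearest double

diff = 4742703982051328 - 4742703981887750
print(diff)               # 163578
print(diff.bit_length())  # 18 -- only 18 significant bits survive
```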
 

Steve Horsley

Dan said:
That's A source of error, but it's only part of the story. The
double-precision binary representation of 0.039 is 5620492334958379 *
2**(-57), which is in error by 1/18014398509481984000. By contrast,
Johnny Lee's answer is in error by 9/262144000, which is more than 618
billion times the error of simply representing 0.039 in floating point
-- a loss of 39 bits.

The problem here is catastrophic cancellation.

1130748744.500 ~= 4742703982051328 * 2**(-22)
1130748744.461 ~= 4742703981887750 * 2**(-22)

Subtracting gives 163578 * 2**(-22), which has only 18 significant bits.

Hmm. Good point.
 
