Bug in floating-point addition: is anyone else seeing this?


Mark Dickinson

On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999
9999999999999998.0
>>> a+0.9999
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

Mark
 

Diez B. Roggisch

Mark said:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999
9999999999999998.0
>>> a+0.9999
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

It is working under OSX:

(TG1044)mac-dir:~/projects/artnology/Van_Abbe_RMS/Van-Abbe-RMS deets$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Welcome to rlcompleter2 0.96

But under linux, I get the same behavior:

Python 2.5.1 (r251:54863, May 2 2007, 16:56:35)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Welcome to rlcompleter2 0.96


So - seems to me it's a Linux thing. I don't know enough about
IEEE floats to make any assumptions about the reasons for that.

Diez
 

bukzor

On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999
9999999999999998.0
>>> a+0.9999
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

Mark

I see it too
 

Marc Christiansen

Mark Dickinson said:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999
9999999999999998.0
>>> a+0.9999
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

On my system, it works:

Python 2.5.2 (r252:60911, May 21 2008, 18:49:26)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16 - 2.; a
9999999999999998.0
>>> a + 0.9999
9999999999999998.0

Marc
 

Mark Dickinson

On my system, it works:

 Python 2.5.2 (r252:60911, May 21 2008, 18:49:26)
 [GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> a = 1e16 - 2.; a
 9999999999999998.0
 >>> a + 0.9999
 9999999999999998.0

Marc

Thanks for all the replies! It's good to know that it's not just
me. :)

After a bit (well, quite a lot) of Googling, it looks as though this
might be known problem with gcc on older Intel processors: those using
an x87-style FPU instead of SSE2 for floating-point. This gcc
'bug' looks relevant:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

Now that I've got confirmation I'll open a Python bug report: it's
not clear how to fix this, or whether it's worth fixing, but it
seems like something that should be documented somewhere...
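
The correctly rounded result can be double-checked from Python itself with exact rational arithmetic. A sketch using the fractions module (available from Python 2.6 on, so newer than the interpreters quoted in this thread):

```python
from fractions import Fraction

a = 1e16 - 2.0  # exactly representable: 9999999999999998.0

# Exact real-number value of a + 0.9999 (using the exact value of the
# double nearest 0.9999, not the decimal literal itself).
exact = Fraction(*a.as_integer_ratio()) + Fraction(*(0.9999).as_integer_ratio())

# Near 1e16 consecutive doubles are 2.0 apart, so the only candidates
# are 9999999999999998 and 10000000000000000.  The exact sum is about
# 9999999999999998.9999, which is closer to the lower neighbour, so a
# correctly rounded addition must return 9999999999999998.0.
assert Fraction(10**16) - exact > exact - Fraction(10**16 - 2)

print(float(exact))  # 9999999999999998.0 -- the correctly rounded sum
```

float() of a Fraction is computed by correctly rounded integer division, so it is immune to the x87 double-rounding effect described above.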

Thanks again, everyone!

Mark
 

Dave Parker

10000000000000000.0

Shouldn't both of them give 9999999999999999.0?

I wrote the same program under Flaming Thunder:

Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999998.999
9999999999999998.9999

I then set the precision down to 16 decimal digits to emulate Python:

Set realdecimaldigits to 16.
Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999999.0
9999999999999999.0
 

Jerry Hill

Shouldn't both of them give 9999999999999999.0?

My understanding is no, not if you're using IEEE floating point.
I wrote the same program under Flaming Thunder:

Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999998.999
9999999999999998.9999

You can get the same results by using python's decimal module, like this:
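
The exact session did not survive in the archive; something along these lines, using the standard decimal module with its default 28-digit context, reproduces the Flaming Thunder output:

```python
from decimal import Decimal

# Decimal arithmetic is exact here: the default context carries
# 28 significant digits, more than enough for these sums.
a = Decimal(10)**16 - Decimal("2.0")
print(a + Decimal("0.999"))    # 9999999999999998.999
print(a + Decimal("0.9999"))   # 9999999999999998.9999
```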
 

Dave Parker

My understanding is no, not if you're using IEEE floating point.

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
 

Diez B. Roggisch

Dave said:
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.

Who says that rounding in base 10 is more correct than rounding in base 2?

And in scientific programming, speed matters - which is why e.g. the
Cell processor is slated to gain a double-precision float ALU. And
supercomputers generally use floats, not arbitrary-precision BCD or even rationals.


Diez
 

Chris Mellon

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
--

If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.
 

Dan Upton

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.

However, this is not an issue of language correctness, it's an issue
of specification and/or hardware. If you look at the given link, it
has to do with the x87 being peculiar and performing 80-bit
floating-point arithmetic even though that's larger than the double
spec. I assume this means FT largely performs floating-point
arithmetic in software rather than using the FP hardware (unless of
course you do something crazy like compiling to SW on some machines
and HW on others depending on whether you trust their functional
units).

The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation. When accuracy is more important than speed of
number crunching (and don't argue to me that your software
implementation is faster than, or probably even as fast as, gates in
silicon) you use packages like Decimal.

Really, you're just trying to advertise your language again.
 

Dave Parker

If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.

Actually, I've only posted on 2 threads that were questions about
Python -- this one, and the one about for-loops where the looping
variable wasn't needed. I apologize if that irritates you. But maybe
some Python users will be interested in Flaming Thunder if only to
check the accuracy of the results that they're getting from Python,
like I did on this thread. I think most people will agree that having
two independent programs confirm a result is a good thing.
 

Dave Parker

The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation.

I agree.

I also think that the precision/speed tradeoff should be under user
control -- not at the whim of the compiler writer. So, for example,
if a user says:

Set realdecimaldigits to 10.

then it's okay to use hardware double precision, but if they say:

Set realdecimaldigits to 100.

then it's not. The user should always be able to specify the
precision and the rounding mode, and the program should always provide
correct results to those specifications.
 

Chris Mellon

Actually, I've only posted on 2 threads that were questions about
Python -- this one, and the one about for-loops where the looping
variable wasn't needed. I apologize if that irritates you. But maybe
some Python users will be interested in Flaming Thunder if only to
check the accuracy of the results that they're getting from Python,
like I did on this thread. I think most people will agree that having
two independent programs confirm a result is a good thing.
--

Please don't be disingenuous. You took the opportunity to pimp your
language because you could say that you did this "right" and Python
did it "wrong". When told why you got different results (an answer you
probably already knew, if you know enough about IEEE to do the
auto-conversion you alluded to) you treated it as another opportunity
to (not very subtly) imply that Python was doing the wrong thing. I'm
quite certain that you did this intentionally and with full knowledge
of what you were doing, and it's insulting to imply otherwise.

You posted previously that you wrote a new language because you were
writing what you wanted every other language to be. This is very
similar to why Guido wrote Python and I wish you the best of luck. He
was fortunate enough that the language he wanted also happened to be
the language that lots of other people wanted. You don't seem to be so
fortunate, and anti-social behavior on newsgroups dedicated to other
languages is unlikely to change that. You're not the first and you
won't be the last.
 

Dave Parker

When told why you got different results (an answer you
probably already knew, if you know enough about IEEE to do the
auto-conversion you alluded to) ...

Of course I know a lot about IEEE, but you are assuming that I also
know a lot about Python, which I don't. I assumed Python was doing
the auto-conversion, too, because I had heard that Python supported
arbitrary precision math. Jerry Hill explained that you had to load a
separate package to do it.
you treated it as another opportunity
to (not very subtly) imply that Python was doing the wrong thing.

The person who started this thread posted the calculations showing
that Python was doing the wrong thing, and filed a bug report on it.

If someone pointed out a similar problem in Flaming Thunder, I would
agree that Flaming Thunder was doing the wrong thing.

I would fix the problem a lot faster, though, within hours if
possible. Apparently this particular bug has been lurking on Bugzilla
since 2003: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
 

Diez B. Roggisch

Dave said:
I agree.

I also think that the precision/speed tradeoff should be under user
control -- not at the whim of the compiler writer. So, for example,
if a user says:

Set realdecimaldigits to 10.

then it's okay to use hardware double precision, but if they say:

Set realdecimaldigits to 100.

then it's not. The user should always be able to specify the
precision and the rounding mode, and the program should always provide
correct results to those specifications.

Which is exactly what the python decimal module does.
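
For instance (a sketch of the idea, not from the original posts): both the precision and the rounding mode live on the decimal context, and dropping the precision to 16 digits reproduces the Flaming Thunder output quoted earlier:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

ctx = getcontext()
ctx.prec = 16                   # analogous to "Set realdecimaldigits to 16."
ctx.rounding = ROUND_HALF_EVEN  # the rounding mode is user-settable too

a = Decimal(10)**16 - Decimal("2.0")
print(a + Decimal("0.999"))    # 9999999999999999
print(a + Decimal("0.9999"))   # 9999999999999999
```

(Decimal just omits the trailing ".0" that Flaming Thunder prints.)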

Diez
 

Dave Parker

Which is exactly what the python decimal module does.

Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.
 

bukzor

Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.

Utterly shameless.
 

Carl Banks

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.

Having done much mathematical and scientific programming in my day, I
would say your opinion is dead wrong.

The crucial thing is not to slow down the calculations with useless
bells and whistles. Scientists and engineers are smart enough to use
more precision than we need, and we don't really need that much. For
instance, the simulations I run at work all use single precision (six
decimal digits) even though double precision is allowed.


Carl Banks
 

Dave Parker

The crucial thing is not to slow down the calculations with useless
bells and whistles.

Are you running your simulations on a system that does or does not
support the "useless bell and whistle" of correct rounding? If not,
how do you prevent regression towards 0?

For example, one of the things that caused the PS3 to be in 3rd place
behind the Wii and XBox 360 is that to save a cycle or two, the PS3
cell core does not support rounding of single precision results -- it
truncates them towards 0. That led to horrible single-pixel errors in
the early demos I saw, which in turn helped contribute to game release
delays, which has turned into a major disappointment for Sony.
 
