Michael said:
Unfortunately, there is no easy solution to this problem. Here is a
catalog of often-proposed solutions and why they do not work:
1 (proposed by Doug Meyer in this thread) Always use
(x-y).abs < Float::EPSILON
as a test for equality.
This won't work because the rounding error can easily get bigger than
Float::EPSILON, especially when dealing with numbers that are bigger
than unity. e.g.
y = 100.1 + 0.3
y - 100.4 # => -1.421e-14, while Float::EPSILON = 2.22e-16
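Spelled out as runnable Ruby (the exact printed digits may vary slightly by platform, but the magnitudes are typical for IEEE 754 doubles):

```ruby
x = 100.1 + 0.3   # intended to be 100.4, but 100.1 and 0.3 are not
y = 100.4         # exactly representable in binary floating point

diff = (x - y).abs
# diff is on the order of 1e-14, dozens of times larger than
# Float::EPSILON (about 2.22e-16), so proposal 1 wrongly reports
# the values as unequal:
puts diff < Float::EPSILON   # => false
```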
2 Always use (x-y).abs < (x.abs + y.abs) * Float::EPSILON as a test
for equality.
Better than the first proposal, but won't work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.
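Proposal 2 can be written as a small helper (the name approx_equal? is mine, not anything standard), and stepping through neighboring doubles shows the transitivity failure concretely:

```ruby
# The relative-tolerance test from proposal 2:
def approx_equal?(x, y)
  (x - y).abs < (x.abs + y.abs) * Float::EPSILON
end

eps = Float::EPSILON        # spacing between doubles just above 1.0
a = 1.0
b = a + eps                 # the next representable double after 1.0
c = b + eps
d = c + eps

puts approx_equal?(a, b)    # => true
puts approx_equal?(b, c)    # => true
puts approx_equal?(c, d)    # => true
puts approx_equal?(a, d)    # => false -- each step was "equal", but the
                            # ends of the chain are not: not transitive
```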
3 Use BigDecimal
This only shifts the problem a few decimal places down, and tests for
equality will fail as with the normal floats.
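For example, 1/3 has no finite decimal expansion, so BigDecimal must cut it off somewhere (here at 20 significant digits) and the equality test fails just as with binary floats:

```ruby
require "bigdecimal"

one_third = BigDecimal(1).div(3, 20)   # 0.33333333333333333333
puts one_third * 3 == 1                # false: the product is 0.99...9, not 1
```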
4 Use Rationals
Works if you only have to deal with rational operations. But doesn't
solve the following
x = Math.sqrt(2)
y = x + 1
x + 0.2 == y - 0.8 # => false
In addition, rational arithmetic can produce huge numbers pretty fast,
and this will slow down computations enormously.
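Both halves of that claim are easy to see in Ruby -- Rationals are exact for rational operations, Math.sqrt drops you back into Floats, and denominators balloon under exact arithmetic:

```ruby
# Exact as long as everything stays rational:
puts Rational(1, 10) + Rational(2, 10) == Rational(3, 10)  # => true
puts 0.1 + 0.2 == 0.3                                      # => false

# But Math.sqrt returns a Float, so sqrt(2) cannot be kept as a Rational
# and the original rounding problem returns:
puts Math.sqrt(2).class                                    # => Float

# And exact denominators grow quickly, which is what slows things down:
sum = (1..10).inject(Rational(0)) { |acc, n| acc + Rational(1, n * n) }
puts sum.denominator   # already a large integer after only ten terms
```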
5 Use a symbolic math package
This could in theory solve the issue with equality, but in practice there
is no way to decide that two symbolic representations of a number are the
same, like
1 / (sqrt(2) - 1) == sqrt(2) + 1
Also, very, very slow.
6 Use interval arithmetic
Gives you strict bounds on your solution, but can't answer x==y.
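A toy sketch of the idea (this is illustrative, not a real interval library -- a real one would round lower bounds down and upper bounds up on every operation):

```ruby
# Toy interval type: a value is a [lo, hi] pair bracketing the true result.
Interval = Struct.new(:lo, :hi) do
  def +(other)
    # A real implementation would use outward rounding here.
    Interval.new(lo + other.lo, hi + other.hi)
  end

  def possibly_equal?(other)
    lo <= other.hi && other.lo <= hi   # the intervals overlap
  end
end

a = Interval.new(0.9999, 1.0001)
b = Interval.new(1.0000, 1.0002)
puts a.possibly_equal?(b)   # => true -- but "possibly" is the best it can do;
                            # overlapping intervals never prove x == y
```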
Summing up, when using floating point arithmetic there is no one true way.
There is no substitute for understanding numbers and analyzing your
problem.
Well ... OK ... but ...
This whole floating-point thing comes up here on a weekly basis, and
I'll bet it comes up on all the other language mailing lists too. No
matter how many times you repeat this, no matter how many web sites
explaining floating point arithmetic you point people to, etc., you are
still going to get people who don't know how it works and have
expectations that aren't realistic. An awful lot of calculators have
been built using decimal arithmetic just because it produces slightly
fewer "anomalies" that need to be explained.
People like me who do number crunching for a living know all this stuff
inside and out. I actually learned the basics of scientific computing in
scaled fixed-point arithmetic, and it's only been in recent years (since
the Pentium, in fact) that just about every computer you're likely to
touch has had floating point hardware. Before that, you were likely to
be dealing with slow and inaccurate libraries emulating the hardware
unless you were in a scientific research environment. And it's also been
only a few more years since nearly all new architectures supported
(mostly) the IEEE floating point standard.
Before that, it was chaos -- most 32-bit floating point arithmetic was
unusable except for data storage, the reigning supercomputers had
floating point units optimized for speed at the expense of correctness,
you actually had to pay for good math libraries, and whole books of
garbage number-crunching algorithms were popular best-sellers. In short,
even the folks who knew very well how it *should* be done made both
necessary compromises and serious mistakes. It took some brave souls
like William Kahan several years to get some of the more obvious garbage
out of "common practice".
So give the newbies a break on this issue -- the professionals have only
been doing it mostly right since about 1990.