0.06 == 0.06 returns false in Ruby?


Jason G.

Hi

I wrote a simple test program. Basically, the program asks the user to
enter floats; the entered values are used as keys for a Hash (the
values are irrelevant).

The program tries to find the lowest untaken key/float (floats are used
as keys).

Please see the attachment.

Please run the program and enter the following:
0.01 - OK
0.02 - OK
0.03 - OK
0.04 - OK
0.05 - OK
0.06 - Problem

Can anyone explain to me what's going on here?

thanks
jason

Attachments:
http://www.ruby-forum.com/attachment/193/test.rb
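For anyone who can't run the attachment: the core of the surprise can be reproduced in a few lines. The hash below is a stand-in for the program's key table (the names are mine, not from the attachment):

```ruby
# A Float produced by arithmetic is not bit-identical to the literal,
# so both == and Hash lookup (which uses eql?/hash) miss:
taken = { 0.06 => true }
key = 0.05 + 0.01            # => 0.060000000000000005

key == 0.06                  # => false
taken.key?(key)              # => false
```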
 

Ravil Bayramgalin

2007/8/31 said:
There is also a class called BigDecimal (or something like that) if
you want to have really accurate numbers and no floating-point errors.

+1
 

Michael Ulm

Peña wrote:
--snip--
irb(main):010:0> (0.05+0.01) - 0.06
=> 6.93889390390723e-18

as mentioned by Dan, be careful comparing floats. And as to any
precision subject, there is what we call significant digits.

this floating-point problem is a FAQ and is very surprising in such a
high-level language as ruby. can we address this? maybe create a
flag like $EPSILON=0 or something, or maybe a flag to revert to Rational
or BigDecimal, like $FLOAT_PROCESSOR=RATIONAL...

Unfortunately, there is no easy solution to this problem. Here is a
catalog of often proposed solutions and why they do not work:

1 (proposed by doug meyer in this thread) Always use
(x-y).abs < Float::EPSILON
as a test for equality.

This won't work because the rounding error easily can get bigger than
Float::EPSILON, especially when dealing with numbers that are bigger
than unity. e.g.
y = 100.1 + 0.3
y - 100.4 # => -1.421e-14, while Float::EPSILON = 2.22e-16
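Michael's point is easy to check in irb; a small sketch (the helper name naive_eq? is mine):

```ruby
# Proposal 1: absolute tolerance of Float::EPSILON.
def naive_eq?(x, y)
  (x - y).abs < Float::EPSILON
end

naive_eq?(0.1 + 0.2, 0.3)      # => true  (error ~5.6e-17 is below EPSILON)
naive_eq?(100.1 + 0.3, 100.4)  # => false (error ~1.4e-14 is far above it)
```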

2 Always use
(x - y).abs < (x.abs + y.abs) * Float::EPSILON
as a test for equality.

Better than the first proposal, but won't work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.
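A sketch of proposal 2, again with a helper name of my choosing; it does handle the magnitude-dependent case that defeats proposal 1:

```ruby
# Proposal 2: tolerance scaled by the operands' magnitudes.
def approx_eq?(x, y)
  (x - y).abs < (x.abs + y.abs) * Float::EPSILON
end

approx_eq?(100.1 + 0.3, 100.4)  # => true; the scaled tolerance (~4.5e-14)
                                #    now exceeds the ~1.4e-14 rounding error
```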

3 Use BigDecimal

This only shifts the problem a few decimal places down, and tests for
equality will fail as with the normal floats.

4 Use Rationals

Works if you only have to deal with rational operations. But doesn't
solve the following
x = sqrt(2)
y = x + 1
x + 0.2 == y - 0.8 # => false
In addition, rational arithmetic can produce huge numbers pretty fast,
and this will slow down computations enormously.
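In Ruby (where Rational is in the standard library), the two halves of this point look like:

```ruby
# Exact as long as every operand and operation is rational:
Rational(5, 100) + Rational(1, 100) == Rational(6, 100)  # => true

# But an irrational intermediate forces a Float, and exactness is gone:
x = Math.sqrt(2)
y = x + 1
x + 0.2 == y - 0.8  # => false, as in Michael's example
```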

5 Use a symbolic math package

This could in theory solve the issue with equality, but in practice there
is no way to decide that two symbolic representations of a number are the
same, like
1 / (sqrt(2) - 1) == sqrt(2) + 1
Also, very, very slow.

6 Use interval arithmetic

Gives you strict bounds on your solution, but can't answer x==y.


Summing up, when using floating point arithmetic there is no one true way.
There is no substitute for understanding numbers and analyzing your problem.

HTH,

Michael
 

M. Edward (Ed) Borasky


Michael said:
Unfortunately, there is no easy solution to this problem. Here is a
catalog of often proposed solutions and why they do not work:

1 (proposed by doug meyer in this thread) Always use
(x-y).abs < Float::EPSILON
as a test for equality.

This won't work because the rounding error easily can get bigger than
Float::EPSILON, especially when dealing with numbers that are bigger
than unity. e.g.
y = 100.1 + 0.3
y - 100.4 # => -1.421e-14, while Float::EPSILON = 2.22e-16

2 Always use (x - y).abs < (x.abs + y.abs) * Float::EPSILON as a test
for equality.

Better than the first proposal, but won't work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.

3 Use BigDecimal

This only shifts the problem a few decimal places down, and tests for
equality will fail as with the normal floats.

4 Use Rationals

Works if you only have to deal with rational operations. But doesn't
solve the following
x = sqrt(2)
y = x + 1
x + 0.2 == y - 0.8 # => false
In addition, rational arithmetic can produce huge numbers pretty fast,
and this will slow down computations enormously.
5 Use a symbolic math package

This could in theory solve the issue with equality, but in practice there
is no way to decide that two symbolic representations of a number are the
same, like
1 / (sqrt(2) - 1) == sqrt(2) + 1
Also, very, very slow.

6 Use interval arithmetic

Gives you strict bounds on your solution, but can't answer x==y.


Summing up, when using floating point arithmetic there is no one true way.
There is no substitute for understanding numbers and analyzing your
problem.

Well ... OK ... but ...

This whole floating-point thing comes up here on a weekly basis, and
I'll bet it comes up on all the other language mailing lists too. No
matter how many times you repeat this, no matter how many web sites
explaining floating point arithmetic you point people to, etc., you are
still going to get people who don't know how it works and have
expectations that aren't realistic. An awful lot of calculators have
been built using decimal arithmetic just because there are a few less
"anomalies" that need to be explained.

People like me who do number crunching for a living know all this stuff
inside and out. I actually learned the basics of scientific computing in
scaled fixed-point arithmetic, and it's only been in recent years (since
the Pentium, in fact) that just about every computer you're likely to
touch has had floating point hardware. Before that, you were likely to
be dealing with slow and inaccurate libraries emulating the hardware
unless you were in a scientific research environment. And it's also been
only a few more years since nearly all new architectures supported
(mostly) the IEEE floating point standard.

Before that, it was chaos -- most 32-bit floating point arithmetic was
unusable except for data storage, the reigning supercomputers had
floating point units optimized for speed at the expense of correctness,
you actually had to pay for good math libraries and whole books of
garbage number crunching algorithms were popular best-sellers. In short,
even the folks who knew very well how it *should* be done made both
necessary compromises and serious mistakes. It took some brave souls
like William Kahan several years to get some of the more obvious garbage
out of "common practice".

So give the newbies a break on this issue -- the professionals have only
been doing it mostly right since about 1990. :)


Daniel DeLorme

Michael said:
Unfortunately, there is no easy solution to this problem. Here is a
catalog of often proposed solutions and why they do not work:

2 Always use (x - y).abs < (x.abs + y.abs) * Float::EPSILON as a test
for equality.

Better than the first proposal, but won't work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.

But it would fix 99% of problems. It would be worth it just for the sake
of reducing those questions on the list :p

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

Daniel
 

Dan Zwell

Daniel said:
But it would fix 99% of problems. It would be worth it just for the sake
of reducing those questions on the list :p

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

Daniel

Michael wrote very convincingly that there is no simple solution that
will work in all cases. I'm convinced, at least. If there is no solution
that works 100% of the time, we can't give the illusion that there is.
To do so would be to teach bad programming practices to newcomers, and
that's not fair.

The current "==" is okay because it works the way a moderately
experienced programmer would expect. A perfect "==" that could deal with
floats would be even better, but we aren't gonna get that. A "==" that
seems like magic and almost always works is really pretty dangerous.

Dan
 

Bertram Scharpf

Hi,

On Friday, 31 Aug 2007, at 17:03:38 +0900, Daniel DeLorme wrote:
But it would fix 99% of problems. It would be worth it just for the sake of
reducing those questions on the list :p

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

I always liked being forced to decide which dimensions are countable
and which are continuous. It made my programming style much clearer.

Bertram
 

Robert Klemme

2007/8/31 said:
But it would fix 99% of problems. It would be worth it just for the sake
of reducing those questions on the list :p

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

And who decides about the size of epsilon and which algorithm to
choose? There is no one size fits all answer to that hence leaving
Float#== the way it is (i.e. compare for exact identical values) is
the only viable option. Otherwise you would soon see similar
questions on the list, i.e., "how come it sometimes works and
sometimes it doesn't". And they become more difficult to answer as
the cases are likely less easy to spot / explain.

Kind regards

robert
 

Peña, Botp

From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.

indeed :(

maybe i'm not "realistic" but i thought

0.05 + 0.01 == 0.06 => false

was "unrealistic" enough for simple, plain 2-decimal arithmetic.
Even people with zero knowledge of computers would laugh about it (yes,
try explaining it to your wife or kids, eg).

For simple math (eg those dealing w money):
i can live w slowness in simple math (quite a paradox if you ask me).
i can live w 1/3 == 0.3333333333 => false
or that sqrt(2) == 1.4142135623 => false
i use bigdecimal. bigdecimal handles 0.05 + 0.01 == 0.06 => true

For complex math,
i can live w slowness (no question there).
bigdecimal easily handles sqrt(2) at 100 digits: 2.sqrt(100) =>
#<BigDecimal:b7d74094,'0.1414213562 3730950488 0168872420 9698078569
6718753769 4807317667 9737990732 4784621070 3885038753 4327641572
7350138462 309122925E1',124(124)>
so yes, i still use bigdecimal for complex math.

so regardless of whether it's simple or complex (or "highly precise" or
not), i use bigdecimal. counting and looping otoh is no problem for me
since fixnum/bignum handles this flawlessly. (Also, note that big RDBMS
like Oracle and PostgreSQL use BCD and fixed-point math for numerics.)

So, my question probably is (maybe this could be addressed to Matz): How
can i make ruby use a particular arithmetic, like bigdecimal eg, so that
literals like 1.05, and operations like 1+1.01, are handled as
bigdecimals.

thank you and kind regards -botp
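botp's BigDecimal workflow can be sketched like this; note the values are built from strings, because a Float literal has already rounded before BigDecimal ever sees it:

```ruby
require 'bigdecimal'

# Decimal fractions are exact in BigDecimal, so the comparison that
# fails for Floats succeeds here:
sum = BigDecimal("0.05") + BigDecimal("0.01")
sum == BigDecimal("0.06")         # => true

# sqrt(2) to ~100 significant digits, as quoted above:
root = BigDecimal("2").sqrt(100)
```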
 

Robert Dober

From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.

indeed :(

maybe i'm not "realistic" but i thought

0.05 + 0.01 == 0.06 => false

was "unrealistic" enough for simple, plain 2-decimal arithmetic. Even
people with zero knowledge of computers would laugh about it (yes, try
explaining it to your wife or kids, eg).

This seems indeed a very valid argument at first sight.
But I feel that it is not the case.
Ruby has Integers and Floats; it does not have Fixed Digit Decimals.
That is all there is to discuss here: as long as we discuss Floats,
Michael is just dead right. Saying that we should have something
which delivers

0.05 + 0.01 == 0.06 => true

is a slightly different issue.
Personally I do not miss it, because if I want decimals to be precise
to n digits I will just multiply by 10**n; at least the precision is
clear then.
But that is a matter of taste, I guess.
Cheers
Robert

For simple math (eg those dealing w money):
i can live w slowness in simple math (quite a paradox if you ask me).
i can live w 1/3 == 0.3333333333 => false
or that sqrt(2) == 1.4142135623 => false
i use bigdecimal. bigdecimal handles 0.05 + 0.01 == 0.06 => true

For complex math,
i can live w slowness (no question there).
bigdecimal easily handles sqrt(2) at 100 digits: 2.sqrt(100) =>
#<BigDecimal:b7d74094,'0.1414213562 3730950488 0168872420 9698078569
6718753769 4807317667 9737990732 4784621070 3885038753 4327641572
7350138462 309122925E1',124(124)>
so yes, i still use bigdecimal for complex math.

so regardless of whether it's simple or complex (or "highly precise" or
not), i use bigdecimal. counting and looping otoh is no problem for me
since fixnum/bignum handles this flawlessly. (Also, note that big RDBMS
like Oracle and PostgreSQL use BCD and fixed-point math for numerics.)
So, my question probably is (maybe this could be addressed to Matz): How
can i make ruby use a particular arithmetic, like bigdecimal eg, so that
literals like 1.05, and operations like 1+1.01, are handled as
bigdecimals.
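Robert's 10**n idea is essentially fixed-point arithmetic by hand: scale money into integer cents at the boundary and let exact Integer arithmetic do the rest (a sketch; the cents helper is mine):

```ruby
# Convert once at the edge of the program; round to absorb any
# Float noise picked up during parsing.
cents = ->(x) { (x * 100).round }

cents.(0.05) + cents.(0.01) == cents.(0.06)  # => true (5 + 1 == 6)
```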
 

Robert Klemme

2007/8/31 said:
From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.

indeed :(

maybe i'm not "realistic" but i thought

0.05 + 0.01 == 0.06 => false

was "unrealistic" enough for simple, plain 2-decimal arithmetic. Even
people with zero knowledge of computers would laugh about it (yes, try
explaining it to your wife or kids, eg).

That's probably the exact reason why it's not your wife or kids who write
software but people who are (hopefully) experts. :) If you study
computer science you'll typically hit the topic of numeric issues at
some point.

For simple math (eg those dealing w money):
i can live w slowness in simple math (quite a paradox if you ask me).
i can live w 1/3 == 0.3333333333 => false
or that sqrt(2) == 1.4142135623 => false
i use bigdecimal. bigdecimal handles 0.05 + 0.01 == 0.06 => true

For complex math,
i can live w slowness (no question there).
bigdecimal easily handles sqrt(2) at 100 digits: 2.sqrt(100) =>
#<BigDecimal:b7d74094,'0.1414213562 3730950488 0168872420 9698078569
6718753769 4807317667 9737990732 4784621070 3885038753 4327641572
7350138462 309122925E1',124(124)>
so yes, i still use bigdecimal for complex math.

so regardless of whether it's simple or complex (or "highly precise" or
not), i use bigdecimal. counting and looping otoh is no problem for me
since fixnum/bignum handles this flawlessly. (Also, note that big RDBMS
like Oracle and PostgreSQL use BCD and fixed-point math for numerics.)
So, my question probably is (maybe this could be addressed to Matz): How
can i make ruby use a particular arithmetic, like bigdecimal eg, so that
literals like 1.05, and operations like 1+1.01, are handled as
bigdecimals.
 

Davor

Hi,
Try converting to strings, e.g.:
irb(main):004:0> (0.05 + 0.01).to_s == 0.06.to_s
=> true

BR
Davor

2007/8/31, Peña, Botp <[email protected]>:

From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.
maybe i'm not "realistic" but i thought
0.05 + 0.01 == 0.06 => false
was "unrealistic" enough for simple, plain 2-decimal arithmetic. Even
people with zero knowledge of computers would laugh about it (yes, try
explaining it to your wife or kids, eg).
That's probably the exact reason why it's not your wife or kids who write
software but people who are (hopefully) experts. :) If you study
computer science you'll typically hit the topic of numeric issues at
some point.
--snip--
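A caveat on the to_s trick: Float#to_s output has varied between Ruby versions (1.8 printed fewer digits than later versions), so string comparison is fragile. Rounding to the decimals you actually care about is more explicit; a sketch, assuming two decimal places is what the application needs:

```ruby
# Compare after rounding both sides to 2 decimal places:
(0.05 + 0.01).round(2) == 0.06.round(2)  # => true

# The raw Float comparison, for contrast:
(0.05 + 0.01) == 0.06                    # => false
```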
 

Kyle Schmitt

There is also a class called BigDecimal (or something like that) if
you want to have really accurate numbers and no floating-point errors.

A warning on things like BigDecimal.
Unless I'm mistaken it's still stored in two's complement, which means
you'll still end up with the same sort of floating-point problems
(albeit further down), and the same numbers that you can't express
exactly with a normal float can't be expressed exactly with a
BigDecimal.

You'd think some fixed-point math libraries would help, but be
careful, because many of those _also_ store in two's complement.

--Kyle
 

M. Edward (Ed) Borasky


Bertram said:
Hi,

On Friday, 31 Aug 2007, at 17:03:38 +0900, Daniel DeLorme wrote:

I always liked being forced to decide which dimensions are countable
and which are continuous. It made my programming style much clearer.

Bertram

Well ... since *everything* in computing is countable ... :)

But seriously, real computing on real digital machines involves
translation from infinite and/or continuous semantics to very large
finite discrete processes. About the only thing that's truly infinite is
the time it takes to complete

while true do
end

But there are some techniques not terribly well known that improve on
floating point arithmetic as normally implemented in hardware. They're
too expensive for mass-market hardware, so they're usually implemented
in software. Interval arithmetic has already been mentioned, but there
are some others. Try "A New Approach to Scientific Computation" by
Kulisch and Miranker.

Most of these are historical curiosities these days because of the
widespread distribution of high-performance open-source libraries for
exact and symbolic computation, such as GiNaC, GMP, CLN, etc. And it's
pretty easy using SWIG to interface one or more of these to Ruby.

M. Edward (Ed) Borasky


Kyle said:
A warning on things like BigDecimal.
Unless I'm mistaken it's still stored in two's complement, which means
you'll still end up with the same sort of floating-point problems
(albeit further down), and the same numbers that you can't express
exactly with a normal float can't be expressed exactly with a
BigDecimal.

You'd think some fixed-point math libraries would help, but be
careful, because many of those _also_ store in two's complement.

--Kyle
IIRC BigDecimal is in fact stored in (pregnant pause) Binary Coded
Decimal. But I should check that.

M. Edward (Ed) Borasky


Robert said:
2007/8/31 said:
From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.

indeed :(

maybe i'm not "realistic" but i thought

0.05 + 0.01 == 0.06 => false

was "unrealistic" enough for simple, plain 2-decimal arithmetic. Even people with zero knowledge of computers would laugh about it (yes, try explaining it to your wife or kids, eg).

That's probably the exact reason why not your wife or kids write
software but people who are (hopefully) experts. :) If you study
computer sciences you'll typically hit the topic of numeric issues at
some point.

Actually, unless you're a (hard) science or engineering major, you
probably won't. Numerical analysis/methods aren't really considered part
of "computer science". Computer science is mostly about *discrete*
mathematics, data structures, programming languages and their
interpreters and compilers, etc. And you probably won't get it in a
"software engineering" program either. Applied mathematics is your best
shot, I think.



M. Edward (Ed) Borasky


Dan said:
Michael wrote very convincingly that there is no simple solution that
will work in all cases. I'm convinced, at least. If there is no solution
that works 100% of the time, we can't give the illusion that there is.
To do so would be to teach bad programming practices to newcomers, and
that's not fair.

The current "==" is okay because it works the way a moderately
experienced programmer would expect. A perfect "==" that could deal with
floats would be even better, but we aren't gonna get that. A "==" that
seems like magic and almost always works is really pretty dangerous.

Dan

Yeah ... even people like me who have spent several decades in this
branch of computing need to be reminded of these things occasionally.
What *I* wish Ruby had for scientific number-crunching is built-in
floating point arrays, rather than having to pass potential large
objects into and out of C language libraries to get number crunching done.

M. Edward (Ed) Borasky


Ken said:
Neither one can represent 1/3 or sqrt(2) exactly.

Which is why there are libraries and packages like GiNaC, CLN, GMP,
Singular, Maxima, Axiom, ...

I personally think Ruby's libraries for exact arithmetic are better than
those in the other scripting languages -- I don't think anyone else has
BigDecimal, Rational, Complex, Matrix, "mathn" and Bignums built in or
part of the *standard* libraries. The only thing that's missing, as I
noted earlier, is the ability to declare a physically contiguous block
of RAM as a vector of floating point or complex numbers and operate on
them as such. For that, you need to go to an external package like NArray.

Robert Klemme


Robert said:
2007/8/31 said:
From: Robert Klemme [mailto:[email protected]]
# And who decides about the size of epsilon and which algorithm to
# choose? There is no one size fits all answer to that hence leaving
# Float#== the way it is (i.e. compare for exact identical values) is
# the only viable option. Otherwise you would soon see similar
# questions on the list, i.e., "how come it sometimes works and
# sometimes it doesn't". And they become more difficult to answer as
# the cases are likely less easy to spot / explain.

indeed :(

maybe i'm not "realistic" but i thought

0.05 + 0.01 == 0.06 => false

was "unrealistic" enough for simple, plain 2-decimal arithmetic. Even people with zero knowledge of computers would laugh about it (yes, try explaining it to your wife or kids, eg).
That's probably the exact reason why not your wife or kids write
software but people who are (hopefully) experts. :) If you study
computer sciences you'll typically hit the topic of numeric issues at
some point.

Actually, unless you're a (hard) science or engineering major, you
probably won't. Numerical analysis/methods aren't really considered part
of "computer science". Computer science is mostly about *discrete*
mathematics, data structures, programming languages and their
interpreters and compilers, etc. And you probably won't get it in a
"software engineering" program either. Applied mathematics is your best
shot, I think.

Of course I can speak only for Germany but I had a mandatory lecture on
numerical analysis during my CS studies. IIRC it was even during the
lower grade phase (dunno the proper English wording), i.e. within the
first two years. But then again, the "Diplom" is probably considered
roughly equivalent to a major. Somehow I thought numerical effects were
so basic and common that they are mentioned in all other places as well.
Thanks for pointing this out.

Kind regards

robert
 
