Matthias Wächter
Folks,
While we had a discussion around the magic number 0.06, I tried to
output arbitrary floating point numbers and was puzzled about the
imprecision of my ruby executable (or a library it uses for this
task) on my Windows machine (Athlon XP, Ruby 1.8.6 win32 installer
from ruby-lang.org):
C:\>ruby --version
ruby 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]
C:\>irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000010000000000000000000000000000000000000000000"
irb(main):002:0> 0.1.to_s
=> "0.1"
I mean, quite apart from to_s (questionably) rendering the stored
value as just "0.1", it cannot even print the number correctly using
the floating point printf-like string substitution!
When I let my Athlon64 calculate the same thing (Ruby 1.8.6 on
gentoo, 64 bit), it outputs the following which is correct:
$ ruby --version
ruby 1.8.6 (2007-03-13 patchlevel 0) [x86_64-linux]
$ irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000005551115123125782702118158340454101562500000"
Is this a 32-bit-ism? Using the old 1.8.4 cygwin binary which came
along with cygwin a long time ago, I get the following output which
is not correct but a lot more precise than the output from 1.8.6-25:
$ /usr/bin/ruby.exe --version
ruby 1.8.4 (2005-12-24) [i386-cygwin]
$ irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000005551115123125782702118158000000000000000000"
Just for comparison: all versions produce the same output if I give
them the precise binary representation of the number:
irb(main):001:0> [-4,-5,-8,-9,-12,-13,-16,-17,-20,-21,-24,-25,
-28,-29,-32,-33,-36,-37,-40,-41,-44,-45,-48,-49,-52,-53,-55
].inject(BigDecimal("0")) {|sum,ex| sum+BigDecimal.new("2.0")**ex}
=> #<BigDecimal:4bd9e90,'0.1000000000 0000000555 1115123125
7827021181 5834045410 15625E0',56(96)>
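(Side note: on newer Rubys, 1.9 and later, Float#to_r returns the exact
rational value of the double, so the same expansion can be checked without
listing the powers of two by hand -- a sketch:)

```ruby
# Sketch (Ruby >= 1.9): Float#to_r yields the exact rational value of the
# IEEE 754 double, so scaling by 10**55 must give an integer if 55 decimal
# digits suffice to represent it exactly.
r = 0.1.to_r                      # => (3602879701896397/36028797018963968)
scaled = r * 10**55
raise "not exact in 55 digits" unless scaled.denominator == 1
# This formatting shortcut works here because the value lies in [0.1, 1):
puts "0.#{scaled.numerator}"
# => 0.1000000000000000055511151231257827021181583404541015625
```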
So, the results:
1. My AMD64-based self-compiled version works correctly: absolute error = 0.
2. The cygwin-based ruby from 2005 seems to put more effort into the
calculation but produces an error of about 3.4e-43. Note that for
the exact number 0.1, the correct floating point representation has
an error of < 2.0**-57, which is about 6.9e-18, so we can consider
this error "OK", although I specifically requested 60 decimal
digits, which it could not supply.
3. The pure 32 bit Windows version distributed by ruby-lang.org
gets the output outright wrong. It produces an error of about 1e-17,
which is more than the floating point representation itself
introduces -- for no good reason! While I specifically asked for 60
digits (whereas 55 would be sufficient), I got only 17.
Note that _any_ finite floating point number (based on IEEE 754) can
be _exactly_ represented in decimal notation given "enough" digits
-- which are here 55.
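(To illustrate the claim, here is a rough sketch -- assuming IEEE 754
binary64 doubles and handling normal numbers only, ignoring zeros,
subnormals, infinities and NaN -- that recovers the exact decimal expansion
straight from the bit pattern, independent of the C library's printf:)

```ruby
# Sketch, assuming IEEE 754 binary64; normal numbers only (no zeros,
# subnormals, infinities or NaN). "G" packs a big-endian double.
def exact_decimal(f)
  bits = [f].pack("G").unpack("B64").first
  sign     = bits[0, 1] == "1" ? "-" : ""
  exponent = bits[1, 11].to_i(2)
  mantissa = bits[12, 52].to_i(2)
  raise "normal doubles only" if exponent == 0 || exponent == 2047
  num = (1 << 52) | mantissa          # integer significand
  e   = exponent - 1023 - 52          # value == num * 2**e
  while num.even? && e < 0            # normalize: drop factors of two
    num >>= 1
    e += 1
  end
  return sign + (num << e).to_s if e >= 0
  digits = num * 5**-e                # num * 2**e == (num * 5**-e) / 10**-e
  s = digits.to_s.rjust(1 - e, "0")   # at least one digit before the point
  sign + s.insert(e - 1, ".")
end

puts exact_decimal(0.1)
# => 0.1000000000000000055511151231257827021181583404541015625
```

Since num is odd after normalization and 5**-e ends in 5, the last digit is
never zero, so the expansion comes out with no trailing zeros to trim.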
Is there something I did wrong? Maybe there are some constants I can
tweak in the Float class so that I get more precise values?
- Matthias