Ruby can't subtract?


Christopher Dicely

> The problem is that the 754 representation has finite precision.

Well, the problem isn't that. The problem is that IEEE 754 (1985)
provides only binary floating point representations, when many common
problem domains deal almost exclusively with values that have finite
(and short) exact representations in base 10, which may or may not
have finite representations in base 2. IEEE 754 (2008) addresses this
with decimal floating point representations and operations. As
IEEE 754 (2008) is implemented more widely, it will become less likely
that arbitrary-precision decimal libraries with poor performance will
be needed for simple tasks that don't require many (base-10) digits
of precision, but do require precise calculations with base-10
numbers.
> It's also worth noting that most floating point hardware is not
> anywhere close to 754 compliant even though most FPUs do use the
> standard number formats (at least for single and double precision).

AFAIK, neither IEEE 754 (1985) nor IEEE 754 (2008) requires that an
implementation be pure hardware. Essentially complete implementations
of IEEE 754 (1985) existed before it was a standard, and complete
implementations of both IEEE 754 (1985) and IEEE 754 (2008) exist now,
including both pure-hardware and hardware-plus-software
implementations.
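The binary-versus-decimal distinction is easy to see directly in Ruby,
which ships binary Floats alongside the standard library's decimal
BigDecimal. A minimal sketch, using only the standard library:

```ruby
require "bigdecimal"

# 0.1 and 0.2 have no finite binary representation, so the binary
# Float sum carries rounding error.
binary_sum = 0.1 + 0.2
puts binary_sum == 0.3               # => false
puts format("%.17g", binary_sum)     # => 0.30000000000000004

# The same values are exact in a decimal representation.
decimal_sum = BigDecimal("0.1") + BigDecimal("0.2")
puts decimal_sum == BigDecimal("0.3")  # => true
```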
 

Rick DeNatale

> BigDecimal actually works with decimal numbers, which are a subset of
> rational numbers; Rational does precise math with rational numbers.

I'm afraid that this statement might be confusing some folks, because,
em, it ain't exactly true.

Mathematically we have

Integers
which form an infinite set: ..., -2, -1, 0, 1, 2, ...
Real Numbers
which include the integers but have an infinite number of values
between each pair of integers
Rational Numbers
which are the subset of the real numbers that can be expressed as a
fraction with an integer numerator and denominator
Irrational Numbers
which are the remaining real numbers; this includes numbers like Pi and e.

In computers we have

integers
which represent some subset of the mathematical integers, namely
those which can be represented in some small number of bits
floating point numbers
which represent numbers in a form of scientific notation: a mantissa
in some base (usually 2 or 10), usually assumed to have a radix point
preceding it, and an exponent, a signed integer in a small number of
bits, which says how many digits/bits to shift the radix point right
or left.

Most floating point representations use a binary base, so the radix
point marks the division between the bits which represent the integer
part of the number and those which represent a binary fraction. But
the representation can also be decimal, with each digit taking four
bits; then the radix point represents the decimal point and moves
left and right in multiples of a digit.

Now floats have a few problems:

1) They trade off precision for range. For values near zero (when
the exponent is zero), the last bit represents a rather small
increment. But if I need to represent, say, 100.0002, then for a
binary float I need at least 7 bits to represent the 100 part, so the
least significant bit has a value which is 2^7 times bigger than in
the representation of 0.0002. So as the exponent increases, the
smallest difference I can represent gets bigger, and if I add two
floating point numbers I can only preserve as many fractional digits
as the larger number can represent.
2) Depending on the base, certain fractional values can't be exactly
represented. This is easier to describe for a base 10 float. For
example, the value 1/3, even though it is a rational, can't be
exactly represented as a decimal float, since the fractional part is
0.333333... with an infinite number of 3 digits needed to represent
the value exactly.

So floating point numbers, whether binary, decimal, or some other
base, leave an infinite number of real numbers unrepresentable, both
rationals and irrationals. You can change the parameters of this
problem by changing the base and increasing the number of digits, but
you can't get away from it completely.
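Both problems are easy to demonstrate in Ruby, using nothing beyond
the core classes:

```ruby
# Problem 1, absorption: at magnitude 1e16 the gap between adjacent
# doubles is 2.0, so adding 1.0 changes nothing.
puts 1.0e16 + 1.0 == 1.0e16   # => true

# Problem 2, non-terminating fractions: 1/3 rounds in any fixed-base
# float, but Rational stores the exact numerator/denominator pair.
third = Rational(1, 3)
puts third * 3 == 1           # => true, exactly
puts third + third + third    # => 1/1
```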

Ruby tackles the machine-integer vs. mathematical-integer problem by
having Fixnums produce Bignums when necessary; Bignums have an
alternative representation without a fixed number of bits.
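For example (note that in Ruby 2.4+ Fixnum and Bignum were unified
into a single Integer class, but the automatic promotion described
above behaves the same way):

```ruby
# A value beyond any 64-bit machine word: Ruby silently switches to an
# arbitrary-precision representation instead of overflowing.
big = 2**64
puts big                   # => 18446744073709551616
puts big * big == 2**128   # => true; integer arithmetic stays exact
```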

It also has the Rational class, which represents a mathematical
rational number as a numerator and denominator in a reduced fraction.
This allows mathematical rationals to be represented, at some cost in
performance. Whenever a Rational is involved in an arithmetic
operation the result needs to be reduced, which involves calculating
the greatest common divisor. The DateTime class uses a Rational to
represent a point in time as a Julian 'day', with the fraction
representing the part of the day since midnight, and benchmarking code
doing DateTime manipulations almost invariably reveals that 99% of the
time is spent doing gcd calculations.
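A small illustration of that reduction step:

```ruby
# Every arithmetic result is reduced to lowest terms, which requires a
# gcd computation on the intermediate numerator and denominator.
sum = Rational(1, 6) + Rational(1, 3)  # intermediate result is 3/6
puts sum                               # => 1/2, after dividing by gcd(3, 6)
puts 3.gcd(6)                          # => 3
```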

But floats are floats, and BigDecimals are just Ruby's implementation
of base 10 floats.

For monetary calculations, the best approach is usually to use
integers and scale them, so for US currency you might use the number
of cents as an integer (Fixnum/Bignum depending on the budget <G>),
or, in cases where more resolution is needed, some fraction of a cent.
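A sketch of that scaled-integer approach (the names here are
illustrative, not any particular library's API):

```ruby
# Store money as integer cents; arithmetic is then exact.
price_cents    = 19_99
quantity       = 3
subtotal_cents = price_cents * quantity   # 5997, no rounding anywhere

# Convert to a display string only at the boundary.
def format_usd(cents)
  format("$%d.%02d", cents / 100, cents % 100)
end

puts format_usd(subtotal_cents)  # => $59.97
```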
--
Rick DeNatale

Blog: http://talklikeaduck.denhaven2.com/
Twitter: http://twitter.com/RickDeNatale
WWR: http://www.workingwithrails.com/person/9021-rick-denatale
LinkedIn: http://www.linkedin.com/in/rickdenatale
 

George Neuner

> Well, the problem isn't that. The problem is that IEEE 754 (1985)
> provides only binary floating point representations, when many common
> problem domains deal almost exclusively with values that have finite
> (and short) exact representations in base 10, which may or may not
> have finite representations in base 2. IEEE 754 (2008) addresses this
> with decimal floating point representations and operations. As
> IEEE 754 (2008) is implemented more widely, it will become less likely
> that arbitrary-precision decimal libraries with poor performance will
> be needed for simple tasks that don't require many (base-10) digits
> of precision, but do require precise calculations with base-10
> numbers.

True, but my point is that the base conversion is not necessarily the
source of the imprecision. The base 10 number may, in fact, have a
finite base 2 representation which does not happen to fit into any
available hardware format.

With respect to 754 (2008), I do not know of any CPU manufacturer
which has plans to implement the decimal formats in hardware. There
are a couple of CPUs which already have binary128 (and most are
expected to), and AMD has announced support for binary16 ... but, so
far, no one is talking about decimal anything.

However, use of the 754 bit formats does not mean that the standard is
being followed with respect to functionality.

> AFAIK, neither IEEE 754 (1985) nor IEEE 754 (2008) requires that an
> implementation be pure hardware. Essentially complete implementations
> of IEEE 754 (1985) existed before it was a standard, and complete
> implementations of both IEEE 754 (1985) and IEEE 754 (2008) exist
> now, including both pure-hardware and hardware-plus-software
> implementations.

Again true, the standard does not specify where the functions are to
be implemented ... Apple's SANE, for example, was a pure software
implementation, while Intel's x87 and SSE are the closest to being
pure hardware implementations.

However, most CPUs that use 754 number bit formats do not implement
their functionality according to the standard. Although some
functions can be implemented in software, certain things - in
particular rounding and denormalization handling - cannot be 'fixed'
by software to be standard conforming if the underlying hardware does
not cooperate.

George
 

George Neuner

> Interesting! I wasn't aware of that. Why is that? Do they just leave
> out operations, or are HW vendors actually cutting corners and
> diverging from the prescribed algorithms / results?

Actually the primary reason for deviating from the standard is to
achieve better performance. The standard algorithms are designed to
correctly handle a lot of corner cases that most users will never
encounter in practice. Many manufacturers have chosen to make
expected normal use as fast as possible at the expense of handling the
corner cases incorrectly.

Many CPUs do not implement all the standard rounding modes. Some use
a projective infinity where the standard specifies affine infinities
(it makes a difference whether +inf == -inf), and most do not support
gradual underflow via denormals but simply pin the result to zero
when underflow occurs.

What confuses people is that most CPUs now use (at least) IEEE-754
single and double precision bit formats ... because of that many
people conclude erroneously that the CPU is performing math according
to the 754 standard.
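On hardware that does conform, gradual underflow is easy to observe
from Ruby; on a flush-to-zero FPU the halved value below would come
out 0.0 instead. This sketch assumes a typical x86/ARM machine with
subnormals enabled:

```ruby
# Float::MIN is the smallest *normal* double. Halving it underflows
# the normal range, but a conforming FPU yields a denormal, not zero.
puts Float::MIN               # => 2.2250738585072014e-308
puts Float::MIN / 2           # a denormal (subnormal) value
puts (Float::MIN / 2).zero?   # => false under gradual underflow
```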

George
 

Robert Klemme

> Actually the primary reason for deviating from the standard is to
> achieve better performance. The standard algorithms are designed to
> correctly handle a lot of corner cases that most users will never
> encounter in practice. Many manufacturers have chosen to make
> expected normal use as fast as possible at the expense of handling
> the corner cases incorrectly.

> Many CPUs do not implement all the standard rounding modes. Some use
> a projective infinity where the standard specifies affine infinities
> (it makes a difference whether +inf == -inf), and most do not support
> gradual underflow via denormals but simply pin the result to zero
> when underflow occurs.

> What confuses people is that most CPUs now use (at least) IEEE-754
> single and double precision bit formats ... because of that many
> people conclude erroneously that the CPU is performing math according
> to the 754 standard.

George, thanks for the elaborate and interesting explanation!

Kind regards

robert
 
