Where? It specifies 53 bits of precision for doubles, and this is 15.95
decimal digits. In Table 3 it also seems to support M = 10^17-1 before
garbage digits occur in conversion to decimal.
The existence of the figures doesn't make them *significant*. The
conversion to decimal is *not* where the garbage comes from.
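For what it's worth, both numbers are easy to poke at from Java (a throwaway sketch; the class name is mine). Note that 10^17-1 already loses information when it is *stored* as a double, before any conversion back to decimal:

```java
public class DigitsDemo {
    public static void main(String[] args) {
        // 53 bits of mantissa is worth 53 * log10(2) ~= 15.95 decimal digits
        System.out.println(53 * Math.log10(2));

        long m = 99999999999999999L;   // 10^17 - 1
        double d = m;                  // the nearest double is exactly 1.0e17
        System.out.println((long) d);  // prints 100000000000000000
    }
}
```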
First off, let me say I'll have to look up the "12 digit" thing - I
learned that long ago and I don't have the document handy to provide a
cite just now. Historically, the IEEE double precision format was
intended to provide functionality similar to that of the best
electronic calculators of the time, which displayed 10 digits and
carried 12..13 significant digits internally.
WRT "garbage" ... the 754 document deals only with the storage formats
and the operations on values. Nowhere does it explain the rationale
for ignoring the low bits of a value - for that you have to rely on
knowledge of the hardware, the computation process, and mathematical
intuition.
1) Most decimal values cannot be represented exactly in base 2.
Excepting the infrequent cases in which the value can be exactly
represented *within the confines of the format*, you have to assume
that most values have at least one non-significant bit in their
mantissa. Applying a binary operation to a pair of values, each with
one non-significant bit, yields a result with two non-significant
bits. Applying a transcendental function may render several bits
non-significant [e.g., logarithms]. The number of non-significant
bits in the result cannot be less than the maximum number of
non-significant bits in the operands, and may be more depending on
the function.
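Point 1 is easy to see from Java; a throwaway sketch (the class name is mine - BigDecimal's double constructor shows the exact stored value):

```java
import java.math.BigDecimal;

public class InexactDemo {
    public static void main(String[] args) {
        // The exact value of the double nearest to 0.1:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // Combine two inexact operands and the contamination is visible:
        System.out.println(0.1 + 0.2 == 0.3);  // false
    }
}
```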
2) Rounding and renormalizing intermediate results progressively
poisons the least significant bits of each successive calculation
involving them.
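The classic demonstration of this poisoning, sketched in Java (class name is mine): each addition rounds the running total to 53 bits, and the rounding errors accumulate.

```java
public class DriftDemo {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;  // each addition rounds the result to 53 bits
        }
        System.out.println(sum == 1.0);  // false
        System.out.println(sum);         // 0.9999999999999999
    }
}
```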
3) Storing intermediate results to memory may truncate them, losing
any extended precision available in the hardware. When truncated
values are later reused at extended precision, their meaningless
extended bits contaminate the computation, making the extended bits
of the new result meaningless as well.
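Java hides any x87-style extended registers, but the same write-back effect can be sketched (class name is mine) by narrowing a double to a float and widening it back - assume the float plays the role of the memory-resident format:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        double x = Math.PI;          // 53-bit value standing in for an extended register
        float stored = (float) x;    // the write-back truncates to a 24-bit mantissa
        double reused = stored;      // widening back does NOT restore the lost bits
        System.out.println(x - reused);  // the lost low-order part, now gone for good
    }
}
```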
4) The FPU on some computers is actually only 32-bit or 64-bit, and
some computers have no FP hardware at all. See #1.
Java (like most languages) makes no guarantee that intermediate values
preserve the precision available to the hardware. Register pressure in
a complicated expression may force intermediate values to be written
back to memory ... whether such values preserve the hardware's
precision is up to the JVM developer. If the hardware actually
provides less precision than an IEEE-754 double, the JVM developer has
to choose between being non-compliant and providing FP through
software emulation.
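Java does expose a knob here, for what it's worth: the strictfp modifier pins intermediates to the declared format (and since Java 17, strict IEEE-754 evaluation is the default everywhere). A sketch - the class and method names are mine, and the example uses extended *range* rather than mantissa bits to make the effect visible:

```java
public strictfp class StrictDemo {
    // Under strict semantics the intermediate a * x must be rounded as a
    // 64-bit double, so here it overflows; an 80-bit extended register
    // could otherwise have carried it through to a finite final answer.
    static double axpy(double a, double x, double y) {
        return a * x + y;
    }

    public static void main(String[] args) {
        System.out.println(axpy(10.0, 2e307, -1.5e308));  // Infinity
    }
}
```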
There's a reasonably accessible paper on the meeting of mathematics
and computer hardware, "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg. It explains the
rationale for caution with floating point better than I can.
George