That is quite enough to "deserve" follow-up articles from me expressing
my opinions where they differ from what you wrote.
I think what bothers me is when it gets into nit-picking territory. In
particular, we're discussing operator overloading translating in a
well-defined, well-specified way into specific method calls.
Particular calling code, plus the same methods implemented with
semantically identical code on the relevant data types, would behave
consistently across all compliant javacs and JVMs, and you could
always boil it down eventually to IEEE 754 semantics, using strictfp
if you really felt it necessary. For most situations, getting an
answer bracketed to within a certain distance of the exact ideal
answer will suffice.
You might be interested in the JScience third-party library.
http://jscience.org/
is the main Web site. There is a "Real" class that represents an error-
bracketed real number, and if the start of a calculation uses
approximations whose error range is known to contain the exact ideal
input value, the output will be a Real object whose error range
contains the exact ideal output value. Of course, if the algorithm is
numerically unstable or just very long, that error range could be much
wider than the input range, or a NaN might emerge -- meaning that
subtle algorithm problems like that get exposed. The cost is speed:
under the hood, every operation is actually done on each of TWO
parallel bignums.
As an example, it represents 8 +/- 2 internally as the interval [6,
10]. Add this to 5 +/- 1 or [4, 6] and it gets [10, 16], or 13 +/- 3.
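The addition above can be sketched with a minimal hypothetical
Interval class (this is NOT the JScience API, just an illustration of
the same idea):

```java
// Minimal sketch of interval arithmetic for error-bracketed values,
// where x +/- e is stored as the interval [x - e, x + e].
final class Interval {
    final double lo, hi;

    Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

    // Addition is monotone in both endpoints, so low adds to low
    // and high adds to high.
    Interval plus(Interval o) { return new Interval(lo + o.lo, hi + o.hi); }

    @Override public String toString() { return "[" + lo + ", " + hi + "]"; }

    public static void main(String[] args) {
        Interval a = new Interval(6, 10);  // 8 +/- 2
        Interval b = new Interval(4, 6);   // 5 +/- 1
        System.out.println(a.plus(b));     // prints [10.0, 16.0], i.e. 13 +/- 3
    }
}
```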
(A Gaussian error version would be nice, where independent errors sum
as the square root of the sum of the squares; admittedly, that breaks
the guarantee that the genuine answer is somewhere in the output
range, giving you instead a percent confidence, but this sort of
error summing is nonetheless quite common in my experience in
engineering and science.)
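For contrast, that Gaussian-style error summing could be sketched like
so (a hypothetical class of my own, not anything JScience provides):

```java
// Hypothetical Gaussian-error alternative: independent 1-sigma errors
// combine as the root of the sum of squares, so the result is a
// confidence band rather than a guaranteed bracket.
final class GaussianValue {
    final double mean, sigma;

    GaussianValue(double mean, double sigma) {
        this.mean = mean;
        this.sigma = sigma;
    }

    GaussianValue plus(GaussianValue o) {
        // Errors add in quadrature, not linearly.
        return new GaussianValue(mean + o.mean,
                Math.sqrt(sigma * sigma + o.sigma * o.sigma));
    }

    public static void main(String[] args) {
        GaussianValue a = new GaussianValue(8, 2);  // 8 +/- 2
        GaussianValue b = new GaussianValue(5, 1);  // 5 +/- 1
        GaussianValue sum = a.plus(b);
        // 13 +/- sqrt(5) ~ 2.24: narrower than the worst-case +/- 3,
        // but only a ~68% confidence band, not a guaranteed bracket.
        System.out.println(sum.mean + " +/- " + sum.sigma);
    }
}
```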
Some operations end up using FOUR parallel operations, mind you: in
cases less straightforward than ordinary addition or subtraction, it
evaluates the internal interval endpoints in all four combinations --
low, low; low, high; high, low; and high, high -- instead of only
low, low and high, high (as for addition) or low, high and high, low
(as for subtraction).
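Multiplication is the classic case needing all four combinations: when
signs mix, the extremes of the product can come from any pairing of
endpoints. A sketch (again my own illustration, not the JScience code):

```java
// Sketch of why non-monotone operations need all four endpoint
// combinations: for [a,b] * [c,d] with mixed signs, the minimum and
// maximum products can come from any of the four pairings.
final class MulInterval {
    final double lo, hi;

    MulInterval(double lo, double hi) { this.lo = lo; this.hi = hi; }

    MulInterval times(MulInterval o) {
        double p1 = lo * o.lo, p2 = lo * o.hi;
        double p3 = hi * o.lo, p4 = hi * o.hi;
        return new MulInterval(
                Math.min(Math.min(p1, p2), Math.min(p3, p4)),
                Math.max(Math.max(p1, p2), Math.max(p3, p4)));
    }

    public static void main(String[] args) {
        // [-2, 3] * [-4, 5]: the minimum is 3 * -4 = -12 and the
        // maximum is 3 * 5 = 15 -- neither comes from the low,low or
        // low,high / high,low pairings that suffice for +/-.
        MulInterval r = new MulInterval(-2, 3).times(new MulInterval(-4, 5));
        System.out.println("[" + r.lo + ", " + r.hi + "]");  // [-12.0, 15.0]
    }
}
```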
This is even slower than just using BigDecimal-alikes and praying the
error hasn't grown too large. On the other hand, unneeded precision
that is swamped by the known error magnitude is dropped for a
performance boost: it will use fewer mantissa bits to represent each
endpoint as the calculation progresses and the interval widens as a
percentage of the value's magnitude.
So, I think you might find JScience very interesting indeed if
scientific computation and really accurate numerics are your craving
here.