Are we going to talk about mathematics here, or computer arithmetic?
Computer arithmetic is a branch of mathematics.
As I said, I think the tendency to ping-pong between topics is
part of the "failure to communicate." I agree with you 100%
that one can define an arithmetic, in the purely mathematical
sense, for floating point numbers.
Not only can -- it's been done, and it's what you use when you do
floating point arithmetic.
I also agree that such an arithmetic can be made very precise,
at least as long as you stick to the same hardware and same
computer language types (e.g., float or double). What we seem
to disagree on is whether such an arithmetic has any value
whatsoever, outside of a Comp. Sci. classroom.
Without it, you can't get useful results from floating point
arithmetic. Unless you're using it, you have no business using
floating point arithmetic, since you don't know what your
results mean.
As Giuliano and I have both pointed out, in the real world
people don't use computers (you might prefer "don't usually
use computers") to calculate abstractions; they use them to
calculate practical things that apply to ... um ... the real
world. You yourself have said that this is
exactly what you're doing -- using fp numbers to represent
financial values. The very instant you associate the
arithmetic of fp numbers with the arithmetic of financial
data, you have crossed the line into the real world. In this
world, the fp number is only an approximation of the
real-world number.
Not "only". The original floating point number may be an
approximation of some real-world number (in my case, a decimal
fraction with five places after the decimal). The final results
may also be interpreted as such. But that doesn't make
floating point arithmetic decimal arithmetic. Floating point
arithmetic continues to be floating point arithmetic, and obeys
the rules of floating point arithmetic, and not those of decimal
arithmetic (or real arithmetic, if your original abstraction is
real numbers).
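(A concrete illustration of that last point, assuming IEEE 754
doubles, which is what essentially every current implementation
gives you:

    #include <cstdio>

    int main()
    {
        // 0.1 is a finite decimal fraction, but it has no finite
        // binary representation; the double holds the nearest
        // representable value instead.
        double d = 0.1;
        std::printf("%.20f\n", d);   // 0.10000000000000000555...

        // Summing ten of them therefore doesn't give exactly 1.0.
        double sum = 0.0;
        for (int i = 0; i < 10; ++i)
            sum += d;
        std::printf("%s\n", sum == 1.0 ? "equal" : "not equal");
        return 0;
    }

Decimal arithmetic would give exactly 1.0 here; floating point
arithmetic doesn't, and that's not a bug.)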
Then we'd better dang well fix C++.
With regard to undefined behavior in a single-threaded
environment, I agree. But the standardization committee
doesn't. With regard to threading, I don't think it's even
possible.
Repeatable behavior is not just a desire. It's an absolute
requirement. If your computer program, or the system that you
run it on, doesn't give repeatable results, it's broken, plain
and simple.
Then every system in the world is broken, because none of them
give truly repeatable results. (If /dev/random gave repeatable
results, I'd consider it broken.) A physical system can never
give repeatable results.
But I think we're straying from the subject. My comment was in
parentheses, because it wasn't concerned with floating point
arithmetic, but with randomness due to timing issues, etc. (A
real-time system will generally have different behavior depending on
whether event a occurs before or after event b. And when the
two events occur at the same time, or very close to the same
time, whether event a or event b is considered to occur first is
more or less random.)
Is that true? If that's the case, then /dev/random is broken
as well.
Just the opposite. The whole point of /dev/random is to
generate a truly random value, for programs which need such.
Or, to be more accurate, one must use it with great care. Many
scientific and engineering problems require the use of random
numbers to simulate noise in real, physical systems. Those
pseudorandom numbers are generated, amazingly enough, by
random number generators (RNGs). Ironically, those RNGs
_MUST_ be deterministic in what they do. Given a certain seed,
the RNG must always produce the same sequence of fp numbers.
If it didn't, it would be impossible to validate the software.
/dev/random may be OK for dealing cards in Freecell, but not
in serious software.
Actually, it's used in some very serious software; there are
times when the computability (and even the possibility of
deterministically repeating a sequence) are undesirable. Things
like generating a one-time key in cryptographic applications,
for example.
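(To nail down the distinction we're both circling: a minimal
sketch in standard C++, with std::mt19937 standing in for
whatever deterministic generator a simulation actually uses:

    #include <iostream>
    #include <random>

    int main()
    {
        // A deterministic PRNG: the same seed always produces the
        // same sequence, which is what simulation and validation
        // require.
        std::mt19937 gen1(42);
        std::mt19937 gen2(42);
        for (int i = 0; i < 3; ++i)
            std::cout << gen1() << " == " << gen2() << '\n';

        // std::random_device is the standard library's hook to an
        // entropy source (often /dev/random or /dev/urandom on
        // Unix); successive runs are not meant to be reproducible.
        std::random_device rd;
        std::cout << "entropy sample: " << rd() << '\n';
        return 0;
    }

Both have their place; the mistake is using one where the other
is needed.)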
As for the keyboard press example, that's a contrived case,
and I think you know it.
It's a case from the real world.
You're not computing anything in the computer at all: You're
simply measuring the operator at the keyboard. The only result
you get is that people are unpredictable. I think we already
knew that.
The result I'm getting is that physical systems are
unpredictable. People admittedly more than most, but you have
some unpredictability anytime a physical system is involved.
(None of which has anything to do with how to use floating point
arithmetic, of course. It's completely a side issue.)
Again, I think you have to distinguish between number theory
and computer implementations.
And what do you think underlies IEEE floating point, if it isn't
number theory?
It's also quite true that one can implement rational numbers
as rational numbers; i.e., compute the numerator and
denominator of each fraction. However, more often than not, we
use fp numbers to represent both rationals and integers. The
three kinds (four, if you count complex -- six, if you allow
for strange things like complex rationals and complex
integers) end up being calculated in the same way, and
according to the same rules.
No, they all have their own rules, and properties which hold for
one don't necessarily hold for others. In particular, (a+b)+c
== a+(b+c) for real numbers (and rationals, and integers), but
not for floating point. If this property is essential for your
algorithm to work, then it won't work with floating point
numbers.
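(A two-line demonstration, assuming IEEE 754 doubles:

    #include <cstdio>

    int main()
    {
        double a = 0.1, b = 0.2, c = 0.3;
        // The two groupings round differently, so the results
        // differ in the last bit.
        std::printf("(a+b)+c = %.17f\n", (a + b) + c);  // 0.60000000000000009
        std::printf("a+(b+c) = %.17f\n", a + (b + c));  // 0.59999999999999998
        std::printf("%s\n",
                    (a + b) + c == a + (b + c) ? "equal" : "not equal");
        return 0;
    }

It prints "not equal".)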
I think we just said the same thing.
I don't think so. You seem to be trying to treat floating point
numbers as if they were real numbers, with some error factor. They're
not; they form an arithmetic of their own, with different rules
and properties than real numbers.
[...]
What you just said is strictly true, but irrelevant. Since
you've made such a point that the rules of fp arithmetic are
not the same as those for real numbers, integers, and
rationals, you should be aware that every floating point
number is a rational number.
So. Every integer is a real number, too. Every real is a
complex number as well, but reals and complex numbers have
different properties.
You're ignoring the essential point: the properties of real
arithmetic don't hold for floating point arithmetic. Other
properties do, however, and if you're using floating point
arithmetic, you need to understand those properties.
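(The rational-number point, made concrete in standard C++; the
decomposition below works for any finite double:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Every finite double is exactly m * 2^e for integers m
        // and e, i.e., a rational with a power-of-two denominator.
        double d = 0.1;
        int e;
        double frac = std::frexp(d, &e);   // d == frac * 2^e, 0.5 <= frac < 1
        std::int64_t m = (std::int64_t)std::ldexp(frac, 53);
        // d == m / 2^(53 - e), exactly (not necessarily in lowest
        // terms).
        std::printf("0.1 is exactly %lld / 2^%d\n", (long long)m, 53 - e);
        return 0;
    }

Which is also why 0.1 isn't what it looks like: the rational
actually stored is 7205759403792794 / 2^56, not 1/10.)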
[...]
It would not seem to be the case. If it were true, we wouldn't
be holding this discussion.
It's easily verified. Look at the amount of literature
concerning the mathematical theory behind floating point
arithmetic, and the papers analysing different algorithms. It's
true that there are still some things for which the best
algorithm isn't known, but the underlying principles of how
floating point arithmetic works, and the rules which apply to
it, are established. (They do require a high degree of
mathematical competence to understand, and so aren't presented
in popularizations. That doesn't mean that they aren't
understood, however.)
Yes, you said that. Yet, a bit further down your msg, you talk
about iteration, square roots, exp(x), and other things,
_NONE_ of which can be represented exactly.
The result I want can be represented exactly. The law specifies
very precisely what it must be. Even when exponents and such
are involved.
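(For the record, the point about square roots is easy to
demonstrate, again assuming IEEE doubles:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // sqrt(2) is irrational, so std::sqrt(2.0) is only the
        // nearest double; squaring it doesn't give back exactly 2.0.
        double r = std::sqrt(2.0);
        std::printf("sqrt(2)^2 = %.17f\n", r * r);
        std::printf("%s\n", r * r == 2.0 ? "equal" : "not equal");
        return 0;
    }

The point isn't that such intermediate values are exact; it's
that the rounding of the final result is exactly specified.)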
Again, we seem to agree. Real numbers are not the same as fp
numbers. As long as we recognize that they are not, we're all
happy. The problem arises because we are using computer
implementations of fp arithmetic to represent variables that
are continuous, rather than discrete.
Sort of. The real problem arises because programmers expect
properties to hold which don't, because floating point numbers
are not real numbers.
Yes, indeed they do. Which is why testing them for equality
is a Bad Idea.
No. That's why doing anything with them without understanding
how they work is a bad idea. There are times when you test for
equality, and times when you don't. Just avoiding tests for
equality, and replacing them with range tests (within an
epsilon) is a sure recipe for disaster.
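One example of a legitimate exact test: a convergence loop that
stops when another iteration can no longer change the result.
(A hypothetical sketch; newton_sqrt is mine, not from the
discussion.)

    #include <cstdio>

    // Newton's method for sqrt(a), a > 0, starting above the root.
    // The iterates decrease strictly until no further improvement
    // is possible; the exact comparison is the right stopping
    // test. An epsilon here would just impose an arbitrary,
    // scale-dependent cutoff.
    double newton_sqrt(double a)
    {
        double x = a > 1.0 ? a : 1.0;
        double prev;
        do {
            prev = x;
            x = 0.5 * (x + a / x);
        } while (x < prev);   // exact fp comparison, on purpose
        return prev;
    }

    int main()
    {
        std::printf("%.17f\n", newton_sqrt(2.0));
        return 0;
    }

And conversely, a value tested against one it was directly
assigned from wants exact equality too; fuzzing that test with
an epsilon can make two genuinely different values compare
equal.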
[...]
Many years ago, I was asked to help rescue an accounting
program for the engineering firm I worked for. They had hired
a guy (for not much money, I think) to write the program, and
he chose Fortran. He, too, was using fp numbers to represent
monetary values. He, too, was computing things like compound
interest. And he, too, was rounding the calculations in a
manner that he thought was safe. But despite his best efforts,
he was not able to make the program produce the same results
that our customer (NASA) got using COBOL. Every week, a penny
would pop up, either positive or negative, due to rounding. So
every week, the accountants had to double-check the
computations by hand, and add or subtract a penny here or
there.
We looked the program over pretty carefully, and finally
concluded that it was unfixable. In the end, the company spent
a lot more money to have the program rewritten in COBOL.
Accounting laws define exactly how rounding is supposed to work.
In terms of decimal arithmetic. Obviously, it's a lot simpler
to implement if you have an abstraction which implements decimal
arithmetic directly. But it can also be made to work using
floating point, if you know what you're doing. (Not float, of
course -- seven decimal digits of precision isn't enough for
much of anything.)
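What "knowing what you're doing" looks like, in a minimal
sketch: do the legally specified rounding explicitly, into an
exact integer number of cents, and only then compare or
accumulate. (Round half away from zero shown; the actual rule
is whatever the applicable law says, and the interest figures
are made up.)

    #include <cmath>
    #include <cstdio>

    // Round a computed amount to whole cents, half away from
    // zero. Integers up to 2^53 are exact in a double, so once an
    // amount is an integral number of cents, sums of such amounts
    // are exact.
    long long to_cents(double amount)
    {
        // Beware: amount * 100.0 itself rounds; amounts landing
        // very near a half-cent boundary need more care than this.
        return std::llround(amount * 100.0);
    }

    int main()
    {
        double interest = 1000.00 * 0.05 / 12.0;  // one month at 5% annual
        long long cents = to_cents(interest);     // 417
        std::printf("interest = %lld.%02lld\n", cents / 100, cents % 100);
        return 0;
    }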
[...]
Of course I can. It's sloppy programming.
Why? It's the established best practice, and it provably works.
It's using a real number to transmit information that is
fundamentally Boolean in nature (e.g., has this element been
given a value?). People use sentinel numbers to
save having to pass boolean flags around. I've done it too,
but the better solution is to use a boolean to represent a
logical condition.
A lot depends on context. I'm a great fan of Fallible; I use it
all the time. But if you're dealing with large matrices of
values, it can introduce unacceptable overhead.
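For what that trade-off looks like, a minimal sketch
(FallibleDouble here is a toy stand-in for the real Fallible;
for large matrices, the usual alternative is an in-band
sentinel such as a quiet NaN):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Toy stand-in for Fallible: an explicit validity flag per
    // value. Clear, but with padding it doubles the storage per
    // element.
    struct FallibleDouble {
        double value;
        bool   valid;
    };

    int main()
    {
        // The sentinel alternative: a quiet NaN marks "no value
        // yet" in place, at no extra storage. NaN never compares
        // equal to anything (itself included), so the test must
        // use std::isnan, not ==.
        std::vector<double> matrix(1000000, std::nan(""));
        matrix[42] = 3.14;

        std::size_t missing = 0;
        for (double d : matrix)
            if (std::isnan(d))
                ++missing;
        std::printf("missing entries: %zu\n", missing);
        std::printf("per element: %zu bytes vs %zu bytes\n",
                    sizeof(double), sizeof(FallibleDouble));
        return 0;
    }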