* Tobias:
Unless x and y are special values, with modern floating point
implementations this is guaranteed. Floating point division, with a
modern floating point implementation, isn't inherently inexact: it
just rounds the result to some specific number of binary digits.
Hence x/x, with normal non-zero numbers, produces exactly 1.
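The quoted claim is easy to check directly: for any normal, non-zero double, the mathematically exact quotient x/x = 1 is representable, so IEEE 754 round-to-nearest returns it exactly. A minimal sketch:

```cpp
#include <cassert>

// For a normal, non-zero double x, the exact result of x/x is 1,
// which is representable, so IEEE 754 division returns exactly 1.0
// (no rounding error is possible when the exact result fits).
bool divides_to_one(double x) {
    return x / x == 1.0;
}
```

The exceptions are the special values the quote mentions: x = 0.0 gives NaN, and so do the infinities and NaN itself.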
I had a funny experience which lets me say that the number of binary
digits is not so specific after all, but can vary unexpectedly =)
I had this compare predicate which gave me funny assertions when used
in combination with particular optimizations:
(unchecked code, just to give the idea)
#include <cmath>

struct Point {
    double x, y;
};

bool anglesorter(Point a, Point b) {
    return atan2(a.y, a.x) < atan2(b.y, b.x);
}
which, passed to a sort function, led to assertions when a point in
the sequence was duplicated. The assertion (inserted by Microsoft as a
sanity check for predicates) verifies this condition:
pred(a, b) implies !pred(b, a)
which is quite funny. After disassembling I found out that one of the
two atan2 calls was kept in an x87 register (80-bit precision), while
the other was stored in memory (64-bit precision), so
anglesorter(a, a)
would return true for some values of a.
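One way out (my sketch, not what I did at the time) is to sort by angle without calling atan2 at all: split points by half-plane, then order within a half-plane by the cross product. With no transcendental call there is no 80-bit intermediate to disagree with a 64-bit one, and the predicate is trivially irreflexive:

```cpp
#include <cassert>

struct Point { double x, y; };

// 0 for angles in [0, pi), 1 for angles in [pi, 2*pi).
static int half_plane(const Point& p) {
    return (p.y < 0 || (p.y == 0 && p.x < 0)) ? 1 : 0;
}

// Hypothetical replacement for anglesorter: a strict weak ordering by
// angle using only comparisons and one multiply-subtract, so the
// result cannot depend on whether an intermediate stayed in an x87
// register. angle_less(a, a) is always false (cross product is 0).
bool angle_less(const Point& a, const Point& b) {
    int ha = half_plane(a), hb = half_plane(b);
    if (ha != hb) return ha < hb;
    // Within one half-plane, a precedes b iff b lies counter-clockwise
    // from a, i.e. the cross product a x b is positive.
    return a.x * b.y - a.y * b.x > 0;
}
```

The cross product can still overflow or lose precision for extreme coordinates, but it cannot give two different answers for the same pair of arguments.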
Now, this doesn't look the same as the OP's example, but I wouldn't
trust floating point optimizations again: what happens if foo gets
inlined, its parameters are kept in extended-precision registers, but
the result of max, for some odd reason, does get truncated to
double precision?
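The classic workaround for that scenario (a common idiom, not something from the OP's code) is to force the value through a memory store, so both sides of a comparison are rounded to the same 64-bit precision before being compared:

```cpp
#include <cassert>

// Hypothetical helper (my sketch): a volatile store forces the value
// out of any extended-precision (80-bit x87) register and rounds it to
// a 64-bit double. On SSE2 hardware this is a no-op, which is exactly
// the point: the result no longer depends on where the compiler chose
// to keep the intermediate.
double force_double(double v) {
    volatile double tmp = v;  // the store truncates to 64-bit precision
    return tmp;
}
```

With this, comparing force_double(atan2(a.y, a.x)) against force_double(atan2(b.y, b.x)) would have made the predicate consistent, at the cost of defeating the optimization.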
I must admit I don't like the alternative versions either:
x/z == 1.0
becomes
fabs(x/z - 1.0) < numeric_limits<double>::epsilon()
or, avoiding the division:
fabs(x - z) < numeric_limits<double>::epsilon() * fabs(z)
because they are both unreadable.
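If one is stuck with such a comparison, the least that can be done is to hide the epsilon arithmetic behind a readable name (the function below and its name are my sketch, not anything from the thread):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Relative comparison: true when x and z differ by at most one unit of
// rounding at the magnitude of z. epsilon() is the gap between 1.0 and
// the next double, so it must be scaled by fabs(z); the raw epsilon
// alone would be far too tight for large values and far too loose for
// tiny ones.
bool nearly_equal(double x, double z) {
    return std::fabs(x - z) <=
           std::numeric_limits<double>::epsilon() * std::fabs(z);
}
```

Note this still misbehaves when z is zero (the tolerance collapses to zero), which is one more reason to avoid needing it at all.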
I'd try to avoid the need for such a comparison in the first place.