This code for the comparison of fp types is taken from the C FAQ.
Any problems using it in a macro?
/* compare 2 doubles for equality */
#define DBL_ISEQUAL(a,b) (((fabs)(((a)-(b)))<=(DBL_EPSILON)*(fabs)((a))))
You will often come across the very good advice that it's unwise to
compare floating point values for equality. The reason is that numbers
calculated in two different ways that, mathematically, should produce
exactly the same value (such as 0.3 and 0.1 + 0.2) usually do not do so
when calculated in floating point. That's mainly because of floating
point round-off: at best, a 64-bit floating point format can represent
exactly a maximum of 2^64 different floating point numbers; real
floating point formats represent significantly fewer than that. The best
you can hope for from a floating point calculation is that the result
will be one of the two representable values closest to the
mathematically correct value; in many cases the best achievable result
has several times that amount of error, and in pathological cases the
uncertainty can be arbitrarily large.
The right way to deal with this problem is to compare values with a
comparison tolerance:
#define COMP_TOL(a, b, tol) (fabs((a)-(b)) < (tol))
Your macro is equivalent to setting tol to DBL_EPSILON*fabs(a).
Unfortunately, that value is generally too small to avoid exactly the
same problems as a direct equality comparison.
The right value to use for 'tol' depends upon how a and b were
calculated; there's a sophisticated science to the estimation of
uncertainties in calculated values, called "propagation of errors". What
I'm about to say is no more than a simplified version of one of the most
basic aspects of that science.
If there's any measurement uncertainty in either a or b, call it sigma_a
and sigma_b, respectively, and those uncertainties are uncorrelated with
each other, then the uncertainty in the difference between a and b is
sqrt(sigma_a*sigma_a + sigma_b*sigma_b). The C standard library
contains a function that simplifies that calculation: hypot(sigma_a,
sigma_b). If floating point round-off is the ONLY reason for uncertainty
in the value of a, then sigma_a can be approximated by multiplier *
DBL_EPSILON * fabs(a); the value of multiplier is never smaller than
1.0, and gets larger as the complexity of the calculations used to
determine the value of 'a' increases.
Your macro is equivalent to implementing this concept with the
assumptions that sigma_b = 0, implying that b is absolutely accurate,
and that the uncertainty in the value of 'a' is due entirely to only a
single floating point roundoff. This is an incredibly unlikely case.
Do the same issues involved in comparing 2 fp types for equality
apply to comparing a float to zero? E.g. is if(x == 0.)
considered harmful?
The only value that compares exactly equal to 0 is 0 itself; if there's
any uncertainty in the value of x at all, such a comparison will almost
always turn out to be false, even when x is calculated from values that
mathematically should have produced a 0 (such as (0.1 + 0.2) - 0.3).
Only do a comparison to 0 without a tolerance if you're certain that x
is exactly equal to the value you're comparing it against.