Hi friends:
Machine epsilon is the maximum relative error of the chosen rounding
procedure:
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    double a = 0.1;
    double b = 0.1;

    /* Round-trip through 1.0 to introduce rounding error into a. */
    a += 1.0;
    a -= 1.0;

    printf("a == b: %s\n", a == b ? "equal" : "unequal");
    printf("fabs(a - b) < DBL_EPSILON: %s\n",
           fabs(a - b) < DBL_EPSILON ? "equal" : "unequal");
    return 0;
}
Is this a good method for testing equality?
Not particularly.
First of all, it's not a test for equality; that's done in C simply with
the == operator. It's a test for near-equality. It's seldom correct to
test floating point values for equality with each other; it's sometimes
appropriate to compare them for approximate equality, but you should
always be sure that it really is appropriate.
The general form for conducting approximate equality tests is
fabs(a-b) < epsilon
where epsilon will, in general, have different values for different
comparisons.
You correctly described machine epsilon as a relative error. That means
you should not use DBL_EPSILON directly. If the only source of error in
your calculations was a single floating point round-off error, you would
scale it according to the numbers being compared, multiplying it by
fabs(a) or fabs(b), whichever is larger.
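For a single rounding error, that scaling might look something like this
(just a sketch; nearly_equal is a name I've made up, not anything
standard):

#include <math.h>
#include <float.h>

/* Near-equality allowing for one rounding error: DBL_EPSILON is a
   relative bound, so scale it by the larger of the two magnitudes. */
static int nearly_equal(double a, double b)
{
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= DBL_EPSILON * scale;
}

For values near 1.0 this behaves much like your original test, but for
values near 1000.0 your original test would report "unequal" for results
that differ by only one bit in the last place.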
If multiple floating point errors are involved (as, for instance, with
a+=1.0 and a-=1.0 in your example), then the calculation of the
appropriate value for epsilon gets more complicated. If any of the
quantities you compare are measurements, or numbers calculated from
measurements, measurement error is likely to be much larger than
floating point round-off error.
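If you can't do a proper analysis, a common stopgap is to budget for some
small number of rounding errors; the helper below and the choice of n are
mine, for illustration only:

#include <math.h>
#include <float.h>

/* Near-equality with a caller-chosen budget of n rounding errors.
   Picking n correctly is exactly the hard part discussed above. */
static int nearly_equal_n(double a, double b, int n)
{
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= n * DBL_EPSILON * scale;
}

In your example the error left in a after the round trip through 1.0 is
on the order of DBL_EPSILON times 1.1, which is several units in the last
place of 0.1, so relative to 0.1 you'd need n to be a small handful
rather than 1; and you only know that because you can see where the error
came from.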
There's a subject called "propagation of errors" or "propagation of
uncertainty", which is essentially devoted to determining the correct
way of calculating an appropriate value of epsilon in such contexts.
The corresponding Wikipedia page is at least as good a place to start
learning about that subject as any other.
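As a rough illustration of the measurement case (made-up helper and
numbers, and assuming the errors are independent so the usual first-order
formula applies): if a and b each carry a standard uncertainty, the
uncertainty of their difference combines in quadrature, and that, not
DBL_EPSILON, sets the natural scale for epsilon.

#include <math.h>

/* Sketch: compare two measured values, each with its own standard
   uncertainty, calling them "equal" if they agree within two combined
   standard deviations.  The factor 2 is an arbitrary coverage choice. */
static int agree_within_error(double a, double sa, double b, double sb)
{
    double sigma = sqrt(sa * sa + sb * sb);   /* quadrature sum */
    return fabs(a - b) <= 2.0 * sigma;
}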