John
Hi,
I encountered a strange problem while debugging C code for a
Windows-based application in LabWindows CVI V5.5, which led me to
write the test code below. I tried this code with a different compiler
and got the same erroneous result on two different PCs (with OS Win98
& Win98SE), so it appears to be a problem with ANSI C. I thought that
negative double variables could be compared as easily and *reliably*
as integers, but apparently not?
#include <stdio.h>  // <ansi_c.h> in LabWindows/CVI pulls this in

int main(void)
{
    double a = -2.0, b = -2.0;

    if (a > b)
        printf("a is greater than b because a is %f and b is %f\n", a, b);
    else
        printf("a is not greater than b because a is %f and b is %f\n", a, b);

    a -= 0.01;  // decrease value of a by 0.01
    a += 0.01;  // restore original value of a by increasing it by 0.01

    if (a > b)
        printf("a is greater than b because a is %f and b is %f\n", a, b);
    else
        printf("a is not greater than b because a is %f and b is %f\n", a, b);

    return 0;
}
The output as copied from the emulated DOS window is:
a is not greater than b because a is -2.000000 and b is -2.000000
a is greater than b because a is -2.000000 and b is -2.000000
If I decrement and then increment a by 0.001, everything is fine, so
it doesn't look like there is a problem with the small magnitude of
the fractions.
I would be grateful for any solutions or suggestions so that I can
process *all* fractions correctly.
Thanks in advance,
John.