Problem comparing a double with 0

Discussion in 'C++' started by Julián Albo, Jul 2, 2003.

1. Julián Albo (Guest)

Hello.

This test:

#include <stdio.h>

int main ()
{
    double a = -1.0e-120;

    if (a < 0.0)
        printf ("%g < 0\n", a);
    if (a > 0.0)
        printf ("%g > 0\n", a);
    if (a == 0.0)
        printf ("%g == 0\n", a);
}

Compiled with gcc 3.3 as a C program gives -1e-120 < 0, but compiled as
C++ gives me -1e-120 == 0.

I suspect this is a gcc problem, but is some workaround possible? How
can I reliably compute the sign of a?

Regards.

Julián Albo, Jul 2, 2003

2. Victor Bazarov (Guest)

"Julián Albo" <> wrote...
> This test:
>
> #include <stdio.h>
>
> int main ()
> {
>     double a = -1.0e-120;
>
>     if (a < 0.0)
>         printf ("%g < 0\n", a);
>     if (a > 0.0)
>         printf ("%g > 0\n", a);
>     if (a == 0.0)
>         printf ("%g == 0\n", a);
> }
>
> Compiled with gcc 3.3 as a C program gives -1e-120 < 0, but compiled as
> C++ gives me -1e-120 == 0.
>
> I suspect this is a gcc problem, but is some workaround possible? How
> can I reliably compute the sign of a?

As far as the sign is concerned, you're doing it right. If g++
can't generate right code for such a simple comparison or for such
a simple initialisation, it's a bug in the compiler. If you need
a work-around, you should probably ask in gnu.g++.help.

You _could_ test the high-order bit for the sign, and that should
always work for IEEE doubles. Of course, testing a bit works only
on unsigned integral types, so you'd have to extract the top byte.
And then the endianness comes into play... It's not portable, IOW.

Try looking at the assembly code it generates to make sure whether
it's a bug, and possibly to see what workaround you could apply.

Victor

Victor Bazarov, Jul 2, 2003

3. Julián Albo (Guest)

Victor Bazarov wrote:

> > #include <stdio.h>
> >
> > int main ()
> > {
> >     double a = -1.0e-120;
> >
> >     if (a < 0.0)
> >         printf ("%g < 0\n", a);
> >     if (a > 0.0)
> >         printf ("%g > 0\n", a);
> >     if (a == 0.0)
> >         printf ("%g == 0\n", a);
> > }
> >
> > Compiled with gcc 3.3 as a C program gives -1e-120 < 0, but compiled as
> > C++ gives me -1e-120 == 0.
> >
> > I suspect this is a gcc problem, but is some workaround possible? How
> > can I reliably compute the sign of a?

>
> As far as the sign is concerned, you're doing it right. If g++
> can't generate right code for such a simple comparison or for such
> a simple initialisation, it's a bug in the compiler. If you need
> a work-around, you should probably ask in gnu.g++.help.
> You _could_ test the high-order bit for the sign, and that should
> always work for IEEE doubles. Of course, testing a bit works only
> on unsigned integral types, so you'd have to extract the top byte.
> And then the endianness comes into play... It's not portable, IOW.
>
> Try looking at the assembly code it generates to make sure whether
> it's a bug, and possibly to see what workaround you could apply.

Thank you for your suggestions. The workaround I found is to use,
instead of the literal 0.0, a non-const variable holding the value 0.
If the variable is const, the problem remains.

Regards.

Julián Albo, Jul 2, 2003