Portably replacing -0.0 with 0.0

Thomas Jahns

So my program has the following function at the moment to replace -0.0 with +0.0
in a function argument:

static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}

But when compiling with the Intel Compiler icc at optimization level -O2 or
higher the above still produces -0.0. This raises two questions:

1. is the compiler free to convert above function into a NOP?

2. does anyone know a work-around or even more suitable formulation to achieve
the same? Preferably not some integer operations using signbit since our main
system is a Power6 which has some issues with FPU depending on ALU results and
vice versa. I've done some searching but the Internet is flooded with people
trying to wrap their head around the fact that -0.0 exists.

1. is actually the less interesting question, since even if the answer were no
I would still want my code to work with the Intel compiler.

Regards, Thomas
 
Eric Sosman

So my program has the following function at the moment to replace -0.0 with +0.0
in a function argument:

static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}

But when compiling with the Intel Compiler icc at optimization level -O2 or
higher the above still produces -0.0. This raises two questions:

1. is the compiler free to convert above function into a NOP?

I am not a language lawyer, but I think not. 5.2.4.2.2p2
tells us that the value of "minus zero" is zero, and since zero
is equal to zero the `if' should be satisfied if `v' is of either
form.
2. does anyone know a work-around or even more suitable formulation to achieve
the same? Preferably not some integer operations using signbit since our main
system is a Power6 which has some issues with FPU depending on ALU results and
vice versa. I've done some searching but the Internet is flooded with people
trying to wrap their head around the fact that -0.0 exists.

A couple things you might try:

return v + 0.0; // Hey, it *might* work ...

if (v >= 0.0) // Should be true for minus zero
    v = copysign(v, 1.0);
return v;

Since we already suspect an erroneous implementation, though,
there are few guarantees. Maybe an Intel forum would have more
information.
 
Eric Sosman

So my program has the following function at the moment to replace -0.0
with +0.0
in a function argument:
[...]

Another thought just occurred to me: How certain are you
that the problematic value is actually minus zero, and not
some small-magnitude non-zero negative number?

double v = -DBL_MIN;
printf("v = %.25f\n", v);

... should print `v = -0.0000000000000000000000000'.
 
James Kuyper

Did you mean `if (v == -0.0)' ?

(yes, this is not the same in IEEE754 ; see
http://en.wikipedia.org/wiki/Signed_zero)

It doesn't make any difference; as it says on that page: "According to
the IEEE 754 standard, negative zero and positive zero should compare as
equal with the usual (numerical) comparison operators, like the ==
operators of C and Java."

To determine whether v is a negative zero, you must use something like
if (v == 0.0 && copysign(1.0, v) < 0.0). However, in this context that's
just a waste of time: if v is a positive zero, it doesn't matter whether
sign_flat() returns v or 0.0, so there's no need to distinguish it from a
negative zero.
 
glen herrmannsfeldt

Thomas Jahns said:
So my program has the following function at the moment to
replace -0.0 with +0.0
in a function argument:
static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}
But when compiling with the Intel Compiler icc at optimization
level -O2 or higher the above still produces -0.0.
This raises two questions:
1. is the compiler free to convert above function into a NOP?

As well as I know it, it is usual for compilers to be less
strict on floating point rules at higher optimization levels.

One answer is to use -O0 to turn off such optimizations.

I believe it is legal C to convert to a no-op, but maybe not the
strict IEEE floating point rules. But C doesn't require IEEE
floating point, or strict adherence to it. Your compiler may offer
that as an option.

It might help to know why you are trying to get rid of -0.0.
Note that it is required to compare equal using comparison
operators.

-- glen
 
Tim Rentsch

Thomas Jahns said:
So my program has the following function at the moment to
replace -0.0 with +0.0 in a function argument:

static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}

But when compiling with the Intel Compiler icc at optimization
level -O2 or higher the above still produces -0.0. This raises
two questions:

1. is the compiler free to convert above function into a NOP?

Strictly speaking, no, assuming the implementation supports signed
zeros. On implementations that do not have signed zeros, this
function may be compiled as if it were the identity function (only
I can't tell from the Standard whether signaling NaN's might be an
exception to that).
2. does anyone know a work-around or even more suitable formulation
to achieve the same? Preferably not some integer operations using
signbit since our main system is a Power6 which has some issues
with FPU depending on ALU results and vice versa. I've done some
searching but the Internet is flooded with people trying to wrap
their head around the fact that -0.0 exists.

If the implementation isn't conforming, you're stuck (but see
also below).

If the implementation has only unsigned zeros, then most likely
problems with "negative zero" aren't going to come up, but if
they do I don't think there's anything to be done about it,
because of 5.2.4.2.2 p4.

I'm pretty sure the Standard is meant to allow only signed floating
point zeros or unsigned floating point zeros, not both (ie, in any
particular implementation). If it is allowed to have both, and one
is unfortunate enough to have to deal with such a beast, then all I
can say is, Good luck on that! :)

Getting back to your question... if <math.h> is available,
you could use the copysign() function:

double
sign_flat_A( double x ){
    return x ? x : copysign( x, 1. );
}

If <math.h> isn't available, but the implementation supports
complex types, you can make use of real-to-complex conversion rules
to get an other-than-negative zero:

double
sign_flat_B( double x ){
    return x ? x : (union {_Complex double z; double d[2];}){ 0.0 }.d[1];
}

If <math.h> isn't available, and the implementation doesn't support
complex types, the last resort is to make use of initialization
rules to produce an other-than-negative zero:

double
sign_flat_C( double x ){
    extern const double const_double_zero;
    return x ? x : const_double_zero;
}

/* ... and somewhere ... */

const double const_double_zero; /* positive or unsigned zero! */

This last method is probably the most reliable. Too bad
it's also likely the slowest.

Incidentally, I tried Eric Sosman's suggestion of using

return x + 0.0;

and was amused to see that it worked (on my system here,
naturally), but that

return x - 0.0;

did not. A downside of approaches like this is that they might
have unexpected behaviors when dealing with infinities or NaN's
(especially signaling NaN's).
 
Tim Rentsch

Eric Sosman said:
I am not a language lawyer, but I think not. 5.2.4.2.2p2
tells us that the value of "minus zero" is zero, and since zero
is equal to zero the `if' should be satisfied if `v' is of either
form.

I agree with the conclusion, but not the reasoning (specifically
the reasoning about why negative zero must == positive zero).
The text in 5.2.4.2.2 describes the characteristics of floating
point types to be able to talk about what ranges of values they
can represent, but it doesn't deal with how operations work. If
'z' is a positive zero and 'n' is a negative zero, then 'z == n'
is true because the mathematical values are equal, ie, 0 and -0
are no different when considered as real numbers. The
equality of positive zero and negative zero has to hold whether
or not floating point numbers are represented in the form of
5.2.4.2.2p2, which indeed they may not be. The model is used to
describe the range of values, but it doesn't define the values.
 
glen herrmannsfeldt

(snip on negative zero)
I agree with the conclusion, but not the reasoning (specifically
the reasoning about why negative zero must == positive zero).
The text in 5.2.4.2.2 describes the characteristics of floating
point types to be able to talk about what ranges of values they
can represent, but it doesn't deal with how operations work. If
'z' is a positive zero and 'n' is a negative zero, then 'z == n'
is true because the mathematical values are equal, ie, 0 and -0
are no different when considered as real numbers. The
equality of positive zero and negative zero has to hold whether
or not floating point numbers are represented in the form of
5.2.4.2.2p2, which indeed they may not be. The model is used to
describe the range of values, but it doesn't define the values.

The other reason is that users expect it.

As far as I know, there are two ways of dealing with negative
zero in hardware, both for floating point and non-twos-complement
fixed point:

1) Always compare them equal.
2) Never generate negative zero as the result of an
arithmetic operation.

In the latter case, if you generate one, for example using
bitwise operators, it may propagate and cause surprising results.

The former case will surprise users of bitwise operators, such
as ~x==x being true for x==0 on ones complement hardware.

-- glen
 
Tim Rentsch

glen herrmannsfeldt said:
(snip on negative zero)
I agree with the conclusion, but not the reasoning (specifically
the reasoning about why negative zero must == positive zero).
The text in 5.2.4.2.2 describes the characteristics of floating
point types to be able to talk about what ranges of values they
can represent, but it doesn't deal with how operations work. If
'z' is a positive zero and 'n' is a negative zero, then 'z == n'
is true because the mathematical values are equal, ie, 0 and -0
are no different when considered as real numbers. The
equality of positive zero and negative zero has to hold whether
or not floating point numbers are represented in the form of
5.2.4.2.2p2, which indeed they may not be. The model is used to
describe the range of values, but it doesn't define the values.

The other reason is that users expect it. [snip]

It is a reason, but it's a reason for an answer to
a different question.
 
Fred J. Tydeman

So my program has the following function at the moment to replace -0.0 with +0.0
in a function argument:

static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}

But when compiling with the Intel Compiler icc at optimization level -O2 or
higher the above still produces -0.0. This raises two questions:

2. does anyone know a work-around or even more suitable formulation to achieve
the same? Preferably not some integer operations using signbit since our main
system is a Power6 which has some issues with FPU depending on ALU results and
vice versa. I've done some searching but the Internet is flooded with people
trying to wrap their head around the fact that -0.0 exists.

Several things to try:
add 'volatile' to 'double v'
change 'return 0.0' to 'return fabs(v)' or 'v = fabs(v)'

My experience in testing many C compilers is that adding 'volatile' is the only
way to turn off such "optimizations".
---
Fred J. Tydeman Tydeman Consulting
(e-mail address removed) Testing, numerics, programming
+1 (775) 287-5904 Vice-chair of PL22.11 (ANSI "C")
Sample C99+FPCE tests: http://www.tybor.com
Savers sleep well, investors eat well, spenders work forever.
 
Tim Rentsch

Robert Wessel said:
I believe a somewhat similar situation arises on implementations that
support unnormalized* FP numbers. You can then have multiple bit
patterns for the same number (for example, you might have 1.2E+3 and
0.12E+4), but they must compare equal.

*Not to be confused with IEEE denormals.

Right, this case is exactly analogous.

The Standard calls such things 'subnormal floating-point values'
(or just 'subnormals'). As of 2008, IEEE 754-2008 also uses
the term 'subnormal' rather than 'denormal'.
 
Thomas Jahns

Another thought just occurred to me: How certain are you
that the problematic value is actually minus zero, and not
some small-magnitude non-zero negative number?

I already checked that in the debugger before posting here.

Thomas
 
Thomas Jahns

1. is the compiler free to convert above function into a NOP?

Summarizing the input from various posters, I guess no is the valid answer here,
but see below.
2. does anyone know a work-around or even more suitable formulation to achieve
the same? Preferably not some integer operations using signbit since our main
system is a Power6 which has some issues with FPU depending on ALU results and
vice versa. I've done some searching but the Internet is flooded with people
trying to wrap their head around the fact that -0.0 exists.

Turns out the Intel compiler actually does what I want, once I add

-fp-model precise

to the compiler flags. One could probably argue no end why -fp-model fast=1 is
the default and why this is only another installment of me running into one or
another of the problems this default causes, but I guess one purpose of this
thread is to put out another bit of information for others suffering from
similar problems.

Regards, Thomas
 
James Kuyper

summarizing the input from various posters I guess no is the valid answer here,
but see below.


Turns out the Intel compiler actually does what I want, once I add

-fp-model precise

to the compiler flags. One could probably argue no end why -fp-model fast=1 is
the default and why this is only another installment of me running into one or
another of the problems this default causes, but I guess one purpose of this
thread is to put out another bit of information for others suffering from
similar problems.

I'm curious - you never explained why you needed to replace negative
zeroes with positive zeroes. There are certainly cases where it makes a
difference, but for the most part they're contrived situations, often
involving code that inappropriately magnifies small differences in an
input number into large differences in the results. For instance, the
difference between a positive and a negative zero can change the value
returned by atan2() by almost 2*pi. However, in most cases, code that
uses such results should treat angles that differ by almost 2*pi as
being, in fact, very close to each other.
What are you doing with these negative zeros that makes it important to
convert them to positive zeros?
 
Thomas Jahns

I'm curious - you never explained why you needed to replace negative
zeroes with positive zeroes. There are certainly cases where it makes a
difference, but for the most part they're contrived situations, often
involving code that inappropriately magnifies small differences in an
input number into large differences in the results. For instance, the
difference between a positive and a negative zero can change the value
returned by atan2() by almost 2*pi. However, in most cases, code that
uses such results should treat angles that differ by almost 2*pi as
being, in fact, very close to each other.
What are you doing with these negative zeros that makes it important to
convert them to positive zeros?

I'm testing a floating point file storage format (GRIB) of which I know how many
bits etc. are preserved. I massage the double input to match this (i.e. cut off
extra precision) and write it to file. Afterwards I run a test if the contents
read in from the file still match the input. Since the file format library maps
negative zero to positive, I need to get rid of negative zero first, since the
comparison is byte-for-byte via checksum.

Thomas
 
Tim Rentsch

Thomas Jahns said:
I'm testing a floating point file storage format (GRIB) of which I
know how many bits etc. are preserved. I massage the double input
to match this (i.e. cut off extra precision) and write it to file.
Afterwards I run a test if the contents read in from the file still
match the input. Since the file format library maps negative zero
to positive, I need to get rid of negative zero first, since the
comparison is byte-for-byte via checksum.

This suggests to me that you might want to use memcmp() and
memcpy() rather than floating point operations.
 
Fred K

On 07/22/13 13:46, James Kuyper wrote:
I'm curious - you never explained why you needed to replace negative
zeroes with positive zeroes. There are certainly cases where it makes a
difference, but for the most part they're contrived situations, often
involving code that inappropriately magnifies small differences in an
input number into large differences in the results. For instance, the
difference between a positive and a negative zero can change the value
returned by atan2() by almost 2*pi. However, in most cases, code that
uses such results should treat angles that differ by almost 2*pi as
being, in fact, very close to each other.
What are you doing with these negative zeros that makes it important to
convert them to positive zeros?

Thomas Jahns wrote:
I'm testing a floating point file storage format (GRIB) of which I know
how many bits etc. are preserved. I massage the double input to match
this (i.e. cut off extra precision) and write it to file. Afterwards I
run a test if the contents read in from the file still match the input.
Since the file format library maps negative zero to positive, I need to
get rid of negative zero first, since the comparison is byte-for-byte
via checksum. Thomas

Why can't you use:
if ( readInValue==0 && internalValue==0 ) {
    compare = true;
} else {
    compare = myNormalComparer( readInValue, internalValue );
}
 
Phil Carmody

Thomas Jahns said:
So my program has the following function at the moment to replace -0.0 with +0.0
in a function argument:

static inline double
sign_flat(double v)
{
    if (v == 0.0)
        return 0.0;
    return v;
}

But when compiling with the Intel Compiler icc at optimization level -O2 or
higher the above still produces -0.0. This raises two questions:

1. is the compiler free to convert above function into a NOP?

When it comes to floating point, there's way too much flexibility, alas.
However, the above looks like it has quite unambiguous abstract machine
behaviour that should be mimicked at all optimisation levels. "0.0 compares
equal to -0.0" is not justification to conclude "0.0 is equivalent to -0.0",
IMHO. Put this function in a separate module, and compile it with optimisation
reduced.
2. does anyone know a work-around or even more suitable formulation to achieve
the same? Preferably not some integer operations using signbit since our main
system is a Power6 which has some issues with FPU depending on ALU results and
vice versa. I've done some searching but the Internet is flooded with people
trying to wrap their head around the fact that -0.0 exists.

Nothing more weird about + and - 0.0s than there is about + and - infinities.

I think you'll just have to write something that's harder for it to optimise.
Have you tried constructs like ``if(v == -v) return -v;''
1. is actually the less interesting question since even when the answer was no
I still want my code to work with the Intel compiler.

Go hunting in the optimisation flags for something that might disable this
optimisation.

Phil
 
Ken Brody

When it comes to floating point, there's way too much flexibility, alas.
However, the above looks like it has quite unambiguous abstract machine
behaviour that should be mimicked at all optimisation levels. "0.0 compares
equal to -0.0" is not justification to conclude "0.0 is equivalent to -0.0",
IMHO. Put this function in a separate module, and compile it with optimisation
reduced.
[...]

Well, he wants it to be inlined if possible, so making a separate module
precludes that ability.

Assuming that the others are correct that the original code can't be
optimized to a no-op, then it might be a bug in the optimizer not
special-casing 0.0 in:

if ( foo == bar )
    return bar;
else
    return foo;

Is there any reason not to make the code clearer in its intent, even if just
to human readers, and make the comparison to negative zero?

static inline double
sign_flat(double v)
{
    if (v == -0.0)
        return 0.0;
    return v;
}
 
