Question on gcc -O3 optimization (bug or feature)


compiler_newbie

Hi,

I have the following code, compiled with gcc 4.2.2 (Linux) and the
-O3 option. I see that one of the branches is discarded, even though
it is not a dead branch. Is this a bug or a feature?

#include <stdlib.h>
#include <stdio.h>

void show_issue(int number)
{
    if( number < 0 ) {
        printf("%d is a negative number \n", number);
        number = - number;
        printf("After conversion: %d\n", number);
        if( number < 0 ) {
            /* with gcc 4.2.2 -O3, this branch is discarded */
            printf("Must be INT_MIN since negating has no effect\n");
        }
    }
    else {
        printf("%d is positive number\n", number);
    }
}


int main()
{
    int i;
    for(i = 30; i < 32; i++ ) {
        show_issue( 1 << i );
    }
    return 0;
}
 

Walter Roberson

compiler_newbie said:
I have the following code, compiled with gcc 4.2.2 (Linux) and the
-O3 option. I see that one of the branches is discarded, even though
it is not a dead branch. Is this a bug or a feature?

#include <stdlib.h>
#include <stdio.h>

void show_issue(int number)
{
    if( number < 0 ) {
        printf("%d is a negative number \n", number);
        number = - number;
        printf("After conversion: %d\n", number);
        if( number < 0 ) {
            /* with gcc 4.2.2 -O3, this branch is discarded */
            printf("Must be INT_MIN since negating has no effect\n");
        }
    }
    else {
        printf("%d is positive number\n", number);
    }
}

The result of integer overflow with the unary negation operator
is undefined in the C standard, so the compiler is free to do whatever
it feels like for that case.
 

Ben Pfaff

compiler_newbie said:
void show_issue(int number)
{
    if( number < 0 ) {
        printf("%d is a negative number \n", number);
        number = - number;
        printf("After conversion: %d\n", number);
        if( number < 0 ) {
            /* with gcc 4.2.2 -O3, this branch is discarded */

It is discarded because signed arithmetic overflow yields
undefined behavior.

If you want to get the behavior that you expect, you can use the
-fwrapv option:

`-fwrapv'
This option instructs the compiler to assume that signed arithmetic
overflow of addition, subtraction and multiplication wraps around
using twos-complement representation. This flag enables some
optimizations and disables others. This option is enabled by
default for the Java front-end, as required by the Java language
specification.
 

compiler_newbie

Ben Pfaff said:
It is discarded because signed arithmetic overflow yields
undefined behavior.


Thanks for your responses and the mention of the compiler flag that
supports the "classic" behaviour.
 

Serve Lau

compiler_newbie said:
Thanks for your responses and the mention of the compiler flag that
supports the "classic" behaviour.

It looks strange to me to first do an operation and only afterwards
check whether the operation should have been done.
Why not just check beforehand and code the expected behaviour yourself?
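
As a sketch of that check-first style applied to the original example's negation (the helper name safe_negate is mine, not anything from the thread or a standard library):

```c
#include <limits.h>
#include <stdbool.h>

/* Check before negating: on a two's-complement system, INT_MIN is the
 * only int whose negation overflows, so refuse it up front instead of
 * inspecting a possibly-overflowed result afterwards.
 * (safe_negate is a hypothetical helper name.) */
static bool safe_negate(int number, int *out)
{
    if (number == INT_MIN)
        return false;      /* -INT_MIN is not representable in int */
    *out = -number;
    return true;
}
```

The caller then branches on the return value, and the compiler has no undefined operation to optimize away.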
 

Dik T. Winter

> void show_issue(int number)
> {
>     if( number < 0 ) { ....
>         number = - number; ....
>         if( number < 0 ) {
>             /* with gcc 4.2.2 -O3, this branch is discarded */
>             printf("Must be INT_MIN since negating has no effect\n");

No. If number equals INT_MIN and -INT_MIN is not representable as an
int, the negation is undefined behaviour (it might signal overflow).
 

Keith Thompson

Serve Lau said:
It looks strange to me to first do an operation and only afterwards
check whether the operation should have been done.
Why not just check beforehand and code the expected behaviour yourself?

Really?

Consider this:

long long x = <some value>, y = <some other value>;
if (<multiplication would overflow>) {
    printf("%lld * %lld would overflow\n", x, y);
}
else {
    printf("%lld * %lld = %lld\n", x, y, x * y);
}

(I chose long long to preclude using a longer type to hold an
intermediate result.)

How would you complete the above code so its output is always correct?

I have no doubt that it's possible to do this, but it's going to take
at least several lines of code, and it's likely to be more expensive
than the multiplication itself. (And I'm too lazy to work it out
myself.) Imagine doing that for every arithmetic operation in your
program.

On the other hand, most CPUs provide a straightforward way to attempt
a multiplication and determine after the fact whether it overflowed.
Unfortunately, due to the variety of CPUs out there, it wasn't
practical for C to provide a standard way to do this.
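
For the curious, here is one way such a pre-check could look for long long, using only divisions so that no overflowing multiplication is ever attempted. The helper name and structure are my own sketch, not anything from the thread or the standard library, and it illustrates why the check costs several lines (and several divisions) per multiplication:

```c
#include <limits.h>
#include <stdbool.h>

/* Hypothetical pre-check: would x * y overflow long long?
 * Each sign combination divides a limit by one operand and compares,
 * so the multiplication itself is never performed here. */
static bool would_mul_overflow(long long x, long long y)
{
    if (x == 0 || y == 0)
        return false;
    if (x > 0 && y > 0)
        return x > LLONG_MAX / y;   /* positive * positive */
    if (x > 0)
        return y < LLONG_MIN / x;   /* positive * negative */
    if (y > 0)
        return x < LLONG_MIN / y;   /* negative * positive */
    return x < LLONG_MAX / y;       /* negative * negative */
}
```

With something like this in hand, the `<multiplication would overflow>` placeholder above could be filled in as a call to the helper.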
 

Serve Lau

Keith Thompson said:
Really?

Consider this:

long long x = <some value>, y = <some other value>;
if (<multiplication would overflow>) {
    printf("%lld * %lld would overflow\n", x, y);
}
else {
    printf("%lld * %lld = %lld\n", x, y, x * y);
}

You're right, there are situations where it wouldn't make much sense.
However, in the case of x = -x it's easy and cleaner to check before.

Or take:

x = x / y;

if (program_crashed())
{
    printf("y was zero\n");
    x = 0;
}

That wouldn't make much sense either, right?
 

Keith Thompson

Serve Lau said:
You're right, there are situations where it wouldn't make much sense.
However, in the case of x = -x it's easy and cleaner to check before.

Checking that with full portability is slightly trickier than what I'm
guessing you're thinking of. It overflows if and only if
(LLONG_MIN < -LLONG_MAX && x == LLONG_MIN)
and you have to adjust it depending on the type of x (with no help
from the compiler if you get it wrong).
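
Wrapped as a function, that condition might look like this (the helper name is hypothetical, not Keith's):

```c
#include <limits.h>
#include <stdbool.h>

/* Fully portable overflow test for negating a long long: negation can
 * only overflow when the type's range is asymmetric (as on
 * two's-complement systems) and x is the most negative value. */
static bool negation_would_overflow(long long x)
{
    return LLONG_MIN < -LLONG_MAX && x == LLONG_MIN;
}
```

On a sign-magnitude or ones'-complement system the first operand of && is false, so the helper correctly reports that no negation can overflow there.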
Or take:

x = x / y;

if (program_crashed())
{
    printf("y was zero\n");
    x = 0;
}

That wouldn't make much sense either, right?

It would make a great deal of sense if, rather than program_crashed(),
the language provided a way to detect any kind of arithmetic error.
For example, in Ada:

begin
   X := X / Y;
exception
   when Constraint_Error =>
      -- handle overflow
end;

And I think you're forgetting a possibility. Assuming, again,
that x and y are of type long long, the division can overflow on a
two's-complement system if x == LLONG_MIN and y == -1. That's just
the kind of thing that programmers can easily forget, and that
compilers can't forget.
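
Both failure modes can be caught with a single pre-check. A sketch, with safe_div being my name for the helper rather than any standard function:

```c
#include <limits.h>
#include <stdbool.h>

/* Guards the two ways long long division can go wrong: a zero
 * divisor, and LLONG_MIN / -1, whose result does not fit on a
 * two's-complement system. (safe_div is a hypothetical helper.) */
static bool safe_div(long long x, long long y, long long *out)
{
    if (y == 0)
        return false;                  /* division by zero */
    if (x == LLONG_MIN && y == -1)
        return false;                  /* -LLONG_MIN overflows */
    *out = x / y;
    return true;
}
```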

Of course floating-point division can easily overflow if the right
operand has a magnitude less than 1.0.

In practice, most programmers, most of the time, just don't bother
doing these checks. We instead use types that we *think* (or hope)
are big enough to hold whatever values we need. It's almost the
numeric equivalent of gets() with an array that we *think* is big
enough to hold whatever the user enters (except that we have a bit
more control over the input).

*If* C provided a way to detect numeric overflow after the fact
without invoking undefined behavior, it could be a lot easier to
write safer software in C -- and if the checks were built into the
language, the compiler could potentially eliminate a lot of them,
in cases where it can prove that a given expression can't overflow.
As programmers, we'd have to apply that reasoning to every expression
in a program, and most of us aren't compulsive enough to do this
and get it right every single time.
 
