More bit shifting issues

Boltar

I seem to be having yet more weird issues with bit shifting. It seems
the following code doesn't do anything under gcc (i.e. it returns -1 as
both results). Anyone know why? Is it another language definition or
CPU issue?

main()
{
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);
}

B2003
 
Richard

Boltar said:
I seem to be having yet more weird issues with bit shifting. It seems
the following code doesn't do anything under gcc (i.e. it returns -1 as
both results). Anyone know why? Is it another language definition or
CPU issue?

main()
{
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);
}

B2003

Try using unsigned values and report back ...
 
R

Richard Bos

Boltar said:
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);

Right-shifting a negative value results in an implementation-defined
value. As a first guess, your implementation does an arithmetic right
shift where you expected a logical one. Both results are allowed.

Richard
 
Chris Dollin

Boltar said:
I seem to be having yet more weird issues with bit shifting. It seems
the following code doesn't do anything under gcc (i.e. it returns -1 as
both results). Anyone know why? Is it another language definition or
CPU issue?

main()
{
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);
}

Shift-right of negative values is implementation-defined; you're likely
to get one of arithmetic right shift or logical right shift. (Some machines
have/had only one of these and would have to synthesise the other, which
would be Woefully Inefficient.)

It's wise to restrict one's bitwise operations to unsigned types, since
these can't have tricksy sign bits to cut you with their nasty sharp
edges.

--
"The whole apparatus had the look of having been put /Jack of Eagles/
together with the most frantic haste a fanatically
careful technician could muster."

Hewlett-Packard Limited registered no: 690597 England
registered office: Cain Road, Bracknell, Berks RG12 1HN
 
Martin Ambuhl

Boltar said:
I seem to be having yet more weird issues with bit shifting. It seems
the following code doesn't do anything under gcc (i.e. it returns -1 as
both results).

It does something, namely preserve the sign of your negative
values. It didn't have to. It could have shifted in zeros instead of
replicating the sign bit.
Anyone know why? Is it another language definition or
CPU issue?

It is a mistake on your part. See the code below.
main()
{
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);
}


#include <stdio.h>   /* mha: needed for the variadic
                        function printf */

int /* mha: main returns an int, say so */ main(void)
{
    printf("right shifting a negative value has no portably\n"
           "defined meaning, so the output of the next two\n"
           "printf statements is up for grabs:\n");
    printf(" ((int)0xFFFFFFFF >> 1) = %d\n", (int) 0xFFFFFFFF >> 1);
    printf(" ((int)-1 >> 1) = %d\n\n", (int) -1 >> 1);
    printf("Note the difference in the following:\n");
    printf(" (0xFFFFFFFF >> 1) = %u\n", 0xFFFFFFFF >> 1);
    printf(" (int)(0xFFFFFFFF >> 1) = %d\n", (int) (0xFFFFFFFF >> 1));
    return 0;   /* mha: even if you can get away
                   without it, it is poor programming
                   practice not to return a value from
                   a function returning a value */
}



right shifting a negative value has no portably
defined meaning, so the output of the next two
printf statements is up for grabs:
((int)0xFFFFFFFF >> 1) = -1
((int)-1 >> 1) = -1

Note the difference in the following:
(0xFFFFFFFF >> 1) = 2147483647
(int)(0xFFFFFFFF >> 1) = 2147483647
 
Boltar

Right-shifting a negative value results in an implementation-defined
value. As a first guess, your implementation does an arithmetic right
shift where you expected a logical one. Both results are allowed.

I always assumed bit shifting was just a shift of the register bits
left or right and whether the value was being treated as signed or
unsigned was completely irrelevant. Obviously not. :-(

B2003
 
Boltar

values. It didn't have to. It could have shifted in zeros instead of
replicating the sign bit.

That's what I would have expected. It's a shift, not a mathematical
operation; the sign should be irrelevant and the value should just be
treated as 32 separate bits to be moved as appropriate. Why would
anyone design this operation to preserve the sign? I don't understand.

B2003
 
Bartc

Boltar said:
That's what I would have expected. It's a shift, not a mathematical
operation; the sign should be irrelevant and the value should just be
treated as 32 separate bits to be moved as appropriate. Why would
anyone design this operation to preserve the sign? I don't understand.

Right-shifting used to be an efficient way of dividing integer values by
powers of two.

The sign propagation is necessary to make it work with twos-complement
negative numbers.

A processor often has a choice of right-shift operation (signed/unsigned)
and you might be able to choose the one by shifting an unsigned value
instead.
 
Boltar

Right-shifting used to be an efficient way of dividing integer values by
powers of two.

But not any longer, any modern CPU has a div or mul type instruction.
The sign propagation is necessary to make it work with twos-complement
negative numbers.

So why don't the same rules apply to the &, | and ^ operators then?
For example, why doesn't (int)-1 ^ 0xFFFFFFFF still give -1 instead of
zero?

B2003
 
Richard

Boltar said:
But not any longer, any modern CPU has a div or mul type instruction.


So why don't the same rules apply to the &, | and ^ operators then?
For example, why doesn't (int)-1 ^ 0xFFFFFFFF still give -1 instead of
zero?

B2003

Because it's a bitwise operator and not a word operator. There is no
notion of "sign" with single bits being OR'd, AND'ed or XOR'ed.
 
lawrence.jones

Boltar said:
I always assumed bit shifting was just a shift of the register bits
left or right and whether the value was being treated as signed or
unsigned was completely irrelevant. Obviously not. :-(

There are two different forms of right shift that most processors
support: in logical shift, which is what you expected, the new bits
shifted in are always zero; in arithmetic shift, which is what you got,
the new bits are copies of the sign bit. Both have their uses, which is
why most processors provide both. Which one the C implementation
chooses to use for >> is up to the implementor.

-Larry Jones

If I get a bad grade, it'll be YOUR fault for not doing the work for me!
-- Calvin
 
Keith Thompson

Boltar said:
But not any longer, any modern CPU has a div or mul type instruction.

Yes, but the div or mul instruction might still be slower than an
equivalent shift. More to the point, modern compilers are capable of
generating a shift instruction for multiplication or division by a
power of 2, *if* it makes the code faster and *if* it can prove that
the result is equivalent.
 
Philip Potter

Boltar said:
But not any longer, any modern CPU has a div or mul type instruction.

Not in the embedded world. The processor I work with most frequently has
only optional support for mul and div.
 
Andrey Tarasevich

Boltar said:
...
That's what I would have expected. It's a shift, not a mathematical
operation; the sign should be irrelevant and the value should just be
treated as 32 separate bits to be moved as appropriate. Why would
anyone design this operation to preserve the sign? I don't understand.
...

In C89/90 it is recommended (and in C99 it is required) that the
integral division '/' rounds its result towards 0. In many cases it
is useful to have a modulo-correct version of the integral division
operator, i.e. an operator that always rounds towards negative infinity.
This is what a sign-preserving shift does for power-of-2 division on 2's
complement platforms.

(Doesn't look like much of a rationale, but for what it's worth...)

Of course, since the behavior is implementation-defined, one cannot rely
on this behavior as being portable.
 
CBFalconer

Boltar said:
I seem to be having yet more weird issues with bit shifting. It
seems the following code doesn't do anything under gcc (i.e. it
returns -1 as both results). Anyone know why? Is it another
language definition or CPU issue?

main() {
printf("%d\n",(int)0xFFFFFFFF >> 1);
printf("%d\n",(int)-1 >> 1);
}

I suspect this covers it:

6.5 Expressions

.... snip ...

[#4] Some operators (the unary operator ~, and the binary
operators <<, >>, &, ^, and |, collectively described as
bitwise operators) are required to have operands that have
integer type. These operators return values that depend on
the internal representations of integers, and have
implementation-defined and undefined aspects for signed
types.
 
robertwessel2

But not any longer, any modern CPU has a div or mul type instruction.


Not at all. IPF, and many RISCs, for example, do not have an integer
divide (or often an FP one either). They often support a divide
approximation instruction, which you then need to follow with a few
iterations of Newton-Raphson to actually generate a quotient.

In any event, left shifts are typically a fair bit faster than
multiplies (often several times), although there are CPUs with
exceptionally fast multiplies and exceptionally slow shifters where
that's not true, and right shifts are invariably *much* faster than
actual divisions (often 20-50 times).
 
Boltar

In any event, left shifts are typically a fair bit faster than
multiplies (often several times), although there are CPUs with
exceptionally fast multiplies and exceptionally slow shifters where
that's not true, and right shifts are invariably *much* faster than
actual divisions (often 20-50 times).

Well that raises the question of why the CPU integer div instruction
doesn't simply check the divisor's least significant bit to see if it's
set to zero (even) and, if it is, use bit shifting to get the
result. Same for multiply.

B2003
 
Bartc

Boltar said:
Well that raises the question of why the CPU integer div instruction
doesn't simply check the divisor's least significant bit to see if it's
set to zero (even) and, if it is, use bit shifting to get the
result. Same for multiply.

Probably because it's a *lot* easier to have separate instructions for
shift, which are essential anyway. And it would have to check that there
is only one bit set.

Anyway, most CPUs must have a shift-right-logical which does exactly
what you want. It's up to your C implementation whether that is invoked
by right-shifting an unsigned value.
 
robertwessel2

Bit shifting can't replace division unless the divisor is a power
of two, not just even.

"Simply check" is likely to take extra time.  Also, it may be a
low-percentage optimization that just doesn't get used that much,
in the opinion of the CPU designer, probably backed up by a bunch
of statistics on code in real programs.


Even worse, it will screw up the pipeline on modern/large CPUs. At
least for multipliers, these tend to be very fixed - IOW, it's five
cycles through the multiplier, no matter the operands. Making that
variable length requires a whole bunch of other stuff to be added, and
may make it much more difficult to pipeline several sequential
multiplications (IOW, the multiplier may take five cycles, but it can
start a new multiplication every cycle).

Now older/smaller CPUs often implement much smaller multipliers
(often the hardware equivalent of shift-and-add), and on those, early
out is often implemented (IOW, when you run out of one bits in the
multiplier, you can stop), and the shift cycle is sometimes faster for
zero bits in the multiplier. But those take many cycles in any
event, and making them take a variable number is often not a big deal.

General purpose dividers are a huge pain, and while some variability
is not uncommon in full hardware divider implementations, for the many
CPUs where there is no hardware divide (just approximate plus Newton-
Raphson), I doubt there's a useful way you could implement such a
thing without requiring a conditional branch in the divide routine
(which would be horrible). Again, iterative dividers on small
implementations often do have some sorts of optimizations, and some do
fast shift past low zero bit in the divisor (basically the division
equivalent of early-out).

But in short, at least for multipliers on large implementations, this
would likely slow general multiplications down far more than you'd
gain from a few special cases going faster. And dividers are going to
be an even bigger mess.

Probably a better optimization is to provide short multiplier/divisor
operations. Say up to eight bits. Those would cover a lot of actual
operations, and a 32x8 multiplier could be made to run rather faster
than a full 32x32 one.
 
Keith Thompson

Boltar said:
Well that raises the question of why the CPU integer div instruction
doesn't simply check the divisor's least significant bit to see if it's
set to zero (even) and, if it is, use bit shifting to get the
result. Same for multiply.

(Others have already mentioned that you need a power of 2, not just a
multiple of 2, but that doesn't invalidate the idea.)

A plausible explanation is that chip designers assume that most
multiplications and divisions by constant powers of 2 will be
optimized to shifts by the compiler. Thus such a hardware check will
benefit only cases where a run-time value just happens to be a power
of 2, and will likely slow down *all* multiplications and divisions.

On the other hand, I'm no hardware expert. Perhaps such a test can be
done in parallel without slowing anything down. If so, perhaps it's
already being done (it would be invisible to user code, or nearly so).
 
