signed and unsigned types


Bilgehan.Balban

Hi,

I have a basic question on signed and unsigned integers. Consider the
following code:


#define SOME_ADDR 0x10000000
// Some context
{
unsigned int *x = (unsigned int *)(SOME_ADDR);
*x = ( 1 << 10 );
}

Here, how would ( 1 << 10 ) be interpreted in terms of sign? My
compiler does not give any warnings when (1 << 10) is assigned to an
unsigned int; however, it does say "result of operation out of
range" for an assignment like (1 << 31). My interpretation was that
in (1 << 31) the "1" is signed by default, and shifting it 31 bits
overflows the type because bit 31 is the sign bit, which is the cause
of the warning. But why does it not warn for the former case? Is the
sign determined by the lvalue?

Finally, a bit off-topic, but does a cast between signed and unsigned
values generate (perhaps a handful of) instructions to convert
between two's complement signed and unsigned representations?

Thanks,
Bahadir
 

Eric Sosman

> Hi,
>
> I have a basic question on signed and unsigned integers. Consider the
> following code:
>
> #define SOME_ADDR 0x10000000
> // Some context
> {
> unsigned int *x = (unsigned int *)(SOME_ADDR);
> *x = ( 1 << 10 );
> }
>
> Here, how would ( 1 << 10 ) be interpreted in terms of sign?

Exactly as it would in any other context: it is the
positive value 1024, with type `int' (aka `signed int').
The business with `x' (including the dubious initialization)
is irrelevant to the evaluation of `1 << 10'.
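
For instance, a minimal stand-alone sketch (1024 fits in any
conforming int, so nothing here depends on the width of int):

#include <stdio.h>

int main(void)
{
    int value = 1 << 10;          /* the shift has type int and value 1024 */
    unsigned int u = 1 << 10;     /* the int result is converted to unsigned int
                                     by the assignment; the value is unchanged */
    printf("%d %u\n", value, u);  /* prints "1024 1024" */
    return 0;
}
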
> My compiler does not give any warnings when (1 << 10) is assigned to
> an unsigned int; however, it does say "result of operation out of
> range" for an assignment like (1 << 31). My interpretation was that
> in (1 << 31) the "1" is signed by default, and shifting it 31 bits
> overflows the type because bit 31 is the sign bit, which is the cause
> of the warning. But why does it not warn for the former case? Is the
> sign determined by the lvalue?

First, the compiler is being helpful in emitting the
warning; it is not required to do so. Left-shifts that
attempt to promote a one-bit into the sign position
yield what is known as "undefined behavior," meaning that
the C Standard washes its hands of your program and refuses
to say anything more about what might happen. The compiler
has noticed that `1 << 31' strays into this dangerous
territory, and warns you that you may find dragons there.

Second, there's nothing at all wrong with `1 << 10':
it yields 1024, always, and is perfectly well-defined.
There's no reason for the compiler to grouse about it.
Of course, a compiler is permitted to issue any warnings
it wants -- it can complain about the way you indent or
about the spelling in your comments -- but the compiler is
not required to issue diagnostics for valid code, and the
writers presumably felt that doing so would be unwelcome.
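
If the goal is to set the top bit, shifting an unsigned one avoids
the problem entirely; a minimal sketch, assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned int top = 1u << 31;  /* well-defined when unsigned int is 32 bits:
                                     yields 2147483648 (0x80000000) */
    /* writing 1 << 31 instead would be undefined behavior on a 32-bit int,
       because the shifted bit lands in the sign position */
    printf("%u\n", top);          /* prints 2147483648 */
    return 0;
}
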
> Finally, a bit off-topic, but does a cast between signed and unsigned
> values generate (perhaps a handful of) instructions to convert
> between two's complement signed and unsigned representations?

It might, it might not. Everything depends on the
characteristics of the underlying hardware: the compiler
must emit instructions to produce the defined effect, but
what those instructions are (if there are any) differs
from one system to another.
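
For instance, a sketch of one common case, assuming a two's
complement machine where int and unsigned int have the same width:

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = (unsigned int)s;  /* the value becomes UINT_MAX, i.e.
                                          -1 + (UINT_MAX + 1); on two's complement
                                          hardware the bit pattern is already right,
                                          so the cast typically costs no instructions */
    printf("%u\n", u);                 /* prints the value of UINT_MAX */
    return 0;
}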
 

Alex Fraser

[snip]
> Finally, a bit off-topic, but does a cast between signed and unsigned
> values generate (perhaps a handful of) instructions to convert
> between two's complement signed and unsigned representations?

N869 (the last public draft of the C99 standard) says this:

6.3.1.3 Signed and unsigned integers

[#1] When a value with integer type is converted to another
integer type other than _Bool, if the value can be
represented by the new type, it is unchanged.

[#2] Otherwise, if the new type is unsigned, the value is
converted by repeatedly adding or subtracting one more than
the maximum value that can be represented in the new type
until the value is in the range of the new type.

[#3] Otherwise, the new type is signed and the value cannot
be represented in it; the result is implementation-defined.

Knowing this, the sizes of types used by a compiler, and the instruction set
of the target processor, you should have some idea of what code is generated
for conversions covered by the first two paragraphs - typically (depending
on the types) either none at all, zero extension, or sign extension.
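
For instance, a minimal sketch assuming the common case of 8-bit
chars and wider ints, where both conversions fall under paragraph #1:

#include <stdio.h>

int main(void)
{
    unsigned char uc = 200;
    signed char   sc = -100;
    unsigned int  u  = uc;   /* value fits, so it is unchanged (paragraph #1);
                                in machine terms, typically a zero extension */
    int           i  = sc;   /* value fits, so it is unchanged (paragraph #1);
                                in machine terms, typically a sign extension */
    printf("%u %d\n", u, i); /* prints "200 -100" */
    return 0;
}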

For obvious reasons, you would do well to avoid relying on the result of
conversions covered by the third paragraph, but if two's complement
representation is used for signed integers the result is typically like
converting to the corresponding unsigned type, then reinterpreting the bits
as if they represented a signed value.
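
A concrete sketch of that last case, assuming a 32-bit two's
complement int (the standard only promises an implementation-defined
result):

#include <stdio.h>

int main(void)
{
    unsigned int u = 0xFFFFFFFFu;  /* UINT_MAX when unsigned int is 32 bits */
    int s = (int)u;                /* out of range for a 32-bit int, so the result
                                      is implementation-defined; with two's complement
                                      the bits are usually reinterpreted, giving -1 */
    printf("%d\n", s);             /* typically prints -1 */
    return 0;
}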

Alex
 
