Signed char representation in the C language


Shivanand Kadwadkar

---------------------------------------------------------------
#include <stdio.h>

int main()
{
    signed char i = 128;

    printf("i =%d and signed char size =%d byte", i, sizeof(signed char));
}
---------------------------------------------------------------
My understanding was that it should work the following way:

Since a char is 1 byte long, the value above is represented as 1000 0000
in binary.

I thought that when I print i it would be 128, or perhaps -0/0.

As the output of the above program I got i = -128 and signed char size = 1 byte.

I don't understand how -128 is represented in 8 bits, and why the compiler
treats it as -128 rather than 128.
 

Ike Naar

Shivanand Kadwadkar said:
I don't understand how -128 is represented in 8 bits, and why the compiler
treats it as -128 rather than 128.

Check the range of values that can be stored in a signed char on
your machine (SCHAR_MIN and SCHAR_MAX from <limits.h>).
It's very likely that in your situation SCHAR_MIN = -128 and SCHAR_MAX = 127,
and the value 128 falls outside that range.
If your machine uses two's complement representation for numbers,
then the 8-bit pattern 10000000 corresponds to the value -128.

http://en.wikipedia.org/wiki/2s_complement
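
For example, a quick way to check the range on your own machine (a minimal
sketch; the values in the comment are just the typical ones):

---------------------------------------------------------------
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Print the range of signed char on this implementation.
       On most machines this prints -128 and 127. */
    printf("SCHAR_MIN = %d, SCHAR_MAX = %d\n", SCHAR_MIN, SCHAR_MAX);
    return 0;
}
---------------------------------------------------------------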
 

Shivanand Kadwadkar

Ike Naar said:
If your machine uses two's complement representation for numbers,
then the 8-bit pattern 10000000 corresponds to the value -128.

Thanks for the comment.

Now I understand how it works.

Initially my understanding was that the leftmost bit was only used to
represent the sign, and was not considered part of the number.
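
To see the difference, here is a small sketch (assuming an 8-bit char and
two's complement) that treats bit 7 as carrying the weight -128 rather than
acting as a bare sign flag:

---------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    unsigned char bits = 0x80;  /* the pattern 1000 0000 */
    int weight[8] = { 1, 2, 4, 8, 16, 32, 64, -128 };  /* bit 7 weighs -128 */
    int value = 0;

    for (int k = 0; k < 8; k++)
        if (bits & (1u << k))
            value += weight[k];

    printf("pattern 0x%02X -> value %d\n", bits, value);  /* prints -128 */
    return 0;
}
---------------------------------------------------------------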
 

Thad Smith

Shivanand Kadwadkar said:
signed char i = 128;
[snip]
Since a char is 1 byte long, the value above is represented as 1000 0000
in binary.

Assuming that signed char is 8 bits, the initialization stores an
implementation-defined value in i, since 128 cannot be represented in an
8-bit signed char. Reinterpreting the 8-bit pattern of 128 as a signed char
in two's complement notation is common, resulting in a value of -128,
assuming SCHAR_MIN = -128.

When it is printed, the value in i is promoted to int, with the same value,
before being passed to printf.
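
A sketch of both steps (assuming two's complement, so the out-of-range
initialization commonly yields -128; note also that sizeof yields a size_t,
so %zu is the right conversion specifier for it):

---------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    signed char i = 128;  /* out of range: implementation-defined result,
                             commonly -128 on two's complement machines */
    int promoted = i;     /* the same value-preserving promotion to int
                             happens automatically in the printf call */

    printf("i = %d\n", promoted);
    printf("sizeof(signed char) = %zu byte\n", sizeof(signed char));
    return 0;
}
---------------------------------------------------------------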
 

Seebs

Shivanand Kadwadkar said:
My understanding was that it should work the following way:

You are very confused.

First off, it is not the C language that defines representations, it's
the processor.
I thought that when I print i it would be 128, or perhaps -0/0.

What do you think "-0" means?
As the output of the above program I got i = -128 and signed char size = 1 byte.
I don't understand how -128 is represented in 8 bits, and why the compiler
treats it as -128 rather than 128.

What actually happened is that your program is wrong -- you tried to store a
value in a signed integer type that didn't fit, so you got whatever the
compiler happened to feel like doing. It looks as though your system uses
what's called "two's complement" arithmetic. The simplest way to understand
this is that the topmost bit of an 8-bit integer has the value -128. So
-1 is written as 0b11111111, because 0b01111111 would be 127, 0b10000000
would be -128, and 127 + -128 = -1. When you supplied a value outside the
range of the type (which can't represent 128), the compiler decided to just
shove the bits in and hope for the best, leaving you with an object with
the value -128. When you passed this to printf, it was automatically promoted
to int, which had no effect on its value because -128 can be represented as
an int, and then printed.
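
A sketch of that arithmetic (assuming an 8-bit char and two's complement):

---------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    /* On a two's complement machine the top bit of an 8-bit
       integer carries the weight -128. */
    signed char a = 0x7F;   /* 0111 1111 =  127 */
    signed char b = -128;   /* 1000 0000 = -128 */

    printf("127 + -128 = %d\n", a + b);                 /* prints -1 */
    printf("-1 as bits: 0x%02X\n", (unsigned char)-1);  /* prints 0xFF */
    return 0;
}
---------------------------------------------------------------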

-s
 

Tim Rentsch

Seebs said:
What actually happened is that your program is wrong -- you tried to store a
value in a signed integer type that didn't fit, so you got whatever the
compiler happened to feel like doing. [snip]

Hopefully he got whatever the required document specifying
implementation-defined behavior says he will get. If he
gets anything else the implementation is not conforming.
 

Seebs

Tim Rentsch said:
Hopefully he got whatever the required document specifying
implementation-defined behavior says he will get. If he
gets anything else the implementation is not conforming.

Hmm. My vague memory is that the implementation is allowed to define the
out-of-range conversion to a signed type as undefined behavior. As long
as they define it. :)

-s
 

Keith Thompson

Seebs said:
Hmm. My vague memory is that the implementation is allowed to define the
out-of-range conversion to a signed type as undefined behavior. As long
as they define it. :)

There's no need to depend on vague memory when the standard is
available.

C99 6.3.1.3p3:

    Otherwise, the new type is signed and the value cannot be
    represented in it; either the result is implementation-defined
    or an implementation-defined signal is raised.

It's possible that raising the "implementation-defined signal"
could result in undefined behavior, but that would be a fairly
nasty thing for an implementation to do.
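
On most implementations the implementation-defined result is reduction
modulo 2^8, i.e. the value simply wraps around. A sketch (every result
marked "commonly" is implementation-defined and may differ on your machine):

---------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    /* Each initialization converts an out-of-range int to signed char,
       which is implementation-defined per C99 6.3.1.3p3. */
    signed char a = 128;  /* commonly -128 (128 - 256) */
    signed char b = 200;  /* commonly  -56 (200 - 256) */
    signed char c = 300;  /* commonly   44 (300 - 256) */

    printf("%d %d %d\n", a, b, c);
    return 0;
}
---------------------------------------------------------------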
 

Seebs

Keith Thompson said:
There's no need to depend on vague memory when the standard is
available.

No need, but a good reason: recalling things without looking them up is
a much better way of building memory than re-reading them. So for instance,
if you're studying for a test, quizzes do much, much more good than
re-reading the text.
Keith Thompson said:
It's possible that raising the "implementation-defined signal"
could result in undefined behavior, but that would be a fairly
nasty thing for an implementation to do.

Okay, say it raises SIGTHERE_WAS_SIGNED_OVERFLOW. What can you do about
this?

#include <signal.h>
#include <limits.h>

/* SIGTHERE_WAS_SIGNED_OVERFLOW is hypothetical; no real signal
   with this name exists. */
void donothing(int sig)
{
    (void)sig;  /* do nothing */
}

int main(void)
{
    int i = INT_MAX;
    int j;

    signal(SIGTHERE_WAS_SIGNED_OVERFLOW, donothing);
    j = i + 2;
    /* now what? */
    return 0;
}

Since an implementation-defined signal was raised, the implementation does
not need to define the result. I have no information as to whether a value
was stored in j, or if so, what that value was. I don't know whether it
might be a trap representation.

From the point of view of someone writing portable code, this definition comes
out very close to "the behavior is undefined", because I cannot predict what
value I'll get, or whether I'll even get a value. I could check for the
overflow by adding a sig_atomic_t overflow_happened = 0 (set to 1 in the
signal handler), so I guess I could do:

    j = i + 2;
    if (overflow_happened)
        j = 0;

and now I know that j is either 0 or some value, which is better, but...
I guess in practice, for code that's otherwise portable (and thus not
trying to trap a signal which might not even exist on other platforms),
it comes down to "and then your program might get aborted", which is pretty
close in practice to "the behavior is undefined". You have to avoid it or
risk stuff going horribly wrong.
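
The usual portable alternative is to test for overflow before doing the
addition, so the implementation-defined (or signalling) case never arises.
A minimal sketch (checked_add is just an illustrative helper name):

---------------------------------------------------------------
#include <stdio.h>
#include <limits.h>

/* Add two ints, refusing to perform an addition that would
   overflow; returns 0 on success, -1 on would-be overflow. */
int checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return -1;  /* would overflow: do not add */
    *result = a + b;
    return 0;
}

int main(void)
{
    int j;

    if (checked_add(INT_MAX, 2, &j) != 0)
        printf("overflow avoided\n");
    return 0;
}
---------------------------------------------------------------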

-s
 

Tim Rentsch

Seebs said:
Okay, say it raises SIGTHERE_WAS_SIGNED_OVERFLOW. What can you do about
this? [snip elaboration]

Can you name even one implementation that uses signalling
on out-of-range conversion and that the OP has used with
greater than 0.01% probability? If not then it would be
better to give an answer along the lines of
"implementation-defined", perhaps with a clarifying footnote
for the signalling case.

Come to think of it, does anyone know of _any_ implementation
that uses signalling on out-of-range conversion? I'm sure
I don't.
 

Ike Naar

Tim Rentsch said:
Come to think of it, does anyone know of _any_ implementation
that uses signalling on out-of-range conversion? I'm sure
I don't.

There certainly have been in the past. E.g. on Burroughs large
systems (later: Unisys A Series), integers (39 value bits, one sign
bit) are implemented as a subset of (48-bit) floating-point values
(integers have a zero exponent part). Integer overflow generates
a floating-point result. The NTGR instruction normalizes a
floating-point value as an integer and generates a fault interrupt
if it exceeds the limits of integer representation (+/- 2^39 - 1).
 

Hans Vlems

Ike Naar said:
There certainly have been in the past. E.g. on Burroughs large
systems (later: Unisys A Series) [snip]

Not in the past, Ike; the MCP is still alive. It runs mostly on
emulators nowadays, though there are still some A Series machines
in production.

Hans
 

Tim Rentsch

Ike Naar said:
There certainly have been in the past. E.g. on Burroughs large
systems (later: Unisys A Series) [snip] The NTGR instruction
normalizes a floating-point value as an integer and generates a
fault interrupt if it exceeds the limits of integer representation
(+/- 2^39 - 1).

That is interesting but not quite on-point. What's being asked
about is not what some hardware does but what a C implementation
does. Also, the particular case in question is conversion from an
unsigned integer to a signed integer. The source operand is already
represented as an unsigned integer, with no fractional part;
floating point is not involved, and there is no possibility of
overflow. So the behavior of an NTGR instruction is unlikely to be
pertinent in answering this question.
 
