Signed char representation in the C language

Discussion in 'C Programming' started by Shivanand Kadwadkar, Jan 1, 2011.

  1. ---------------------------------------------------------------
    #include<stdio.h>
    int main()
    {
    signed char i=128;

    printf("i =%d and signed char size =%d byte",i,sizeof(signed char));
    }
    -----------------------------------------------------------------------
    My understanding was that it should work the following way:

    Since char is 1 byte long, the char above is represented in binary as
    1000 0000.

    I thought that when I print i it would be 128, or perhaps -0/0.

    As the output of the above program I got i = -128 and signed char
    size = 1 byte.

    I don't understand how -128 is represented in 8 bits, or why the
    compiler treats it as -128 rather than 128.
    Shivanand Kadwadkar, Jan 1, 2011
    #1

  2. Ike Naar

    On 2011-01-01, Shivanand Kadwadkar <> wrote:
    > signed char i=128;
    > [snip]


    Check the range of values that can be stored in a signed char on
    your machine (SCHAR_MIN and SCHAR_MAX from <limits.h>).
    It's very likely that in your situation SCHAR_MIN=-128 and SCHAR_MAX=127,
    and the value 128 falls outside that range.
    If your machine uses 2s complement representation for numbers,
    then the 8-bit pattern 10000000 corresponds to the value -128.

    http://en.wikipedia.org/wiki/2s_complement
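
    The check suggested above can be written as a short program; a minimal
    sketch, printing the limits from <limits.h> (on typical machines these
    come out as -128 and 127, though the standard only requires at least
    -127..127):

    ```c
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The actual range of signed char on this implementation. */
        printf("SCHAR_MIN = %d\n", SCHAR_MIN);
        printf("SCHAR_MAX = %d\n", SCHAR_MAX);
        return 0;
    }
    ```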
    Ike Naar, Jan 1, 2011
    #2

  3. On Jan 1, 5:32 pm, Ike Naar <> wrote:
    > On 2011-01-01, Shivanand Kadwadkar <> wrote:
    > > signed char i=128;
    > > [snip]
    >
    > Check the range of values that can be stored in a signed char on
    > your machine (SCHAR_MIN and SCHAR_MAX from <limits.h>).
    > [snip]
    > If your machine uses 2s complement representation for numbers,
    > then the 8-bit pattern 10000000 corresponds to the value -128.


    Thanks for the comment.

    Now I understand how it works.

    Initially my understanding was that the leftmost bit was used only to
    represent the sign and was not considered part of the number.
    Shivanand Kadwadkar, Jan 1, 2011
    #3
  4. Thad Smith

    On 1/1/2011 2:17 AM, Shivanand Kadwadkar wrote:
    > signed char i=128;
    > [snip]
    > Since char is 1 byte long, the char above is represented in binary as
    > 1000 0000.


    Assuming that signed char is 8 bits, the initialization stores an
    implementation-defined value in i, since 128 cannot be represented in
    an 8-bit signed char. Reinterpreting the 8-bit pattern of 128 as a
    signed char in two's complement notation is common, resulting in a
    value of -128, assuming SCHAR_MIN = -128.

    When it is printed, the value in i is promoted to int, with the same
    value, before being passed to printf.
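
    The promotion described above can be seen in a short sketch; note also
    that sizeof yields a size_t, so %zu (rather than the %d in the original
    program) is the matching conversion specifier:

    ```c
    #include <stdio.h>

    int main(void)
    {
        signed char i = -128;

        /* i undergoes the default argument promotions when passed to a
           variadic function: it is promoted to int, keeping its value. */
        printf("i = %d\n", i);

        /* sizeof yields size_t; %zu is the matching specifier. */
        printf("signed char size = %zu byte\n", sizeof(signed char));
        return 0;
    }
    ```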

    --
    Thad
    Thad Smith, Jan 1, 2011
    #4
  5. Seebs

    On 2011-01-01, Shivanand Kadwadkar <> wrote:
    > According to me it should work like following way


    You are very confused.

    First off, it is not the C language that defines representations, it's
    the processor.

    > i thought When i print i it will be 128 or -0/0


    What do you think "-0" means?

    > as a output of above program i got i=-128 and singed char size= 1 byte


    > I dont understand how -128 is represented in 8 bits and why compiler
    > is detecting it as -128 why not 128


    What actually happened is your program is wrong -- you tried to store a
    value in a signed integer type that didn't fit, so you got whatever the
    compiler happened to feel like doing. It looks as though your system uses
    what's called "twos complement" arithmetic. The simplest way to understand
    this is that the topmost bit of an 8-bit integer has the value -128. So
    -1 is written as 0b11111111, because 0b01111111 would be 127, 0b10000000
    would be -128, and 127 + -128 = -1. When you supplied a value outside the
    range of the type (which can't represent 128), the compiler decided to just
    shove the bits in and hope for the best, leaving you with an object with
    the value -128. When you passed this to printf, it was automatically promoted
    to int, which had no effect on its value because -128 can be represented as
    an int, and then printed.
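
    The bit-weight argument above can be checked directly; a small sketch,
    assuming an 8-bit, two's-complement signed char (memcpy reinterprets
    the stored bits without performing any value conversion):

    ```c
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char pattern = 0x80;   /* 1000 0000: only the top bit set */
        signed char value;

        memcpy(&value, &pattern, 1);    /* reinterpret the bits, no conversion */
        printf("0x80 as signed char: %d\n", value);

        pattern = 0xFF;                 /* 1111 1111: 127 + (-128) */
        memcpy(&value, &pattern, 1);
        printf("0xFF as signed char: %d\n", value);
        return 0;
    }
    ```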

    -s
    --
    Copyright 2010, all wrongs reversed. Peter Seebach /
    http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
    http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
    I am not speaking for my employer, although they do rent some of my opinions.
    Seebs, Jan 1, 2011
    #5
  6. Tim Rentsch

    Seebs <> writes:

    > What actually happened is your program is wrong -- you tried to store a
    > value in a signed integer type that didn't fit, so you got whatever the
    > compiler happened to feel like doing.


    Hopefully he got whatever the required document specifying
    implementation-defined behavior says he will get. If he
    gets anything else the implementation is not conforming.
    Tim Rentsch, Jan 2, 2011
    #6
  7. Seebs

    On 2011-01-02, Tim Rentsch <> wrote:
    > Seebs <> writes:
    >> What actually happened is your program is wrong -- you tried to store a
    >> value in a signed integer type that didn't fit, so you got whatever the
    >> compiler happened to feel like doing.


    > Hopefully he got whatever the required document specifying
    > implementation-defined behavior says he will get. If he
    > gets anything else the implementation is not conforming.


    Hmm. My vague memory is that the implementation is allowed to define that
    the out of range to signed value conversion is undefined behavior. As long
    as they define it. :)

    -s
    Seebs, Jan 2, 2011
    #7
  8. Keith Thompson

    Seebs <> writes:
    >> Hopefully he got whatever the required document specifying
    >> implementation-defined behavior says he will get. If he
    >> gets anything else the implementation is not conforming.
    >
    > Hmm. My vague memory is that the implementation is allowed to define that
    > the out of range to signed value conversion is undefined behavior. As long
    > as they define it. :)


    There's no need to depend on vague memory when the standard is
    available.

    C99 6.3.1.3p3:

    Otherwise, the new type is signed and the value cannot be
    represented in it; either the result is implementation-defined
    or an implementation-defined signal is raised.

    It's possible that raising the "implementation-defined signal"
    could result in undefined behavior, but that would be a fairly
    nasty thing for an implementation to do.
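
    A minimal sketch of the conversion the quoted paragraph describes; the
    printed value is implementation-defined, but common two's-complement
    implementations such as gcc and clang wrap it to -128:

    ```c
    #include <stdio.h>

    int main(void)
    {
        int big = 128;
        /* C99 6.3.1.3p3: the result of this out-of-range conversion is
           implementation-defined (or an implementation-defined signal
           is raised). */
        signed char sc = (signed char)big;
        printf("(signed char)128 = %d\n", sc);
        return 0;
    }
    ```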

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    Nokia
    "We must do something. This is something. Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
    Keith Thompson, Jan 2, 2011
    #8
  9. Seebs

    On 2011-01-02, Keith Thompson <> wrote:
    > There's no need to depend on vague memory when the standard is
    > available.

    No need, but a good reason: recalling things without looking them up is
    a much better way of building memory than re-reading them. So, for
    instance, if you're studying for a test, quizzes do much, much more
    good than re-reading the text.

    > C99 6.3.1.3p3:
    >
    > Otherwise, the new type is signed and the value cannot be
    > represented in it; either the result is implementation-defined
    > or an implementation-defined signal is raised.
    >
    > It's possible that raising the "implementation-defined signal"
    > could result in undefined behavior, but that would be a fairly
    > nasty thing for an implementation to do.


    Okay, say it raises SIGTHERE_WAS_SIGNED_OVERFLOW. What can you do about
    this?
    void donothing(int sig) {
        /* do nothing */
    }

    int main(void) {
        int i = INT_MAX;
        int j;

        signal(SIGTHERE_WAS_SIGNED_OVERFLOW, donothing);
        j = i + 2;
        /* now what? */
        return 0;
    }

    Since an implementation-defined signal was raised, the implementation does
    not need to define the result. I have no information as to whether a value
    was stored in j, or if so, what that value was. I don't know whether it
    might be a trap representation.

    From the point of view of someone writing portable code, this definition comes
    out very close to "the behavior is undefined", because I cannot predict what
    value I'll get, or whether I'll even get a value. I could check for the
    overflow by adding a sig_atomic_t overflow_happened = 0, so I guess I could
    do:

    j = i + 2;
    if (overflow_happened)
        j = 0;

    and now I know that j is either 0 or some value, which is better, but...
    I guess in practice, for code that's otherwise-portable (and thus not
    trying to trap a signal which might not even exist on other platforms),
    it comes down to "and then your program might get aborted", which is pretty
    close in practice to "the behavior is undefined". You have to avoid it or
    risk stuff going horribly wrong.

    -s
    Seebs, Jan 2, 2011
    #9
  10. Tim Rentsch

    Seebs <> writes:

    > Okay, say it raises SIGTHERE_WAS_SIGNED_OVERFLOW. What can you do about
    > this? [snip elaboration]


    Can you name even one implementation that uses signalling
    on out-of-range conversion and that the OP has used with
    greater than 0.01% probability? If not then it would be
    better to give an answer along the lines of implementation-
    defined, perhaps with a clarifying footnote for the
    signalling case.

    Come to think of it, does anyone know of _any_ implementation
    that uses signalling on out-of-range conversion? I'm sure
    I don't.
    Tim Rentsch, Jan 4, 2011
    #10
  11. Ike Naar

    On 2011-01-04, Tim Rentsch <> wrote:
    > Come to think of it, does anyone know of _any_ implementation
    > that uses signalling on out-of-range conversion? I'm sure
    > I don't.


    There certainly have been in the past. E.g. on Burroughs large
    systems (later: Unisys A Series), integers (39 value bits, one sign
    bit) are implemented as a subset of (48-bit) floating-point values
    (integers have a zero exponent part). Integer overflow generates
    a floating-point result. The NTGR instruction normalizes a
    floating-point value as an integer and generates a fault interrupt
    if it exceeds the limits of integer representation (+/- 2^39-1).
    Ike Naar, Jan 4, 2011
    #11
  12. Hans Vlems

    On 4 jan, 11:55, Ike Naar <> wrote:
    > On 2011-01-04, Tim Rentsch <> wrote:
    > > Come to think of it, does anyone know of _any_ implementation
    > > that uses signalling on out-of-range conversion?
    >
    > There certainly have been in the past. E.g. on Burroughs large
    > systems (later: Unisys A Series), [snip]


    Not just in the past, Ike; the MCP is still alive. It runs mostly on
    emulators nowadays, though there are still some A Series machines in
    production.
    Hans
    Hans Vlems, Jan 4, 2011
    #12
  13. Tim Rentsch

    Ike Naar <> writes:

    > There certainly have been in the past. E.g. on Burroughs large
    > systems (later: Unisys A Series), [snip] The NTGR instruction
    > normalizes a floating-point value as an integer and generates a
    > fault interrupt if it exceeds the limits of integer representation
    > (+/- 2^39-1).


    That is interesting but not quite on-point. What's being asked
    about is not what some hardware does but what a C implementation
    does. Also, the particular case in question is the case of
    conversion from unsigned integer to signed integer. The source
    operand is already represented as an unsigned integer, with no
    fractional part; floating point is not involved, and there is no
    possibility of overflow. So the behavior of an NTGR instruction
    is unlikely to be pertinent in answering this question.
    Tim Rentsch, Jan 6, 2011
    #13
