pembed2012 said:
i don't understand any of this, what signal handler!!
The C standard says that when a value that cannot be represented is
converted to a signed integer type, either the result is
implementation-defined or an implementation-defined signal is raised.
On your system, plain char is signed, so it is a signed integer type,
and the value 0x91 (145) is out of range for a single-byte signed char,
so the above rule applies.
"Implementation-defined" has a precise meaning in the C standard: the
documentation for the implementation (often called, rather loosely,
"the compiler") must say what happens. Few implementations choose to
raise a signal when converting an out-of-range value to a signed
integer type, and yours is one that does not. Most C implementations
define the conversion as simply copying as many of the low-order bits
as are needed from the source to the target.
if it is overflow why the value should be bit pattern?
is it a rule or something else?
It's not, technically, an overflow. Overflows can occur as the result
of arithmetic, but an out-of-range conversion is not really an
overflow -- it's just a conversion.
Your program does several rather odd things. All of them mean that the
results say more about what the compiler and the machine are doing
rather than what the C language says about your program. Here is the
highly system-specific description of what is happening:
First, the int value 145 is converted to a char. On your system, char
objects can hold values from -128 to 127, so 145 is out of range. This
conversion is done by simply taking the bottom 8 bits of
00000000000000000000000010010001
(that 145 as a 32-bit int) and stuffing them into the 8 bits of the char
called 'a'.
The program then needs the value of 'a', and it needs it converted to
an int, because the arguments to variadic functions like printf have
what are called "the default argument promotions" (which include the
integer promotions) applied to them. The char 'a'
contains the bits:
10010001
and your system uses 2's complement representation for signed values.
That means that 10010001 represents the value -111 (to see exactly why,
look up "signed number representation" on, say, Wikipedia). The
conversion of -111 to an int is well-defined (-111 is always in range
for the type int): you just get -111! Of course, with 32-bit ints it
looks like this:
11111111111111111111111110010001
(again, you may need to consult the Web to see exactly why).
Now your program does something even odder. The printf format specifier
%x expects an unsigned int rather than a plain int. This is,
technically, undefined by the C language standard. If you give a value
of the wrong type to printf, anything could happen, but in fact, it is
likely that printf will just plough on and pretend that the
11111111111111111111111110010001
it sees is an unsigned int, and it will go ahead and print it in hex as
requested:
ffffff91
If anyone was in any doubt, this might help explain why C is not a
particularly good vehicle for teaching programming. Simple, short
programs can involve one in long irrelevant explanations, or require the
rather dismissive "you'll understand later after computer architecture
101".