Money said:
How is it possible for sizeof(int)==1? I am not able to understand.
char must be at least 8 bits (CHAR_BIT >= 8).
int must be at least 16 bits (CHAR_BIT * sizeof(int) >= 16). [1]
An implementation with CHAR_BIT==16 and sizeof(int)==1 would satisfy
these requirements.
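Here's a short program you can compile to see what your own
implementation uses. On a typical desktop system it prints 8, 4, and
32; the hypothetical CHAR_BIT==16 implementation above would print
16, 1, and 16.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT       = %d\n", CHAR_BIT);
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("bits in an int = %zu\n", CHAR_BIT * sizeof(int));
    return 0;
}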
Note that sizeof yields the size of its argument in bytes. In C, a
"byte" is by definition the size of a char, so sizeof(char) == 1 by
definition, however many bits that happens to be. (It's common these
days to use the term "byte" to mean exactly 8 bits, but that's not how
C uses the term; a better word for exactly 8 bits is "octet".)
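To make the byte/octet distinction concrete: the first check below
can never fail on any conforming implementation, while uint8_t (an
exact 8-bit type, i.e. an octet) is optional and simply doesn't exist
where the hardware has no 8-bit type. (The _Static_assert syntax is
C11.)

#include <limits.h>
#include <stdint.h>

/* A char is one byte by definition, however many bits it has. */
_Static_assert(sizeof(char) == 1, "cannot fail");

#ifdef UINT8_MAX
/* uint8_t exists only on implementations that have an exact 8-bit
   type; on a CHAR_BIT==16 system this block is not compiled. */
typedef uint8_t octet;
#endif

int main(void) { return 0; }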
I really didn't understand that. Please can you explain in simpler words?
Here's an example. Suppose CHAR_BIT==8, and sizeof(int)==4 (32 bits),
but only the high-order 24 bits contribute to the value; the low-order
8 bits are ignored. These 8 bits are called "padding bits". Suppose
the byte order is little-endian. Then the value 0x654321, for example,
would be represented by the byte values (0x00, 0x21, 0x43, 0x65), shown
from lowest to highest addresses within the word.
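You can dump the bytes of an int on your own machine with something
like this. It can't reproduce the padding bits, which are purely
hypothetical here, but it does show the object representation; a
common 32-bit little-endian system with no padding prints "21 43 65 00",
while the hypothetical machine above would print "00 21 43 65".

#include <stdio.h>

int main(void)
{
    int x = 0x654321;
    const unsigned char *p = (const unsigned char *)&x;
    size_t i;

    /* Print each byte of the representation, lowest address first. */
    for (i = 0; i < sizeof x; i++)
        printf("%02x ", p[i]);
    putchar('\n');
    return 0;
}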
The proposed code sets an int to the value 1, which on our
hypothetical system would be represented as (0x00, 0x01, 0x00, 0x00).
It then looks at the first byte (at the lowest address) of the
representation. Seeing the value 0x00, it assumes, incorrectly, that
the byte holding the 1 was stored at the other end of the word, and
that the machine is therefore big-endian.
(I *think* I got this right.)
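For reference, the usual form of that kind of check is something like
the following (my reconstruction, not necessarily the exact code that
was posted):

#include <stdio.h>

int main(void)
{
    int x = 1;
    const unsigned char *p = (const unsigned char *)&x;

    /* Look only at the byte at the lowest address.  On the hypothetical
       machine above that byte is padding and holds 0x00, so this prints
       "big-endian" even though the byte order is little-endian. */
    if (p[0] == 1)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}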
[1] The standard doesn't actually say directly that int is at least 16
    bits. It says that the range of values it can represent is at
    least -32767 .. +32767. That, and the fact that a binary
    representation is required, imply that it's at least 16 bits.
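If you like, you can have the compiler state the same argument (C11
_Static_assert; every conforming implementation has to accept these):

#include <limits.h>

/* -32767..+32767 needs 15 value bits plus a sign bit, so a binary
   int can't be narrower than 16 bits. */
_Static_assert(INT_MAX >= 32767, "guaranteed minimum range");
_Static_assert(INT_MIN <= -32767, "guaranteed minimum range");
_Static_assert(CHAR_BIT * sizeof(int) >= 16, "follows from the range");

int main(void) { return 0; }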