Pawel Dziepak said:
> It looks like int size on your architecture is 4 bytes.
To be precise, it looks like int is 32 bits. A byte in C must be at
least 8 bits, but it can be more; theoretically, int could be a single
32-bit byte. In practice, a byte is almost certainly going to be
exactly 8 bits on any system you run into, unless you work with DSPs
(digital signal processors) or perhaps some other exotic embedded
system.
(I'm ignoring padding bits.)
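If you're curious, you can check these properties on your own system;
here's a minimal sketch (assuming a hosted implementation and C99's
"%zu" specifier):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a byte */
        printf("bits per byte: %d\n", CHAR_BIT);
        /* sizeof yields a size in bytes; the product counts any padding bits too */
        printf("sizeof(int) = %zu, so int occupies %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }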
> In printf format strings "%d" stands for signed decimals ("%ud" is for unsigned).
No, "%u" is for unsigned int; "%ud" is valid, but it prints an
unsigned int value (in decimal) followed by a letter 'd'.
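For example (assuming the usual #include <stdio.h> and a surrounding main):

    unsigned int u = 42;
    printf("%u\n", u);    /* prints "42"                                    */
    printf("%ud\n", u);   /* prints "42d" -- the 'd' is a literal character */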
> 0xFFFFFFFF is, in a two's complement [1] system, a representation of -1.
Well, sort of. 0xFFFFFFFF is an integer constant; it denotes a value,
not a representation, specifically the value 4294967295. C doesn't
have a notation for representations.
Assuming 32-bit int, 0xFFFFFFFF is of type unsigned int. The maximum
representable value of type int is 2147483647. So the declaration:
int a = 0xFFFFFFFF;
attempts to initialize a with a value that isn't of the same type
as a and that's too big to be stored in a. But both the type
of 0xFFFFFFFF and the type of a are arithmetic types, so the value
will be implicitly converted and then stored.
So what's the result of converting 0xFFFFFFFF (a value of type
unsigned int) to type int? It's implementation-defined. In practice,
the vast majority of systems use a 2's-complement representation *and*
this kind of conversion is defined to copy the bits rather than doing
anything fancier, so the value stored in a will probably be -1.
But this depends on several non-portable assumptions, and you should
probably avoid this kind of thing in real code. If type int is, say,
64 bits, then a will be assigned the value 4294967295.
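Here's a sketch of what that looks like in practice (again assuming
<stdio.h> and a surrounding main; the printed value is
implementation-defined, and -1 is merely the common outcome on
2's-complement systems with 32-bit int):

    unsigned int big = 0xFFFFFFFF;  /* the value 4294967295                   */
    int a = big;                    /* out of range: implementation-defined   */
    printf("a = %d\n", a);          /* commonly prints "a = -1", but need not */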
If you want a to have the value -1, just write
int a = -1;
If you want a to have the value 0xFFFFFFFF -- well, if int is 32 bits
you just can't do that.
Decide what value you want a to have, and just initialize it with that
value.
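For example (the unsigned variable here is my own suggestion, useful
only if the value 4294967295 is really what you need; 32-bit int
assumed as above):

    int a = -1;                    /* if the value you want is -1         */
    unsigned int b = 0xFFFFFFFFu;  /* if the value you want is 4294967295 */
    printf("a = %d\n", a);         /* prints "a = -1"         */
    printf("b = %u\n", b);         /* prints "b = 4294967295" */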
> If your implementation assumes that int is a signed value,
There is no "if"; type int is signed by definition.
> a is equal to -1. That's why you got those results for adding 1 and 2.
a is *probably* equal to -1.
> On your architecture int is 32 bits, which means that the largest value
> unsigned int can contain is 0xFFFFFFFF. When you add anything to it,
> there is an overflow. The results are the same as when adding to a signed int.
The rules for arithmetic are different for signed and unsigned types.
If an arithmetic operation on a signed type yields a result that can't
be represented in that type, the behavior is undefined. (On most
systems, it will wrap around, but an implementation *could* insert
range-checking code and crash your program.) For an unsigned type,
however, the result just quietly wraps around. This may not
necessarily be what you want, but it's how the language defines it.
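A quick illustration of the unsigned case (the signed case is
deliberately omitted, since overflowing a signed int is undefined
behavior):

    unsigned int u = 0xFFFFFFFFu;  /* UINT_MAX if unsigned int is 32 bits */
    u = u + 1;                     /* well-defined: wraps around to 0     */
    printf("u = %u\n", u);         /* prints "u = 0" */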
Finally, the "%x" printf format expects an argument of type unsigned
int; you're giving it an argument of type int. There are some subtle
rules that let you get away with this, but IMHO it's usually better
just to use the expected type. For example, you could write:
printf("a = %x\n", (unsigned int)a);