Michael said:
Keith said:
Michael said:
the function
int getchar();
reads a byte from the standard input and returns it.
If end-of-file is reached, it returns EOF (on my machine, it is 0xffffffff)
[...]
No, EOF cannot be defined as 0xffffffff. It must expand to "an
integer constant expression, with type int and a negative value". A
typical definition is
#define EOF (-1)
If you convert the value of EOF to unsigned int on a 32-bit system,
the result is likely to be 0xffffffff; that's not the value of EOF,
it's the result of the conversion.
0xffffffff in hexadecimal *is* -1 in decimal on 32-bit int.
No, 0xffffffff is an integer constant with the value 4294967295
(2**32-1, where "**" denotes exponentiation).
Assuming int is 32 bits, 2's-complement, no padding bits, no trap
representations, then that value cannot be represented by type int.
If you assign 0xffffffff to an int object, then, strictly speaking,
the result is an implementation-defined value (or, optionally and in
C99 only, an implementation-defined signal). In practice, it's very
likely that the value -1 will be assigned -- this is the
(implementation-defined but very common) result of the conversion.
Because of the conversion *the value changes*.
Assigning -1 to an object of type unsigned int will result in the
object having the value UINT_MAX, which, if unsigned int is 32 bits
with no padding bits, is 4294967295 or 0xffffffff. Again, the
implicit conversion from int (the type of the expression -1) to
unsigned int (the type of the object) changes the value. (Conversion
to unsigned types is defined differently by the standard than
conversion to signed types.)
I suspect that you're thinking of hexadecimal notation as a way of
specifying the representation of an object, as opposed to decimal
notation, which specifies a mathematical numeric value. If so, you
are mistaken. In C, decimal and hexadecimal are just two different
notations for representing integer values; there's nothing magical
about either one. 0xff, 0x00ff, and 255 mean *exactly* the same
thing.
On the other hand, in English text it's not unreasonable to use
hexadecimal notation to talk about object representations, so that
0xff refers to 8 bits all set to 1, and 0x00ff refers to 16 bits (and
thus is distinct from 0xff). But since C has a well-defined meaning
for hexadecimal notation, if you're going to use it that way you need
to say so explicitly.
For example, the representation of the 32-bit int value -1 is
0xffffffff.
(Octal is the third notation; it's probably not used as much these
days, though it was very useful on the PDP-11. Except that, strictly
speaking, 0 is an octal constant, so most C programmers use octal
every day without realizing it.)