> I don't recall that ever being stated so plainly
> on this newsgroup before.
Nor do I. I at first thought it was technically wrong because of
padding bits, but I now think that while it may still be wrong, it's
less wrong than I thought. There are two cases:
a) Plain char is unsigned. INT_MAX must be at least UCHAR_MAX so that
getchar() can return every plain char value as a distinct non-negative
int, and INT_MIN must be less than or equal to -32767. So the total
number of values of 'int' must be at least UCHAR_MAX+32768; since
UCHAR_MAX is 2**CHAR_BIT - 1, that's more than 2**CHAR_BIT distinct
values, which requires more bits than CHAR_BIT. Q.E.D.
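
For what it's worth, that requirement is exactly what makes the
classic stdin-to-stdout loop work; a minimal sketch, nothing
implementation-specific in it:

#include <stdio.h>

int main(void)
{
    int c;  /* int, not char: must hold every character value
               returned by getchar() *and* the negative value EOF */

    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}

If int couldn't represent all of [0, UCHAR_MAX] plus at least one
extra negative value, some legitimate character could compare equal
to EOF and the loop could stop early.
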
b) Plain char is signed. The range of char, i.e., of signed char, must
be a subrange of the range of int. But could we have
#define CHAR_BIT 16
#define UCHAR_MAX 65535
#define SCHAR_MIN -32767 /* !!! */
#define SCHAR_MAX 32767
#define INT_MIN -32768
#define INT_MAX 32767
#define EOF -32768
Is anything wrong, from the C standpoint, with these definitions?
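
If it helps anyone poke at it, here's a sketch with those values
spelled out as HYP_* constants (my names, since a real <limits.h>
would own the real ones; the L suffixes are just so the comparisons
are safe on any host you compile this on). Note the second check:
fgetc() is specified to return the character read as an unsigned
char converted to int, and whether that conversion can be
value-preserving here is, I think, the crux:

#include <stdio.h>

/* Hypothetical limits from above, under my HYP_ prefix. */
#define HYP_UCHAR_MAX  65535L
#define HYP_SCHAR_MIN  (-32767L)
#define HYP_INT_MAX    32767L
#define HYP_EOF        (-32768L)

int main(void)
{
    /* EOF stays distinct from every signed char value... */
    printf("EOF < SCHAR_MIN?       %d\n", HYP_EOF < HYP_SCHAR_MIN);
    /* ...but can every unsigned char value survive the trip
       through "unsigned char converted to int"? */
    printf("UCHAR_MAX <= INT_MAX?  %d\n", HYP_UCHAR_MAX <= HYP_INT_MAX);
    return 0;
}
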
-Arthur