Frederick Gotham
On modern 32-Bit PCs, the following setup is common:
char: 8-Bit
short: 16-Bit
int: 32-Bit
long: 32-Bit
"char" is commonly used to store text characters.
"short" is commonly used to store large arrays of numbers, or perhaps wide
text characters (via wchar_t).
"int" is commonly used to store an integer.
"long" is commonly used to store an integer greater than 65535.
Now that 64-Bit machines are coming in, how should the integer types be
distributed? It makes sense that "int" should be 64-Bit... but what should
be done with "char" and "short"? Would the following be a plausible setup?
char: 8-Bit
short: 16-Bit
int: 64-Bit
long: 64-Bit
Or should "short" perhaps be 32-Bit? Or should "char" become 16-Bit
(i.e. 16 == CHAR_BIT)?
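For what it's worth, any of those layouts would be conforming: the
standard only guarantees minimum ranges (char at least 8 bits, short
and int at least 16, long at least 32). So code that depends on a
particular width can assert it at compile time rather than guess. A
sketch, using the old negative-array-size trick (C11 later added
_Static_assert for this; the macro name here is just illustrative):

#include <limits.h>

/* The typedef is legal only if cond is true; otherwise the array
   size is -1 and the translation unit fails to compile. */
#define STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

STATIC_ASSERT(CHAR_BIT >= 8, char_is_at_least_8_bits);
STATIC_ASSERT(sizeof(long) * CHAR_BIT >= 32, long_is_at_least_32_bits);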
Another semi-related question:
If we have a variable that stores the number of elements in an
array, should we use "size_t"? On a system where "size_t" maps to
"long unsigned" rather than "int unsigned", that would seem
inefficient most of the time. "int unsigned" guarantees us a count
of at least 65535 array elements -- what percentage of the time do
we have an array any bigger than that? 2% maybe? Would it not make
sense, therefore, to use unsigned rather than size_t to store array
lengths (or the positive result of subtracting pointers)?
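The trade-off in code (the function names are just illustrative;
note that pointer subtraction itself yields ptrdiff_t, and a positive
result for pointers into the same array always fits in size_t, since
size_t can hold any object's size):

#include <stddef.h>

/* size_t is guaranteed to be able to hold the size of any object,
   so it can count the elements of any array, at the possible cost
   of being wider than necessary on some implementations. */
size_t count_with_size_t(const int *first, const int *last)
{
    return (size_t)(last - first);
}

/* "int unsigned" is only guaranteed to count to 65535; on an
   implementation where an array can be larger, this truncates. */
unsigned count_with_unsigned(const int *first, const int *last)
{
    return (unsigned)(last - first);
}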