Markus Svilans posted:
I can see your point. But in the last 10-15 years, has there been a
new CPU or microprocessor produced that does not have 8 bits in a
byte? Are there any C++ compilers that compile code for non-8-bit-byte
systems?
I'm not arguing about the C++ standard, I'm just surprised that
variable byte sizes are something that people worry about enough to
include in the standard.
On second thought... 16-bit character sets could be considered the
harbingers of future non-8-bit bytes, could they not?
Very possible. I think there's one certainty in life: twenty years from
now, the world will have progressed further than we expected, and in
unexpected ways.
Who knows what the computers of tomorrow will bring?
I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought
you had set to null?
A "compile-time constant" is an expression whose value can be evaluated
at compile-time. Here are a few examples:
7
56 * 5 / 2 + 3
8 == 2 ? 1 : 6
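For instance, an array bound is one place the language demands a
compile-time constant, so any of the above would do (my own examples,
just to illustrate):

int a[7];                // 7 elements
int b[56 * 5 / 2 + 3];   // 143 elements
int c[8 == 2 ? 1 : 6];   // 8 == 2 is false, so 6 elements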
If you have a compile-time constant which evaluates to zero, whether it
be:
0
5 - 5
2 * 6 - 12
Then it gets special treatment in C++, and qualifies as a null pointer
constant. A null pointer constant can be used to set a pointer to its
null pointer value, like so:
char *p = 0;
Because 0 qualifies as a null pointer constant, it gets special treatment
in the above statement (note how we'd normally have a type mismatch from
int to char*). Anyway, what the above statement does is set the pointer
to its null pointer value, whether that be:
0000 0000 0000 0000 0000 0000 0000 0000
or:
1111 1111 1111 1111 1111 1111 1111 1111
or:
1000 0000 0000 0000 0000 0000 0000 0000
or:
0000 0000 0000 0000 0000 0000 0000 0001
or:
1010 0101 1010 0101 1010 0101 1010 0101
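The same goes for comparisons: testing a pointer against 0 checks it
against the null pointer value, not against all-bits-zero. A minimal
sketch:

char *p = 0;   // p holds the null pointer value, whatever its bit pattern
if (p == 0)    // compares against the null pointer, not "all zero bits"
{
    // always reached, even if null is all-ones internally
}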
From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?
No, all you do is:
char *p = 0;
And let your compiler deal with the rest.
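One caveat worth adding: the conversion only happens where the compiler
can see the null pointer constant. Zeroing the pointer's bytes by hand is
not equivalent, as this sketch shows:

#include <cstring>

void example()
{
    char *p = 0;                  // portable: the compiler stores the real
                                  // null pointer value, whatever its bits
    char *q;
    std::memset(&q, 0, sizeof q); // NOT portable: writes all-bits-zero,
                                  // which need not be the null pointer
                                  // representation
}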
I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits.
In actual fact it would make more sense to have a 35-bit integer type,
instead of a 32-bit one with padding.
But are there any cases in practice where primitive types actually
contain padding bits?
Mostly on supercomputers, I think.
Here's a quotation from a recent post on comp.lang.c:
For example, I'm currently logged into a system with the following
characteristics:
CHAR_BIT = 8
sizeof(short) = 8 (64 bits)
sizeof(int) = 8 (64 bits)
sizeof(long) = 8 (64 bits)
SHRT_MAX = 2147483647 (32 padding bits)
USHRT_MAX = 4294967295 (32 padding bits)
INT_MAX = 35184372088831 (18 padding bits)
UINT_MAX = 18446744073709551615 (no padding bits)
LONG_MAX = 9223372036854775807 (no padding bits)
ULONG_MAX = 18446744073709551615 (no padding bits)
(It's a Cray Y/MP EL running Unicos 9.0, basically an obsolete
supercomputer.)
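If you're curious, a small program along these lines will print the same
characteristics for whatever system you're on:

#include <iostream>
#include <climits>

int main()
{
    std::cout << "CHAR_BIT = " << CHAR_BIT << '\n'
              << "sizeof(short) = " << sizeof(short) << '\n'
              << "sizeof(int) = " << sizeof(int) << '\n'
              << "sizeof(long) = " << sizeof(long) << '\n'
              << "SHRT_MAX = " << SHRT_MAX << '\n'
              << "USHRT_MAX = " << USHRT_MAX << '\n'
              << "INT_MAX = " << INT_MAX << '\n'
              << "UINT_MAX = " << UINT_MAX << '\n'
              << "LONG_MAX = " << LONG_MAX << '\n'
              << "ULONG_MAX = " << ULONG_MAX << '\n';
}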