CJ said:
How about notation similar to C++ templates: one specifies the accuracy
in bits for integers and the precision for reals, and lets the compiler
"pick" the "object" to contain it.
int<1> a truly Boolean variable (_Bool)
_Bool (or bool).
int<8> a UTF-7 char, or signed smallint
int8_t
int<16> a signed short
int16_t
int<32> a signed int
int32_t
int<256> a "long long long long" ???
int256_t, one day...
unsigned<64> an unsigned long long
uint64_t
It appears you haven't heard of <stdint.h>.
real<6.2> a float
real<12.2> a double
Hmmm; perhaps a ...
This allows for future compatibility with CPUs yet to be designed.
No, it guarantees incompatibility with any (present or future) CPU that
isn't designed exactly how you expect (i.e. power-of-two-sized integer
types). C's original set of integer types is specified as "at least" as
large as a certain size for a reason: not all systems are alike, and
assuming even basic things like register size would make the language less
portable. If you want a specific size integer, and don't care about _your_
code being portable, those are now available as well with any decent
implementation.