Richard said:
.... snip ...
If the real world weren't important, ISO could simply have made
every int type, qualified or not, 1048576 bits wide and been done
with the whole issue forever. But the real world /does/ matter,
and there will always be (or, at least, for some time to come)
some platforms that struggle to provide 64-bit integers.
Now look at the way Pascal, to either ISO 7185 or ISO 10206, does
it. It defines a single value, maxint. An integer can hold any
value from -maxint to +maxint. ANY other range can be defined,
for any purpose, using a type definition:
CONST
   MAXFOO = 1234;
TYPE
   myfoo  = -MAXFOO .. MAXFOO;
   posfoo = 0 .. MAXFOO;
VAR
   foo  : myfoo;
   ufoo : posfoo;
and each variable is placed in some storage object whose
representation is known only to the compiler.
This is not feasible for C, due to the usual problem of 'existing
code'. However, note how easy it makes range checking. Any
constant expression can immediately be checked for validity. Any
expression in variables can be evaluated for its maximum (and
minimum) possible values, and again this can be done at compile
time, so that run-time checks are not needed. This has been shown
to reduce run-time checks by about 75 to 90%.
Pascallers don't design 'checking code'. Instead, they design
types. Note that the popular Pascals, such as Borland's Turbo
Pascal, Delphi, and Free Pascal, don't follow the Pascal
standards, so this note doesn't apply to them.
Ada is very similar. You probably know all this, but the wide
world doesn't.