> This seems to me to approach the problem from a subtly different
> angle. Correct me if I am wrong, but it seems to be saying not just
> "at least of a certain magnitude" (as I was wanting) but also "as
> small as possible and convenient on the architecture, but not
> necessarily exact". This is more than a hint to the compiler. It is a
> demand (if I understand correctly): this integer will _not_ be bigger
> than the predefined at-least-as-big-as-N type under any circumstances.
> As such it may inhibit code optimisation rather than enhance it,
> unless the programmer is careful to consider how the variable or
> array so declared is used in the code.
Having the code optimizer determine the size of a type based on
looking at the code is darn near impossible and I don't know of any
compiler that can do it. A nearly impossible-to-bypass hurdle is
that all separately compiled parts of a program have to agree on the
size of a type, so any attempt would pretty much have to defer all
code generation to what's commonly called the "linking" step. "Object
code" becomes lightly preprocessed source code. Even then there is
the hurdle that all programs that use files with these types in
them (running on the same machine) need to use the same size for
each type. It's just too much trouble for too little benefit.
You can have a compiler that lets you determine the size of a type
based on compiler flags (which amounts to a different implementation).
There is nothing preventing an implementor (or in an implementation
with text include files, possibly a user of the compiler) from
hacking up <inttypes.h> with #ifdefs on COMPILE_FOR_SPEED to change
what is used for int_leastN_t, and using -DCOMPILE_FOR_SPEED to
select the implementation. However, you'd better use the same
option on all parts of the same program. That may mean SPEED vs.
SPACE libraries, which you may not have source code for.
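Something like this sketch of a doctored header, for illustration
(the choice of "short" vs. "int" underneath is an assumption about
one plausible machine, not any real implementation's choice):

    /* hypothetical fragment of a hacked-up <inttypes.h> */
    #ifdef COMPILE_FOR_SPEED
    typedef int   int_least16_t;    /* wider, but the native word */
    #else
    typedef short int_least16_t;    /* smallest type with >= 16 bits */
    #endif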
> For example, I know that nearly all of my code needs numeric
> quantities that are at least 16 bits, so I declare a bunch of
> integers at least as big as 16 bits. That's it. In simple terms, if
> I am compiling for small size I want it to use the smallest integers
> it can in arrays. If I am compiling for speed I want it to use the
> fastest operations it can. (In reality I
What do you mean by "compile for small size" or "compile for speed"?
Is this controlled by flags passed to the compiler? ANSI C specifies
no such choice. It doesn't rule it out, either, and I believe there
are compilers that let you select sizeof(int) from two choices with
flags. I would certainly want the type sizes controlled independently
from other optimizations. Type sizes must match across all parts
of a program. Code optimization need not.
(There is, for example, one file in the code of a well-known database
program which is often compiled without optimization, because compiling
it with optimization tends to run the compilation step out of swap
space on typical systems, and the parser tables generated can't
really be optimized much anyway. Also, turning off code optimization
is sometimes necessary because the optimizer generates broken code.)
Forget about the optimization step determining the size of a type.
It won't happen.
> think the compilation for /smallest/ size is anachronistic. If I want
No, it isn't. And compiling for smallest size data may BE the fastest.
Consider how processor cache operates. Consider how "swapping" and
"paging" work.
> small I normally want to try to fit to a certain size but still want
> as fast as possible.)
Not everyone is a speed freak. And sometimes the programmer knows
that smaller is faster, especially when smaller data stays in memory
instead of spilling to a disk file.
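As a concrete sketch (the array size and names are invented): keep
bulk data in the smallest type so more of it stays in cache and in
RAM, and do the arithmetic in a fast type.

    #include <stddef.h>
    #include <stdint.h>

    #define NSAMPLES 10000              /* invented size */

    int_least16_t samples[NSAMPLES];    /* bulk storage: space matters */

    int_fast32_t sum_samples(void)
    {
        int_fast32_t total = 0;         /* working scalar: speed matters */
        for (size_t i = 0; i < NSAMPLES; i++)
            total += samples[i];
        return total;
    }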
> It seems that int_least takes that choice away from the compilation
> step and embeds it, possibly in conflicting ways, in the code.
WHAT choices at the compilation step? Forget about the optimization step
changing the size of a type. It won't happen.
> I guess I'm suggesting int_least and int_fast should be one and the
> same. Neither is int_exact, which is, in fact, useful!
It doesn't matter how darn fast something is if it won't fit in
available memory!
int_exact may be useful but it has the potential for being darn
slow, too, especially if the implementation decided to synthesize
all of them not supported directly by hardware: e.g. int17_t done
with masks.
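A sketch of that synthesis, assuming two's complement (the helper
names are invented):

    #include <stdint.h>

    typedef int_least32_t int17_emul;   /* carrier for a fake int17_t */

    static int17_emul wrap17(int_least32_t v)
    {
        v &= 0x1FFFFL;                  /* keep the low 17 bits */
        if (v & 0x10000L)               /* bit 16 is the sign bit */
            v -= 0x20000L;              /* sign-extend downward */
        return v;
    }

    static int17_emul add17(int17_emul a, int17_emul b)
    {
        return wrap17(a + b);           /* every operation pays for a mask */
    }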
> I don't think so. It is a hack. It is possibly included to satisfy
> the "16-bit quantities, faster 32-bit operations" model.
The bigger the native int gets, the more likely it is that accessing
smaller chunks of memory will require inefficient methods to access
small pieces. I wouldn't be too surprised to see 16-bit-addressable
machines (with a 64-bit or 128-bit int) using a Unicode character
set in the future, where accessing 8-bit (legacy ASCII) quantities
requires a shift-and-mask operation, and accessing a 32-bit quantity
is no faster than accessing a 64-bit quantity.
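Fetching a legacy character on such a machine might look like this
sketch (a deliberately simplified model, not any real machine's ABI):

    #include <stddef.h>
    #include <stdint.h>

    /* the n-th 8-bit character out of 16-bit-addressable storage:
       one word load plus a shift and a mask */
    static unsigned char get_byte(const uint16_t *mem, size_t n)
    {
        uint16_t word = mem[n / 2];
        unsigned shift = (unsigned)(n % 2) * 8;
        return (unsigned char)((word >> shift) & 0xFF);
    }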
> int_fastN_t is a nonsense. It is not faster.
If it avoids shift-and-mask, it's probably faster.
> It is saying to choose a wider integer for this variable or elements
> of this array if, on this architecture, operations on this size of
> integer are normally faster. It takes no consideration whatsoever as
> to what operations are performed on those integers in the code.
Forget about the optimizer choosing the size of a type. It's not
going to happen.
Most of the operations performed on values of an N-bit type are
moves, or operations of equivalent speed.
> Say the world is not the rosy, modern, mainstream, familiar, Intel-ish
> 16/32-bit paradigm, and we have a machine which performs addition and
> subtraction faster on 32-bit integers and performs multiplication and
> division faster on 16-bit rather than 32-bit values, i.e. it has a
> 32-bit adder but only a 16-bit multiplier (to give a 32-bit result).
> What, then, is int_fast16_t? There is no such thing. The fastest
> operations depend on how the values are to be used in the code,
> particularly in the innermost loops.
The speed metric I'd probably use for deciding what to make
int_leastN_t is how fast you can load, store, or assign a quantity
of the given size. There are a number of operations that are
generally performed as fast as a load or a store for a given size
integer quantity. Addition, subtraction, bitwise operations,
compare, etc. generally qualify, so if you can move that size around
quickly, you can also do simple operations on it at no extra cost.
Multiplication and division generally do not fall into this class.
Forget about the optimizer choosing the size of a type. It's not
going to happen.
> Agreed that the compile/link step needs to be unified to handle this.
So as a practical matter, it's not going to happen. Among other
problems, all the people who distribute proprietary code libraries
are going to object that "object code" is now too close to source
code and makes everything involuntarily open-source. Plus, it
explodes the combinations of code libraries needed: you've got 3
choices for int_least16_t, and 2 choices for int_least32_t, making
6 copies of every library.
And even putting everything in the link step doesn't solve the
problem: sometimes you need several programs to all agree on the
type sizes to access data they pass between themselves.
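For instance, two programs sharing this record layout (invented for
illustration) through a file or a pipe break as soon as one's
int_least32_t is 4 bytes and the other's is 8:

    #include <stdint.h>

    struct shared_record {
        int_least32_t id;       /* both sides must agree on its size */
        int_least16_t flags;    /* ...and on padding, for that matter */
    };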
> It is intN_t but, sadly, intN_t is optional. If the code is to be
> portable I have to fiddle about with the declarations or, more
> likely, use int_leastN_t and bitmasks. Again this is nonsense. If I
> have to use
Well, is it really any better than making the compiler use the type
of int_leastN_t and bitmasks and shifts? Very few platforms will
support int17_t natively. Of course, there's not a lot of demand
for that size, either. I can see reasonable arguments for int48_t,
though, where 32 bits won't do but 64 bits is overkill. (It may
be a few years before we hit the 280-terabyte hard disk size barrier).
> bitmasks I may be better going for int_fastN_t and masking those. On
> the other hand, if I use the "faster" integers and bitmask them, while
> I will get faster code on the right hardware, I've maybe made a rod
> for the back of true right-sized-word hardware. Maybe a clever
> compiler could get me out of this trap, but I'd rather not have been
> put in it in the first place. And then there are all those masks
> littering the code.
> Valid point. Having said that, I think there is a difference here.
> AFAIK 'register' is a hint or suggestion and need not be followed. I
> assume the least and fast types are directives.
A register directive should be followed to the extent that taking
its address must be diagnosed as an error. This is not something
that can be left to the optimization stage. Otherwise, the code
can't tell whether it was followed or not.
On the other hand, the code CAN tell the difference between
int_leastN_t and int_fastN_t if they are different sizes: sizeof.
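A trivial check, for example (the output varies by implementation;
2 and 4 is one plausible result):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("int_least16_t: %u bytes\n",
               (unsigned)sizeof(int_least16_t));
        printf("int_fast16_t:  %u bytes\n",
               (unsigned)sizeof(int_fast16_t));
        return 0;
    }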
Incidentally, if you really want to try out different cases, there's
nothing prohibiting you from using your own typedefs which you can
conditional as needed. For example, rid_type might be the typedef
for a record ID in a database, and it can be int_least16_t,
int_least32_t, or int_least64_t depending on target platform,
licensing (e.g. the demo version only does 65535 records), and
application. This also would let you make separate decisions
for instances of different types, where you know more about how
the types will be used than the compiler.
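A sketch of such a typedef (DEMO_VERSION and BIG_DATABASE are
invented selector macros; unsigned variants are used here so the
demo limit of 65535 records actually fits):

    #include <stdint.h>

    #if defined(DEMO_VERSION)
    typedef uint_least16_t rid_type;    /* at most 65535 records */
    #elif defined(BIG_DATABASE)
    typedef uint_least64_t rid_type;    /* huge installations */
    #else
    typedef uint_least32_t rid_type;    /* the common case */
    #endif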
> I fully agree with the point you mentioned above - ints of sizes
> other than the well-known ones. I don't know if they would be used
> much, but they would have their place and allow a fine degree of
> control for someone carefully writing transportable code.
Gordon L. Burditt