Lauri Alanko
I have been considering C's various integer types for a while,
and I'm having trouble seeing if they are all really justified.
In general, the most important thing when selecting an integer
type is to choose one that can represent the desired range of
values. It rarely hurts if the range is larger than desired, and
even then a judicious use of "& 0xfff..." will help. (This,
incidentally, is why I doubt the exact-width intN_t types
would ever be necessary.)
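To make the masking idiom concrete, here is a minimal sketch
(the function name is just for illustration) that emulates
16-bit unsigned wraparound in a possibly wider type:

#include <stdio.h>

/* Sketch: emulate 16-bit unsigned wraparound in a type that may
 * be wider than 16 bits, using the "& 0xff..." masking idiom. */
static unsigned long add_u16(unsigned long a, unsigned long b)
{
    return (a + b) & 0xFFFFUL;  /* keep only the low 16 bits */
}

int main(void)
{
    /* Prints 0, just as a true 16-bit type would wrap. */
    printf("%lu\n", add_u16(0xFFFFUL, 1));
    return 0;
}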
Classic (pre-C99) C offers a basic selection of integer types
based on the desired range: char, short and long are guaranteed
to have a minimum of 8, 16 and 32 bits, respectively. An
implementation then chooses a representation for each of these
types, based on the above range constraints and some reasonable
compromise between speed and size.
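For what it's worth, those guaranteed minimums can be inspected
through <limits.h>; the actual widths are implementation-defined.
A small sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard guarantees only minimum magnitudes:
     * CHAR_BIT >= 8, USHRT_MAX >= 65535, ULONG_MAX >= 4294967295.
     * The actual values depend on the implementation. */
    printf("bits in a char: %d\n", CHAR_BIT);
    printf("USHRT_MAX: %u\n", (unsigned)USHRT_MAX);
    printf("ULONG_MAX: %lu\n", ULONG_MAX);
    return 0;
}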
But sometimes the programmer wants to explicitly specify whether
speed or size is to be emphasized. Classic C already had one way
to do this in the form of int: int is like short except that it
may, at the implementation's discretion, be larger than short if
that makes it faster.
C99 then generalized these sorts of performance hints into the
int_fastN_t and int_leastN_t types: specify the minimum required
range, and whether you want to emphasize speed or size, and the
implementation then gives you something appropriate for the given
specs.
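One can check what these typedefs resolve to on a given
implementation; for instance, on a typical 64-bit glibc system I
would expect the "fast" variants to come out wider than the
"least" ones. A sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Implementation-defined; e.g. on x86-64 glibc, int_least16_t
     * is typically 2 bytes while int_fast16_t is 8 bytes. */
    printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
    printf("int_fast16_t:  %zu bytes\n", sizeof(int_fast16_t));
    printf("int_least32_t: %zu bytes\n", sizeof(int_least32_t));
    printf("int_fast32_t:  %zu bytes\n", sizeof(int_fast32_t));
    return 0;
}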
But I'm beginning to wonder if the use of the int_fastN_t types (or
even the classic short/int/long types) is ever actually warranted.
I had a minor revelation when I realized that the size of an
integer type only really matters when the value is stored in
memory. For temporary values, local variables (whose address is
not taken), and parameters, the implementation is free to use
whatever representation it likes even for smaller types, e.g.
registers, or full words on the stack.
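To illustrate: in the sketch below, the locals' addresses are
never taken, so under the as-if rule the compiler may keep them
in full-width registers regardless of the declared types.

#include <stdint.h>

/* 'i' and 's' are locals whose addresses are never taken, so the
 * compiler may hold them in registers of whatever width is
 * convenient, regardless of the declared types. Only the loads
 * from a[] touch the in-memory representation. */
uint_least32_t sum(const uint_least16_t *a, long n)
{
    uint_least32_t s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];
    return s;
}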
So the only performance penalty from using the int_leastN_t types
seems to come when reading/writing a value from/to memory, when
it possibly gets converted into another representation.
This can have a cost, to be sure, but I think it would be
negligible compared to the computation that is actually done with
that value. Not to mention that the saved space may actually
increase performance due to less cache pressure.
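To put the cache argument in concrete terms: assuming the
glibc-style sizes above, the "least" table below would be a
quarter the size of the "fast" one.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A million entries: 2 MB vs. 8 MB if int_fast16_t is 8 bytes,
     * as on typical 64-bit glibc systems. The smaller table is far
     * friendlier to the cache. */
    printf("least: %zu bytes\n", sizeof(int_least16_t[1000000]));
    printf("fast:  %zu bytes\n", sizeof(int_fast16_t[1000000]));
    return 0;
}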
So here are my questions:
Are there any situations where the use of an int_fastN_t type
could actually produce meaningfully faster code than would be
produced with the corresponding int_leastN_t type, given a
reasonably sophisticated implementation?
Are there any situations where the use of an exact-width intN_t
type is warranted?
Comments much appreciated.
Cheers,
Lauri