Lash Rambo
How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?
For instance, even the well-intentioned, portable C++ (and C) code I've
seen assumes an int will be larger than, say, 4 bits. I assume this is a
safe assumption, since the size of an int is guaranteed to be at least as
big as the size of a char, and if the size of a char is 4 bits, your
character set has only 16 characters, which isn't enough to express all
of C++'s keywords and symbols. Reasonable?
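About the closest I've come is spelling the assumptions out and letting the compiler check them. A minimal sketch, assuming a C++11 compiler; the particular limits are just the ones I happen to rely on:

    #include <climits>
    #include <limits>

    // The standard only guarantees CHAR_BIT >= 8 and an int range of at
    // least -32767..32767, so these should never fire -- but they turn my
    // silent assumptions into explicit, checkable ones.
    static_assert(CHAR_BIT >= 8, "char is narrower than 8 bits");
    static_assert(std::numeric_limits<int>::digits >= 15,
                  "int cannot hold +/-32767");

    int main() { return 0; }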
What else can we assume? Is it safe to assume an int will be at least 7
bits? 8 bits?
How does this work in the real world? Do programmers just write C++ for
their target architecture(s), and add support for differing architectures
later, as needed? For instance, what if they originally write assuming
32-bit ints, and for whatever reason later need to port to an
architecture with 28-bit ints? When they compile for the new
architecture and the code breaks, do they have to go through and check
every little int? That sounds like a huge PITA!
I'm not necessarily talking about hard-coded bitfields and things like
that, but arithmetic overflow, and things of that sort.
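Here is the sort of thing I mean, a made-up example that is fine with 32-bit ints but quietly breaks if int turns out to be only 16 bits:

    #include <iostream>

    int main() {
        // Fine when int is 32 bits; signed overflow (undefined behaviour)
        // when int is 16 bits, since 300 * 300 = 90000 > 32767.
        int area = 300 * 300;

        // The defensive rewrite forces the multiply to happen in long,
        // which is guaranteed to hold at least +/-2147483647.
        long safe_area = 300L * 300L;

        std::cout << area << ' ' << safe_area << '\n';
        return 0;
    }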
What are the odds of "weird" architectures like that needing support? Is
it even worth worrying about if one has to ask?
I know about using the preprocessor to selectively typedef int16, int32,
etc., although such a trick wouldn't work for our 28-bit example, above.
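(To be concrete, the trick I'm thinking of is roughly the following; the int32 name is just the convention I've seen, nothing standard. It picks a type that is at least 32 bits wide, but says nothing about a type that is exactly 28 bits.)

    #include <climits>

    #if INT_MAX >= 0x7FFFFFFF
    typedef int  int32;   // plain int is wide enough on this platform
    #elif LONG_MAX >= 0x7FFFFFFF
    typedef long int32;   // fall back to long, guaranteed to be >= 32 bits
    #else
    #error "no 32-bit-or-wider integer type found"
    #endif

    int32 counter = 0;    // usable like any other integer type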
What's troubling me is more on a "moral" level. "They" say you're not
"supposed" to make assumptions (not guaranteed by the language standard)
about the size of integers, but I have to wonder, how many programmers in
the real world, who write working, even portable, C++ programs truly
follow that advice 100%. Does anyone? Is it really possible?