size and nomenclature of integral types

Discussion in 'C++' started by Shailesh, Apr 3, 2004.

  1. Shailesh

    Shailesh Guest

    One problem I've been wrestling with for a long time is how to use the
    C++ integral data types, vis-a-vis their size. The C++ rules
    guarantee that a char is at least 1 byte, a short and int at least 2
    bytes, and a long at least 4 bytes. The rules also define a size
    precedence to the types. In Stroustrup's book, it says that all type
    sizes are multiples of char, "so by definition the size of a char is
    1." According to the rules, that means 1 unit which can be any number
    of bytes.

    Why should I trust the compiler to optimize the memory usage of the
    types behind my back? As for portability, wouldn't fixed,
    unambiguously sized types be much more portable? Doesn't the ambiguity
    open the door for me on my system X with compiler Y to rely on its
    Z-byte representation of int? And if system-dependent optimization is
    desired, wouldn't it be easier to do with fixed-size types instead?

    One of my gripes is that the terminology is unclear. New programmers
    can be especially confused. For example, 'short' and 'long' are
    relative adjectives, and they don't say how big or at least how big.
    The other extreme are the names like __int8, __int16, and __int32 in
    MSVC++. Wouldn't I be much less likely to use something called __int8
    to count numbers over 255, than I would something called char? On the
    other hand, these keywords fix the size of the type and allow no room
    for the compiler to optimize.

    If I could invent new types, I would name them something like:

    uint8, uint16, uint24, uintN, ... (unsigned integer types)
    sint8, sint16, sint24, sintN, ... (signed integer types)

    where N is any multiple of 8 greater than 0 (i.e. arbitrary precision
    types would be built-in.) I feel the signed/unsigned aspect is better as
    part of the keyword, and not separate and optional. The Mozilla
    sources are instructive in that their cross-platform code implements
    macros following a similar convention; but macros are like pollution.

    I'd further have a new keyword like "allowopt", which when placed
    after the type keyword grants access to the compiler to optimize the
    memory allocation of the type. For example, when I write "uint16
    allowopt myCounter;", then I would unambiously be declaring, "Give me
    a 16-bit, unsigned, integer called myCounter whose size the compiler
    may optimize."

    In most compilers, the default setting would be to enable optimization
    for all the declarations, and a pragma could turn it off. I have
    suspicions about why things are the way they are, but I'd like to hear
    the experts' opinions.
     
    Shailesh, Apr 3, 2004
    #1

  2. Ben Measures

    Ben Measures Guest

    Shailesh wrote:
    > One problem I've been wrestling with for a long time is how to use the
    > C++ integral data types, vis-a-vis their size. The C++ rules guarantee
    > that a char is at least 1 byte, a short and int at least 2 bytes, and a
    > long at least 4 bytes. The rules also define a size precedence to the
    > types. In Stroustrup's book, it says that all type sizes are multiples
    > of char, "so by definition the size of a char is 1." According to the
    > rules, that means 1 unit which can be any number of bytes.
    >
    > Why should I trust the compiler to optimize the memory usage of the
    > types behind my back?


    That question can have one of two meanings IMO.

    Q1: How do I know the compiler won't introduce an error in its optimisation?
    A: You don't. You'll just have to trust its optimisations or disable them.

    Q2: How do I know the compiler is coming up with the best, most
    optimised code?
    A: You don't. If you want assurance, write in assembly code (with many
    years experience behind you).

    Trivial low-level optimisations like these have minuscule impact
    compared to algorithmic optimisations.

    > As for portability, wouldn't fixed,
    > unambiguously sized types be much more portable? Doesn't the ambiguity
    > open the door for me on my system X with compiler Y to rely on its
    > Z-byte representation of int? And if system-dependent optimization is
    > desired, wouldn't it be easier to do with fixed-size types instead?


    The C++ standard specifies the minimum ranges that integral types must
    support. If you consider these to be your maximum ranges then your code
    will definitely be portable in that respect. (Note, though, that the C
    standard library provides ways of getting the exact ranges.)

    In any case, IMHO you should try to avoid thinking about sizes in
    bits and bytes when programming in C++ and instead think in the
    higher-level terms of ranges.
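
    For instance, here is a minimal sketch of asking the implementation
    for the ranges it actually gives you (the exact limits come from
    <climits>, or from std::numeric_limits in <limits>; the values
    printed are whatever your platform defines):

    #include <climits>
    #include <limits>
    #include <iostream>

    int main()
    {
        // The C-style macros give the exact range of int on this platform.
        std::cout << "int:  " << INT_MIN << " .. " << INT_MAX << '\n';

        // The same information through the C++ traits class.
        std::cout << "long: "
                  << std::numeric_limits<long>::min() << " .. "
                  << std::numeric_limits<long>::max() << '\n';
    }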

    --
    Ben Measures
    Software programming, Internet design/programming, Gaming freak.

    http://ben.measures.org.uk - when I find time
     
    Ben Measures, Apr 3, 2004
    #2

  3. Joe C

    Joe C Guest

    "Shailesh" <> wrote in message
    news:CNubc.114082$...
    > One problem I've been wrestling with for a long time is how to use the
    > C++ integral data types, vis-a-vis their size.


    Try using the standard C (not C++) header:
    #include <stdint.h>
    It came on board with C99.

    This shows the names it allows.
    http://www.dinkumware.com/manuals/reader.aspx?b=c/&h=stdint.html

    My experience is that this header can be used in C++ programs with no
    problems in gcc-based compilers.
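
    A small sketch of what the header buys you (assuming your compiler
    ships it; the types below are the exact-width ones):

    #include <stdint.h>   /* C99 exact-width integer types */
    #include <stdio.h>

    int main(void)
    {
        uint8_t  flags   = 0xFFu;        /* exactly  8 bits, unsigned */
        int16_t  offset  = -1234;        /* exactly 16 bits, signed   */
        uint32_t counter = 4000000000u;  /* exactly 32 bits, unsigned */

        printf("%u %d %lu\n", (unsigned)flags, (int)offset,
               (unsigned long)counter);
        return 0;
    }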
     
    Joe C, Apr 3, 2004
    #3
  4. Jack Klein

    Jack Klein Guest

    On Sat, 03 Apr 2004 08:42:10 GMT, Shailesh <>
    wrote in comp.lang.c++:

    > One problem I've been wrestling with for a long time is how to use the
    > C++ integral data types, vis-a-vis their size. The C++ rules
    > guarantee that a char is at least 1 byte, a short and int at least 2
    > bytes, and a long at least 4 bytes.


    Your statement above is completely incorrect, because you are making
    the common mistake of confusing the word "byte" with the word "octet".

    In C and C++, the word "byte" is by definition the size of a
    character, and is at least 8 bits in width but may be wider. A char
    does not contain "at least 1 bytes", it contains exactly one byte,
    although that may be larger than one octet. A byte in C and C++ may
    have more than 8 bits.

    C++ does not guarantee that short and int are at least two bytes,
    although they must be at least two octets. Likewise with long.

    There are architectures where char contains more than 8 bits, mostly
    digital signal processors. On one such DSP, the minimum addressable
    unit is 16 bits, that is, one byte has 16 bits. The character, short,
    and int types all contain 16 bits and their sizeof is 1. Another
    addresses only 32-bit quantities. All of the integer types, from char
    through long, are 32 bits, and all are exactly one byte.
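
    You can see all of this on your own implementation with a few lines
    (CHAR_BIT comes from <climits>; the numbers printed are whatever your
    platform happens to define):

    #include <climits>
    #include <iostream>

    int main()
    {
        // sizeof is measured in bytes, and sizeof(char) is 1 by definition;
        // CHAR_BIT says how many bits that byte actually holds (>= 8).
        std::cout << "bits per byte: " << CHAR_BIT      << '\n';
        std::cout << "sizeof(short): " << sizeof(short) << '\n';
        std::cout << "sizeof(int):   " << sizeof(int)   << '\n';
        std::cout << "sizeof(long):  " << sizeof(long)  << '\n';
    }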


    > The rules also define a size
    > precedence to the types. In Stroustrup's book, it says that all type
    > sizes are multiples of char, "so by definition the size of a char is
    > 1." According to the rules, that means 1 unit which can be any number
    > of bytes.


    No, the size of a char in C or C++ is exactly one byte, no matter how
    many bits it contains. You are still making the common mistake of
    assuming that the term "byte" means exactly 8 bits, which it most
    certainly does not. Especially in C and C++, where by definition it
    does not.

    > Why should I trust the compiler to optimize the memory usage of the
    > types behind my back? As for portability, wouldn't fixed,
    > unambiguously sized types be much more portable? Doesn't the ambiguity
    > open the door for me on my system X with compiler Y to rely on its
    > Z-byte representation of int? And if system-dependent optimization is
    > desired, wouldn't it be easier to do with fixed-size types instead?


    As others have already pointed out, C++ does not specify the size of
    any type in bytes, except for the character types. What it does
    specify is the range of values each type must be able to hold. If you
    stay within the minimum range of values for a type, it will be
    portable to all implementations.
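
    One way to keep that honest is to state any assumption beyond the
    minimum range explicitly, so a port to a smaller implementation fails
    to compile instead of silently misbehaving. A pre-C++11 sketch (the
    100000 requirement is made up purely for illustration):

    #include <climits>

    // Ill-formed (array of negative size) on any implementation whose
    // int cannot hold values up to 100000 -- something the minimum
    // guaranteed range of int (32767) does not promise.
    typedef char int_can_hold_100000[INT_MAX >= 100000 ? 1 : -1];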

    > One of my gripes is that the terminology is unclear. New programmers
    > can be especially confused. For example, 'short' and 'long' are
    > relative adjectives, and they don't say how big or at least how big.
    > The other extreme are the names like __int8, __int16, and __int32 in
    > MSVC++. Wouldn't I be much less likely to use something called __int8
    > to count numbers over 255, than I would something called char? On the
    > other hand, these keywords fix the size of the type and allow no room
    > for the compiler to optimize.
    >
    > If I could invent new types, I would name them something like:
    >
    > uint8, uint16, uint24, uintN, ... (unsigned integer types)
    > sint8, sint16, sint24, sintN, ... (signed integer types)


    As already pointed out, C's <stdint.h> provides this for hardware
    platforms that actually support these exact sizes. The <stdint.h> for
    the 16-bit DSP I am currently working with does not typedef the int8_t
    and uint8_t types because they do not exist on the hardware.
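
    When the exact width may be missing, the least-width types are the
    portable fallback. A sketch along these lines (the typedef name
    "octet" is just illustrative, and some C++ compilers of this era want
    __STDC_LIMIT_MACROS defined before the include to expose the limit
    macros):

    #include <stdint.h>

    /* UINT8_MAX is defined only when uint8_t itself exists, so it
       doubles as a feature test for the exact-width type. */
    #ifdef UINT8_MAX
    typedef uint8_t       octet;   /* exactly 8 bits          */
    #else
    typedef uint_least8_t octet;   /* smallest type >= 8 bits */
    #endif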

    > where N is any multiple of 8 greater than 0 (i.e. arbitrary precision
    > types would be built-in.) I feel the signed/unsigned aspect is better
    > part of the keyword, and not separate and optional. The Mozilla
    > sources are instructive in that their cross-platform code implements
    > macros following a similar convention; but macros are like pollution.
    >
    > I'd further have a new keyword like "allowopt", which when placed
    > after the type keyword grants access to the compiler to optimize the
    > memory allocation of the type. For example, when I write "uint16
    > allowopt myCounter;", then I would unambiously be declaring, "Give me
    > a 16-bit, unsigned, integer called myCounter whose size the compiler
    > may optimize."
    >
    > In most compilers, the default setting would be to enable optimization
    > for all the declarations, and a pragma could turn it off. I have
    > suspicions about why things are the way they are, but I'd like to hear
    > the experts' opinions.


    You really need to do a net search for stdint.h, or get and read a
    copy of the current C standard. All implementations are required to
    provide typedefs for types that eliminate the need for the rather
    clumsy concept of a keyword like "allowopt".

    You can choose required types (the int_leastN_t and int_fastN_t
    families) that are either the smallest or the fastest able to hold at
    least 8, 16, 32, and 64 bits, and an implementation is free to provide
    other widths if it supports them.
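
    For example, something very close to the proposed "uint16 allowopt
    myCounter;" already exists (the array size below is arbitrary):

    #include <stdint.h>

    /* "At least 16 bits, unsigned, and let the implementation pick
       whatever is fastest" -- roughly what "uint16 allowopt" asks for: */
    uint_fast16_t myCounter = 0;

    /* "At least 16 bits, in the smallest type available" -- handy when
       space matters more than speed, e.g. for a large array: */
    uint_least16_t samples[10000];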

    The features of C's <stdint.h> will almost certainly be included in
    the next revision of the C++ standard.

    --
    Jack Klein
    Home: http://JK-Technology.Com
    FAQs for
    comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
    comp.lang.c++ http://www.parashift.com/c++-faq-lite/
    alt.comp.lang.learn.c-c++
    http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html
     
    Jack Klein, Apr 3, 2004
    #4
  5. Shailesh

    Shailesh Guest

    Jack Klein wrote:
    > As others have already pointed out, C++ does not specify the size of
    > any type in bytes, except for the character types. What it does
    > specify is the range of values each type must be able to hold. If you
    > stay within the minimum range of values for a type, it will be
    > portable to all applications.
    >
    >


    You're right that I had no idea a byte could be other than 8 bits. I
    also hadn't heard of the stdint.h header. I browsed it online, and it
    includes exactly the kinds of things I was looking for. I feel that a
    standard header like this would be far better than rolling one's own
    fixed-size types. Thank you for pointing it out. The fixed-width
    types are very helpful for tightly controlling data representation in
    files, memory, and network traffic. On the other hand, with so many
    flavors of CPU around, I can see how size is less meaningful in that
    context.
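
    For example, a sketch of the sort of thing I mean (a hypothetical
    helper that writes a 32-bit value into a buffer in a fixed big-endian
    order, so the file or packet layout doesn't depend on the host's int
    size or byte order):

    #include <stdint.h>

    /* Serialize a 32-bit value into four octets, most significant
       first, independent of the host representation. */
    void put_u32(uint8_t* out, uint32_t value)
    {
        out[0] = (uint8_t)(value >> 24);
        out[1] = (uint8_t)(value >> 16);
        out[2] = (uint8_t)(value >> 8);
        out[3] = (uint8_t)(value);
    }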
     
    Shailesh, Apr 4, 2004
    #5
