Discussion in 'C Programming' started by Jorgen Grahn, Mar 1, 2014.

  1. Jorgen Grahn

    David Brown Guest

    That would cover any integer types of the same size, on any sane
    computer - but would normally rule out compatibility between floating
    point and integer types of the same size, and probably also
    compatibility between pointers and integers. That is all fair enough.
    Thanks for that explanation. It is reassuring to know that the reason I
    couldn't find a clear definition in the standard is that there is no
    clear definition in the standard! It seems an odd omission, given how
    often phrases like "compatible types" turn up in the standard.
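
    For illustration, a minimal C11 sketch (assuming a platform where
    int and float are both 4 bytes) showing that equal size alone does
    not make types compatible:

        #include <stdio.h>

        int main(void) {
            /* Same object size on many ABIs... */
            printf("sizeof(int) = %zu, sizeof(float) = %zu\n",
                   sizeof(int), sizeof(float));
            /* ...but they remain distinct, incompatible types, which
               is why _Generic may list them both; this picks "float". */
            printf("%s\n", _Generic(1.0f, int: "int", float: "float"));
            return 0;
        }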
    David Brown, Mar 13, 2014

  2. Jorgen Grahn

    James Kuyper Guest

    It seems to me that I've heard of computers that had built-in hardware
    support for both big-endian and little-endian types, or for signed types
    that include two or more of the three representations allowed by the C
    standard for signed integers. Whether or not I'm remembering correctly,
    it would obviously be feasible to design such a system. Would you
    consider such hardware insane? Would you consider a C implementation for
    such a system that supported extended integer types that gave access to
    this feature insane?
    James Kuyper, Mar 13, 2014

  3. Jorgen Grahn

    David Brown Guest

    There are certainly many cpus that have hardware support for both big
    and little endian data formats (ARM, PPC, MIPS are examples). And I
    have known compilers that have extensions to support foreign-endian
    formats (through pragmas). So no, I don't consider these "insane" - but
    what would be "insane" in my book would be something like "int" being
    32-bit little-endian and "long int" being 32-bit big-endian. A PPC
    compiler that supported a built-in type "__int_le32_t" as a non-native
    little-endian 32-bit int would be fine, and it would be fine for these two
    types to be incompatible. But these would not be the normal default
    integer types.
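
    Since "__int_le32_t" is hypothetical, a portable sketch of handling
    a foreign-endian format in standard C, assembling the value byte by
    byte so the host's native byte order never matters:

        #include <stdint.h>

        /* Read a 32-bit little-endian value from a byte buffer. */
        static uint32_t load_le32(const unsigned char *p) {
            return (uint32_t)p[0]
                 | (uint32_t)p[1] << 8
                 | (uint32_t)p[2] << 16
                 | (uint32_t)p[3] << 24;
        }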
    David Brown, Mar 13, 2014
  4. I'm inclined to clarify something here. James's extra conditions are
    also not sufficient for two types to be compatible. He does not say
    they are, nor do you explicitly take them to be, but it seems worth
    spelling out.

    On my laptop, long long int and long int are identical at the machine
    level -- both are 8-byte, two's complement, signed integers -- but they
    are not compatible. Furthermore, int and const int are clearly the
    same size and I think they must use the same representation, yet they,
    too, are not compatible.
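
    A short C11 sketch of that point (the sizes shown assume a typical
    64-bit Linux/x86-64 ABI):

        #include <stdio.h>

        int main(void) {
            long a = 0;
            printf("sizeof(long) = %zu, sizeof(long long) = %zu\n",
                   sizeof(long), sizeof(long long));
            /* Identical representation, yet still distinct types: */
            printf("%s\n", _Generic(a, long: "long",
                                       long long: "long long"));
            /* long long *p = &a;  -- constraint violation: the pointer
               types are incompatible even when the sizes match. */
            return 0;
        }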

    The basic rule is that two types are compatible if they are the same
    type. There are some other rules that admit other pairs of compatible
    types but they are all about derived types -- compatibility is quite
    narrowly defined.

    The obvious question is then what real types (integer and floating
    types) are "the same" as each other? That's answered by the standard
    listing them all and then referencing the alternative ways of writing
    them. For example, 6.2.5 p4 lists the five standard signed integer
    types and points you to 6.7.2 for different ways of writing them. From
    this we can conclude that char and signed char are different types, even
    when char is signed, and that int and signed int are always the same
    type.
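
    That conclusion is easy to check with C11's _Generic, which can
    list all three character types in one selection precisely because
    they are three distinct types:

        #include <stdio.h>

        int main(void) {
            /* Always prints "char", even though char has the same
               representation as one of the other two. */
            printf("%s\n", _Generic((char)0,
                                    char: "char",
                                    signed char: "signed char",
                                    unsigned char: "unsigned char"));
            return 0;
        }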

    You start with 6.2.7 (the main section on compatible types), which
    states that two types are compatible if they are the same. Section
    6.7.2 p2 tells you that they are different types, so, unless they are
    stipulated to be compatible somewhere else, they are not. This does
    mean that you have to check that there is no such stipulation, but the
    sections where this might be said are listed in section 6.2.7, and all
    those sections are about the various forms of derived type.

    It's obviously debatable whether it's clear or not, but the rules are
    there in one form or another. The most helpful starting point is to
    note how strict compatibility is: the types must be the same (with a
    very few relaxations of that rule), and "the same" is a C language
    construct, not a machine construct.
    Ben Bacarisse, Mar 13, 2014

  5. That's going to miss some types, because not all the predefined integer
    types are used in the definitions of the intN_t types.

    For example:

    #include <stdint.h>
    #include <stdio.h>

    #define type_of(arg) \
        (_Generic(arg, int8_t: "int8_t", \
                       uint8_t: "uint8_t", \
                       int16_t: "int16_t", \
                       uint16_t: "uint16_t", \
                       int32_t: "int32_t", \
                       uint32_t: "uint32_t", \
                       int64_t: "int64_t", \
                       uint64_t: "uint64_t", \
                       default: "UNKNOWN"))

    int main(void) {
        printf("42: %s\n", type_of(42));
        printf("42L: %s\n", type_of(42L));
        printf("42LL: %s\n", type_of(42LL));
        return 0;
    }

    On my 64-bit system, the output is:

    42: int32_t
    42L: int64_t
    42LL: UNKNOWN

    because int64_t is defined as long, and no intN_t type is defined as
    long long. If I compile with "-m32", the output is:

    42: int32_t
    42L: UNKNOWN
    42LL: int64_t

    If I were to throw the predefined types into the mix, I'd get conflicts
    because I'd be specifying the same type twice.

    I'm not sure how _Generic could have been defined *without* the
    restriction that no two selections can specify compatible types. For
    example, if int32_t is a typedef for int, this:

    _Generic(42, int: "int", int32_t: "int32_t")

    is a constraint violation. If it weren't, would it resolve to "int" or
    "int32_t"? There's no good answer. (Remember that a typedef creates a
    new name for an existing type, not a new type.)
    int and long are distinct types, and are not compatible.
    Keith Thompson, Mar 13, 2014
  6. (snip)
    Ones I know, either have two different load and store instructions,
    such that the bytes are the same order in registers, but different
    in memory, or a mode bit to select which way load and store work.

    IA32 has bswap, which will reverse the byte order of a 32 bit
    register. (Using it on a 16 bit operand is undefined.)

    I believe it is usual for IA32 C compilers to support an in-line
    bswap() function, such that it can be used with minimal overhead.
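
    For example (assuming GCC or Clang, which spell the intrinsic
    __builtin_bswap32):

        #include <inttypes.h>
        #include <stdio.h>

        int main(void) {
            uint32_t x = 0x11223344;
            /* On IA32 this typically compiles to a single bswap
               instruction. */
            printf("%08" PRIx32 "\n", __builtin_bswap32(x));
            return 0;   /* prints 44332211 */
        }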

    -- glen
    glen herrmannsfeldt, Mar 13, 2014
  7. Jorgen Grahn

    Phil Carmody Guest

    You don't even need to go back to the VAX for such examples. With only
    24 bits on the 68000 address bus, you can use the top 8 bits in A0-A7
    for anything, right? The 68030 soon blasted that presumption out of the
    water. (Of course, you'd use the bottom 2 bits for flags too, but would
    have to AND them away before use on all members of the family.)
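
    A sketch of that low-bit trick in modern C -- implementation-
    defined, but common on byte-addressed machines with aligned
    allocations:

        #include <stdint.h>

        typedef struct { int payload; } node;  /* >= 4-byte aligned */

        /* Stash two flag bits in the low bits of the pointer. */
        static uintptr_t tag_ptr(node *p, unsigned flags) {
            return (uintptr_t)p | (flags & 3u);
        }

        /* AND the flag bits away before using the pointer. */
        static node *untag_ptr(uintptr_t t) {
            return (node *)(t & ~(uintptr_t)3);
        }

        static unsigned flags_of(uintptr_t t) {
            return (unsigned)(t & 3);
        }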

    Phil Carmody, Mar 16, 2014
  8. My favorite use for such is in finite-state automata for searching,
    where you need one bit to indicate that it is a matching transition.
    For biology, BLAST does this. They used the low bit, except on
    word-addressed Cray machines, where they used the high bit.
    (#ifdef to select which one.)
    Doing it in application programs is one thing; doing it in the OS is
    another. Much of OS/360 uses the high eight bits of 32-bit words,
    since S/360 (except the 360/67) uses 24-bit addresses.
    (The core memory for S/360 cost in the $/byte range, and OS/360
    could run on 64K systems.)

    With XA/370, and 31 bit addressing, much had to change. Many
    system control blocks still have to be "below the line" to
    be addressable.

    If they had learned from history, Apple wouldn't have done the
    same thing in MacOS on 68000 machines, which caused the same
    problem porting to the 68020.

    -- glen
    glen herrmannsfeldt, Mar 16, 2014
  9. I have seen recommendations for enums in this case:

    typedef enum { ev_initially_reset, ev_initially_set } ev_initial_state;
    typedef enum { ev_auto_reset, ev_manual_reset } ev_reset_method;
    void CreateEvent(..., ev_initial_state, ev_reset_method, ...);

    CreateEvent(NULL, ev_initially_set, ev_auto_reset, NULL);

    Now you get the readability and can't get the order wrong.
    (Easy extensibility to more values in the same type is an added bonus.)
    Seungbeom Kim, Mar 24, 2014
  10. Jorgen Grahn

    Kaz Kylheku Guest

    About the second point. C does not have type-safe enumerations. C++ does.

    However, if you write your code in "Clean C", you will get this benefit when
    compiling your code as C++.
    Kaz Kylheku, Mar 24, 2014
  11. Oh, that slipped my mind. Thanks for the correction!
    Seungbeom Kim, Mar 24, 2014
  12. You certainly can get the order wrong. Enumeration constants
    are of type int, and enumeration and integer types are implicitly
    converted in both directions. (Some compilers might be persuaded
    to warn about this kind of thing.)
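
    To make that concrete, a sketch reusing the hypothetical
    CreateEvent from upthread, with the two enum arguments swapped --
    it still compiles, and no diagnostic is required:

        typedef enum { ev_initially_reset, ev_initially_set }
            ev_initial_state;
        typedef enum { ev_auto_reset, ev_manual_reset } ev_reset_method;

        void CreateEvent(void *a, ev_initial_state s,
                         ev_reset_method m, void *b) {
            (void)a; (void)s; (void)m; (void)b;   /* stub */
        }

        int main(void) {
            /* Wrong order, but enumeration values convert implicitly
               to and from int, so this is accepted. */
            CreateEvent(0, ev_auto_reset, ev_initially_set, 0);
            return 0;
        }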

    If you really want that kind of type safety in C, you can use a struct
    type, but that might be considered overkill:

    typedef struct {
        enum { ev_initially_reset_, ev_initially_set_ } value;
    } ev_initial_state;

    const ev_initial_state ev_initially_reset = { ev_initially_reset_ };
    const ev_initial_state ev_initially_set = { ev_initially_set_ };

    Here there is no implicit conversion to or from any other type.
    (I'm using a convention of a trailing underscore for identifiers
    that aren't intended to be used by client code; other conventions
    are possible.)
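
    If the second parameter is wrapped the same way (my extension of
    the sketch, building on the definitions just given), the swapped
    call from before becomes a constraint violation:

        typedef struct {
            enum { ev_auto_reset_, ev_manual_reset_ } value;
        } ev_reset_method;
        const ev_reset_method ev_auto_reset = { ev_auto_reset_ };

        void CreateEvent(void *a, ev_initial_state s,
                         ev_reset_method m, void *b);

        void demo(void) {
            CreateEvent(0, ev_initially_set, ev_auto_reset, 0); /* OK */
            /* CreateEvent(0, ev_auto_reset, ev_initially_set, 0);
               -- rejected: no implicit conversion between the two
                  struct types. */
        }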
    Keith Thompson, Mar 24, 2014
  13. Are they really limited to int or does the compiler get to choose an
    appropriate type of integer (possibly depending on the values)?
    Barry Schwarz, Mar 24, 2014
  14. Jorgen Grahn

    James Kuyper Guest

    "An identifier declared as an enumeration constant has type int."
    "The expression that defines the value of an enumeration constant shall
    be an integer constant expression that has a value representable as an
    int." (

    "Each enumerated type shall be compatible with char, a signed integer
    type, or an unsigned integer type. The choice of type is
    implementation-defined,128) but shall be capable of representing the
    values of all the members of the enumeration." (

    This means that the "compatible integer type" for any given enumerated
    type can be smaller than 'int' if the range of its members allows. It's
    technically also allowed to be bigger than 'int', but because every
    enumeration constant must have a value representable as an int, there's
    not much point in doing so.
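
    A short sketch of the consequence (the first result is
    implementation-defined; GCC, for instance, shrinks it under
    -fshort-enums):

        #include <stdio.h>

        enum small { tiny = 1 };

        int main(void) {
            /* The enumerated type itself may be smaller than int... */
            printf("sizeof(enum small) = %zu\n", sizeof(enum small));
            /* ...but the enumeration constant is always an int. */
            printf("sizeof(tiny) = %zu, sizeof(int) = %zu\n",
                   sizeof tiny, sizeof(int));
            return 0;
        }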
    James Kuyper, Mar 24, 2014
  15. Jorgen Grahn

    Eric Sosman Guest

    The compiler gets to choose the integer type that underlies
    the enum itself. It can use any integer type whose range covers
    all of the enum's named constants. It can choose different integer
    types for different enum types: `int' for this one, `unsigned short'
    for the next, `__builtin_signed_22_bit__' for the third.

    The named constants themselves, though, are always `int'.

    Yeah, weird. Cope. ;-)
    Eric Sosman, Mar 25, 2014
  16. Jorgen Grahn

    Tim Rentsch Guest

    Let me ask you the question I tried to ask Jacob Navia. Is
    your question a generic question independent of the
    identifiers involved, or are you asking specifically
    regarding the identifiers 'bool', 'true', and 'false'?
    Tim Rentsch, Mar 29, 2014
  17. Jorgen Grahn

    Tim Rentsch Guest

    I didn't give an answer because I wasn't sure what question you
    were asking. That's why I asked the question I did, to clarify
    what you were asking about. One difference between you and me is
    I try to make sure I understand what the other person is saying
    before jumping into responding.
    That's true. One advantage of a typedef is the name can be
    reused as the name of a local variable, or a member of a
    struct, without invalidating the typedef. You can't do
    that with a macro.
    I guess you missed the point about local variables and
    struct members.
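
    A sketch of the difference, using a hypothetical name "flag"
    rather than "bool" itself:

        /* As a typedef, the name can be reused for a struct member
           (member names live in their own namespace): */
        typedef enum { flag_clear, flag_set } flag;

        struct widget {
            int flag;           /* fine alongside the typedef */
        };

        /* As a macro, the same declaration breaks, because the
           identifier is rewritten before the compiler sees it:
               #define flag int
               struct widget { int flag; };  -- expands to "int int;" */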
    Tim Rentsch, Mar 29, 2014
  18. Jorgen Grahn

    Tim Rentsch Guest

    [excerpted in an effort to focus on the key point]
    Still nonsense, and still irrelevant to what I was saying.

    Do you? Perhaps you could paraphrase what it is you think
    I'm saying, and say it back to verify following the usual
    "active listening" precepts.
    See next...

    The problem is, I don't see any evidence that you've put in any
    significant effort to understand what I've been saying, let alone
    explore the consequences of the different alternatives. What
    programs did you try compiling to see what sort of diagnostics
    might be produced after doing a #include <stdbool.h>? Despite
    what you may think, I am not responsible for doing the thinking
    for people too lazy to think for themselves.
    Tim Rentsch, Mar 29, 2014
  19. Jorgen Grahn

    David Brown Guest

    You are trying to tell us that the advantage of making bool, false and
    true a typedef enum rather than macros is that you can re-use the names
    as local variables and struct members? It is certainly true that there
    are occasions when re-using a typedef name makes sense (such as when
    writing "typedef struct S { ... } S;").

    But re-using "bool", "false" or "true" as local variable names or struct
    members is in no way "perfectly reasonable local use" - such code would
    be cryptic and confusing, and would be highly unlikely to occur in even
    the most badly written code.

    So we are back to your original claim - either give us an example of
    /perfectly reasonable/ use where macro bool would not work while typedef
    bool would be valid, or accept that the macro versions work fine. (Note
    that I personally think typedef bool would have been more elegant, but
    we are concerned here about what works and what breaks.)

    Secondly, give us an example of "rather cryptic error messages (or
    worse?) in cases where these names are used in ways not in keeping with
    the #define'd values" caused by the use of macro bool and fixed by the
    use of typedef bool.

    You made these claims, and were called on them - clearly and repeatedly,
    by several people. If it turns out that you can't find good examples,
    then say so - that's okay. It's fine to say "macro bool is less
    elegant", or that it goes against good coding practices - after all,
    typedef and enum were introduced as a better solution than preprocessor
    macros for type definitions. But you've made claims here, and you've
    brought up an old thread to keep the issue alive - now it is time to
    show us "perfectly reasonable" code or accept that while there is
    /legal/ code that works with typedef bool and not with macro bool, such
    code is not reasonable or realistic.
    David Brown, Mar 30, 2014
  20. Jorgen Grahn

    David Brown Guest

    Most people - including myself - are fairly happy with macro bool. We
    might have done it a little differently, but we accept it as a perfectly
    workable solution, and we assume the C committee have done their job of
    looking at the options and picking the one that gave a working system
    with the least risk of breaking existing code.

    /You/ claim that macro bool breaks perfectly reasonable code, and gives
    cryptic error messages.

    Every time I or someone else asks you to back up these claims with
    examples, you procrastinate - you say you don't understand the question,
    or that the rest of us must find examples ourselves. Sorry, but that is not
    how discussions or arguments work. /You/ made the claim, /you/ provide
    the examples or the justification.

    We can all think of pros and cons for using macros, typedefs, enums,
    language extensions, and mixtures between them. But no one can read
    your mind as to the specific problems you have thought of that macro
    bool has but typedef bool solves.

    So either show us that you have a /real/ point, or stop accusing people
    of being lazy or unimaginative because they won't do your job for you.
    David Brown, Mar 30, 2014
