stdbool.h


David Brown

On 03/12/2014 07:39 AM, David Brown wrote:
On 11/03/14 17:56, Keith Thompson wrote:
...
I haven't made use of it myself, but I believe _Generic can be nested to
handle the types of more than one parameter.

Perhaps -- but I think the size of the code would be proportional to the
square of the number of types it handles, and it couldn't be portably
written to apply to extended integer types. Perhaps it could be
generated automatically.
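
For what it's worth, here is a minimal sketch of such nesting, with just
two types and therefore four associations (my own illustration, not code
from the thread); scaling it to N types needs N*N result strings:

#include <stdio.h>

/* Nested _Generic dispatching on two arguments at once. With T
   supported types the macro needs T*T associations, hence the
   quadratic growth mentioned above. */
#define PAIR_KIND(a, b)                                  \
    _Generic((a),                                        \
        int:    _Generic((b), int:    "int,int",         \
                              double: "int,double"),     \
        double: _Generic((b), int:    "double,int",      \
                              double: "double,double"))

int main(void) {
    puts(PAIR_KIND(1, 2.0));   /* prints "int,double" */
    return 0;
}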

Extended integer types are non-portable anyway.

But code that is portable can make use of extended integer types, if
they come into play through standard typedefs such as size_t, int64_t,
sig_atomic_t, etc. Dealing with this possibility using _Generic() is
problematic, because "No two generic associations in the same generic
selection shall specify compatible types.".

OK, that clause makes it quite a lot harder. Perhaps the best way then
would be to include <stdint.h> and use types intN_t for _Generic -
the different sizes (assuming the platform supports them, of course)
will not be compatible with each other.

Are different standard integer types "compatible" if they are the same
size? ...


As a practical matter (I'll get to the issue of what the standard says
further down), the same size is not sufficient for two types to be
compatible. They must also have the same representation and alignment
requirements, and must use the same mechanism when passed as a parameter
and when returned as the value of a function.


That would cover any integer types of the same size, on any sane
computer - but would normally rule out compatibility between floating
point and integer types of the same size, and probably also
compatibility between pointers and integers. That is all fair enough.
I've argued in the past that the standard guarantees compatibility for
certain pairs of types (which do not include "int" and "long"), but that
it does not prohibit other pairs of types from happening to be
compatible on particular implementations. I was told that the cases
where the standard specifies that two types are compatible are
exhaustive - no other pairs of types can be compatible. This is an
example of a more general rule when interpreting the standard: whenever
it provides a list of things for which something is true, that list is
exhaustive unless the standard explicitly says otherwise. I know of that
rule, but find that argument less than compelling, because the
compatibility rules are not provided as a list, but as several
independent clauses in widely separated parts of the standard.
However, the official interpretation is that whenever a constraint
requires that two types be compatible, it is a violation of that
constraint, requiring a diagnostic, if the two types are not ones
explicitly specified as being compatible by the standard.

After issuing the required diagnostic, the implementation is free to
produce an executable anyway, and if you choose to execute the resulting
program it might behave in exactly the same fashion as would be
mandatory if the standard had specified that those two types were
compatible, and the implementation is free to document this as a fact.
However, the diagnostic message is still the only mandatory result of
translating such a program. The implementation's documentation cannot,
therefore, accurately describe this fact by saying "int and long int are
compatible"; it must use some other wording to describe that fact.

Thanks for that explanation. It is reassuring to know that the reason I
couldn't find a clear definition in the standard is that there is no
clear definition in the standard! It seems an odd omission, given how
often phrases like "compatible types" turn up in the standard.
 

James Kuyper

On 13/03/14 12:22, James Kuyper wrote: ....

That would cover any integer types of the same size, on any sane
computer - but would normally rule out compatibility between floating
point and integer types of the same size, and probably also
compatibility between pointers and integers. That is all fair enough.

It seems to me that I've heard of computers that had built-in hardware
support for both big-endian and little-endian types, or for signed types
that include two or more of the three representations allowed by the C
standard for signed integers. Whether or not I'm remembering correctly,
it would obviously be feasible to design such a system. Would you
consider such hardware insane? Would you consider a C implementation for
such a system that supported extended integer types that gave access to
this feature insane?
 

David Brown

It seems to me that I've heard of computers that had built-in hardware
support for both big-endian and little-endian types, or for signed types
that include two or more of the three representations allowed by the C
standard for signed integers. Whether or not I'm remembering correctly,
it would obviously be feasible to design such a system. Would you
consider such hardware insane? Would you consider a C implementation for
such a system that supported extended integer types that gave access to
this feature insane?

There are certainly many CPUs that have hardware support for both big-
and little-endian data formats (ARM, PPC, MIPS are examples). And I
have known compilers that have extensions to support foreign-endian
formats (through pragmas). So no, I don't consider these "insane" - but
what would be "insane" in my book would be something like "int" being
32-bit little-endian and "long int" being 32-bit big-endian. A PPC
compiler (natively big-endian) that supported a built-in type
"__int_le32_t" as a little-endian 32-bit int would be fine, and it
would be fine for these two
types to be incompatible. But these would not be the normal default
integer types.
 

Ben Bacarisse

David Brown said:
That would cover any integer types of the same size, on any sane
computer - but would normally rule out compatibility between floating
point and integer types of the same size, and probably also
compatibility between pointers and integers. That is all fair enough.

I'm inclined to clarify something here. James's extra conditions are
also not sufficient for two types to be compatible. He does not say
they are, nor do you explicitly take them to be, but it seems worth
emphasising.

On my laptop, long long int and long int are identical at the machine
level -- both are 8-byte, 2's complement, signed integers -- but they
are not compatible. Furthermore, int and const int are clearly the
same size and I think they must use the same representation, yet they,
too, are not compatible.

The basic rule is that two types are compatible if they are the same
type. There are some other rules that admit other pairs of compatible
types, but they are all about derived types -- compatibility is quite
narrowly defined.

The obvious question is then: which real types (integer and floating
types) are "the same" as each other? That's answered by the standard
listing them all and then referencing the alternative ways of writing
them. For example, 6.2.5 p4 lists the five standard signed integer
types and points you to 6.7.2 for different ways of writing them. From
this we can conclude that char and signed char are different types, even
when char is signed, and that int and signed int are always the same
type.
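
That distinction can be shown directly with _Generic. A small sketch of
mine: since char, signed char and unsigned char are three distinct
types, all three associations are permitted together, and this compiles
whether plain char is signed or unsigned:

#include <stdio.h>

#define CHAR_KIND(c) _Generic((c),        \
    char:          "char",                \
    signed char:   "signed char",         \
    unsigned char: "unsigned char")

int main(void) {
    char c = 'x';
    puts(CHAR_KIND(c));   /* always prints "char" */
    return 0;
}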

You start with 6.2.7 (the main section on compatible types), which states
that they are compatible if they are the same. Section 6.7.2 p2 tells
you that they are different types, so, unless they are stipulated to be
compatible somewhere else, they are not so. This does mean that you have
to check that there is no such stipulation, but the sections where this
might be said are listed in section 6.2.7, and all those sections are
about the various forms of derived type.

Thanks for that explanation. It is reassuring to know that the reason I
couldn't find a clear definition in the standard is that there is no
clear definition in the standard! It seems an odd omission, given how
often phrases like "compatible types" turn up in the standard.

It's obviously debatable whether it's clear or not, but the rules are
there in one form or another. The most helpful starting point is to note
how strict compatibility is: the types must be the same (with a very few
relaxations of that rule), and "the same" is a C language construct,
not a machine construct.
 

Keith Thompson

David Brown said:
But code that is portable can make use of extended integer types, if
they come into play through standard typedefs such as size_t, int64_t,
sig_atomic_t, etc. Dealing with this possibility using _Generic() is
problematic, because "No two generic associations in the same generic
selection shall specify compatible types.".

OK, that clause makes it quite a lot harder. Perhaps the best way then
would be to include <stdint.h> and use types intN_t for _Generic -
the different sizes (assuming the platform supports them, of course)
will not be compatible with each other.


That's going to miss some types, because not all the predefined integer
types are used in the definitions of the intN_t types.

For example:

#include <stdint.h>
#include <stdio.h>

#define type_of(arg)                      \
    (_Generic(arg, int8_t:   "int8_t",   \
                   uint8_t:  "uint8_t",  \
                   int16_t:  "int16_t",  \
                   uint16_t: "uint16_t", \
                   int32_t:  "int32_t",  \
                   uint32_t: "uint32_t", \
                   int64_t:  "int64_t",  \
                   uint64_t: "uint64_t", \
                   default:  "UNKNOWN"))

int main(void) {
    printf("42: %s\n", type_of(42));
    printf("42L: %s\n", type_of(42L));
    printf("42LL: %s\n", type_of(42LL));
}

On my 64-bit system, the output is:

42: int32_t
42L: int64_t
42LL: UNKNOWN

because int64_t is defined as long, and no intN_t type is defined as
long long. If I compile with "-m32", the output is:

42: int32_t
42L: UNKNOWN
42LL: int64_t

If I were to throw the predefined types into the mix, I'd get conflicts
because I'd be specifying the same type twice.

I'm not sure how _Generic could have been defined *without* the
restriction that no two selections can specify compatible types. For
example, if int32_t is a typedef for int, this:

_Generic(42, int: "int", int32_t: "int32_t")

is a constraint violation. If it weren't, would it resolve to "int" or
"int32_t"? There's no good answer. (Remember that a typedef creates a
new name for an existing type, not a new type.)

Are different standard integer types "compatible" if they are the same
size? i.e., if both "int" and "long int" are 32-bit, are they
considered "compatible" for _Generic? I couldn't find a clear statement
in the C11 standard.

int and long are distinct types, and are not compatible.
 

glen herrmannsfeldt

(snip)
It seems to me that I've heard of computers that had built-in hardware
support for both big-endian and little-endian types, or for signed types
that include two or more of the three representations allowed by the C
standard for signed integers. Whether or not I'm remembering correctly,
it would obviously be feasible to design such a system. Would you
consider such hardware insane? Would you consider a C implementation for
such a system that supported extended integer types that gave access to
this feature insane?

The ones I know either have two different load and store instructions,
so that the bytes are in the same order in registers but a different
order in memory, or a mode bit to select which way load and store work.

IA32 has bswap, which swaps the byte order of a 32-bit register
(16-bit values are usually swapped with a rotate or xchg instead).

I believe it is usual for IA32 C compilers to support an in-line
bswap() function, so that it can be used with minimal overhead.
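
In outline (a sketch of mine, not any particular compiler's intrinsic),
the portable version looks like this; GCC and Clang typically recognize
the pattern and emit a single bswap instruction on IA32/x86-64:

#include <stdint.h>

/* Portable 32-bit byte-order swap. */
static inline uint32_t bswap32(uint32_t x) {
    return  (x >> 24)
         | ((x >>  8) & 0x0000FF00u)
         | ((x <<  8) & 0x00FF0000u)
         |  (x << 24);
}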

-- glen
 

Phil Carmody

Eric Sosman said:
[...]
Assume that you would know that on your architecture one bit
of a pointer is always zero and that you only have very
little memory available. Wouldn't it be tempting, when you
are developing for special hardware?

Someone I used to know once did that trick with a big array
of function pointers. Since function pointers always point to
the function's first instruction, and since instructions always
start on word boundaries (he "knew" this to be true, understand?),
he used the low-order pointer bit as a flag to indicate which
functions actually needed to run on the current pass and which
could be skipped.

I discovered this awfulness when porting his code to the VAX,
on which *neither* of the things he "knew" held true ...

You don't even need to go back to the VAX for such examples. With only
24 bits on the 68000 address bus, you can use the top 8 bits in A0-A7
for anything, right? The 68030 soon blasted that presumption out of the
water. (Of course, you'd use the bottom 2 bits for flags too, but would
have to AND them away before use on all members of the family.)

Phil
 

glen herrmannsfeldt

My favorite use for such tricks is in finite-state automata for
searching, where you need one bit to indicate that a transition is a
matching one. For biology, BLAST does this. It used the low bit, except
on word-addressed Cray machines, where it used the high bit
(with an #ifdef to select which one).
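
In outline, the trick looks like this (a sketch under the non-portable
assumption that the node type's alignment keeps the low address bit
zero; the names are invented, not BLAST's actual code):

#include <stdint.h>

typedef struct state state_t;   /* some suitably aligned node type */

static inline state_t *mark_match(state_t *p) {
    return (state_t *)((uintptr_t)p | 1u);           /* set the flag bit */
}

static inline int is_match(const state_t *p) {
    return (int)((uintptr_t)p & 1u);                 /* test the flag bit */
}

static inline state_t *strip_tag(state_t *p) {
    return (state_t *)((uintptr_t)p & ~(uintptr_t)1u);
}
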
You don't even need to go back to the VAX for such examples. With only
24 bits on the 68000 address bus, you can use the top 8 bits in A0-A7
for anything, right? The 68030 soon blasted that presumption out of the
water. (Of course, you'd use the bottom 2 bits for flags too, but would
have to AND them away before use on all members of the family.)

Doing it in application programs is one thing; doing it in the OS is
another. Much of OS/360 uses the high eight bits of 32-bit words,
since S/360 (except the 360/67) uses 24-bit addresses.
(Core memory for S/360 cost on the order of a dollar per byte, and
OS/360 could run on 64K systems.)

With 370/XA and 31-bit addressing, much had to change. Many
system control blocks still have to be "below the line" to
be addressable.

If they had learned from history, Apple wouldn't have done the
same thing in MacOS on 68000 machines, which caused the same
problem porting to the 68020.

-- glen
 

Seungbeom Kim

It's an argument against boolean parameters for which constants are
expected to be used in 90% of the calls.

Quick, what kind of event does this create in Windows:

CreateEvent(NULL, FALSE, FALSE, NULL);

The proper documentation is there; the problem is I have to go look at it.

It's tempting to make meaningful synonyms for the booleans. Trouble is,
there is no type checking.

Suppose we do this:

#define EV_MANUAL_RESET TRUE
#define EV_AUTO_RESET FALSE
#define EV_INITIALLY_SET TRUE
#define EV_INITIALLY_RESET FALSE

We can now mix up the order, without any diagnostics from the compiler:

CreateEvent(NULL, EV_INITIALLY_SET, EV_AUTO_RESET, NULL);

Oops! So we still have to check the documentation to get the order right.

Once we have ensured the order is right, though, the readability
improvement is undeniable.

I have seen recommendations for enums in this case:

typedef enum { ev_initially_reset, ev_initially_set } ev_initial_state;
typedef enum { ev_auto_reset, ev_manual_reset } ev_reset_method;
void CreateEvent(..., ev_initial_state, ev_reset_method, ...);

CreateEvent(NULL, ev_initially_set, ev_auto_reset, NULL);

Now you get the readability and can't get the order wrong.
(Easy extensibility to more values in the same type is an added bonus.)
 

Kaz Kylheku

CreateEvent(NULL, ev_initially_set, ev_auto_reset, NULL);

Now you get the readability and can't get the order wrong.

About the second point. C does not have type-safe enumerations. C++ does.

However, if you write your code in "Clean C", you will get this benefit when
compiling your code as C++.
 

Keith Thompson

Seungbeom Kim said:
I have seen recommendations for enums in this case:

typedef enum { ev_initially_reset, ev_initially_set } ev_initial_state;
typedef enum { ev_auto_reset, ev_manual_reset } ev_reset_method;
void CreateEvent(..., ev_initial_state, ev_reset_method, ...);

CreateEvent(NULL, ev_initially_set, ev_auto_reset, NULL);

Now you get the readability and can't get the order wrong.
(Easy extensibility to more values in the same type is an added bonus.)

You certainly can get the order wrong. Enumeration constants
are of type int, and enumeration and integer types are implicitly
converted in both directions. (Some compilers might be persuaded
to warn about this kind of thing.)
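
To make the failure mode concrete, here is a sketch of mine using
hypothetical names modeled on the ones above; the call passes the
arguments in the wrong order, yet it compiles, because each enumeration
constant is an int that converts implicitly to either enum type:

typedef enum { ev_initially_reset, ev_initially_set } ev_initial_state;
typedef enum { ev_auto_reset, ev_manual_reset } ev_reset_method;

void create_event(ev_initial_state s, ev_reset_method m);  /* hypothetical */

void demo(void) {
    create_event(ev_auto_reset, ev_initially_set);  /* swapped, no error */
}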

If you really want that kind of type safety in C, you can use a struct
type, but that might be considered overkill:

typedef struct {
    enum { ev_initially_reset_, ev_initially_set_ } value;
} ev_initial_state;

const ev_initial_state ev_initially_reset = { ev_initially_reset_ };
const ev_initial_state ev_initially_set   = { ev_initially_set_ };

Here there is no implicit conversion to or from any other type.
(I'm using a convention of a trailing underscore for identifiers
that aren't intended to be used by client code; other conventions
are possible.)
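
Usage of the wrapper might look like this (the prototype is
hypothetical, continuing the definitions above); passing a plain int,
or the wrong wrapper type, is now a constraint violation rather than a
silent conversion:

void create_event(ev_initial_state s);   /* hypothetical prototype */

void demo(void) {
    create_event(ev_initially_set);      /* OK */
    /* create_event(1); */               /* error: int does not convert
                                            to the struct type */
}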
 

Barry Schwarz

You certainly can get the order wrong. Enumeration constants
are of type int, and enumeration and integer types are implicitly
converted in both directions.

Are they really limited to int or does the compiler get to choose an
appropriate type of integer (possibly depending on the values)?
 

James Kuyper

....

Are they really limited to int or does the compiler get to choose an
appropriate type of integer (possibly depending on the values)?

"An identifier declared as an enumeration constant has type int."
(6.4.4.3p2)
"The expression that defines the value of an enumeration constant shall
be an integer constant expression that has a value representable as an
int." (6.7.2.2p2)

"Each enumerated type shall be compatible with char, a signed integer
type, or an unsigned integer type. The choice of type is
implementation-defined,128) but shall be capable of representing the
values of all the members of the enumeration." (6.7.2.2p2)

This means that the "compatible integer type" for any given enumerated
type can be smaller than 'int' if the range of its members allows. It's
technically also allowed to be bigger than 'int', but because of
6.4.4.3p2, there's not much point in doing so.
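
A short sketch of the asymmetry described above: the enumerated type
itself may be smaller than int on some implementations, while its
constants are always plain int:

#include <stdio.h>

enum small { lo = 0, hi = 1 };

int main(void) {
    printf("sizeof(enum small) = %zu\n", sizeof(enum small)); /* may be 1 */
    printf("sizeof lo          = %zu\n", sizeof lo);          /* == sizeof(int) */
    return 0;
}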
 

Eric Sosman

Are they really limited to int or does the compiler get to choose an
appropriate type of integer (possibly depending on the values)?

The compiler gets to choose the integer type that underlies
the enum itself. It can use any integer type whose range covers
all of the enum's named constants. It can choose different integer
types for different enum types: `int' for this one, `unsigned short'
for the next, `__builtin_signed_22_bit__' for the third.

The named constants themselves, though, are always `int'.

Yeah, weird. Cope. ;-)
 

Tim Rentsch

Ian Collins said:
David was one of many who couldn't imagine any "perfectly reasonable
local uses" and requested an example. I'm adding my name to the list
of those lacking imagination! Please enlighten us.

Let me ask you the question I tried to ask Jacob Navia. Is
your question a generic question independent of the
identifiers involved, or are you asking specifically
regarding the identifiers 'bool', 'true', and 'false'?
 

Tim Rentsch

jacob navia said:
Not in this case, no. That's why I asked the question that you
don't actually answer.

I didn't give an answer because I wasn't sure what question you
were asking. That's why I asked the question I did, to clarify
what you were asking about. One difference between you and me is
I try to make sure I understand what the other person is saying
before jumping into responding.

One advantage of the macro is that you can undefine it. You can't
do that with a typedef.

That's true. One advantage of a typedef is the name can be
reused as the name of a local variable, or a member of a
struct, without invalidating the typedef. You can't do
that with a macro.

No, I can't, and I think you can't either...

:)

I guess you missed the point about local variables and
struct members.
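
To spell that point out, here is a sketch of mine assuming the
typedef/enum definitions Tim gives below; the reuse is legal with those
definitions (whether or not it is good style), while the <stdbool.h>
macros would turn each reuse into a syntax error:

typedef _Bool bool;
enum { true = 1, false = 0 };

struct packet {
    int true;        /* member named 'true': fine here, but under
                        <stdbool.h> it would expand to "int 1;" */
};

int f(void) {
    int bool = 0;    /* shadows the typedef in this block; with the
                        macro it would expand to "int _Bool = 0;" */
    int true = 2;    /* shadows the enum constant */
    return bool + true;
}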
 

Tim Rentsch

[excerpted in an effort to focus on the key point]
[I mentioned TRUE and FALSE]

All-caps may be in common use in C, but that does not make it a
good idea.

Different developers have different opinions on that point, but in
any case it's irrelevant to what I was saying.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nonsense, as James Kuyper has pointed out.

One can keep habits from old systems after new ones are introduced
- such as having a habit of writing some things in all-caps even
after you are able to use small letters. [snip elaboration]

Still nonsense, and still irrelevant to what I was saying.

That has no bearing on my comment.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Having said that, if you want to introduce 'bool', 'true', and
'false' into C99 programs, go ahead and do that. However, I
would suggest not using <stdbool.h> but rather defining the three
names yourself (in a regular header file) without using the
preprocessor, eg,

typedef _Bool bool;
enum { true = 1, false = 0 };

I see no justification for this idea. These identifiers are part
of the C language, as provided in the standards-mandated standard
header.
[snip]
What I am doing is pointing out
alternate definitions for bool, true, and false, with somewhat
different characteristics for how the identifiers can be used,
that may be preferable in some circumstances. If some
developers find that their own circumstances favor those
different characteristics, they may reasonably prefer to use
those alternate definitions; in other circumstances they may
reasonably prefer to use the <stdbool.h> definitions. The
choice is up to them. Under some circumstances (which I will
not describe), I would advocate using the <stdbool.h>
definitions; in others I would advocate using the alternate
definitions mentioned above. IMO this question should not be
given a "one size fits all" kind of answer.

I understand what you are saying here.

Do you? Perhaps you could paraphrase what it is you think
I'm saying, and say it back to verify following the usual
"active listening" precepts.

But what we are missing is any good reason as to when one would want
to use the definitions you suggest rather than <stdbool.h>. There are
occasions when one
might need an odd definition (such as for weird code that defines
TRUE to be 0), but I fail to see any serious advantage of your
definition over the standard one. It's not that I think there is
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
anything /wrong/ with your enum type - it is just that I see no
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
reason to use it rather than the standard one.

See next...

The idea that bool might or should be an enumerated type has
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
no bearing on my comments.
^^^^^^^^^^^^^^^^^^^^^^^^^^

I think you have missed the point of what I've been saying. In
any case I cannot be responsible for your lack of imagination.

You /are/ responsible in this case for telling us that there are
"perfectly reasonable" cases where the <stdbool.h> macros would
cause conflicts but your typedef enum version would work, and
for telling us that there are cases where the enum version is
significantly better than the <stdbool.h> version. [snip
elaboration]

The problem is, I don't see any evidence that you've put in any
significant effort to understand what I've been saying, let alone
explore the consequences of the different alternatives. What
programs did you try compiling to see what sort of diagnostics
might be produced after doing a #include <stdbool.h>? Despite
what you may think, I am not responsible for doing the thinking
for people too lazy to think for themselves.
 

David Brown

I didn't give an answer because I wasn't sure what question you
were asking. That's why I asked the question I did, to clarify
what you were asking about. One difference between you and me is
I try to make sure I understand what the other person is saying
before jumping into responding.


That's true. One advantage of a typedef is the name can be
reused as the name of a local variable, or a member of a
struct, without invalidating the typedef. You can't do
that with a macro.


I guess you missed the point about local variables and
struct members.

You are trying to tell us that the advantage of making bool a typedef
and true and false enum constants, rather than macros, is that you can
re-use the names as local variables and struct members? It is certainly
true that there
are occasions when re-using a typedef name makes sense (such as when
writing "typedef struct S { ... } S;").

But re-using "bool", "false" or "true" as local variable names or struct
members is in no way "perfectly reasonable local use" - such code would
be cryptic and confusing, and would be highly unlikely to occur in even
the most badly written code.

So we are back to your original claim - either give us an example of
/perfectly reasonable/ use where macro bool would not work while typedef
bool would be valid, or accept that the macro versions work fine. (Note
that I personally think typedef bool would have been more elegant, but
we are concerned here about what works and what breaks.)

Secondly, give us an example of "rather cryptic error messages (or
worse?) in cases where these names are used in ways not in keeping with
the #define'd values" caused by the use of macro bool and fixed by the
use of typedef bool.


You made these claims, and were called on them - clearly and repeatedly,
by several people. If it turns out that you can't find good examples,
then say so - that's okay. It's fine to say "macro bool is less
elegant", or that it goes against good coding practices - after all,
typedef and enum were introduced as a better solution than preprocessor
macros for type definitions. But you've made claims here, and you've
brought up an old thread to keep the issue alive - now it is time to
show us "perfectly reasonable" code or accept that while there is
/legal/ code that works with typedef bool and not with macro bool, such
code is not reasonable or realistic.
 

David Brown

David Brown said:
I can't imagine any "perfectly reasonable local uses" of bool,
true or false that conflict with this, nor can I imagine any
cryptic error messages as a result. If you can provide examples,
maybe I'll change my mind.

I think you have missed the point of what I've been saying. In
any case I cannot be responsible for your lack of imagination.

You /are/ responsible in this case for telling us that there are
"perfectly reasonable" cases where the <stdbool.h> macros would
cause conflicts but your typedef enum version would work, and
for telling us that there are cases where the enum version is
significantly better than the <stdbool.h> version. [snip
elaboration]

The problem is, I don't see any evidence that you've put in any
significant effort to understand what I've been saying, let alone
explore the consequences of the different alternatives. What
programs did you try compiling to see what sort of diagnostics
might be produced after doing a #include <stdbool.h>? Despite
what you may think, I am not responsible for doing the thinking
for people too lazy to think for themselves.

Most people - including myself - are fairly happy with macro bool. We
might have done it a little differently, but we accept it as a perfectly
workable solution, and we assume the C committee have done their job of
looking at the options and picking the one that gave a working system
with the least risk of breaking existing code.

/You/ claim that macro bool breaks perfectly reasonable code, and gives
cryptic error messages.

Every time I or someone else asks you to back up these claims with
examples, you procrastinate - you say you don't understand the question,
or that we others must find examples ourselves. Sorry, but that is not
how discussions or arguments work. /You/ made the claim, /you/ provide
the examples or the justification.

We can all think of pros and cons for using macros, typedefs, enums,
language extensions, and mixtures between them. But no one can read
your mind as to the specific problems you have thought of that macro
bool has but typedef bool solves.

So either show us that you have a /real/ point, or stop accusing people
of being lazy or unimaginative because they won't do your job for you.
 
