The lack of a boolean data type in C


JimS

You could have extra bits to represent the bit offset, as 32-bit read/write
machines do with char pointers. 32-bit chars are a pain.
Or you could pad to one byte, which is the sensible way of doing it.
This would mean that
bool flag = 1;
flag += 1;

would set flag to zero.

Pretty sure in C99

bool flag=1;
flag++;

flag is now (still) == 1

Jim
 

Ian Collins

Keith said:
#define false 0
#define true 1
typedef int bool;

[...]

Rather than the above, have you considered:

typedef enum { false, true } bool;

? It's very nearly equivalent, but I find it a bit better
esthetically.
My preference as well. It would have been even better if C enums were
type safe.
 

Ian Collins

Default said:
KimmoA wrote:




It's a silly one though. Even languages with built-in boolean types are
likely to have it be some machine-addressable type. You aren't
guaranteed to have it smaller than a char, and in fact seldom will.
Given a bool type, you can specialise or optimise operations or
constructs for that specific type, which you can't do with a user
defined equivalent.

<OT> this trick is used in the C++ standard library</OT>.
 

KimmoA

Keith said:
What principle?

That it makes sense to have a boolean type? I still want to know what
you all use instead...
You want C to have had a built-in Boolean type from the beginning.
It didn't. I'm afraid there's really nothing we can do about that
beyond offering a very limited amount of sympathy.

Obviously I realize that nothing can be done about it now, but I want
to fully understand why they decided to design the language this way.
So far, nobody has really told me anything that truly convinces me that
it made sense.
There are other languages that *have* had a Boolean type from their
beginnings.

I know what you mean by this. You want me to switch to a different
language for questioning one single thing. That's silly, and there is
no other language that I'm even remotely interested in. I switched FROM
C++ to C, but at least C++ had the bool...
 

Richard Heathfield

KimmoA said:
That it makes sense to have a boolean type? I still want to know what
you all use instead...

If I need one, I use an int. Who cares about the storage cost of one lousy
int? And if I need loads, I use an array of unsigned char and some bit
macros. One bit per bool.

<snip>
 

Charlton Wilbur

KA> That it makes sense to have a boolean type? I still want to
KA> know what you all use instead...

Integers.

KA> Obviously I realize that nothing can be done about it now, but
KA> I want to fully understand why they decided to design the
KA> language this way. So far, nobody has really told me anything
KA> that truly convinces me that it made sense.

Because the only advantage is theoretical.

Imagine that you have a bool type. It's either an int or char with
syntactic sugar imposing constraints on its behavior, or it's a single
bit which needs to be extracted. The former gives you no real
advantage over what the language offers now, plus a whole lot of
potential for confusion, and the latter is likely to trade off speed
for theoretical purity.

Because it can't be predictably translated into machine code.

Microprocessors understand integers and floating-point numbers at a
very low level, but most often implement boolean as equal to zero or
not equal to zero. The C approach to Boolean math -- that zero is
false and non-zero is true -- maps directly to this approach. Adding
the conceit of Boolean variables to the language would either map
directly to this, giving the programmer no real advantage over using
integral types in the first place, or map to the extracting-bits approach.

In short: the benefit you get from a pure Boolean type is nonexistent.
That's why it's not in C. You have yet to show any compelling reason
to include it.

Charlton
 

Default User

Ian said:
Given a bool type, you can specialise or optimise operations or
constructs for that specific type, which you can't do with a user
defined equivalent.

True, but not what the OP was belly-aching about. He thinks a defined
type would save storage (char is still too big in his estimation), and
that's unlikely.



Brian
 

Default User

Richard said:
KimmoA said:


If I need one, I use an int. Who cares about the storage cost of one
lousy int? And if I need loads, I use an array of unsigned char and
some bit macros. One bit per bool.

I did the latter for my text-adventure game. Mostly it was for the
feature that suppresses showing the room's long description if you've
been in it before. So I had a bit array that could be indexed by the
room number.

Like many features, I did it that way because the game was a training
tool while I was learning C and that seemed a good way to learn to do
bit arrays. Were I doing it from the beginning, I'd probably keep that
information in the room structure.




Brian
 

Ian Collins

Default said:
Ian Collins wrote:



True, but not what the OP was belly-aching about. He thinks a defined
type would save storage (char is still too big in his estimation), and
that's unlikely.
I was thinking of the way C++ specialises vector<bool> as a compact
vector of bits, which does save on storage. C could do this for arrays
of _Bool, but it would break too many addressing rules to be viable.
 

jacob navia

Richard Heathfield wrote:
KimmoA said:




If I need one, I use an int. Who cares about the storage cost of one lousy
int? And if I need loads, I use an array of unsigned char and some bit
macros. One bit per bool.

<snip>

There are many processors with specialized bit-shifting or
bit-extracting instructions. The "macro" approach hides the
intention of the programmer behind a lot of shifts, masks,
and AND operations, making it impossible for the compiler
to do any optimizations specifically designed for bit arrays.
 

Default User

Ian said:
I was thinking of the way C++ specialises vector<bool> as a compact
vector of bits, which does save on storage. C could do this for
arrays of _Bool, but it would break too many addressing rules to be
viable.


And you can do it in C if that's desired, as mentioned elsewhere, with
bit arrays and few access macros.




Brian
 

Richard Heathfield

jacob navia said:

The "macro" approach hides the
intention of the programmer behind a lot of shifts and
masks AND operations,

Not at all. The names I chose for my macros make it perfectly clear what
they do.
making it impossible for the compiler
to do any optimizations specifically designed for bit arrays.

So what? If ever it becomes a performance problem, I'll worry about it.
Until then, I'll write the code to be clear to a human reader, and let the
compiler worry about how best to optimise it.
 

Keith Thompson

Eric Sosman said:
Keith said:
CBFalconer said:
#define false 0
#define true 1
typedef int bool;
[...]
Rather than the above, have you considered:
typedef enum { false, true } bool;
? It's very nearly equivalent, but I find it a bit better
esthetically.

"Aesthetically."

Both spellings are acceptable, but it looks like "aesthetically" is
preferred (at least according to Merriam-Webster).
I once ran across this gem:

typedef enum { TRUE, FALSE } bool;

No foolin', I really did!

Ick!

Well, I suppose you could make that work if you're *really* careful
(which means avoiding idiomatic C in many cases).
 

Arthur J. O'Dwyer

jacob navia said:


Not at all. The names I chose for my macros make it perfectly clear what
they do.


So what? If ever it becomes a performance problem, I'll worry about it.
Until then, I'll write the code to be clear to a human reader, and let the
compiler worry about how best to optimise it.

And FWIW, compilers are really smart. I'm biased,[1] but I assure you
that if the compiler has /any/ "optimizations specifically designed for
bit arrays", it will go out of its way to find (x = x&~m|y&m), even
hidden behind a macro, and make the obvious substitution.

-Arthur

[1] - working as I do, now, with an optimizing compiler for compiling
embedded software that's just packed with bitfield operations
(pun intended). I honestly have no idea whether GCC on x86 is anywhere
close to Green Hills on PowerPC in terms of bitfield ops.
 

Keith Thompson

KimmoA said:
Keith Thompson wrote: [...]
There are other languages that *have* had a Boolean type from their
beginnings.

I know what you mean by this. You want me to switch to a different
language for questioning one single thing. That's silly, and there is
no other language that I'm even remotely interested in. I switched FROM
C++ to C, but at least C++ had the bool...

No, I'm not encouraging you to switch languages.
 

Keith Thompson

jacob navia said:
Richard Heathfield wrote:

There are many processors with specialized bit-shifting or
bit-extracting instructions. The "macro" approach hides the
intention of the programmer behind a lot of shifts, masks,
and AND operations, making it impossible for the compiler
to do any optimizations specifically designed for bit arrays.

I don't see why a sufficiently clever optimizer couldn't translate
code using shifts and masks into code using some machine-specific bit
extraction instruction -- assuming the resulting code is actually
better than shifting and masking.
 

Keith Thompson

Charlton Wilbur said:
KA> That it makes sense to have a boolean type? I still want to
KA> know what you all use instead...

Integers.

KA> Obviously I realize that nothing can be done about it now, but
KA> I want to fully understand why they decided to design the
KA> language this way. So far, nobody has really told me anything
KA> that truly convinces me that it made sense.

Because the only advantage is theoretical.

Imagine that you have a bool type. It's either an int or char with
syntactic sugar imposing constraints on its behavior, or it's a single
bit which needs to be extracted. The former gives you no real
advantage over what the language offers now, plus a whole lot of
potential for confusion, and the latter is likely to trade off speed
for theoretical purity.

Because it can't be predictably translated into machine code.

Why is that a problem? I don't care what the machine code looks like,
as long as it implements the semantics correctly; efficiency is a
secondary concern, but I *still* don't usually care about the specific
instruction sequences.
Microprocessors understand integers and floating-point numbers at a
very low level, but most often implement boolean as equal to zero or
not equal to zero. The C approach to Boolean math -- that zero is
false and non-zero is true -- maps directly to this approach. Adding
the conceit of Boolean variables to the language would either map
directly to this, giving the programmer no real advantage over using
integral types in the first place, or map to the extracting-bits approach.

In short: the benefit you get from a pure Boolean type is nonexistent.
That's why it's not in C. You have yet to show any compelling reason
to include it.

But C99 does have _Bool.
 

Keith Thompson

KimmoA said:
That it makes sense to have a boolean type? I still want to know what
you all use instead...


Obviously I realize that nothing can be done about it now, but I want
to fully understand why they decided to design the language this way.
So far, nobody has really told me anything that truly convinces me that
it made sense.

It's because Dennis Ritchie wanted it that way.

At the time, it simply wasn't felt that it was necessary. C has the
convention that any scalar expression can be used as a condition (in
an if or while statement, or as an operand of "!", "||", or "&&"). If
the expression has the value zero, it's treated as false; if it has
any non-zero value, it's treated as true.

Given these conventions, a separate Boolean type simply is not needed.
In fact, C programmers have been writing code for decades without the
need for such a type. C is intended to be a relatively small
language. If you want a variable to hold a condition, just use an
int. For example:

int done = 0;
while (!done) {
    ...
    if (...) {
        done = 1;
    }
    ...
}

You have to be careful about some things. For example this:

cond = isdigit(c);
if (cond == 1) {
    ...
}

is incorrect, because isdigit doesn't necessarily return 0 or 1. But this:

cond = isdigit(c);
if (cond) {
    ...
}

is both correct and better stylistically.

As for storage size concerns, that's already been explained several
times. Storing a single Boolean object in a single bit is not
helpful; the rest of the byte or word containing that bit isn't going
to be used for anything anyway, the code to extract that single bit is
probably larger than the code to load a byte or word, and you wouldn't
be able to take the object's address. If you want large arrays of
booleans, large enough that storage size becomes a concern, you may
want to trade off code complexity against data size and use a packed
bit array. C has no syntax for doing this directly; apparently
there's never been enough of a demand for it to justify adding it to
the language. But again, this is something that can already be
implemented in the language itself.

Of course that's not the whole story. A *lot* of programmers have
implemented their own Boolean types, by various names and with various
definitions. See section 9 of the comp.lang.c FAQ,
<http://www.c-faq.com/>, for some examples. The problem with that is
that types defined by different programmers may not be compatible, and
when their code is combined into a single program, it has to be
reconciled somehow. C99 added _Bool and <stdbool.h> to bring some
order to the situation *without* breaking any existing solutions.

None of this implies that C *couldn't* have had a built-in Boolean
type from the beginning. It easily could have. Maybe it even would
have been better if it had. C, like any language, was designed by
imperfect human beings (mostly by one particular human being). If you
don't agree with the decisions, that's fine, but there were valid
reasons.
 

Eric Sosman

Keith said:
[...] C has the
convention that any scalar expression can be used as a condition (in
an if or while statement, or as an operand of "!", "||", or "&&"). If
the expression has the value zero, it's treated as false; if it has
any non-zero value, it's treated as true.

Given these conventions, a separate Boolean type simply is not needed.
> [...]

At some risk to topicality (but we're already discussing
"What-If C" anyhow), I'll mention that there's at least one
other popular language that gets along quite well without a
Boolean type, thank you very much. I refer to Lisp, in which
the value `nil' (roughly meaning "no value") is considered
false, while any non-`nil' ("any actual value") is true.
 
