Richard said:
Implementation-defined, surely?
Harrumph. I guess so, but the distinction seems not
to be very important. When `(signed char)0xFF' is evaluated
"... either the result is implementation-defined or an
implementation-defined signal is raised." (6.3.1.3/3)
On the face of it, that's implementation-defined behavior and
not undefined behavior. But what if the implementation takes
the second alternative and raises a signal? If a function has
been installed to handle the signal
"If and when the function returns, if the value of _sig_
is [...] or any other implementation-defined value
corresponding to a computational exception, the behavior
is undefined; [...]" (7.14.1.1/3)
So if there's a handler, it cannot return without invoking
undefined behavior. I guess that means it must call abort()
or _Exit() or run an infinite loop; all of these have defined
effects, but are sufficiently unfortunate that they ought to
be avoided just about as strenuously as undefined behavior.
No nasal demons, surely, but no happy outcome either.
If there's not a handler, the implementation-defined signal
is treated as if one of SIG_IGN or SIG_DFL had been set up (the
choice is implementation-defined). If the handling is equivalent
to SIG_IGN, I think we're back in U.B. territory again: we're
told that we'll get *either* a result *or* a signal, not both.
Thus, we can't count on getting a result of any kind if a signal
is raised and ignored; the Standard doesn't specify any behavior,
so the behavior is undefined by omission (cf. 3.4.3).
In the SIG_DFL case, the handling of the implementation-defined
signal is implementation-defined, not undefined. But right here
in the documentation I see
"The default handling for SIGBITROT causes demons
to fly out of your nose." (DS 9000 programmer's
manual, courtesy Armed Response Technologies)
.... which is not undefined behavior, but might seem so to a
casual observer. ;-)
Summary:
- You're right: `(signed char)0xFF' produces implementation-
defined, not undefined, behavior. My apologies.
- ... but since the I.B. is just about as unpredictable as
U.B., the programmer would be well-advised to avoid it.
- The *real* solution, I think, is to use `unsigned' types
whenever you want to deal with bits as bits. To ask the
question "Does this byte have all its bits set?", one
should not use potentially signed arithmetic. To answer
the question "Does this byte have the value 42?", either
signed or unsigned arithmetic will do.
- And, of course, all this is just another c.l.c exercise
in taking a census on a pinhead. We know perfectly well
that two's complement has won the game and extinguished
its competitors, right? And we're certain that it's the
ultimate in integer representations, and will never ever
be supplanted, right? Computer design is immune to the
vagaries of fashion, right?
(Ahem.) "Right?"
(I know you're out there; I can hear you breathing. C'mon,
stand up and be counted -- in two's complement ...)