Keith Thompson said:
Tim Rentsch said:
Keith Thompson said:
[...]
But (C99 7.14.1.1p3):
If and when the function [the signal handler] returns, if
the value of sig is SIGFPE, SIGILL, SIGSEGV, or any other
implementation-defined value corresponding to a computational
exception [presumably this includes overflow on a signed integer
conversion], the behavior is undefined; otherwise the program
will resume execution at the point it was interrupted.
So if the particular undefined behavior of the implementation-defined
signal handler involves causing the conversion to yield a trap
representation, then yes, a trap representation can result from a
conversion.
It's undefined behavior. That means the implementation is free
to define it as yielding a trap representation, and isn't even
obligated to document that decision.
It doesn't seem terribly likely, though.
To me it does seem likely, precisely because the logic necessary
to effect a conversion is then so simple, e.g., an arithmetic shift
left followed by a logical shift right. That there is no hardware
signal generated is irrelevant -- a signal raised in the abstract
machine need not have any corresponding presence in the physical
machine.
The whole idea is irrelevant except on systems that actually have trap
representations for integer types; there aren't many of those around
today. In addition, the implementation would have to support C99 (C90
didn't permit the implementation-defined signal), and the implementers
would have to decide to use signal semantics to justify the chosen
behavior.
First, it isn't just trap representations, it's also negative
zeros; the same argument about signal handling applies equally
to both.
Ok, so it applies only to systems that either have trap
representations for signed integers, or that use a representation
other than two's-complement. Again, there aren't many such systems.
Oh, I didn't mean to imply it was common. The question
is does the Standard allow it.
Second, there is no reason that the implementation-defined result
of a narrowing conversion (to a signed integer type) can't be a
value that is not representable[*] in the type in question, which
means an exceptional condition, which means undefined behavior,
which means any value at all could be produced, including a trap
representation. Again the same argument applies to negative zeros;
the presence of undefined behavior trumps any other statement of
behavior that the Standard prescribes.
I've assumed that a trap representation cannot represent a value. But
C99 6.2.6.1p5, defining trap representations, says:
Certain object representations need not represent a value of the
object type.
which could be interpreted to mean that a trap representation *can*
represent a value of the type.
Since dealing with trap representations is undefined behavior,
an implementation could define any behavior it wanted, including
interpreting one as a legal value (and yes, even only some of the
time rather than all of the time).
I'm not sure what that would mean, though; you wouldn't be able to
access the value without invoking undefined behavior. I suppose the
implementation could define the behavior of accessing a certain trap
representation as yielding a specified value; other operations on it
might have non-standard behavior. For example, addition and
subtraction might work properly on the full range of a type, but
multiplication and division might work only on a smaller subrange.
One obvious example might be to allow comparison to see
if an object has a trap representation in it, but not
allow any other use. This could be useful for debugging
in an implementation that sets automatic variables without
explicit initializers to trap representations.
Both C90 and C99 say that the result of a conversion from an integer
type to a signed integer type yields an implementation-defined result
if the source value can't be represented; C99 additionally allows an
implementation-defined signal to be raised -- which *can* invoke
undefined behavior. So I'd still say that the only way a conversion
can yield a trap representation is if the conversion raises a signal,
and the implementation chooses to define the undefined behavior so
that it stores a trap representation in the target object, which an
ordinary signal handler would have no way of accessing.
I'm not aware of any language in the Standard that limits the
word "result" used in 6.3.1.3p3 to be an in-range value for
the target type. Do you have any evidence to support that
contention? Certainly if the result can be anything other
than an in-range value for the target type, then the exceptional
condition/undefined behavior rule would apply.
Oh, and that raises another point. A trap representation makes sense
only as something stored in an object. A conversion doesn't
necessarily involve any objects, so there's not necessarily any place
for the trap representation to exist.
I think you may have misunderstood my comment. Suppose we're
converting to a 16-bit integer type with a range of -32767..32767.
Suppose the implementation-defined rule for conversions to any signed
integer type yields the same value if the original value is in range
for the target type, and -32768 if the original value is not in range.
Since -32768 is not within the range of what these 16-bit integers
can represent, that's an exceptional condition/undefined behavior.
It's the resulting undefined behavior that then allows a trap
representation to appear, and it may appear anywhere.
Also, on a practical level, the idea that trap representations don't
make sense outside of "objects" can't be taken very seriously. In
real computers the values produced by arithmetic operations are stored
in some sort of memory that is just as capable (especially for the
case under consideration) of holding a trap representation as it is a
bona fide value.
[*] Notice, for example, the last sentence of 6.2.5p3: "If any
other character is stored in a char object, the resulting value is
implementation-defined but shall be within the range of values
that can be represented in that type." Clearly the final clause
is necessary only if a resulting value might /not/ be within the
range of values that can be represented in the target type.
There are plenty of clauses in the standard that aren't strictly
necessary.
I won't say there aren't places where this happens, but if it does
it's the exception not the rule. Even if there are other similar
cases, the counter-example in 6.2.5p3 puts the onus of defense on the
side that says the text in 6.3.1.3p3 cannot be an out-of-range value.
All 6.3.1.3p3 says is "... either the result is implementation-defined
or ...". The word "result" is used frequently in the Standard,
covering all kinds of eventualities, including potential undefined
behavior. Even in cases where the word "result" clearly means a
value, it can mean an out-of-range value, as for example 6.2.5p9:
A computation involving unsigned operands can never overflow,
because a _result_ [emphasis added] that cannot be represented
by the resulting unsigned integer type is reduced modulo the
number that is one greater than the largest value that can be
represented by the resulting type.
All the evidence I'm aware of suggests that "result" as used
in 6.3.1.3p3 includes the possibility of an out-of-range value.
What evidence is there that it is limited to an in-range value
and cannot be anything else?