plain int and signed int


Tim Rentsch

Ben Bacarisse said:
Tim Rentsch said:
Keith Thompson said:
[...]
But (C99 7.14.1.1p3):

If and when the function [the signal handler] returns, if
the value of sig is SIGFPE, SIGILL, SIGSEGV, or any other
implementation-defined value corresponding to a computational
exception [presumably this includes overflow on a signed integer
conversion], the behavior is undefined; otherwise the program
will resume execution at the point it was interrupted.

So if the particular undefined behavior of the implementation-defined
signal handler involves causing the conversion to yield a trap
representation, then yes, a trap representation can result from a
conversion.

It's undefined behavior. That means the implementation is free
to define it as yielding a trap representation, and isn't even
obligated to document that decision.

It doesn't seem terribly likely, though.

To me it does seem likely, precisely because the logic necessary
to effect a conversion is then so simple, e.g., an arithmetic shift
left followed by a logical shift right. That there is no hardware
signal generated is irrelevant -- a signal raised in the abstract
machine need not have any corresponding presence in the physical
machine.

The whole idea is irrelevant except on systems that actually have trap
representations for integer types; there aren't many of those around
today. In addition, the implementation would have to support C99 (C90
didn't permit the implementation-defined signal), and the implementers
would have to decide to use signal semantics to justify the chosen
behavior.

First, it isn't just trap representations, it's also negative
zeros; the same argument about signal handling applies equally
to both.

I don't get this part. The bit pattern that /might/ be a negative
zero is either a trap representation or a normal value, in which
case it is called negative zero (6.2.6.2 p2). I don't see how signal
handling applies to negative zero. This is an argument about terms,
but they seem to be important ones.

The point I was trying to make is that the same argument
that trap representations can be produced (through an
implementation-defined signal) applies to negative zeros,
so narrowing conversions to signed integer types can
produce negative zero values as results. Some people
are of the opinion that such conversions cannot produce
negative zeros.
 

Tim Rentsch

Keith Thompson said:
Tim Rentsch said:
Keith Thompson said:
[...]
But (C99 7.14.1.1p3):

If and when the function [the signal handler] returns, if
the value of sig is SIGFPE, SIGILL, SIGSEGV, or any other
implementation-defined value corresponding to a computational
exception [presumably this includes overflow on a signed integer
conversion], the behavior is undefined; otherwise the program
will resume execution at the point it was interrupted.

So if the particular undefined behavior of the implementation-defined
signal handler involves causing the conversion to yield a trap
representation, then yes, a trap representation can result from a
conversion.

It's undefined behavior. That means the implementation is free
to define it as yielding a trap representation, and isn't even
obligated to document that decision.

It doesn't seem terribly likely, though.

To me it does seem likely, precisely because the logic necessary
to effect a conversion is then so simple, e.g., an arithmetic shift
left followed by a logical shift right. That there is no hardware
signal generated is irrelevant -- a signal raised in the abstract
machine need not have any corresponding presence in the physical
machine.

The whole idea is irrelevant except on systems that actually have trap
representations for integer types; there aren't many of those around
today. In addition, the implementation would have to support C99 (C90
didn't permit the implementation-defined signal), and the implementers
would have to decide to use signal semantics to justify the chosen
behavior.

First, it isn't just trap representations, it's also negative
zeros; the same argument about signal handling applies equally
to both.

Ok, so it applies only to systems that either have trap
representations for signed integers, or that use a representation
other than two's-complement. Again, there aren't many such systems.

Oh, I didn't mean to imply it was common. The question
is whether the Standard allows it.

Second, there is no reason that the implementation-defined result
of a narrowing conversion (to a signed integer type) can't be a
value that is not representable[*] in the type in question, which
means an exceptional condition, which means undefined behavior,
which means any value at all could be produced, including a trap
representation. Again the same argument applies to negative zeros;
the presence of undefined behavior trumps any other statement of
behavior that the Standard prescribes.

I've assumed that a trap representation cannot represent a value. But
C99 6.2.6.1p5, defining trap representations, says:

Certain object representations need not represent a value of the
object type.

which could be interpreted to mean that a trap representation *can*
represent a value of the type.

Since dealing with trap representations is undefined behavior,
an implementation could define any behavior it wanted, including
interpreting it as a legal value (and yes even some of the time
but not all of the time).

I'm not sure what that would mean, though; you wouldn't be able to
access the value without invoking undefined behavior. I suppose the
implementation could define the behavior of accessing a certain trap
representation as yielding a specified value; other operations on it
might have non-standard behavior. For example, addition and
subtraction might work properly on the full range of a type, but
multiplication and division might work only on a smaller subrange.

One obvious example might be to allow comparison to see
if an object has a trap representation in it, but not
allow any other use. This could be useful for debugging
in an implementation that sets automatic variables without
explicit initializers to trap representations.

Both C90 and C99 say that the result of a conversion from an integer
type to a signed integer type yields an implementation-defined result
if the source value can't be represented; C99 additionally allows an
implementation-defined signal to be raised -- which *can* invoke
undefined behavior. So I'd still say that the only way a conversion
can yield a trap representation is if the conversion raises a signal,
and the implementation chooses to define the undefined behavior so
that it stores a trap representation in the target object, which an
ordinary signal handler would have no way of accessing.

I'm not aware of any language in the Standard that limits the
word "result" used in 6.3.1.3p3 to be an in-range value for
the target type. Do you have any evidence to support that
contention? Certainly if the result can be anything other
than an in-range value for the target type, then the exceptional
condition/undefined behavior rule would apply.

Oh, and that raises another point. A trap representation makes sense
only as something stored in an object. A conversion doesn't
necessarily involve any objects, so there's not necessarily any place
for the trap representation to exist.

I think you may have misunderstood my comment. Suppose we're
converting to a 16-bit integer type with a range of -32767..32767.
Suppose the implementation-defined rule for conversions to any signed
integer type yields the same value if the original value is in range
for the target type, and -32768 if the original value is not in range.
Since -32768 is not within the range of what these 16-bit integers
can represent, that's an exceptional condition/undefined behavior.
It's the UB that then causes a trap representation to appear, which
may appear anywhere.

Also, on a practical level, the idea that trap representations don't
make sense outside of "objects" can't be taken very seriously. In
real computers the values produced by arithmetic operations are stored
in some sort of memory that is just as capable (especially for the
case under consideration) of holding a trap representation as it is a
bona fide value.

[*] Notice, for example, the last sentence of 6.2.5p3: "If any
other character is stored in a char object, the resulting value is
implementation-defined but shall be within the range of values
that can be represented in that type." Clearly the final clause
is necessary only if a resulting value might /not/ be within the
range of values that can be represented in the target type.

There are plenty of clauses in the standard that aren't strictly
necessary.

I won't say there aren't places where this happens, but if it does
it's the exception not the rule. Even if there are other similar
cases, the counter-example in 6.2.5p3 puts the onus of defense on the
side that says the text in 6.3.1.3p3 cannot be an out-of-range value.
All 6.3.1.3p3 says is ".. either the result is implementation-defined
or ..". The word "result" is used frequently in the Standard,
covering all kinds of eventualities, including potential undefined
behavior. Even in cases where the word "result" clearly means a
value, it can mean an out-of-range value, as for example 6.2.5p9:

A computation involving unsigned operands can never overflow,
because a _result_ [emphasis added] that cannot be represented
by the resulting unsigned integer type is reduced modulo the
number that is one greater than the largest value that can be
represented by the resulting type.

All the evidence I'm aware of suggests that "result" as used
in 6.3.1.3p3 includes the possibility of an out-of-range value.
What evidence is there that it is limited to an in-range value
and cannot be anything else?
 

Keith Thompson

Tim Rentsch said:
Keith Thompson said:
Tim Rentsch said:
[...]
But (C99 7.14.1.1p3):

If and when the function [the signal handler] returns, if
the value of sig is SIGFPE, SIGILL, SIGSEGV, or any other
implementation-defined value corresponding to a computational
exception [presumably this includes overflow on a signed integer
conversion], the behavior is undefined; otherwise the program
will resume execution at the point it was interrupted.

So if the particular undefined behavior of the implementation-defined
signal handler involves causing the conversion to yield a trap
representation, then yes, a trap representation can result from a
conversion.

It's undefined behavior. That means the implementation is free
to define it as yielding a trap representation, and isn't even
obligated to document that decision.

It doesn't seem terribly likely, though.

To me it does seem likely, precisely because the logic necessary
to effect a conversion is then so simple, e.g., an arithmetic shift
left followed by a logical shift right. That there is no hardware
signal generated is irrelevant -- a signal raised in the abstract
machine need not have any corresponding presence in the physical
machine.

The whole idea is irrelevant except on systems that actually have trap
representations for integer types; there aren't many of those around
today. In addition, the implementation would have to support C99 (C90
didn't permit the implementation-defined signal), and the implementers
would have to decide to use signal semantics to justify the chosen
behavior.

First, it isn't just trap representations, it's also negative
zeros; the same argument about signal handling applies equally
to both.

Ok, so it applies only to systems that either have trap
representations for signed integers, or that use a representation
other than two's-complement. Again, there aren't many such systems.

Oh, I didn't mean to imply it was common. The question
is whether the Standard allows it.

Well, upthread you did say "To me it does seem likely".

Certainly the standard allows anything for undefined behavior.
I just think that the idea of defining that the result of the
conversion is to raise an implementation-defined signal (something
that's new in C99), and then to have that (implicit?) system
signal handler do something that a user-defined signal handler
couldn't possibly do, is a bit more convoluted than something I'd
expect any actual implementer to do. If I were an implementer and
wanted to achieve the same effect, I think I'd just do it in a
non-conforming mode.

Second, there is no reason that the implementation-defined result
of a narrowing conversion (to a signed integer type) can't be a
value that is not representable[*] in the type in question, which
means an exceptional condition, which means undefined behavior,
which means any value at all could be produced, including a trap
representation. Again the same argument applies to negative zeros;
the presence of undefined behavior trumps any other statement of
behavior that the Standard prescribes.

I've assumed that a trap representation cannot represent a value. But
C99 6.2.6.1p5, defining trap representations, says:

Certain object representations need not represent a value of the
object type.

which could be interpreted to mean that a trap representation *can*
represent a value of the type.

Since dealing with trap representations is undefined behavior,
an implementation could define any behavior it wanted, including
interpreting it as a legal value (and yes even some of the time
but not all of the time).

Granted.

I'm not sure what that would mean, though; you wouldn't be able to
access the value without invoking undefined behavior. I suppose the
implementation could define the behavior of accessing a certain trap
representation as yielding a specified value; other operations on it
might have non-standard behavior. For example, addition and
subtraction might work properly on the full range of a type, but
multiplication and division might work only on a smaller subrange.

One obvious example might be to allow comparison to see
if an object has a trap representation in it, but not
allow any other use. This could be useful for debugging
in an implementation that sets automatic variables without
explicit initializers to trap representations.

Agreed.

Both C90 and C99 say that the result of a conversion from an integer
type to a signed integer type yields an implementation-defined result
if the source value can't be represented; C99 additionally allows an
implementation-defined signal to be raised -- which *can* invoke
undefined behavior. So I'd still say that the only way a conversion
can yield a trap representation is if the conversion raises a signal,
and the implementation chooses to define the undefined behavior so
that it stores a trap representation in the target object, which an
ordinary signal handler would have no way of accessing.

I'm not aware of any language in the Standard that limits the
word "result" used in 6.3.1.3p3 to be an in-range value for
the target type. Do you have any evidence to support that
contention? Certainly if the result can be anything other
than an in-range value for the target type, then the exceptional
condition/undefined behavior rule would apply.

Oh, and that raises another point. A trap representation makes sense
only as something stored in an object. A conversion doesn't
necessarily involve any objects, so there's not necessarily any place
for the trap representation to exist.

I think you may have misunderstood my comment. Suppose we're
converting to a 16-bit integer type with a range of -32767..32767.
Suppose the implementation-defined rule for conversions to any signed
integer type yields the same value if the original value is in range
for the target type, and -32768 if the original value is not in range.
Since -32768 is not within the range of what these 16-bit integers
can represent, that's an exceptional condition/undefined behavior.

No, it's not UB; the conversion yields an implementation-defined
result or raises an implementation-defined signal. (The consequences
of the signal might be undefined.)

It's the UB that then causes a trap representation to appear, which
may appear anywhere.

Only if the behavior is actually undefined, and only if the
implementation takes advantage of it.

Also, on a practical level, the idea that trap representations don't
make sense outside of "objects" can't be taken very seriously. In
real computers the values produced by arithmetic operations are stored
in some sort of memory that is just as capable (especially for the
case under consideration) of holding a trap representation as it is a
bona fide value.

But in the abstract machine, I don't believe trap representations can
exist other than in objects.

[*] Notice, for example, the last sentence of 6.2.5p3: "If any
other character is stored in a char object, the resulting value is
implementation-defined but shall be within the range of values
that can be represented in that type." Clearly the final clause
is necessary only if a resulting value might /not/ be within the
range of values that can be represented in the target type.

There are plenty of clauses in the standard that aren't strictly
necessary.

I won't say there aren't places where this happens, but if it does
it's the exception not the rule. Even if there are other similar
cases, the counter-example in 6.2.5p3 puts the onus of defense on the
side that says the text in 6.3.1.3p3 cannot be an out-of-range value.
All 6.3.1.3p3 says is ".. either the result is implementation-defined
or ..". The word "result" is used frequently in the Standard,
covering all kinds of eventualities, including potential undefined
behavior. Even in cases where the word "result" clearly means a
value, it can mean an out-of-range value, as for example 6.2.5p9:

A computation involving unsigned operands can never overflow,
because a _result_ [emphasis added] that cannot be represented
by the resulting unsigned integer type is reduced modulo the
number that is one greater than the largest value that can be
represented by the resulting type.

All the evidence I'm aware of suggests that "result" as used
in 6.3.1.3p3 includes the possibility of an out-of-range value.
What evidence is there that it is limited to an in-range value
and cannot be anything else?

I'll have to think about that. (Or, to be honest, I might not get
around to it.)
 

Tim Rentsch

Keith Thompson said:
Tim Rentsch said:
Keith Thompson said:
[snip,snip,snip]

[*] Notice, for example, the last sentence of 6.2.5p3: "If any
other character is stored in a char object, the resulting value is
implementation-defined but shall be within the range of values
that can be represented in that type." Clearly the final clause
is necessary only if a resulting value might /not/ be within the
range of values that can be represented in the target type.

There are plenty of clauses in the standard that aren't strictly
necessary.

I won't say there aren't places where this happens, but if it does
it's the exception not the rule. Even if there are other similar
cases, the counter-example in 6.2.5p3 puts the onus of defense on the
side that says the text in 6.3.1.3p3 cannot be an out-of-range value.
All 6.3.1.3p3 says is ".. either the result is implementation-defined
or ..". The word "result" is used frequently in the Standard,
covering all kinds of eventualities, including potential undefined
behavior. Even in cases where the word "result" clearly means a
value, it can mean an out-of-range value, as for example 6.2.5p9:

A computation involving unsigned operands can never overflow,
because a _result_ [emphasis added] that cannot be represented
by the resulting unsigned integer type is reduced modulo the
number that is one greater than the largest value that can be
represented by the resulting type.

All the evidence I'm aware of suggests that "result" as used
in 6.3.1.3p3 includes the possibility of an out-of-range value.
What evidence is there that it is limited to an in-range value
and cannot be anything else?

I'll have to think about that. (Or, to be honest, I might not get
around to it.)

Are you trying to say that when or if your future deliberations
produce a result you'll get back to us?
 

Phil Carmody

Keith Thompson said:
Nobody said that, or anything resembling it.

You obviously speak a different language on your side of the pond.
Look up 'demonstrative pronoun'.

Phil
 

Keith Thompson

Phil Carmody said:
You obviously speak a different language on your side of the pond.
Look up 'demonstrative pronoun'.

Ok, fine, a literal reading might indicate that Squeamizh was saying
that the real world has nothing to do with floating point. It was
*extremely* obvious from the context that that wasn't what he meant.
 

Phil Carmody

Keith Thompson said:
Ok, fine, a literal reading might indicate that Squeamizh was saying
that the real world has nothing to do with floating point.
Yup.

It was
*extremely* obvious from the context that that wasn't what he meant.

Hmmm... I think you'll find that one of us (not me) introduced
an absolute ('nothing'), and that when there's doubt the absolute
almost always tends to be false.

Phil
 
