Now again consider a ^= b ^= a ^= b.
When the compiler sees this expression it starts from the RHS, since the
assignment operator is right-associative. So first it has to calculate the
rightmost a ^= b, and the value of a must change... it then proceeds to
calculate b ^= a... then again a ^= b....
Andrey said:You are making a popular mistake assuming that operator associativity
somehow defines the order of computation. This is absolutely incorrect.
Operator associativity only defines how the expression should be
_parsed_, i.e. it says which operand belongs to which operator, but it
introduces no temporal ordering on the actual computations whatsoever.
This expression has no sequence points in the middle, which means that
there's absolutely no way to predict in which order it will be evaluated
and when the side effects are going to take place.
In this case the associativity dictates the following association of
operators and arguments
a ^= (b ^= (a ^= b))
which is equivalent to
a = a ^ (b = b ^ (a = a ^ b))
This means that the compiler has to evaluate the following intermediate
values
v1 = a ^ b
v2 = b ^ v1
v3 = a ^ v2
and realize the following side effects
a = v1
b = v2
a = v3
Now the important part is that there are absolutely no ordering
requirements on the side effects, meaning that the compiler is free to
realize these side effects in any order and at any moment after they are
introduced. The compiler is free to do it in this order, for example
1. v1 = a ^ b
2. v2 = b ^ v1
3. b = v2
4. v3 = a ^ v2
5. a = v3
6. a = v1
Needless to add, this will not "swap" your variables.
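For completeness, a minimal well-defined sketch: the same swap works if
each XOR assignment is written as its own statement, because every full
expression ends with a sequence point that forces its side effect to
complete:

    a ^= b;        /* sequence point after the full expression */
    b ^= a;        /* and after this one */
    a ^= b;        /* a and b are now swapped */

    /* Or, just as well with any modern compiler, use a temporary: */
    int tmp = a;
    a = b;
    b = tmp;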
Not perfectly.
Given that a and b are of type int,
(a ^= b) is capable of generating a trap representation.
Can you please elaborate on this trap representation?
James Kuyper said:The standard quite explicitly says that the consequences of undefined
behavior can include failing to compile, which clearly indicates that it
can precede execution of the relevant code. The standard does not
explain this in any detail, but I believe that the relevant rule is that
the undefined behavior is allowed to manifest at any point after execution of the
relevant code becomes inevitable.
Example:
    if (some condition)
        a ^= b ^= a ^= b;
For code like this, the behavior of the code becomes undefined as soon
as it becomes inevitable that the if() clause will be executed. This
means that at points in the code prior to the if() statement, the compiler
is allowed to generate code using optimizations that only work if the
if-condition is not true. As a result, those optimizations may cause
your code to misbehave long before evaluation of the offending statement.
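To make that concrete, here is a hedged sketch; the function and variable
names are invented for illustration, not taken from this thread:

    #include <stdio.h>

    void maybe_swap(int *a, int *b, int cond)
    {
        /* Even this printf can misbehave: once cond != 0 makes
           execution of the branch inevitable, the behavior of the
           whole execution is undefined, so the compiler may have
           optimized the surrounding code as if cond were always 0. */
        printf("before: a=%d, b=%d\n", *a, *b);
        if (cond)
            *a ^= *b ^= *a ^= *b;  /* unsequenced modifications of *a */
    }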
James Kuyper said:[...]
The standard explicitly states (6.2.6.2p2) that any given signed integer
type has one bit pattern that might or might not be a trap
representation - it's up to the implementation to decide (and to
document their decision). For types that use a one's complement or
sign-magnitude representation, this is the bit pattern that would
otherwise represent negative 0. If the type uses a two's complement
representation, this is the bit pattern that would otherwise represent
-2^N, where N is the number of value bits in the type.
Some people read that clause as allowing only that one trap
representation, and requiring that all other bit patterns must be valid.
I don't read it that way. It seems to me that what it says still allows
for the possibility of other trap representations as well. An
implementation that used 1 padding bit, 1 sign bit, and 30 value bits
for 'int' could set INT_MAX to 1000000000, and INT_MIN to -1000000000,
and declare that all bit patterns that would seem to represent values
outside that range are actually trap representations. It's been argued
that this violates the requirement that for any signed type "Each bit
that is a value bit shall have the same value as the same bit in the
object representation of the corresponding unsigned type." But every
value bit does have that value, in every non-trap representation that
has that bit set.
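A small sketch of the two's complement case described above, assuming a
32-bit int with no padding bits (memcpy is used so the pattern is only
inspected, not computed with):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The pattern with only the sign bit set: under two's
           complement it would represent -2^N (N = value bits), but
           6.2.6.2p2 allows the implementation to declare it a trap
           representation instead. */
        unsigned int pattern = ~(UINT_MAX >> 1);
        int candidate;
        memcpy(&candidate, &pattern, sizeof candidate);
        /* On mainstream implementations this prints INT_MIN; on an
           implementation that makes the pattern a trap
           representation, even reading 'candidate' as an int would
           be undefined. */
        printf("%#x reads back as %d (INT_MIN = %d)\n",
               pattern, candidate, INT_MIN);
        return 0;
    }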
Tim said:James Kuyper said:[...]
I must admit to having trouble with this one. What's the basis for
the position you state? In the example given the presence of a
padding bit seems completely irrelevant (except perhaps to make the
number of bits a multiple of 8 while preserving round limits?)
-- is
there any difference between this example and one using 31 value bits
to represent values in [-2000000000 .. 2000000000]?
... It seems like
all you are saying is that you think some combinations of value
bits are allowed to be trap representations whereas other people
think they aren't (not counting the distinguished ones explicitly
identified in the Standard, of course). What's the argument to
support this position?
Get a good textbook on Temporal Mechanics. Miles O'Brien of Deep
Space Nine has one. The subject isn't as simple as you might think.
jameskuyper said:Tim said:James Kuyper said:[...]
I must admit to having trouble with this one. What's the basis for
the position you state? In the example given the presence of a
padding bit seems completely irrelevant (except perhaps to make the
number of bits a multiple of 8 while preserving round limits?)
[..minor detour on padding bits..]
... It seems like
all you are saying is that you think some combinations of value
bits are allowed to be trap representations whereas other people
think they aren't (not counting the distinguished ones explicitly
identified in the Standard, of course). What's the argument to
support this position?
Which position - mine or theirs? My position is based upon the fact
that the standard explicitly allows for trap representations, and says
nothing to limit how many any given type may have. The opposing
position is based upon the claim that 6.2.6.2p2 defines the only trap
representation involving value bits that a signed integer type is
allowed to have. As I read it, 6.2.6.2p2 serves primarily to explain
the fact that the bit pattern that would otherwise represent negative
zero in 1's complement or sign-magnitude representations is allowed to
be a trap representation. This clears up any ambiguity that might
arise due to the fact that 0 has two distinct representations for such
types. It doesn't imply in any way that negative zero is the only
allowed trap representation. The fact that it also defines a bit
pattern for 2's complement representations that is allowed to be a
trap representation is a weak point in my argument. If my argument is
correct, that clause is redundant; but it doesn't directly contradict
my conclusion.
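Purely as an illustration of that reading, the 30-value-bit
implementation described earlier might document itself along these lines
(nothing like this is required by the standard):

    /* Hypothetical <limits.h> excerpt: 32-bit int with 1 padding bit,
       1 sign bit, and 30 value bits; every bit pattern that would
       encode a value outside this range is a documented trap
       representation. */
    #define INT_MAX 1000000000
    #define INT_MIN (-1000000000)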