> The abstract machine doesn't admit to such possibilities. A volatile
> object is still an object with normal object semantics (e.g., it retains
> its last stored value), it's just that its value can be stored in ways

No, it is not. This is completely the wrong path to take regarding
the meaning of an assignment expression; that meaning does not rest on the
semantics of volatile, but on the semantics of ``assignment expression''.
The volatile qualifier expresses precisely the opposite idea: that the object
does /not/ have normal object semantics, in that it cannot be relied upon to
retain its last stored value.
Caching optimizations rely on objects retaining their last stored value,
so that the cached copies can be trusted to be coherent with the original.
volatile defeats such optimizations for objects which are known, or suspected,
to violate this coherency requirement.
The only reason for the peculiar wording was to handle the cases where
the value is changed by the assignment in ways that aren't obvious from
the overt type of the left hand side (e.g., the value is truncated when
assigned to a small bit-field).
There are other ways to express such an intent
without actually saying that the value of the expression /is/ that of
the left operand.
Now let's think about this. Suppose that the lvalue foo.three is
an unsigned bit-field, three bits wide, and suppose that it's
a normal object which retains its last stored value.
If we execute this return statement:
return foo.three = 42;
the value 42 is reduced modulo 8 into the range 0-7, and that resulting value
is also returned.
Why does assignment return this reduced value?
Argument: it could be because the value is understood as going into the
object, and then emerging again to form the result.
If the value does not go into the object, there is no reason to reduce
it to three bits; the assignment expression might as well yield 42.
I.e. the right hand side is shunted into the object, where it must be reduced,
/and/ it is also returned.
For that matter, the resulting value can have the original type, too.
Someone designing a language in which the assignment /forks/ the value into two
destinations might not design it this way. The resulting value would be the
original value of the right hand side, of the original type. That's a much
more sensible design: why introduce a potentially dangerous conversion where
one isn't needed?
For instance, we could have a C dialect in which the following initializes
both double1 and double2 to 3.14:
double1 = double2 = integer = 3.14;
The 3.14 is truncated when it is shunted into the integer, but the
result of (integer = 3.14) is of type double, so the original 3.14 (or the best
approximation of that constant in the double type) propagates to double2 and
double1. Basically, the initialization is fully parallelizable.
If you want an alternate rationale for the strange wording which
supports your case, how about this: it is necessary because without it,
expressions like the following would be undefined behavior:
node = node->next = NULL;
The NULL could race ahead and get stored into node, turning node->next
into a null dereference. The wording ``value of the left operand after the
assignment'' introduces a data flow constraint which allows expressions like
the above to be harmless and even useful.