(s/were/would be/ in both this and the snipped part, BTW.
And s/paran/paren/ .)
Not so, especially when dealing with floating point or possible
overflows. The compiler is only free to rearrange things when it
can detect that the results are identical.
Yes, that's exactly what Irrwahn and I said.

The "as-if"
rule allows the compiler to produce whatever executable code it
likes, as long as the result is the same as if (w+x) had been
subtracted from (u+v).
Note that it is
perfectly allowable for an intermediate result to cause an
overflow, even though the overall expression does not.
True. However, if the compiler can tell ahead of time that
such an overflow will not occur, then it's free to optimize
in that way. Heck, it can use a built-in "subtract A+B from
C+D" FPU instruction, if it happens to have one.
Oh, and something else for the OP's question: many compilers
have a switch that lets you turn this strict ISO C compliance
off (or on) where floating-point "as if"s are concerned.
It is sometimes much faster, and occasionally much more
accurate (!), to produce answers that *don't* follow the C
standard. Consider my earlier example of
double d = 3.14, e = 2.55;
double f = d/e;
printf("%d\n", d/e == f);
On some compilers for x86, that can print 0 rather than the
expected 1 (for appropriate values in place of 3.14 and 2.55),
because the FPU registers are wider than the 'double' variables
on the program's stack. Storing d/e into f rounds away some
precision, while the d/e in the comparison may still be sitting
in a wide register, and the mismatch shows up in the output.
To follow the standard exactly (unless, as is likely, the
standard leaves some loopholes for this sort of thing), the
compiler would have to emit an extra instruction to store d/e
into a regular 'double' before the comparison, and that would
slow the program down. So many compilers let you turn
optimizations of this sort on and off.
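If you want to experiment, here's a self-contained sketch of
the same example (my own wrapping of it; whether you actually
see 0 depends on your compiler, target, and flags; with GCC on
x86, for instance, options such as -mfpmath=387 vs. -mfpmath=sse
or -ffloat-store can change the outcome):

#include <stdio.h>

int main(void)
{
    /* volatile keeps the compiler from doing the division at
       compile time, so the FPU really computes d/e at run time */
    volatile double d = 3.14, e = 2.55;
    double f = d / e;   /* quotient rounded to a 64-bit 'double' */

    /* With x87 code, the d/e below may still be held in a wider
       register, so comparing it against the stored f can yield 0 */
    printf("%d\n", d / e == f);
    return 0;
}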
HTH,
-Arthur