Alf said:
Mostly this ties in with C++'s archaic/historical notion that the compiler
can do a much better job of optimization if it's left free to rearrange
the order that things happen in.
Obviously, after sequence point A it can only rearrange things that happen
before the next sequence point B, because at B the total effect has been
achieved -- nothing that was specified to happen between A and B can be
moved after B (at least not unless the compiler can prove that you can't
notice).
And of course it often can prove exactly that. Once the compiler has
converted the code to SSA form, it is very often extremely clear what could
and what could not be noticed. The simple fact is that those rules make it
harder to avoid inadvertently invoking UB than would otherwise be the case,
while enabling better optimization in extremely few cases.
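To illustrate the kind of trap those rules set, a minimal sketch (the
example is mine, not Alf's):

    #include <cstdio>

    int main()
    {
        int i = 0;
        // int a = i++ + ++i;  // UB: i is modified twice with no
        //                     // intervening sequence point
        int b = (i++, ++i);    // fine: the comma operator is a sequence
                               // point, so the two writes are ordered
        std::printf("%d %d\n", i, b);   // prints "2 2"
    }

Under the strict sequencing discussed below, the commented-out line would
simply have one defined, left-to-right meaning.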
That is of course talking about modern compilers. Early compilers could do
very little optimization and relied on those rules quite a bit.
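And to make the "prove that you can't notice" point concrete, another small
made-up sketch:

    int sum_and_product(int x, int y)
    {
        int s = x + y;  // the two computations are independent, so the
        int p = x * y;  // compiler may evaluate them in either order --
        return s + p;   // no conforming program can observe which
    }

After conversion to SSA form, s and p each have exactly one definition and
no aliases, so their independence is immediately visible to the optimizer.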
I'd like to see the difference in a modern compiler if strict sequencing
were required, versus the sequence point rules. It would probably be hard to
notice the difference except in a few very specific cases. These sorts of
rules become much more important again under multi-threading, which is why
that whole section of the standard was heavily rewritten for C++0x.
Changing write ordering can be a problem in multithreaded applications on
all platforms, and things only get worse when the platform does not by
default guarantee cache consistency and the order of memory operations,
requiring complications like memory barriers, explicit cache flushes, and
more. If all went well, few programmers would need to worry much about
these details beyond being aware that they are going on under the hood, but
in reality dealing with these complications can eat up far too much of the
time of even Application Programmers, not just Systems Programmers like the
threading library author.
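For what it's worth, the C++0x memory model at least gives you a portable
way to spell the barrier. A minimal release/acquire sketch (the names and
the value 42 are my own invention):

    #include <atomic>
    #include <thread>
    #include <cassert>

    int payload = 0;
    std::atomic<bool> ready(false);

    void producer()
    {
        payload = 42;                                  // plain store...
        ready.store(true, std::memory_order_release);  // ...guaranteed
    }                                                  // visible before
                                                       // 'ready' is set

    void consumer()
    {
        while (!ready.load(std::memory_order_acquire))
            ;                       // acquire pairs with the release,
        assert(payload == 42);      // so this cannot fire
    }

    int main()
    {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

Replace the release/acquire pair with memory_order_relaxed and the assert
really can fire on weakly ordered hardware -- exactly the write-reordering
problem described above.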