It's only strange and baffling if you have written strange and baffling
code.
Sorry, no, it's far worse. Like wholesale removal of if-blocks and loops.
Most people assume "int" works like a mathematical integer - not
modular arithmetic.
Hm, don't know about "most people".
A Python 3.x programmer might think so.
However, no competent C or C++ programmer assumes that.
Making that assumption might even be used as a defining criterion for incompetence and/or newbieness.
Along with writing "void main", which you will find in examples all over Microsoft's documentation.
If you have an integer "x" which you know is
non-negative, then you will assume that adding 1 to it will give you a
positive integer.
No, all my reviews, when I worked (and now that I'm almost fully healed from surgeries I may just start working again, if employers are not scared away by a long absence), said that I was extremely competent, not the opposite.
So no, I would absolutely not make an assumption flagrantly in contradiction with reality.
That's the way maths works. Why shouldn't the
compiler make the same assumption?
Because that's not the way integer arithmetic works in C++, and, far more importantly, because making and exploiting that assumption produces effectively incorrect machine code, even if it is formally allowed.
The only time the compiler will get it "wrong" (meaning "contrary to the
user's expectations" rather than "contrary to the standards") by
treating signed overflow as undefined behaviour is if you have code that
specifically relies on modular arithmetic of integers.
Possibly right, but no point in debating that I think.
Relying on modular arithmetic is pretty common.
Because all integer arithmetic in C++ is now in practice modular (no Univac any more), and because, for efficiency reasons, trapping on integer overflow is in practice not used.
I suspect (from
gut-feeling) that such code would often be better written using unsigned
integers rather than signed ones - that would make more sense to me, and
be correct according to the standards.
I agree.
(For the kind of code I write,
unsigned integers are more common than signed ones, so I could be biased.)
Can you give examples of code that relies directly on the modular nature
of two's complement integers and which could not be done just as easily,
and safer, using unsigned integers? If not, then having signed overflow
as undefined behaviour just gives your compiler greater freedom to
generate smaller and faster code without compromising the expected
functionality of the code.
No, it gives the compiler freedom to waste the programmer's time figuring out why e.g. some loop's code is never executed: silent and time-wasting changes of the code's meaning.
It is completely unnecessary, and it fails to weigh a local efficiency micro-advantage against the far more important predictability and correctness.
[snip]
I just can't see where you would get a conflict with the expected
behaviour in real-world code here.
Right, it's necessarily at least as rare as the "optimization" opportunity itself.
And so to the degree that one can argue that the formal loophole exploitation will not produce a problem (oh, it's so very rare!), one can equally argue that the "optimization" is worthless, wasted work, and therefore necessarily comes at the cost of some real-world improvement to the compiler.
What would you rather do? Disable optimisations that conflict with
/your/ particular idea of "expected behaviour" regardless of the
standards?
There are two incorrect assumptions in that question.
The first incorrect assumption is that it's subjective to expect the now universal C and C++ arithmetic behavior (not even gcc behaves differently unless one asks for trapping). It's not subjective; it's the reality.
The second incorrect assumption is that a compiler team is free to exploit formal loopholes that remain in the standard only in support of now archaic systems, regardless of practical consequences. The cost of something that occurs very seldom is low, but it is there. The waste of programmers' valuable time, and the mere EXISTENCE of such loopholes, which is noticed and remarked on, and even used to produce clever blog articles ("guess what this code does" and the like), translates directly into a negative perception of the compiler.
Where do you draw the line? Should the compiler have a
"-fI-know-what-I-am-doing" flag?
It should merely be PREDICTABLE, doing by default the most practical thing, as opposed to the most impractical, time-wasting and unimaginable thing.
So it's an easy line to draw in most cases. ;-)
The irony, of course, is that gcc /has/ such a flag in this case -
"-fstrict-overflow". But it is turned on by -O2 and above, and you
don't need to know what you are doing to enable -O2 optimisation!
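For reference, the gcc flags involved (the flag names are from gcc's own documentation; the exact defaults vary by gcc version, so treat this as a sketch):

```shell
# -fstrict-overflow : let the optimizer assume signed overflow never
#                     happens (historically implied by -O2 and above)
# -fwrapv           : define signed overflow as two's-complement wrapping,
#                     disabling such "optimizations"
# -ftrapv           : trap on signed overflow instead
gcc -O2 -fwrapv prog.c -o prog    # optimize, but keep wrapping semantics
```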
It's no irony that gcc has a lot of flags that impact its behavior and render it needlessly unpredictable: it is merely very sad.
By choice I am unfamiliar with most such flags.
I look them up when necessary, but dang if I should let the needless compiler complexity use up my time.
[snip]
Cheers,
- Alf