Integer arithmetic when overflow exists


junyangzou

Two 32-bit integer values A and B are processed to give the 32-bit integers C and D as per the following rules. Which of the rules is (are) reversible? I.e., is it possible to obtain A and B given C and D in all conditions?

A. C = (int32)(A+B), D = (int32)(A-B)

B. C = (int32)(A+B), D = (int32)((A-B)>>1)

C. C = (int32)(A+B), D = B

D. C = (int32)(A+B), D = (int32)(A+2*B)

E. C = (int32)(A*B), D = (int32)(A/B)

A few questions about the integer arithmetic. Modular addition forms a mathematical structure known as an abelian group. How about signed addition? It's also commutative (that's where the "abelian" part comes in) and associative. Does this form an abelian group?

Given that integer addition is commutative and associative, C is apparently true, because we can retrieve A by (A+(B-B)). What about D? Can we assume that 2 * B = B + B, so that B = A+B+B-(A+B)?

And multiplication is more complicated, but I know that A cannot be retrieved if there is an overflow.
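For what it's worth, rule C can be checked concretely. Here is a minimal sketch using uint32_t, since unsigned arithmetic is guaranteed to wrap modulo 2^32 (the values are arbitrary):

#include <cassert>
#include <cstdint>

int main() {
    // Rule C: C = A + B (mod 2^32), D = B. Then A = C - D, even across overflow.
    std::uint32_t A = 0xFFFFFFF0u, B = 0x00000020u;  // A + B overflows 32 bits
    std::uint32_t C = A + B;                         // wraps modulo 2^32
    std::uint32_t D = B;
    assert(C - D == A);                              // A recovered despite overflow
}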
 

alf.p.steinbach

Two 32-bit integer values A and B are processed to give the 32-bit integers C and D as per the following rules. Which of the rules is (are) reversible? I.e., is it possible to obtain A and B given C and D in all conditions?

A. C = (int32)(A+B), D = (int32)(A-B)
B. C = (int32)(A+B), D = (int32)((A-B)>>1)
C. C = (int32)(A+B), D = B
D. C = (int32)(A+B), D = (int32)(A+2*B)
E. C = (int32)(A*B), D = (int32)(A/B)

This sounds very much like homework.

A few questions about the integer arithmetic. Modular addition forms a mathematical structure known as an abelian group. How about signed addition?

Whether C++ integer addition is modular depends FORMALLY on the implementation, and can be checked via std::numeric_limits.
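For example (a minimal sketch; note that implementations have not always reported this trait consistently, precisely because signed overflow is formally UB):

#include <iostream>
#include <limits>

int main() {
    // True if signed int arithmetic wraps (is modular) on this implementation.
    std::cout << std::boolalpha
              << std::numeric_limits<int>::is_modulo << '\n';
}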

In practice it's modular on all modern systems, but at least one compiler (namely g++) uses the formal UB for overflow, supporting archaic systems, as an excuse to do rather inconvenient things, like "optimizations". You will most probably stop g++ from doing that by using the option "-fwrapv". This informs the compiler that you do want the practical and efficient machine code level modular arithmetic behavior, two's complement form.
It's also commutative (that's where the "abelian" part comes in) and associative. Does this form an abelian group?

If you don't know what an abelian group is, how will it help you to get an answer about whether arithmetic forms such a group?

Given that integer addition is commutative and associative, C is apparently true, because we can retrieve A by (A+(B-B)). What about D? Can we assume that 2 * B = B + B, so that B = A+B+B-(A+B)?

Only for an implementation where signed arithmetic is modular.
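Under that assumption the algebra does work out. A minimal sketch for rule D, written with uint32_t so the wraparound is well-defined everywhere (the values are arbitrary):

#include <cassert>
#include <cstdint>

int main() {
    // Rule D: C = A + B, D = A + 2B (mod 2^32). Then B = D - C and A = C - B.
    std::uint32_t A = 0x9E3779B9u, B = 0xDEADBEEFu;
    std::uint32_t C = A + B;        // wraps modulo 2^32
    std::uint32_t D = A + 2u * B;   // wraps modulo 2^32
    std::uint32_t B2 = D - C;       // B = (A + 2B) - (A + B) (mod 2^32)
    std::uint32_t A2 = C - B2;      // A = (A + B) - B (mod 2^32)
    assert(A2 == A && B2 == B);
}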

And multiplication is more complicated, but I know that A cannot be retrieved if there is an overflow.

That sounds a bit too pessimistic. :)


Cheers & hth.,

- Alf
 

junyangzou

This sounds very much like homework.

Hah, actually it is a test question from a Microsoft intern hiring written test.
If you don't know what an Abelian group is, how will it help you to get answer about whether arithmetic forms such group?

AFAIK, being commutative and associative is sufficient for an operation to form an abelian group.


And can I understand that under two's complement, C and D are the right answers?
 

alf.p.steinbach

Oops, I didn't mean that.

s/integer/signed integer/

Unsigned integer arithmetic is always modular in C and C++.
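A one-line demonstration (minimal sketch):

#include <cassert>
#include <climits>

int main() {
    unsigned u = UINT_MAX;
    assert(u + 1u == 0u);   // guaranteed modular wraparound, per the standard
}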

gcc will also use "practical and efficient machine code" when -fwrapv is
not in effect

Oh yes, it has no choice there. That's all there is, now that the Univac series of computers is no longer extant. As you rightly point out, the difference is merely whether gcc is allowed to ASSUME that it could be using something else than it's actually using, or that that would be fine by the programmer.

- noting that signed overflow is undefined behaviour lets
the compiler write more efficient code in some cases, but it never means
writing /worse/ code.

Needless to say, the assumption that's contrary to reality, if one lets the compiler make it (which it's more than eager to do), leads to sometimes strange & baffling, and altogether impractical and time-wasting, results.

Personally I think of these results as incorrect machine code, even if formally permitted, and incorrect is IMO very much "worse" than not optimally efficient. Yes, I do understand that you meant "worse" in the context of what I wrote, i.e. pertaining to efficiency, and that's true, the code isn't worse in that respect, and I should not have given that misleading impression, sorry. Still, correctness, not just wrt. formal rules, is more important.

g++ is (rightly, IMHO) by some / many regarded as a "perverse" compiler, generally exploiting every little formal loophole to do the unexpected and impractical. I.e., doing small and insignificant little micro-optimizations at huge and overwhelming global cost. Balancing that -- and "better the Devil you know" etc. -- it's generally far more up-to-date and correct wrt. the core language parts of the standard than e.g. Visual C++.


Cheers,

- Alf (noting that the holiness of g++ is a religious issue, heh :) )
 

alf.p.steinbach

It's only strange and baffling if you have written strange and baffling
code.

Sorry, no, it's far worse. Like wholesale removal of if-blocks and loops.
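The classic illustration is an overflow check written in terms of the overflow itself; a sketch of the kind of code involved (the function name is mine; exact behavior depends on compiler version and flags):

#include <climits>
#include <iostream>

// The programmer intends this as an overflow check, but i + 1 overflowing
// is UB, so the optimizer may assume it never happens and fold the
// condition to false, deleting the guarded branch entirely.
bool increments_to_smaller(int i) {
    return i + 1 < i;   // may be "optimized" to: return false;
}

int main() {
    std::cout << increments_to_smaller(INT_MAX) << '\n'; // often 0 with -O2
}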

Most people assume "int" works like a mathematical integer - not
modular arithmetic.

Hm, don't know about "most people".

A Python 3.x programmer might think so.

However, no competent C or C++ programmer assumes that.

Making that assumption might even be used as a defining criterion for incompetence and/or newbieness.

Along with writing "void main", which you will find in examples all over Microsoft's documentation.

If you have an integer "x" which you know is
non-negative, then you will assume that adding 1 to it will give you a
positive integer.

No, all my reviews, when I worked (and now that I'm almost fully healed from surgeries I may just start working again, if employers are not scared away by long absence), said that I was extremely competent, not the opposite.

So no, I would absolutely not make an assumption flagrantly in contradiction with reality.

That's the way maths works. Why shouldn't the
compiler make the same assumption?

Because that's not the way that integer arithmetic works in C++, and, far more importantly, because making and exploiting that assumption produces effectively incorrect machine code, even if formally allowed.

The only time the compiler will get it "wrong" (meaning "contrary to the
user's expectations" rather than "contrary to the standards") by
treating signed overflow as undefined behaviour is if you have code that
specifically relies on modular arithmetic of integers.

Possibly right, but no point in debating that I think.

Relying on modular arithmetic is pretty common.

Because all integer arithmetic in C++ is now in practice modular (no Univac any more), and because for efficiency reasons trapping on integer overflow is in practice not used.

I suspect (from
gut-feeling) that such code would often be better written using unsigned
integers rather than signed ones - that would make more sense to me, and
be correct according to the standards.

I agree.

(For the kind of code I write,
unsigned integers are more common than signed ones, so I could be biased.)

Can you give examples of code that relies directly on the modular nature
of two's complement integers and which could not be done just as easily,
and safer, using unsigned integers? If not, then having signed overflow
as undefined behaviour just gives your compiler greater freedom to
generate smaller and faster code without compromising the expected
functionality of the code.

No, it gives the compiler freedom to waste the programmer's time figuring out why e.g. some loop's code is never executed: silent and time-wasting changes of the code's meaning.

It is completely unnecessary and completely fails to weigh a local efficiency micro-advantage against far more important predictability and correctness.


[snip]
I just can't see where you would get a conflict with the expected
behaviour in real-world code here.

Right, it's necessarily at least as rare as the "optimization" opportunity itself.

And so to the degree that one can argue that the formal loophole exploitation will not produce a problem (oh it's so very rare!), one can equally argue that the "optimization" is worthless, wasted work, and that it's therefore necessarily at the cost of some real world improvement to the compiler.

What would you rather do? Disable optimisations that conflict with
/your/ particular idea of "expected behaviour" regardless of the
standards?

There are two incorrect assumptions in that question.

The first incorrect assumption, that it's subjective to expect a now universal-for-C-and-C++ arithmetic behavior (not even gcc behaves differently unless one asks for trapping). It's not subjective. It's the reality.

The second incorrect assumption, that a compiler team is free to exploit formal loopholes that are in the standard in support of now archaic systems, regardless of practical consequences. The cost for something that occurs very seldom is low, but it is there. The waste of programmer's valuable time, and the mere EXISTENCE of such, which is noticed and remarked on, and even used to produce clever blog articles "guess what this code does" and the like, translates directly into a negative perception of the compiler.

Where do you draw the line? Should the compiler have a
"-fI-know-what-I-am-doing" flag?

It should merely be PREDICTABLE, doing by default the most practical thing, as opposed to the most impractical, time-wasting and unimaginable thing.

So it's an easy line to draw in most cases. ;-)

The irony, of course, is that gcc /has/ such a flag in this case -
"-fstrict-overflow". But it is turned on by -O2 and above, and you
don't need to know what you are doing to enable -O2 optimisation!

It's no irony that gcc has a lot of flags that impact its behavior and renders it needlessly unpredictable: it is merely very sad. :(

By choice I am unfamiliar with most such flags.

I look them up when necessary, but dang if I should let the needless compiler complexity use up my time.


[snip]

Cheers,

- Alf
 

alf.p.steinbach

(e-mail address removed) wrote in



Right, competent programmers know that overflow in signed arithmetic is UB
and do not rely on it in portable code. Even if it appears to work in some
particular way now and is not trapped, nobody guarantees the same in the
next version of gcc.

It's good that there's agreement about something, now. :)

David Brown and I have already expressed, in this thread, about the same sentiment about preferably using unsigned for modular arithmetic.

Perhaps we have landed on that conclusion from slightly different rationales, but my rationale is that when it's not too much work to write standard-conforming code, then there is IMO no good reason to not do that.

However, the gcc default is that you cannot rely on the expected behavior in others' code (others' code may not add the requisite casting to unsigned type, e.g. for use of Windows' GetTickCount API function), and the gcc default is that you cannot even rely on reasonable behavior for very system-specific code.
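For reference, the wraparound-safe idiom such code should use is an unsigned subtraction; a minimal sketch (GetTickCount returns an unsigned 32-bit millisecond count that wraps after about 49.7 days; the helper name is mine):

#include <cassert>
#include <cstdint>

// Well-defined even when the counter has wrapped between the two samples,
// because unsigned subtraction is modular; valid for intervals < 2^32 ms.
std::uint32_t elapsed_ms(std::uint32_t start_ticks, std::uint32_t now_ticks) {
    return now_ticks - start_ticks;
}

int main() {
    // The counter wrapped between the samples; the difference is still right.
    assert(elapsed_ms(0xFFFFFF00u, 0x00000100u) == 0x200u);
}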

In short, it's unreliable. And due to the "optimizations" that it adds by default, that may wholesale remove parts of one's code, the effect of one's code (that may use others' libraries), with this compiler, is also unpredictable. Or perhaps that distinction is too subtle, too fine, but anyway, it's somewhere in the area of unreliable/unpredictable/arbitrary/impractical.

But as I've also already remarked on, "Better the Devil that one knows": with regard to conformance to the core language parts of the standard, and support for those parts of the standard, g++ shines brightly. :)

Cheers & hth.,

- Alf
 

Tobias Müller

[...]
Most people assume "int" works like a mathematical integer - not
modular arithmetic.

Hm, don't know about "most people".

A Python 3.x programmer might think so.

However, no competent C or C++ programmer assumes that.

Making that assumption might even be used as a defining criterion for
incompetence and/or newbieness.

The way it is defined, it seems to be obvious that this was the original
intent. Why else should overflow be UB?
If you restrict yourself to a reasonable range you are on the safe side.
Along with writing "void main", which you will find in examples all over
Microsoft's documentation.

When did you read MSDN the last time? A quick google search (excluding C#)
revealed just one occurrence in an old Visual Studio 6.0 page.
No, all my reviews, when I worked (and now that I'm almost fully healed
from surgeries I may just start working again, if employers are not
scared away by long absence), said that I was extremely competent, not the opposite.

So no, I would absolutely not make an assumption flagrantly in contradiction with reality.

I'm a bit baffled, that you (as an apparently competent C++ programmer)
advocate for relying on clearly undefined behavior. The only sane thing to
do is make sure that overflow never happens!
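For example (a minimal sketch; the helper name is my own), checking before the operation so the overflow never occurs:

#include <cassert>
#include <climits>

bool safe_add(int a, int b, int& out) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;          // would overflow; refuse instead of computing
    out = a + b;               // known representable: no UB
    return true;
}

int main() {
    int r;
    assert(safe_add(1, 2, r) && r == 3);
    assert(!safe_add(INT_MAX, 1, r));  // rejected, not undefined
}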
Because that's not the way that integer arithmetic works in C++, and, far more importantly, because making and exploiting that assumption produces effectively incorrect machine code, even if formally allowed.



Possibly right, but no point in debating that I think.

Relying on modular arithmetic is pretty common.

Because all integer arithmetic in C++ is now in practice modular (no
Univac any more), and because for efficiency reasons trapping on integer
overflow is in practice not used.

For most signed integer types there isn't even a guaranteed size.
Without ugly preprocessor hackery there is no possibility to write portable
code that relies on modular integer arithmetic.
I suspect (from
gut-feeling) that such code would often be better written using unsigned
integers rather than signed ones - that would make more sense to me, and
be correct according to the standards.

I agree.

(For the kind of code I write,
unsigned integers are more common than signed ones, so I could be biased.)

Can you give examples of code that relies directly on the modular nature
of two's complement integers and which could not be done just as easily,
and safer, using unsigned integers? If not, then having signed overflow
as undefined behaviour just gives your compiler greater freedom to
generate smaller and faster code without compromising the expected
functionality of the code.

No, it gives the compiler freedom to waste the programmer's time figuring
out why e.g. some loop's code is never executed: silent and time-wasting
changes of the code's meaning.

It is completely unnecessary and completely fails to weigh a local efficiency micro-advantage against far more important predictability and correctness.


[snip]
I just can't see where you would get a conflict with the expected
behaviour in real-world code here.

Right, it's necessarily at least as rare as the "optimization" opportunity itself.

And so to the degree that one can argue that the formal loophole
exploitation will not produce a problem (oh it's so very rare!), one can
equally argue that the "optimization" is worthless, wasted work, and that
it's therefore necessarily at the cost of some real world improvement to the compiler.

No, you cannot simply reverse that. The problem cases are only a (probably
small) subset of the optimization possibilities.
There are two incorrect assumptions in that question.

The first incorrect assumption, that it's subjective to expect a now
universal-for-C-and-C++ arithmetic behavior (not even gcc behaves
differently unless one asks for trapping). It's not subjective. It's the reality.

It's not reality, because it's UB. Every C or C++ programmer should be
aware of that. If you know that it causes subtle bugs, why do you insist on
using it?
The second incorrect assumption, that a compiler team is free to exploit
formal loopholes that are in the standard in support of now archaic
systems, regardless of practical consequences. The cost for something
that occurs very seldom is low, but it is there. The waste of
programmer's valuable time, and the mere EXISTENCE of such, which is
noticed and remarked on, and even used to produce clever blog articles
"guess what this code does" and the like, translates directly into a
negative perception of the compiler.

The waste of programmers' time is not the compiler's fault, but the programmer's. You don't rely on modular arithmetic by _accident_!
It should merely be PREDICTABLE, doing by default the most practical
thing, as opposed to the most impractical, time-wasting and unimaginable thing.

It is predictable if you restrict yourself to the defined behavior.
So it's an easy line to draw in most cases. ;-)

Yes, it's easy. Don't rely on UB.

Tobi
 

Rupert Swarbrick

junyangzou said:
AFAIK, commutative and associative is a sufficient for an operation to
form a Abelian group.

... and invertible.

A set with just a commutative and associative operation is called an
abelian monoid. One example of such an object is the free abelian monoid
on one generator. It is in bijection with the non-negative integers and
the operation is the same as addition.

Another way to think of it is as follows. I've got a "generating object"
X and want to be able to talk about X+X. Fine, but I should also be able
to talk about X+(X+X)=(X+X)+X (by associativity). And probably X+X+X+X
and... and... Let's abbreviate the sum of n X's as nX. I'm also going to
throw in another object, which I'll write as 0, and I'll say that
X+0=0+X=X. Notice that associativity now shows that nX+0 = 0+nX = nX, so
0 is a left and right unit. To make my notation nice and uniform, I
could define 0X = 0.

If I assume all of these objects are distinct, I get something that
looks awfully like the non-negative integers. Indeed, the bijection is
nX <-> n.

Note that I didn't mention subtraction (or negative numbers).

In a group, I also require that for any element, g, I can add something
to g to get back to zero. If I add a new element to my example above and
call it -X and ask that -X+X = X+(-X) = 0, then you see that
(-X)+(-X)+2X=0 and so on (by associativity again). Indeed, I may as well
write (-X)+(-X) as -2X. Now I've got an abelian group that's obviously
"the same" as the integers. (The technical term is isomorphic)

Hey, I didn't mention commutativity! Well, that's because it doesn't
matter in the example above, which is commutative by
definition. However, if I started out with two different objects (X and
Y, say), then I'd need to tell you whether or not you could assume
X+Y=Y+X.

Finally, back to my original point. Is signed addition invertible? Well,
that depends on what the standard says about behaviour on overflow. For
example, consider what happens if signed saturating addition is allowed.
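A concrete sketch (saturating addition is an assumption here, purely for illustration; it is not what C++ specifies, and the usual 32-bit int is assumed). Two different inputs map to the same output, so the operation is not injective and hence not invertible:

#include <cassert>
#include <climits>

// Saturating add: clamps to the representable range instead of wrapping.
int sat_add(int a, int b) {
    long long s = static_cast<long long>(a) + b;
    if (s > INT_MAX) return INT_MAX;
    if (s < INT_MIN) return INT_MIN;
    return static_cast<int>(s);
}

int main() {
    // Not injective: the sum alone cannot recover the addends.
    assert(sat_add(INT_MAX, 1) == sat_add(INT_MAX, 2));
}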


Rupert
 

alf.p.steinbach

[...]
Most people assume "int" works like a mathematical integer - not
modular arithmetic.

Hm, don't know about "most people".

A Python 3.x programmer might think so.

However, no competent C or C++ programmer assumes that.

Making that assumption might even be used as a defining criterion for
incompetence and/or newbieness.

The way it is defined, it seems to be obvious that this was the original
intent. Why else should overflow be UB?

C++ admits three possible representations of signed integers, namely magnitude-and-sign, one's complement, and two's complement. Only the last gives modulo arithmetic. At one time, computers/systems using the first two existed.

This is the main reason, and it pertains to now archaic systems.

But there is also a still-practically-possible reason, that of trapping on integer overflow. Even C has no direct interface for handling such traps, even though it's natural to think it will result in a "signal". In C++ the support for signals is, however, very much lacking, with almost all roads leading to UB.

If you restrict yourself to a reasonable range you are on the safe side.

Yes, in general.

When did you read MSDN the last time? A quick google search (excluding C#)
revealed just one occurrence in an old Visual Studio 6.0 page.

You might google <<msdn c++ "void main">>.

I think your question was rhetorical, indicating that my statement was incorrect.

However, the inability to find all those examples is just about googling skills, or experience with MSDN. ;-)


[snip]
I'm a bit baffled, that you (as an apparently competent C++ programmer)
advocate for relying on clearly undefined behavior.

Say, could you provide a quote of that?


[snip]

Sorry for not reading the rest, but, you know, time.


Cheers & hth.,

- Alf
 

Tobias Müller

[...]
On Tuesday, October 8, 2013 5:15:50 PM UTC+2, David Brown wrote:
Most people assume "int" works like a mathematical integer - not
modular arithmetic.

Hm, don't know about "most people".

A Python 3.x programmer might think so.

However, no competent C or C++ programmer assumes that.

Making that assumption might even be used as a defining criterion for
incompetence and/or newbieness.

The way it is defined, it seems to be obvious that this was the original
intent. Why else should overflow be UB?

C++ admits three possible representations of signed integers, namely magnitude-and-sign, one's complement, and two's complement. Only the last gives modulo arithmetic. At one time, computers/systems using the first two existed.

This is the main reason, and it pertains to now archaic systems.

Modular overflow is just an implementation detail of two's complement, and just because two's complement made the cut does not mean that modular overflow is more natural than any other behavior.
But there is also a still-practically-possible reason, that of trapping
on integer overflow. Even C has no direct interface for handling such
traps, even though it's natural to think it will result in a "signal". In
C++ the support for signals is, however, very much lacking, with almost
all roads leading to UB.

In what way are signals more natural than exceptions?

Trapping overflow has actually no relevance here. It has an entirely different purpose. It prevents _accidental_ overflow. But modular overflow isn't any better than UB with respect to accidental overflow.
Yes, in general.

Why "in general"? Except what?
You might google <<msdn c++ "void main">>.

I just did and there was not even one example on the first 5 pages. I don't
have the time for looking through dozens of search result pages.
I think your question was rhetorical, indicating that my statement was incorrect.

Yes. At least a massive overstatement.
However, the inability to find all those examples is just about googling
skills, or experience with MSDN. ;-)

Please give me some concrete examples. But not for ancient versions.
[snip]
I'm a bit baffled, that you (as an appearantly competent C++ programmer)
advocate for relying on clearly undefined behavior.

Say, could you provide a quote of that?

I guess I've misunderstood you a bit here. But still, you are arguing to
make modular signed overflow (implementation) defined behavior, which is
IMO only slightly better.
Relying on modular overflow is essentially not possible (portably) without
also relying on other weak assumptions like the size of int.
And therefore legalizing it is a step into the wrong direction. It
encourages people to rely on it.
[snip]

Sorry for not reading the rest, but, you know, time.

This is just rude.
Considering your answers in other threads here you seem to have plenty of
time for answering much more "trolly" postings.

Tobi
 

alf.p.steinbach

(e-mail address removed) writes:


Computers using magnitude and sign still exist. The descendants of the aforementioned Univac (known now as Clearpath Dorado) and the Burroughs B5500 (known now as Clearpath Libra) still exist, are still being developed, and both host C and C++ compilers; and neither need use modulo arithmetic.

Even if 1 or 2 Dorados still exist in active service, they can for all practical purposes be ignored.

Just as the ENIAC. ;-)

And, of course, one can't ignore the Z-series, for which new software is
developed every day (albeit mostly COBOL) and for which packed decimal is
a standard machine datatype.

Z-series, are you talking about (pre-World War II) Konrad Zuse's Z1 etc. here?

Long memory then, if that.

Anyway, Konrad must be the most underrated man in the history of Computer Science, creating both the first digital computers and the first high level programming language (Plankalkül), all before or during World War II, and losing all credit to John von Neumann, a Hungarian whose only contribution in this regard was to omit all references and credits in the internal but accidentally-widely-distributed memo that he wrote right after calculating the optimal height to explode an atomic bomb over Hiroshima.

Not to mention the decimal stuff added to C11.

There are no decimal integer types in C++.

On the contrary, the standard has always included a requirement of pure binary,

in C++11 §3.9.1/7 "The representations of integral types shall define values by use of a pure binary numeration system."


Cheers & hth.,

- Alf (detecting a bit of trolling here)
 

alf.p.steinbach

[snip]
You might google <<msdn c++ "void main">>.

I just did and there was not even one example on the first 5 pages. I don't
have the time for looking through dozens of search result pages.

Please give me some concrete examples. But not for ancient versions.

http://msdn.microsoft.com/en-us/library/windows/desktop/bb773687(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/bb773745(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/bb773757(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/bb773739(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/bb773742(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/k204dhw5.aspx

http://msdn.microsoft.com/en-us/library/aa448695.aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/aa446602(v=vs.85).aspx

I'm stopping there, not for me to do Microsoft's cleaning job for them.

But as you can see, no shortage of "void main" usage in their docs, indicating a certain lack of competence at least on the part of the technical writers.

You're just wrong.

This is just rude.

No seriously, I don't have time for reading pages of stuff after encountering active misrepresentation.


Cheers & hth.,

- Alf
 

alf.p.steinbach

"void main" is perfectly legal in freestanding (non-hosted) C, and is in
common usage in embedded systems of all sorts.

Well, we're talking C++ here.

In C++ it's invalid, even in a freestanding implementation.

C++11 §3.6.1/2 "[main] shall have a return type of type int, but otherwise its type is implementation-defined"
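For reference, a minimal sketch of what that permits and forbids on a hosted implementation:

// Either of these signatures is conforming in hosted C++ (one per program):
int main() { }
// int main(int argc, char* argv[]) { return 0; }

// Ill-formed in C++, despite appearing in various docs:
// void main() { }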

I don't know about C.


Cheers & hth.,

- Alf
 

alf.p.steinbach

Is that a fair summary of the argument?

Nope.

A compiler that yields unpredictable (lack of) effects for ordinary code, is simply ... unpredictable.

Not a good thing, regardless of the users.

And let's face it: not every C++ programmer is a language lawyer.

A compiler, such as g++, should support not only the best and brightest, but also ordinary programmers.

And the compiler should not only support the programmers who think those who use platform-specific functionality are incompetents.

The compiler should also support those, like myself and (I know from earlier clc++ discussions) Pete Becker, who penned the latest standard, who are comfortable with using platform-specific functionality where appropriate, regardless of formal UB. Not that he'd necessarily completely share my point of view (I think most probably not!), but as authority arguments go, I think it's nice that by chance he has expressed, in clc++, views diametrically opposed to yours, in this matter.

:)


Cheers & hth.,

- Alf






 

alf.p.steinbach

You're several decades behind the times Alf!

He he. :)


Thanks.

I see no evidence or indication that it uses anything but two's complement for integers, but knowing IBM, it might just do that plus having a Hollerith card based system console and using EBCDIC through and through. Of course with a special Java adapter somewhere, so that Java can run there. Oh hell. :(


Cheers, & thanks!,

- Alf
 

alf.p.steinbach

A C or C++ programmer who writes code without a defined meaning, but
expects the compiler to read his mind for what he wants, has a lot to
learn.

That's totally irrelevant, a slur.

Do quote the alleged "code without a defined meaning".


[snipped pages of further misrepresentation and innuendo]

- Alf (annoyed)
 

alf.p.steinbach

On 09/10/13 00:49, (e-mail address removed) wrote:
A compiler that yields unpredictable (lack of) effects for ordinary
code, is simply ... unpredictable.
Not a good thing, regardless of the users.

And let's face it: not every C++ programmer is a language lawyer.

A C or C++ programmer who writes code without a defined meaning, but
expects the compiler to read his mind for what he wants, has a lot to
learn.

That's totally irrelevant, a slur.
Do quote the alleged "code without a defined meaning".

[snipped pages of further misrepresentation and innuendo]

My post was not meant to be a slur or to cause annoyance of any sort. I
am sorry you took it that way.

No quote of the alleged "code without a defined meaning".

Since there was no such.

I.e. your claim of being sorry can only refer to being caught.

But like it or not, a lot of C and C++ behaviour has clear definitions,
and there is a lot that is legal to write in C or C++ but is documented
as "undefined behaviour". That is the way the language works - and no
amount of discussions of "what programmers expect" will change that.

This pretends to be arguing against something I have written.

It is at best a misrepresentation.


- Alf
 

Alf P. Steinbach

[...] On Tuesday, October 8, 2013 5:15:50 PM UTC+2, David Brown
wrote:
Most people assume "int" works like a mathematical integer -
not modular arithmetic.

If the early creators of C (since this pre-dates and is inherited by
C++) and its standards thought that defining signed overflow as modular
behaviour were important, they would have defined it that way.

Who knows. If they had thought 'bool' or 'void' were important, they
would have included those types. If they thought humans were meant to
fly in the air, they would have equipped us with gills. Oh, wait... !

And just as the argument about what the early C creators would have done
and their motivations, is pure nonsense, so is the implied relevance to
anything earlier in this thread.

I.e., the above speculation about motivations etc. is pure nonsense supposition which in addition is about an irrelevance.

At the
very least, they would have made it "implementation defined" rather than
"undefined".

The first C standard was C89.

I'm pretty sure the "early creators" were engaged in that process, but
it was by committee, and it defined a language quite different from early C.

C was created in the middle 1970's, with early starts made already
pre-1970, IIRC.

At the time there was no language standard, but later the language was
effectively defined by the book "The C Programming Language" by
Kernighan and Ritchie (my first edition of that book rests comfortably
in a box in the outhouse), which is why it's referred to as "K&R C".

K&R C, early C, was a language quite different from modern C. You can
see this language used for actual large scale programming in not-so-old
source code for gcc. E.g., function declarations differ, and much else.

When you conflate that with C89 and later, and conflate THAT again with
C++, and combine that soup of conflations with speculations about
possible motivations for decision about irrelevant stuff in the 1970s,
then that is ...

well it can be misleading to some readers, I guess, so it can serve that
purpose, but other than that it's just pure meaningless nonsense,
balderdash.


[even more silly nonsense speculations elided]

Cheers & hth.,

- Alf
 

Rupert Swarbrick

Paavo Helde said:
The C++ standard is very clear here: overflow in signed arithmetic is
undefined behaviour. So certainly it is not guaranteed to be invertible.

Yep. I was going for the Socratic question. You probably don't have to
prove to me that *you* understand the C++ standard (whichever revision).

Rupert
 

Top