Does integer overflow cause undefined behaviour?


Alf P. Steinbach

* Bo Persson:
Here I think "ban" is a synonym for "making an implementation totally
inefficient".

Forcing 36 bit one's complement hardware to

Also, words like "forcing". ;-)

That's a conflict that doesn't exist.

It's made up.
 

Jerry Coffin

[ ... ]
I am sure the standards committee didn't make up these rules just for
fun, but were well aware that specifying seemingly tiny details would
make it impossible to implement the language on a wide range of
hardware.

There's really quite a bit more to the situation than
that.
By not specifying some variations ('being vague'), you don't
limit yourself to:

8 bit bytes
2^n bytes per datatype
two's complement integers
no pad bits
'a' == 0x61 (i.e., ASCII)
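
(As an aside: code that does bake in those assumptions can at least
verify them at compile time. A minimal sketch using C++11
static_assert; the assertions are illustrative:)

#include <climits>
#include <limits>

// Each assertion pins down one of the assumptions listed above.
static_assert(CHAR_BIT == 8, "requires 8-bit bytes");
static_assert(sizeof(int) == 4 && std::numeric_limits<int>::digits == 31,
              "requires 32-bit int with no padding bits");
static_assert(std::numeric_limits<int>::min() < -std::numeric_limits<int>::max(),
              "requires two's complement int");
static_assert('a' == 0x61, "requires an ASCII execution character set");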

[ ... ]

There's a lot more at stake here than just a minor
tradeoff between ease of writing portable code and ease
of writing a compiler that produces efficient code.

One basic intent of C++ is that it should support
essentially any kind of programming that C did/does. One
of the things for which C (and therefore C++) is intended
to be used is system programming, such as implementing
operating systems.

Just for one minor example, consider the result of
mandating the sizes of types. If you were going to do
that, you'd almost certainly mandate them as powers of
two. If you do that, however, you make it essentially
impossible to go on using C++ to write something like
the OS (or any other "bare metal" code) for almost any
machine with an unusual word size.

Contrary to some people's beliefs, such unusual word
sizes are NOT merely strange leftovers from a bygone era,
nor is there any reasonable likelihood that such machines
are going to go away anytime soon. Consider, for example,
the specs on the TI TMS320C3x DSPs:
32-bit floating point
24-bit fixed point
40-bit registers
Likewise, the Motorola DSP 563xx series:
24-bit addresses
24-bit fixed point data
56-bit accumulators

It's not particularly difficult to find more examples
either.

To summarize: the fundamental question here is primarily
whether you want an applications programming language or
a systems programming language. There is certainly room
in the world for applications programming languages --
but that's never been the intent of C++.
 

Alf P. Steinbach

* Jerry Coffin:
One basic intent of C++ is that it should support
essentially any kind of programming that C did/does. One
of the things for which C (and therefore C++) is intended
to be used is system programming, such as implementing
operating systems.

Just for one minor example, consider the result of
mandating the sizes of types. If you were going to do
that, you'd almost certainly mandate them as powers of
two. If you do that, however, you make it essentially
impossible to go on using C++ to write something like
the OS (or any other "bare metal" code) for almost any
machine with an unusual word size.

Contrary to some people's beliefs, such unusual word
sizes are NOT merely strange leftovers from a bygone era,
nor is there any reasonable likelihood that such machines
are going to go away anytime soon. Consider, for example,
the specs on the TI TMS320C3x DSPs:
32-bit floating point
24-bit fixed point
40-bit registers
Likewise, the Motorola DSP 563xx series:
24-bit addresses
24-bit fixed point data
56-bit accumulators

It's not particularly difficult to find more examples
either.

I do understand why I have to repeat the same thing umpteen times in the same
subthread.

But for the record, once more: there is no conflict, it's a false,
contrived dichotomy.

Furthermore, as mentioned, C manages to support fixed size types, and
also as mentioned, that will probably also be supported in C++0x, so the
question about fixed size types is really moot. :)
 

Jerry Coffin

[ ... ]
I do understand why I have to repeat the same thing umpteen times in the same
subthread.

I suspect you meant you _don't_ understand. I can explain
it easily: it's a natural consequence of the fact that
you're mostly wrong.
But for the record, once more: there is no conflict, it's a false,
contrived dichotomy.

Furthermore, as mentioned, C manages to support fixed size types, and
also as mentioned, that will probably also be supported in C++0x, so the
question about fixed size types is really moot. :)

C supports fixed-size types -- sort of -- and only
optionally at that. There's no problem with C++ doing the
same, but the portability gains are minimal at best.
 

Alf P. Steinbach

* Jerry Coffin:
[ ... ]
I do understand why I have to repeat the same thing umpteen times in the same
subthread.

I suspect you meant you _don't_ understand. I can explain
it easily: it's a natural consequence of the fact that
you're mostly wrong.

If you believe that, it would be prudent to argue your case (whatever it
is) rather than resorting to an infantile accusation.

C supports fixed-size types -- sort of -- and only
optionally at that. There's no problem with C++ doing the
same, but the portability gains are minimal at best.

There's no "sort of": C supports fixed-size types.

They're optional as they should be.

The portability gains are substantial: without standardization, each
application would have to provide portability on its own.
 

Jerry Coffin

[ ... ]
There's no "sort of": C supports fixed-size types.

By now, you've probably already realized what complete
nonsense this was, in context, but just in case you've
missed the obvious...

What C has are typedefs, and typedefs are only sort of
types. Admittedly, within the C context, most of the
difference isn't visible -- but in C++ it is visible far more often.
Consider, for example:

#include <stdint.h>

int32_t x(int32_t a) {
    // ...
}

// If int32_t happens to be a typedef for int on this platform, this
// second definition redefines x instead of overloading it, and the
// code fails to compile.
int x(int a) {
    // ...
}

This may work part of the time, but it certainly isn't
portable.

What C provides are only sort of types. The difference
between what's provided and a real type is usually
negligible in C, but becomes much more prominent in C++.
They're optional as they should be.

The portability gains are substantial: without standardization, each
application would have to provide portability on its own.

Code that really needs to be portable can't depend on
their being present -- so it still has to provide the
portability on its own.
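
(For illustration, a minimal sketch of the kind of fallback such code
ends up carrying anyway, using only the guarantees of <climits>; the
name my_int32 is hypothetical:)

#include <climits>

// Pick a type of at least 32 bits without relying on the optional int32_t.
#if INT_MAX >= 2147483647
typedef int my_int32;    // int holds at least 32 bits on this platform
#else
typedef long my_int32;   // long is guaranteed to hold at least 32 bits
#endif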
 

Alf P. Steinbach

* Jerry Coffin:
[ ... ]
There's no "sort of": C supports fixed-size types.

By now, you've probably already realized what complete
nonsense this was, in context, but just in case you've
missed the obvious...

The error/misconception lies in the word "obvious".
What C has are typedefs, and typedefs are only sort of
types. Admittedly, within the C context, most of the
difference isn't visible -- but in C++ it is visible far more often.
Consider, for example:

#include <stdint.h>

int32_t x(int32_t a) {
    // ...
}

// If int32_t happens to be a typedef for int on this platform, this
// second definition redefines x instead of overloading it, and the
// code fails to compile.
int x(int a) {
    // ...
}

This may work part of the time, but it certainly isn't
portable.


You're arguing that because the C solution doesn't meet your arbitrary
C++ requirements, it's not portable: that conclusion does not follow.

And your arbitrary C++ requirements are not met by types such as
std::size_t or std::ptr_diff, and they're portable: that's a direct
counter-example (or two).

In short, the argument you present is (1) bereft of logic, and (2) if it
were valid, would make existing C++ standard types non-portable.

What C provides are only sort of types. The difference
between what's provided and a real type is usually
negligible in C, but becomes much more prominent in C++.


Code that really needs to be portable can't depend on
their being present -- so it still has to provide the
portability on its own.

That's nonsense.
 

Jerry Coffin

[ ... ]
You're arguing that because the C solution doesn't meet your arbitrary
C++ requirements, it's not portable: that conclusion does not follow.

And your arbitrary C++ requirements are not met by types such as
std::size_t or std::ptr_diff, and they're portable: that's a direct
counter-example (or two).

Not really -- these are typedefs oriented toward an
_intent_, which makes them entirely different. To use
your example, overloading ptrdiff_t and whatever its base
type might be simply doesn't make sense, because even if
they have the same representation, they have different
uses.

That's not at all the case with int32_t and such -- these
are relatively ordinary integer types, without a
fundamentally different intent from short, int, long,
etc.
In short, the argument you present is (1) bereft of logic, and (2) if it
were valid, would make existing C++ standard types non-portable.

Not even close to accurate.
That's nonsense.

Oh, how I wish you were right!
 

Alf P. Steinbach

* Jerry Coffin:
[ ... ]
You're arguing that because the C solution doesn't meet your arbitrary
C++ requirements, it's not portable: that conclusion does not follow.

And your arbitrary C++ requirements are not met by types such as
std::size_t or std::ptr_diff, and they're portable: that's a direct
counter-example (or two).

Not really -- these are typedefs oriented toward an
_intent_, which makes them entirely different.

I'm not going to discuss hypothetical intents and their even more
hypothetical effects.

To use
your example, overloading ptrdiff_t and whatever its base
type might be simply doesn't make sense, because even if
they have the same representation, they have different
uses.

It may be that you're right about ptrdiff_t (sorry about the earlier typo), because
I've never needed to overload that.

On the other hand I have needed to overload std::size_t. One example is
a bug in Visual C++ where it spews out warnings about passing a
std::size_t to standard streams. Another example is an output function
that should choose the appropriate printf format specifier for the type
of argument, noting that std::size_t may or may not be a typedef of some
other built-in type (yes, there is a portable solution ;-)).
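
(One portable approach along those lines -- a sketch, not necessarily
the solution being alluded to, and print_value is a hypothetical
name -- uses overloading itself: provide an overload for each
candidate built-in type, and std::size_t resolves to whichever one it
is a typedef for:)

#include <cstddef>
#include <cstdio>

// One overload per candidate underlying type; std::size_t picks the
// matching one regardless of which built-in type it aliases.
inline void print_value(unsigned int v)       { std::printf("%u", v); }
inline void print_value(unsigned long v)      { std::printf("%lu", v); }
inline void print_value(unsigned long long v) { std::printf("%llu", v); }

int main() {
    std::size_t n = 42;
    print_value(n);          // no cast, no guessed format specifier
    std::printf("\n");
}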

So your "doesn't make sense" doesn't make sense.

More fundamentally, it doesn't make sense to discuss a limitation of
adopting a C solution as-is in C++, as a limitation of the C solution:
it is a limitation of (the C solution + a chosen context and language
for which it was not designed). It's not even a limitation per se. It
is a requirement that you place on a C++ solution, and there's nothing
technically that prevents that requirement from being met.

In short, your argument here (A) has a false premise, and (B) is
irrelevant anyway.

That's not at all the case with int32_t and such -- these
are relatively ordinary integer types, without a
fundamentally different intent from short, int, long,
etc.

I'm not going to discuss hypothetical intents and their even more
hypothetical effects.

Not even close to accurate.

I demonstrated (1) and (2). If you disagree, and want me or others to
See The Light, please supply better counter-arguments. Above you tried
to attack (2), but failed as noted in (A) and (B).

Oh, how I wish you were right!

Just ask if you wonder why. :)
 

Greg

Pete said:
Really? What can I expect to happen when I dereference a null pointer
with MSVC 7.1? And where is it documented? After I've done this, what is
the state of my program?

Dereferencing a NULL pointer is typically defined by the applicable
architecture, not the compiler. Dereferencing a NULL pointer on Windows
XP at any rate is a certain EXCEPTION_ACCESS_VIOLATION.

Unless explicitly handled, the state of your program will be
"terminated."

Microsoft has extensive developer documentation at msdn.microsoft.com
with information about these various runtime errors.
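
(For illustration, a minimal MSVC-specific sketch that handles the
exception with Windows structured exception handling; illustrative
only, not a recommendation:)

#include <windows.h>
#include <cstdio>

int main() {
    int* p = 0;
    __try {
        *p = 42;             // raises an access violation on Windows
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        std::printf("caught EXCEPTION_ACCESS_VIOLATION\n");
    }
    return 0;
}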
Really? How is it defined for MSVC 7.1? And where is it documented? Has
Microsoft promised that whatever this behavior is, it will never be changed?

In this case the underlying CPU determines the outcome. So I would
consult the Intel manuals for the governing behavior - but I would
expect the values to wrap around.

The C++ standard makes no guarantee that it will not change in the
future. No standard guarantees against change in the future. Standards
are only ever good for the present. And the Intel instruction set and
architecture is a standard of Intel's. So whatever the integer overflow
behavior is in this case, it is not due to happenstance. So the
likelihood that it would ever change in the future is very small.
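
(A minimal sketch of that expectation; the wrapped result is what x86
hardware typically produces, not something ISO C++ guarantees:)

#include <climits>
#include <iostream>

int main() {
    int x = INT_MAX;
    // The x86 'add' instruction wraps two's complement operands, so
    // this commonly prints INT_MIN (-2147483648) -- but signed overflow
    // is undefined behaviour in ISO C++, and an optimizer may assume
    // it never happens.
    std::cout << x + 1 << '\n';
    return 0;
}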
I'm not disputing that you can often figure out what happens in
particular cases. But that's not a sufficient basis for saying that it's
well defined for that compiler. Unless the compiler's specification
tells you exactly what you can expect, you're guessing. That's often
appropriate, but it doesn't make what you're doing well defined.

Sure it does. The language, the compiler, and the runtime together define
the set of governing behavior for a program. It doesn't really matter
to a programmer whether it's the C++ standard, the compiler or the OS
that defines NULL pointer dereferences as a memory access violation. No
matter who has defined it - it is the standard behavior as far as that
program is concerned.

Greg
 

Bo Persson

Alf P. Steinbach said:
But for the record, once more: there is no conflict, it's a false,
contrived dichotomy.

Furthermore, as mentioned, C manages to support fixed size types,
and also as mentioned, that will probably also be supported in
C++0x, so the question about fixed size types is really moot. :)

C99 only manages to support fixed-size types for hardware where they
fit exactly. The C99 standard requires int32_t to be defined on
implementations which have 32-bit integers, two's complement, and no
pad bits.

On other implementations, the typedef is absent.

How portable is that?
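
(For completeness: the exact-width names are optional, but C99 -- and
C++11's <cstdint> after it -- require the least-width types on every
implementation. A minimal sketch:)

#include <cstdint>   // C++11 mirror of C99's <stdint.h>
#include <cstdio>

int main() {
    // int32_t may be absent on unusual hardware, but int_least32_t
    // (at least 32 bits, possibly wider) must always exist.
    std::int_least32_t n = 100000;
    std::printf("%ld\n", static_cast<long>(n));
    return 0;
}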



Bo Persson
 

Alf P. Steinbach

* Bo Persson:
C99 only manages to support fixed-size types for hardware where they
fit exactly. The C99 standard requires int32_t to be defined on
implementations which have 32-bit integers, two's complement, and no
pad bits.

On other implementations, the typedef is absent.

How portable is that?

Maximum.
 

Mirek Fidler

Pete said:
Sure, you can kill performance and make compiler writers work harder in
order to improve "portability" for a small fraction of the code that
people write. Java did it with their floating-point math, and had to
undo it.

With all respect, what I propose is a little bit different. E.g. the
GCC, Intel, and Microsoft compilers already support such a
"substandard" without any changes to their code (they all support
integers that wrap modulo 2^N with harmless overflows, a flat memory
model, and destructive moves of non-PODs).
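
(GCC's -fwrapv switch is a concrete example of such existing support.
A minimal sketch:)

// Build with: g++ -O2 -fwrapv wrap_demo.cpp   (vs. plain g++ -O2)
// -fwrapv defines signed overflow as modulo-2^N wrapping, so the
// comparison below must be evaluated honestly; without the flag, GCC
// may fold it to 'true' on the assumption that overflow never occurs.
#include <climits>
#include <iostream>

bool always_greater(int x) {
    return x + 1 > x;   // false for x == INT_MAX under wrapping
}

int main() {
    std::cout << std::boolalpha << always_greater(INT_MAX) << '\n';
}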

BTW, speaking of floating-point maths, all of them also contain
switches to speed up FP at the cost of dropping standard compliance -
so your argument is pretty void here (the standard is already too
restrictive for optimal FP).

Mirek
 

Pete Becker

Mirek said:
BTW, speaking of floating-point maths, all of them also contain
switches to speed up FP at the cost of dropping standard compliance -
so your argument is pretty void here (the standard is already too
restrictive for optimal FP).

On the contrary: it demonstrates exactly what I said: that a standard
that imposes excessive restrictions won't be followed.
 
