signed int overflow


JKop

You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.


Please say A!


For instance:

int main()
{
// on a 32-bit machine, INT_MAX is 2147483647
signed int i = 2147483647;

++i; // overflows here
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow. I'll use a
template for it, so you can use it with all of the integral
types. What's the best way to figure out the maximum value
of a particular type? I believe the Standard Library
contains some global constants, stuff like INT_MAX,
but I'd prefer a method I could use within a template.

-JKop
 

Sharad Kala

JKop said:
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.


Please say A!

Nah..IIRC, it's B.

Sharad
 

John Harrison

Sharad Kala said:
Nah..IIRC, it's B.

Sharad

Is unsigned overflow defined in all cases? Where in the C++ standard does it
say that?

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if two's complement arithmetic were being performed.

john
 

assaarpa

Please say A!

On a machine architecture that implements two's complement arithmetic on
integer values, you can reinterpret the value as unsigned and reinterpret it
again after the increment to have a good guess what the value might be. It's
not required that compilers implement this, however logical it may sound.

I repeat: even if you know how your platform works, and how the machine
instructions on the architecture work, the language does not care, because
if it did, every platform would have to care, and if that involved a
sequence of instructions which is not implementable efficiently, it would
suck a lot for people working on those platforms. Hence, undefined
behaviour.

But feel free to be more specific about which 32-bit architecture you have in
mind, and we can treat the rest of the thread as off-topic to comp.lang.c++ ;-)
 

Rolf Magnus

JKop said:
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

Actually, overflow behavior is always undefined, in theory even for
unsigned integers. But for them it can never happen, because the standard
says that unsigned integers don't overflow at all. The wrap-around is just
part of normal unsigned integer behavior and not seen as overflow:
(2^n here of course means 2 raised to the power of n)

3.9.1 Fundamental types

[...]

4 Unsigned integers, declared /unsigned/, shall obey the laws of arithmetic
modulo 2^n where n is the number of bits in the value representation of
that particular size of integer. 41)

[...]
41) This implies that unsigned arithmetic does not overflow because a result
that cannot be represented by the resulting unsigned integer type is
reduced modulo the number that is one greater than the largest value that
can be represented by the resulting unsigned integer type.
But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.


Please say A!

"I'm sorry Dave, I'm afraid I can't do that". The answer is B.
From the C++ standard:

If during the evaluation of an expression, the result is not mathematically
defined or not in the range of representable values for its type, the
behavior is undefined, unless such an expression is a constant expression,
in which case the program is ill-formed.
For instance:

int main()
{
// on a 32-bit machine, INT_MAX is 2147483647
signed int i = 2147483647;

++i; // overflows here
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow. I'll use a
template for it, so you can use it with all of the integral
types. What's the best way to figure out the maximum value
of a particular type? I believe the Standard Library
contains some global constants, stuff like INT_MAX,
but I'd prefer a method I could use within a template.


std::numeric_limits<thetype>::max() from the <limits> header.
 

JKop

Hence, undefined behaviour.


What I'm trying to establish is whether signed integer
overflow is

A) Just plain old undefined behaviour. The program is well
within its rights to crash.

B) It's just simply implementation specific what value the
signed integer variable will represent after the
incrementation.

I'm currently writing a program that works with signed
integers. I want to know if my program will crash, or if it
will just give different values on different
implementations should an overflow occur.


-JKop
 

JKop

Okay let's say that Standard C++... allows... a program to crash should you
cause a signed int to overflow.

Well... what the hell kind of implementation would allow this?! Even if one
does exist, it would have been abandoned 17 years 6 months and 2 days ago.

Imagine it, boot up WinXP. Open a few documents, play minesweeper, CRASH
(Oops, sorry, this computer is shit, it crashes if signed integers overflow).

I'm open to further discussion on this... but at the moment it looks like
I'm going to ignore the directive that signed int overflow is undefined
behaviour and thus that the program may crash. Come on, it's bullshit!


-JKop
 

Sharad Kala

John Harrison said:
Is unsigned overflow defined in all cases? Where in the C++ standard does it
say that?

Overflow or underflow doesn't occur for unsigned integral types. If an
out-of-range value is assigned to them, it is interpreted modulo TYPE_MAX +
1. You may want to check Section 4.7 of the Standard.

Sharad
 

Rolf Magnus

JKop said:
Okay let's say that Standard C++... allows... a program to crash should
you cause a signed int to overflow.

Well... what the hell kind of implementation would allow this?! Even if
one does exist, it would have been abandoned 17 years 6 months and 2 days
ago.

Why do you think a CPU that silently ignores overflows would be better than
one that signals such an error condition?
Anyway, a crash is not the only instance of undefined behavior. Another
could be an exception being thrown. However, if you don't catch that
exception, the result is similar to a crash - your program gets terminated.
AFAIK, there are implementations that throw an exception on a
division-by-zero, and I could imagine that there could be implementations
that throw on integer overflow.
Imagine it, boot up WinXP. Open a few documents, play minesweeper, CRASH
(Oops, sorry, this computer is shit, it crashes if signed integers
overflow).

Then you could also say minesweeper is shit because it invokes undefined
behavior.
I'm open to further discussion on this... but at the moment it looks like
I'm going to ignore the directive that signed int overflow is undefined
behaviour and thus that the program may crash. Come on, it's bullshit!

I don't really get it. You want to overflow an integer and don't care for
the resulting value as long as you don't get a crash? What is the purpose
of that integer if the value doesn't matter?
 

JKop

I don't really get it. You want to overflow an integer and don't care
for the resulting value as long as you don't get a crash? What is the
purpose of that integer if the value doesn't matter?


The user enters a year:

2004
1582
1906
-6000 (6000 BCE)

I'm writing a program at the moment that deals with dates. I'm not going to
bother putting in safe-guards for signed integer overflow, it's not worth
the effort. As such, if the user enters the following year:

2147483645

I don't care if they get inaccurate information, just so long as the machine
doesn't freeze or whatever.

(This program will be portable)

It seems though that according to the C++ Standard, it's well within its
rights to crash...


-JKop
 

Richard Herring

John Harrison said:
Is unsigned overflow defined in all cases? Where in the C++ standard does it
say that?

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if two's complement arithmetic were being performed.

That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.
 

John Harrison

Richard Herring said:
That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.

How much hardware like that is there? Couldn't a compiler for such hardware
provide a 'don't detect overflow' switch for the tiny number of users who
are running such hardware and care about the small expense? That way the
cost of unusual hardware is paid only by the people who have unusual
hardware, and all they have to do is remember to use a compiler switch to
get the behaviour they want. At the moment everyone pays for the undefined
behaviour, when only a very tiny minority of people would be inconvenienced
by a standard that enforces 2's complement and defined overflow behaviour.

John
 

Richard Herring

John Harrison said:
How much hardware like that is there?

I don't know, but the Standard implicitly provides for it.
Couldn't a compiler for such hardware
provide a 'don't detect overflow' switch for the tiny number of users

You don't know it's tiny.

who are running such hardware and care about the small expense.

You don't know it's small.
That way the
cost of unusual hardware is paid only by the people who have unusual
hardware, and all they have to do is to remember to use a compiler switch to
get the behaviour they want.

Well, I suppose that would take care of the 1's complement and
signed-magnitude hardware. Now, how are you going to deal with the
hardware that generates an exception on overflow?
 

Rob Williscroft

John Harrison wrote in in
comp.lang.c++:
How much hardware like that is there? Couldn't a compiler for such
hardware provide a 'don't detect overflow' switch for the tiny number
of users who are running such hardware and care about the small
expense. That way the cost of unusual hardware is paid only by the
people who have unusual hardware, and all they have to do is to
remember to use a compiler switch to get the behaviour they want. At
the moment everyone pays for the undefined behaviour when only a very
tiny minority of people would be inconvenienced by a standard that
enforces 2's complement and defined overflow behaviour.

I suspect there is a much better solution than this, which is
to define a 2's-complement signed type; then people that need
deterministic overflow can have it, and people that don't care just
use the most optimal type provided by the hardware.

Here is my unsigned_int<> type:

http://www.victim-prime.dsl.pipex.com/docs/unsigned_int/index.html

The reason I haven't done signed_int<> yet is I had some problems
deciding the best way to do it; inheritance worked fine with MSVC 7.1
(it emulated the signed to unsigned promotion almost perfectly), but
gcc 3.2 wasn't having it.

Rob.
 

John Harrison

Richard Herring said:
I don't know, but the Standard implicitly provides for it.


You don't know it's tiny.


You don't know it's small.

I've never come across such hardware. I've heard of a few very old machines
that used ones complement. I'd be interested to hear of any machines still
in use that use anything other than twos complement.
Well, I suppose that would take care of the 1's complement and
signed-magnitude hardware. Now, how are you going to deal with the
hardware that generates an exception on overflow?

Well, such hardware would already have to deal with an exception on unsigned
overflow. Or are you suggesting hardware that generates an exception on
signed overflow only? In any case I don't see any great problem.

john
 

assaarpa

It seems though that according to the C++ Standard, it's well within its
rights to crash...

It only means that depending on what architecture the arithmetic is done on,
the result is DIFFERENT. Therefore, the standard cannot enforce a specific
rule for what the result should be, because that would force every
non-conforming implementation to add a lot of instructions to verify
and fix the "situation".

Hence, undefined behaviour is called to help! Since the results are not
predictable (they vary from one architecture to another), the language washes
its hands of the issue: it doesn't care! As long as it doesn't care, crashing
is a perfectly valid result of signed integer addition overflow. This doesn't
require that a crash occur, but if one did, the standard would be perfectly
happy, as... you should get it by now... it doesn't care!

If you know your architecture and your compiler, and what instruction
sequences it generates, you are not invoking undefined behaviour. Adding
two signed integers is a WELL DEFINED operation in IA32 assembly language.
Now all you need to know is what your compiler does and you're ALL SET! It's
PERFECTLY LEGAL! But when you look at it from a pure C++ standard's point of
view, CRASHING IS ALSO PERFECTLY LEGAL! (The point is that on selected
platforms such an operation is NOT undefined!)

You just have to know the context and, voila, you can get the job done (at
the cost of the resulting code being LESS portable, which is a relative
metric anyway). Strange, lately a lot of implementation-specific issues have
cropped up, what's going on!? :)
 

Rolf Magnus

assaarpa said:
It only means that depending on what architecture the arithmetic is done on,
the result is DIFFERENT. Therefore, the standard cannot enforce a specific
rule for what the result should be, because that would force every
non-conforming implementation to add a lot of instructions for
verification and fixing the "situation".

However, the result could still be implementation-defined or unspecified.
Hence, undefined behaviour is called to help! Since the results are not
predictable (they vary from one architecture to another)

But on one specific architecture, they are usually predictable.
 

Alf P. Steinbach

* JKop:
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.

Please say A!

Sorry, it's B.

You may want to check out the recent thread "int overflow gives UB"
in [comp.std.c++] where this is discussed to death.

Summary: there doesn't seem to be a strong enough need for this to
be defined behavior to outweigh the cost of changing the standard
(and motivate someone to do the work); on the other hand, there does
not now seem to be any valid technical argument against standardizing
the behavior, and that includes the issue of hardware support.


For instance:

int main()
{
// on a 32-bit machine, INT_MAX is 2147483647
signed int i = 2147483647;

++i; // overflows here
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow.

You don't have to because there's not one single existing C++
implementation (that I know of) where the result isn't two's
complement wrapping -- and ironically and paradoxically that
extreme case of existing practice is part of the reason why it's
not going to be standardized...
 

Andre Heinen

* Alf P. Steinbach:

You don't have to because there's not one single existing C++
implementation (that I know of) where the result isn't two's
complement wrapping -- and ironically and paradoxically that
extreme case of existing practice is part of the reason why it's
not going to be standardized...

Floating point is a different case, though. Some compilers use
"NaN" or "Inf" special values to represent errors.
 

Greg Comeau

The user enters a year:

2004
1582
1906
-6000 (6000 BCE)

I'm writing a program at the moment that deals with dates. I'm not going to
bother putting in safe-guards for signed integer overflow, it's not worth
the effort. As such, if the user enters the following year:

2147483645

I don't care if they get inaccurate information, just so long as the machine
doesn't freeze or whatever.

(This program will be portable)

It seems though that according to the C++ Standard, it's well within its
rights to crash...

Indeed it is.

Playing devil's advocate:
BTW, would you care if, instead of dates, you were programming
the systems at the bank where your bank account is, and those
values were your paycheck? Or, even sticking with dates,
the date your pension should kick in and direct deposit to
your account? This program will be portable (sic) :)
(I'm not necessarily presenting a solution here (yet),
mostly just raising more situations.)
 
