integer or long overflow...


deancoo

When you increase the number contained within an integer or long data type
beyond its capacity, what happens? I've run into this situation, and it
continues right along. Is it simply just rolling over when it gets to its
limit? Curious.

d
 

Phil Staite

deancoo said:
When you increase the number contained within an integer or long data type
beyond its capacity, what happens? I've run into this situation, and it
continues right along. Is it simply just rolling over when it gets to its
limit? Curious.

I believe/suspect that, like so many "edge effects" in C++, it is left
undefined. However, a 2's complement architecture (as most C++
environments seem to be) will simply keep right on going - no
exception/trap, as from, say, dedicated floating point hardware
experiencing a problem, or from dividing by zero.

The gotcha with 2's complement is that if you increase a signed integer
type (int or long) beyond its limit, it becomes a very large negative
number and keeps "increasing" - becoming less negative, headed towards zero.
 

deancoo

deancoo said:
When you increase the number contained within an integer or long data type
beyond its capacity, what happens? I've run into this situation, and it
continues right along. Is it simply just rolling over when it gets to its
limit? Curious.

d

And how about enum types? Ultimately they're stored and compared as
numbers, but what is their size? Short?

d
 

deancoo

Phil Staite said:
I believe/suspect that, like so many "edge effects" in C++, it is left
undefined. However, a 2's complement architecture (as most C++
environments seem to be) will simply keep right on going - no
exception/trap, as from, say, dedicated floating point hardware experiencing
a problem, or from dividing by zero.

The gotcha with 2's complement is that if you increase a signed integer
type (int or long) beyond its limit, it becomes a very large negative
number and keeps "increasing" - becoming less negative, headed towards
zero.

Would anyone in their right mind overflow a data type and still use it?
Does it overflow consistently? I guess what I'm asking is, should I use it?
What other options exist if you need to generate a number larger than 2^32,
which is only used as a key for unique identification? I've already
confirmed that for my finite set of data, the overflowed data type maintains
uniqueness.

d
 

Prog37

deancoo said:
Would anyone in their right mind overflow a data type and still use it?
Does it overflow consistently? I guess what I'm asking is, should I use it?
What other options exist if you need to generate a number larger than 2^32,
which is only used as a key for unique identification? I've already
confirmed that for my finite set of data, the overflowed data type maintains
uniqueness.

Does your unique key fit in 32 bits or not?
Some C++ compilers, gcc for instance, offer the long long type.
On a 32-bit platform this will be a 64-bit integer.
The main drawback to using compiler extensions is the loss of portability.

Another option would be to define a c++ class say:

class BigInt
{
public:
    explicit BigInt(int bits_storage);
    BigInt operator+(const BigInt& rhs) const;
    BigInt operator+(int rhs) const;

private:
    unsigned long *storage;      // array of words holding the value
    int significant_bits;
};

I will leave the implementation as an exercise for the reader.
 

deancoo

Prog37 said:
Does your unique key fit in 32 bits or not?

You see, the key that's generated has a large range, but is very disjointed.
There are about 2.6M elements in a range as large as approx. 4.7B. The key
generator is super simple and super fast, so I don't want to change it. The
only drawback, of course, is how disjointed the keys are, requiring a large
container to hold them. When I said that the key maintains its uniqueness,
what I meant was that even when the key generator produces a number in
excess of 2^32, the resultant number stays unique compared to all other
generated keys. So really, is it so bad to use this overflowed data type?
Remember, I said finite set of data, and I've verified each possible key.
 

Richard Cavell

deancoo said:
Would anyone in their right mind overflow a data type and still use it?

I have an application where I create bitmasks by doing this:

unsigned int i32_BitMask = ( 1 << x ) - 1 ;

where x is the number of 1 bits in the bitmask.

In the case where x is 32, the 1 shifts off the left end of the 32-bit
datum, and gives me precisely what I want.
deancoo said:
What other options exist if you need to generate a number larger than 2^32,

You can use a larger int type like long long (GCC) or __int64 (MSVC).
 

Prog37

deancoo said:
You see, the key that's generated has a large range, but is very disjointed.
There are about 2.6M elements in a range as large as approx. 4.7B. The key
generator is super simple and super fast, so I don't want to change it. The
only drawback, of course, is how disjointed the keys are, requiring a large
container to hold them. When I said that the key maintains its uniqueness,
what I meant was that even when the key generator produces a number in
excess of 2^32, the resultant number stays unique compared to all other
generated keys. So really, is it so bad to use this overflowed data type?
Remember, I said finite set of data, and I've verified each possible key.


If I understand you correctly, you are saying that the key generator
generates keys that have a dynamic range of zero to 4.7 billion.
If this is correct you need a 33 bit unsigned integer to store the
keys without discarding any bits of key data.

ln(4.7x10^9)/ln(2) ≈ 32.13

or

2^32.13 ≈ 4.7x10^9


But since the number of keys generated is small you think you can
"risk it" and use the "overflowed data type".

The obvious problem is that unless your key generator prevents it
somehow there is a finite probability that multiple valid keys
will map to the same "overflowed data" value.

The only way your key generator could prevent that is if there
were key ranges that were illegal for some reason. As an example
if your key generator only produced even number keys that would
reduce the randomness of the key generation by 1 bit so you could
simply store key/2 in a 32 bit unsigned variable.

If your key generation algorithm really requires 33 bits of storage
it would be foolish to try to store it in 32 bits. You would be laying
the groundwork for a bug that only surfaces once in a while. You can
calculate the probabilities based on the number of keys generated.

I hate trying to debug problems that only happen once in a while,
I would much rather debug a repeatable bug. Just my 2 cents.

(In the example provided, although the key could require 33 bits, a
restriction like that would knock a bit or two of randomness off of the
key generation.)
 

Prog37

Richard said:
I have an application where I create bitmasks by doing this:

unsigned int i32_BitMask = ( 1 << x ) - 1 ;

where x is the number of 1 bits in the bitmask.

In the case where x is 32, the 1 shifts off the left end of the 32-bit
datum, and gives me precisely what I want.



You can use a larger int type like long long (GCC) or __int64 (MSVC).

For the problem he is describing he needs more than 32 bits (see my
other post).

One way of dealing with the portability issues introduced by 64 bit ints
is to use conditional compilation. He could do something like.

#ifdef WIN32
typedef __int64 big_integer;
#else
typedef long long big_integer;
#endif
 

Pete Becker

deancoo said:
When you increase the number contained within an integer or long data type
beyond its capacity, what happens? I've run into this situation, and it
continues right along. Is it simply just rolling over when it gets to its
limit? Curious.

The behavior of overflow for signed integral types is undefined. But for
what you're doing it sounds like you should be using an unsigned type.
On overflow the result is reduced modulo 2^n, where n is the number of
bits in the value's representation.
 

Siemel Naran

Pete Becker said:
The behavior of overflow for signed integral types is undefined. But for
what you're doing it sounds like you should be using an unsigned type.
On overflow the result is reduced modulo 2^n, where n is the number of
bits in the value's representation.

You mean LONG_MAX+1 would be zero, right? No overflow exceptions?
 
