Bart C said:
We know the minimum value range of our data types - why would we need to
know more than that?
You want to do math in a ring, without knowing what ring you are in?
More specifically: exactly how integers wrap around matters to a large
class of applications (like those that accept input).
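A quick illustration (standard C; the function name is mine): the
classic wrap check on untrusted input is only meaningful because
unsigned arithmetic is defined to reduce mod 2^N -- you know exactly
which ring you are in:

/* Returns 1 if a + b wraps past UINT_MAX, 0 otherwise. This test
   is valid precisely because unsigned addition is defined to
   reduce mod 2^N: the sum wraps exactly when it comes out
   smaller than either operand. */
int add_would_wrap (unsigned a, unsigned b) {
    return a + b < a;
}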
Questions like this are cropping up because of the transition from
32 bits to 64 bits that is happening right now. People want to know
if they can still get away with using just 32 bits with a little more
work, and not break their backwards compatibility by pushing
everything to 64 bits -- but doing so means they need to *know* when
their integers wrap around.
Okay, that's one reason. Any more? Huh? Huh?
How about creating a big integer library? How are you supposed to
capture/detect a carry if your underlying system is quietly absorbing
it into extra integer bits it happened to magically give you?
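Here is a minimal sketch of the problem (32-bit digits; the names are
mine, not from any real library). The carry test is just the wrap
test from above, so it silently breaks if the system hands you
"digits" that are secretly wider than 32 bits:

#include <stddef.h>
#include <stdint.h>

/* Adds two n-word big integers, least significant word first,
   and returns the final carry. Each carry is detected by the
   wraparound of uint32_t at exactly 2^32; with magic extra bits
   the comparisons would simply never fire. */
uint32_t bigint_add (const uint32_t *a, const uint32_t *b,
                     uint32_t *out, size_t n)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t s = a[i] + carry;
        uint32_t c = (s < carry);  /* wrapped while adding the old carry? */
        out[i] = s + b[i];
        carry = c + (out[i] < s);  /* wrapped while adding b[i]? */
    }
    return carry;
}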
This also seriously affects some algorithms, like primality testing.
If you know your integers are less than 36 bits wide, there are
well-known fast algorithms that can test for primality
deterministically in finite time. If they are wider, those
algorithms only work *nearly all the time*. So if you implement:
long factor (long n) {
    long f;
    if (isPrimeUpTo36bits (n)) return 1; /* Not factorable */
    f = quickDivisor (n);
    if (f > 1) return f;
    for (f = 3; ; f += 2) {
        if (0 == (n % f)) return f;
    }
}
How do you even know if the algorithm terminates? If the system
decides that long is 40 bits, it does not (in any practical sense):
feed it a 40-bit prime that the 36-bit test misjudges, and the loop
has no divisor to find short of n itself. If the system decides that
long is 32 bits, then it does terminate.
We could force this to work by putting an extra condition in the for
loop, which might cost in terms of performance. (It actually doesn't
in this case, but the (n % f) can be modified in a sort of "strength
reduction" way (using code much too complicated for a quick USENET
post) where it *does* matter.)
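For the record, the extra condition I mean is just the usual trial
division bound, written as f <= n / f so that we never compute f*f
(which could itself wrap). With it, termination no longer depends on
how wide the system decides long is -- same assumed helpers as above:

long factor (long n) {
    long f;
    if (isPrimeUpTo36bits (n)) return 1; /* Not factorable */
    f = quickDivisor (n);
    if (f > 1) return f;
    for (f = 3; f <= n / f; f += 2) {  /* the extra condition */
        if (0 == (n % f)) return f;
    }
    return 1; /* no divisor up to sqrt(n): n was prime after all */
}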