efficiency concern: when to really use unsigned ints and when not to


Nick Landsberg

Dan Pop wrote:

[snip]
How many times have you defined an object occupying half of your address
space or more?

Dan

It may be rare for others, but I work with an in-memory database
which uses over 2.5 GB of the 4 GB of available memory.

We're moving to a 64-bit machine to get around the limitation,
too.
 

glen herrmannsfeldt

Jack said:
On 5 Feb 2004 13:18:31 -0800, (e-mail address removed) (Neil Zanella) wrote
(snip)
Speed or efficiency of operation is no different between pure signed
and unsigned types on any architecture developed in at least the last
quarter century.

ESA/390, which is much less than 25 years old, didn't have unsigned
divide or multiply, but the more recent z/Architecture does.

There might be other, more recent architectures, or older ones still
running, that don't have unsigned multiply and divide of the appropriate
width. If you are only doing add/subtract/relational/index
operations on the values, then I don't see any reason not to
use unsigned.

I don't know all the architectures well enough to say, but multiply and
divide are the operations I would check for first.
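
For what it's worth, here is a minimal sketch of why that matters (assuming a 32-bit int and the usual two's complement wrap-around on conversion, so the exact numbers are illustrative only): the same bit pattern produces different quotients depending on whether the divide is signed or unsigned, which is why the hardware needs both forms.

#include <stdio.h>

int main(void)
{
    unsigned int u = 3000000000u;   /* above INT_MAX when int is 32 bits */
    int s = (int)u;                 /* implementation-defined; typically
                                       wraps to a large negative value   */

    /* Same bits, different answers: the unsigned divide treats the
       operand as a big positive number, the signed divide does not. */
    printf("unsigned: %u\n", u / 7u);   /* 428571428 here  */
    printf("signed:   %d\n", s / 7);    /* -184995328 here */

    return 0;
}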

-- glen
 

glen herrmannsfeldt

Dan said:
In <[email protected]> (e-mail address removed) (Neil Zanella) writes:

(snip regarding when to use unsigned integers)
Signed integers and unsigned integers are fairly different beasts,
intended for different purposes. Avoid unsigned integers unless you need
their special semantics (or the additional range), even if you're only
manipulating positive values. After all, the prototype of main() isn't
int main(unsigned argc, char **argv);
despite the fact that argc is not supposed to have negative values.
If it's intended for usual arithmetic operations, use a signed integer.
If it's intended for bit manipulation operations and/or modulo arithmetic,
use an unsigned integer.

Despite what I am about to say, I agree 100% with these statements.
(With the possible exception of modulo arithmetic where the modulus
is not a power of two.)
Avoid as much as possible mixing the two flavours in the same
expression, because very nasty bugs may arise.
The fact that size_t is unsigned is a real pain in the ass, because this
type is seldom used in a genuine unsigned context. It should have been
signed, for the same reason that argc is signed.

Well, yes. Except that there are cases where size_t values might
be between INT_MAX and UINT_MAX, which could cause problems.
(Assuming that the implementation does it right for those values.)

argc > INT_MAX should be pretty rare.
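
A minimal sketch of the kind of nasty bug that mixing the two flavours produces (assuming, as on common implementations, that size_t has at least the rank of int): in a mixed comparison the int operand is converted to the unsigned type, so a negative value silently becomes a huge one.

#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = -1;
    size_t len = strlen("hello");   /* 5 */

    /* In i < len, i is converted to the unsigned type of len, so -1
       becomes a huge value and the "obvious" comparison is false.  */
    if (i < len)
        printf("what you probably expected\n");
    else
        printf("surprise: -1 compares as not less than 5\n");

    return 0;
}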

(snip)
Speed is not a concern. On two's complement architectures, the two are
usually handled identically. It's the intended purpose of the variable
that dictates the choice (again, unless you need the extra range provided
by the unsigned flavour, but this doesn't seem to be the case).

Well, for multiply and divide they are different, and it isn't so easy
to do unsigned divide using a signed divide operation. There are still
machines around without unsigned multiply and divide.

-- glen
 

Dan Pop

In an earlier message, someone said:
FWIW in the olden days, when 512K was standard on Amstrad PCs, I did find
myself allocating 256K in one go. Hardly a truly enormous object when you
think about it.

And hardly a problem for a hypothetically signed size_t that could
cover objects up to 2 GB, but you were not supposed to be able to
understand that...

Dan
 

Mark McIntyre

And if you have done so, did you run into problems because ptrdiff_t is
not capable of representing differences between pointers in all cases?

When your memory space is 512 KB or less, this is not an issue.
 

Christian Bau

Mark McIntyre said:
When your memory space is 512 KB or less, this is not an issue.

Usually ptrdiff_t covers exactly half the range of size_t, because
ptrdiff_t is signed and size_t is unsigned. If you use more than half of
the capacity of size_t for a single object, then you are in trouble.

Apparently you meant "half of available memory", not "half of the amount
of memory that can be specified by size_t".
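
A quick way to see the relationship on a given implementation (a sketch assuming a C99 <stdint.h>; PTRDIFF_MAX and SIZE_MAX are standard names, the printed values are whatever your platform provides):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* On most implementations PTRDIFF_MAX is about half of SIZE_MAX,
       so pointer subtraction inside an object that uses more than
       half of the size_t range can overflow ptrdiff_t.             */
    printf("PTRDIFF_MAX = %jd\n", (intmax_t)PTRDIFF_MAX);
    printf("SIZE_MAX    = %ju\n", (uintmax_t)SIZE_MAX);
    return 0;
}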
 

anony*mouse

So, the question is: when you know an integer is not going to be negative, is that
a good enough reason to declare it as unsigned?

Yes.

Several critical security vulnerabilities have been discovered in the
last year or so that have been caused by integer manipulation issues.
These have occurred not only in Microsoft software (the latest ASN.1
critical vulnerability is one example), but in several open source
software packages and closed source packages from other vendors.

The contents of this thread suggest that, even amongst experienced C
programmers, there is still a disturbing lack of awareness of this
issue.

If you have any doubt about the seriousness of integer manipulation bugs,
follow this link:
http://www.google.com/search?q=integer+overflow

I suggest every C/C++ programmer starts by reading these articles:
http://msdn.microsoft.com/library/en-us/dncode/html/secure04102003.asp
http://msdn.microsoft.com/library/en-us/dncode/html/secure09112003.asp

As an exercise consider the additional checks that need to be taken in
the check() function below. How many would be removed by making a and
b unsigned?

int check(signed int a, signed int b)
{
    /* a and b are considered untrusted numbers. */

    if (a + b < 50)
        return 1;
    else
        return 0;
}

void somefunc(void)
{
    /* a_cnt, b_cnt, a_str, b_str from somewhere */
    char buf[50];

    if (!check(a_cnt, b_cnt))
        return;

    strncpy(buf, a_str, a_cnt);
    strncpy(buf + a_cnt, b_str, b_cnt);   /* append after the first copy */
}
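
One way to see the problem (a sketch; signed overflow is formally undefined behaviour, but on the common wrap-around implementations it plays out like this): two large positive counts wrap to a negative sum, the check passes, and the copies then run far past the end of buf.

#include <stdio.h>
#include <limits.h>

/* Same check as above: nothing guards against a + b overflowing. */
static int check(signed int a, signed int b)
{
    if (a + b < 50)
        return 1;
    else
        return 0;
}

int main(void)
{
    int a = INT_MAX;    /* attacker-supplied counts */
    int b = 2;

    /* Undefined behaviour, but with the usual wrap-around the sum
       becomes a large negative number, so the check passes even
       though a and b are each enormous.                           */
    printf("check(INT_MAX, 2) = %d\n", check(a, b));
    return 0;
}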
 

CBFalconer

anony*mouse said:
... snip ...

As an exercise consider the additional checks that need to be
taken in the check() function below. How many would be removed
by making a and b unsigned?

int check(signed int a, signed int b)
{
    /* a and b are considered untrusted numbers. */
    if ((a < 0) || (b < 0)) return 0;
    if (a + b < 50)
        return 1;
    else
        return 0;
}

The above suffices for any system which has integer overflow
detection. Now consider the ugly tests required if a and b are
unsigned, so that overflow of a + b cannot occur.
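
On implementations without overflow detection, a portable way to get the same guarantee is to test before adding. A sketch of one way to write the signed variant with such a guard, using INT_MAX from <limits.h>:

#include <limits.h>

/* Signed variant with a portable overflow guard: reject the pair if
   a + b would exceed INT_MAX, before ever performing the addition. */
int check(signed int a, signed int b)
{
    if (a < 0 || b < 0)
        return 0;
    if (a > INT_MAX - b)    /* a + b would overflow */
        return 0;
    if (a + b < 50)
        return 1;
    else
        return 0;
}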
 

Flash Gordon

if ((a < 0) || (b < 0)) return 0;

The above suffices for any system which has integer overflow
detection. Now consider the ugly tests required if a and b are
unsigned, so that overflow of a + b cannot occur.

int check(unsigned int a, unsigned int b)
{
    /* a and b are considered untrusted numbers. */
    /* No check for < 0 is needed, as we are now using unsigned int,
       which by definition cannot be negative. */
    if ((a < 50) && (b < 50) && (a + b < 50))
        return 1;
    else
        return 0;
}

Doesn't look very ugly to me, and with short-circuit evaluation the
addition is only done if both a and b are less than 50, so it is
guaranteed not to overflow. Admittedly, ugliness is in the eye of the
beholder.
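
A small driver makes the point concrete: unsigned addition cannot overflow, but it can wrap, and without the two range tests a wrapped sum could sneak under 50 (a sketch reusing the check() above; UINT_MAX comes from <limits.h>).

#include <limits.h>
#include <stdio.h>

static int check(unsigned int a, unsigned int b)
{
    if ((a < 50) && (b < 50) && (a + b < 50))
        return 1;
    else
        return 0;
}

int main(void)
{
    /* UINT_MAX + 10 would wrap to 9, which is below 50, but the
       (a < 50) test rejects the pair before the addition happens. */
    printf("check(UINT_MAX, 10) = %d\n", check(UINT_MAX, 10u));
    printf("check(20, 20)       = %d\n", check(20u, 20u));
    return 0;
}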
 
