warning - comparing a signed value to an unsigned value

Kevin Goodsell

What do you think is the best way to handle a compiler warning about
comparing an unsigned value to a signed value? Cast to silence it?
Disable that warning altogether? Or just live with it?

On one hand, the warning *could* be useful. Most of the time I get it in
cases where I know the comparison is safe, but it's not hard to imagine
that this won't always be the case. This makes disabling it undesirable.
Casting is a workable solution, but I worry that changes in the code
later could introduce errors that go undetected due to the cast. And I
think we all hate not having a "clean" compile (if only because having a
bunch of warnings that you expected makes it more difficult to spot the
ones you didn't expect).

What is your opinion?

Thanks.

-Kevin
 
T.M. Sommers

Kevin said:
What do you think is the best way to handle a compiler warning about
comparing an unsigned value to a signed value? Cast to silence it?
Disable that warning altogether? Or just live with it?

You could always fix the code so that you aren't comparing a signed to
an unsigned.
 
David Rubin

Kevin said:
What do you think is the best way to handle a compiler warning about
comparing an unsigned value to a signed value? Cast to silence it?
Disable that warning altogether? Or just live with it?

IME, this usually occurs when you have a function which checks an int
value against some range, typically derived via sizeof (which yields a
size_t, an unsigned type). My suggestion, in this case, is to fix your
functions so that you use size_t for ranges and indices.
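
For example, a range check written entirely in terms of size_t compiles
without the warning (a minimal sketch; the helper function and its name
are made up here):

    #include <stddef.h>

    /* Both operands are size_t, so the comparison below
       involves no signed/unsigned mismatch. */
    int index_in_range(size_t index, size_t limit)
    {
        return index < limit;
    }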

/david
 
Kevin Goodsell

T.M. Sommers said:
You could always fix the code so that you aren't comparing a signed to
an unsigned.

That's not always possible without introducing new variables.

    int SomeFunc(int *dest); /* returns error code, writes value to *dest */

    int i;
    if (SUCCESS == SomeFunc(&i))
    {
        if (i < sizeof(some_type))
        {
            /* ... */
        }
    }

I can't very well get an unsigned type from SomeFunc, nor can I cause
sizeof() to result in a signed type. The only way to make the comparison
deal with like types would be to add a new variable that logically
shouldn't exist. This is hardly any better than casting in the comparison.

-Kevin
 
CBFalconer

Kevin said:
What do you think is the best way to handle a compiler warning
about comparing an unsigned value to a signed value? Cast to
silence it? Disable that warning altogether? Or just live with it?

On one hand, the warning *could* be useful. Most of the time I get
it in cases where I know the comparison is safe, but it's not hard
to imagine that this won't always be the case. This makes
disabling it undesirable. Casting is a workable solution, but I
worry that changes in the code later could introduce errors that
go undetected due to the cast. And I think we all hate not having
a "clean" compile (if only because having a bunch of warnings that
you expected makes it more difficult to spot the ones you didn't
expect).

What is your opinion?

Spend a little time thinking. Assume we are talking about signed
and unsigned ints. Now, if the unsigned is larger than INT_MAX,
it is obviously larger than the int. If the int is negative, it
is obviously smaller than the unsigned. Having eliminated these
cases you can safely cast the int into unsigned, and then
compare. In fact, all you need to eliminate is the negative case.
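
In code, the reduced test is just (a minimal sketch of the above):

    /* A negative i is necessarily less than any unsigned u;
       otherwise the cast to unsigned preserves i's value. */
    if (i < 0 || (unsigned)i < u)
    {
        /* here i < u */
    }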
 
Glen Herrmannsfeldt

CBFalconer said:
(snip)

Spend a little time thinking. Assume we are talking about signed
and unsigned ints. Now, if the unsigned is larger than INT_MAX,
it is obviously larger than the int. If the int is negative, it
is obviously smaller than the unsigned. Having eliminated these
cases you can safely cast the int into unsigned, and then
compare. In fact, all you need to eliminate is the negative case.

Say one wants to compare a return value from a function to sizeof(some
struct). It would seem safer to cast sizeof to int, unless one could
realistically expect sizeof to be greater than INT_MAX. Casting the int to
unsigned will cause negative return values to fail a less-than test. Though
in some cases that might be a good thing.

I believe one should understand the result of both types of comparisons, and
choose the appropriate one.
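
For instance, with a negative return value the two choices disagree (a
minimal illustration; assume sizeof(some_struct) is 8):

    int i = -1;

    if (i < (int)sizeof(some_struct))        /* true:  -1 < 8 */
        ...
    if ((unsigned)i < sizeof(some_struct))   /* false: (unsigned)i is UINT_MAX */
        ...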

-- glen
 
Sheldon Simms

Kevin said:
That's not always possible without introducing new variables.

    int SomeFunc(int *dest); /* returns error code, writes value to *dest */

    int i;
    if (SUCCESS == SomeFunc(&i))
    {
        if (i < sizeof(some_type))
        {
            /* ... */
        }
    }

I can't very well get an unsigned type from SomeFunc, nor can I cause
sizeof() to result in a signed type. The only way to make the comparison
deal with like types would be to add a new variable that logically
shouldn't exist. This is hardly any better than casting in the comparison.

In this case, you are comparing the value in 'i' with a size_t.
You shouldn't be doing this at all unless you *know* that the
value in 'i' is unsigned. If you know this, then you should make
'i' unsigned (and change the name to something better like 'size').

Your problem is really that SomeFunc() wants a pointer to (signed)
int, when the value it is storing there is unsigned. The solution is
to change SomeFunc(). If you can't do that, but you know *for sure*
that SomeFunc() never writes a negative value into 'i', then cast the
argument to SomeFunc().

If, however, you don't know for sure that SomeFunc() behaves properly
(you don't have the source code, for example), then you need to give
SomeFunc() a pointer to int, and then check to make sure that it isn't
negative before comparing it to sizeof(some_type).

So take your pick:

    unsigned int size;
    if (SUCCESS == SomeFunc(&(int)size))
    {
        if (size < sizeof(some_type))
            ...
    }

    int signed_size; /* Bogus, but SomeFunc() author is an idiot */
    if (SUCCESS == SomeFunc(&signed_size))
    {
        if (signed_size < 0) return ERROR;
        if ((unsigned int)signed_size < sizeof(some_type))
            ...
    }

-Sheldon
 
Chris Torek

Sheldon said:
In this case, you are comparing the value in [signed int] 'i' with
a size_t. You shouldn't be doing this at all unless you *know* that the
value in 'i' is unsigned.

Not necessarily: another option is "if a negative value of i is
supposed to be considered greater than the size_t value, which you
know is not `overflow-able'". That is, you know for certain that,
in:

    i < u

the value in u is less than INT_MIN + UINT_MAX + 1, so that
all values of i in [INT_MIN..-1] result in values in the range
[INT_MIN + UINT_MAX + 1 .. UINT_MAX], which is less than u.

This test can be (re)written as:

    i < 0 || i < (int)u

but this is probably even worse. Or you could write it as:

    i >= 0 && (unsigned)i < u

but this is redundant.

(In the BSD kernel, we have a lot of code of the form:

    if (i >= fdp->max || (fp = fdp->openfiles) == NULL)
        return (EBADF);

where fdp->max is known to be unsigned, so that signed-int interfaces
cannot use negative-valued file descriptors to escape system
security. Casting i gets rid of the warning; adding the redundant
"i < 0 ||" does not get rid of the warning and just makes the
compiler work hard to remove the redundant test. So, despite
my distaste for unnecessary casts, we either cast or live with
the warning.)

If you know this, then you should make
'i' unsigned (and change the name to something better like 'size').

That would be nice, but the interface is set in stone (or in POSIX
anyway, which is close enough to stone :) ).

Your problem is really that SomeFunc() wants a pointer to (signed)
int, when the value it is storing there is unsigned. The solution is
to change SomeFunc(). If you can't do that, but you know *for sure*
that SomeFunc() never writes a negative value into 'i', then cast the
argument to SomeFunc().

Personally, I consider adding pointer casts -- i.e.,

    unsigned int ui;
    x = SomeFunc((int *)&ui);
    ... now use the "unsigned" value in ui ...

-- to be the worst of the various alternatives.

If, however, you don't know for sure that SomeFunc() behaves properly
(you don't have the source code, for example), then you need to give
SomeFunc() a pointer to int, and then check to make sure that it isn't
negative before comparing it to sizeof(some_type).

Or, as I said above, guarantee that the test "i < 0 ||" is redundant
(as we do), and then either cast or live with the warning.
 
CBFalconer

Glen said:
Say one wants to compare a return value from a function to
sizeof(somestruct). It would seem safer to cast sizeof to int,
unless one could realistically expect sizeof to be greater than
INT_MAX. Casting the int to unsigned will cause negative return
values to fail a less-than test. Though in some cases that might
be a good thing.

I didn't say that. If it comes up regularly you could make a
function something like:

    /* Guaranteed overflow-free comparison */
    /* returns -1, 0, +1 on i < u, i == u, i > u respectively */
    /* (INT_MAX is from <limits.h>) */
    int cmpint_vs_u(int i, unsigned int u)
    {
        if (i < 0) return -1;                    /* i < u */
        else if (u > INT_MAX) return -1;         /* i < u */
        else return (i > (int)u) - (i < (int)u); /* 1 for i > u */
    }
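
A call site might then read, for example:

    /* fine as long as the size fits in unsigned int */
    if (cmpint_vs_u(i, sizeof(some_type)) < 0)
    {
        /* i compares less than sizeof(some_type) */
    }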

I believe one should understand the result of both types of
comparisons, and choose the appropriate one.

No, one should understand the data one is working with, and
program accordingly.
 
pete

Kevin said:
That's not always possible without introducing new variables.

    int SomeFunc(int *dest); /* returns error code, writes value to *dest */

    int i;
    if (SUCCESS == SomeFunc(&i))
    {
        if (i < sizeof(some_type))
        {
            /* ... */
        }
    }

I can't very well get an unsigned type from SomeFunc, nor can I cause
sizeof() to result in a signed type.
The only way to make the comparison
deal with like types would be to add a new variable that logically
shouldn't exist.
This is hardly any better than casting in the comparison.

Why does SomeFunc take a pointer to int,
instead of a pointer to size_t?

    int SomeFunc(size_t *dest);
    /* returns error code, writes value to *dest */

    size_t i;
    if (SUCCESS == SomeFunc(&i))
    {
        if (i < sizeof(some_type))
        {
            /* ... */
        }
    }
 
Kevin Goodsell

pete said:
Why does SomeFunc take a pointer to int,
instead of a pointer to size_t?

How about because I didn't define the interface? Is that a good enough
reason?

It just so happens that in the specific case that prompted the question,
I was comparing the result of a strlen() call to an int that was used as
the destination variable for a sscanf %n format specifier. If you can
suggest a way to persuade sscanf to use size_t instead of int for %n, or
a way to persuade strlen() to return int, then I suppose your answer
would be useful.

I asked a pretty simple question. Why is everyone answering by
suggesting there's something wrong with the code? Let me clarify: I have
two variables that need to be compared. Assume the types are dictated
by something beyond my control. One is a signed type and one is an
unsigned type. I know the comparison is safe because the signed variable
is non-negative. But the compiler warns about it. I was seeking opinions
on how to handle this. Changing the types is not an option.
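
Concretely, the pattern is something like this (a bare sketch; the
names are illustrative):

    int n;           /* signedness fixed by an external interface */
    size_t len;      /* unsigned, also fixed */
    /* ... n is known to be non-negative here ... */
    if (n < len)     /* warning: signed/unsigned comparison */
        ...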

-Kevin
 
Sheldon Simms

In this case, you are comparing the value in [signed int] 'i' with
a size_t. You shouldn't be doing this at all unless you *know* that the
value in 'i' is unsigned.

Not necessarily: another option is "if a negative value of i is
supposed to be considered greater than the size_t value, which you
know is not `overflow-able'". That is, you know for certain that,
in:

    i < u

the value in u is less than INT_MIN + UINT_MAX + 1, so that
all values of i in [INT_MIN..-1] result in values in the range
[INT_MIN + UINT_MAX + 1 .. UINT_MAX], which is less than u.
                                               ^^^^^^^^^

I suppose you mean "greater than" here.

This test can be (re)written as:

    i < 0 || i < (int)u

but this is probably even worse.

I think it's better -- more below.

Or you could write it as:

    i >= 0 && (unsigned)i < u

but this is redundant.

True, but only to the compiler -- more below.

(In the BSD kernel, we have a lot of code of the form:

    if (i >= fdp->max || (fp = fdp->openfiles) == NULL)
        return (EBADF);

where fdp->max is known to be unsigned, so that signed-int interfaces
cannot use negative-valued file descriptors to escape system
security. Casting i gets rid of the warning; adding the redundant
"i < 0 ||" does not get rid of the warning and just makes the
compiler work hard to remove the redundant test. So, despite
my distaste for unnecessary casts, we either cast or live with
the warning.)


Your reasoning is entirely correct as far as standard C goes, and in
an OS Kernel perhaps the best alternative. I would still cast i in
your kernel example, however. I don't agree that the cast is
unnecessary. Yes, the compiler will convert i anyway, but the cast
makes sure that the readers of the code know what is going on, even
if they haven't fully digested the idea of usual arithmetic
conversions.
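
That is, in your kernel example I would write:

    if ((unsigned)i >= fdp->max || (fp = fdp->openfiles) == NULL)
        return (EBADF);

so that the mixed-sign comparison is visible at a glance.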

Furthermore, it is my opinion that most C programmers are not capable
of consistently making sure that comparisons between signed and
unsigned integers are correct. Most of the time that I have seen
such comparisons, they are just one of a raftload of sloppy
constructions in a program. I think the typical C programmer would
do better to avoid comparing two values that have different
nominal ranges, because he probably doesn't understand the
ramifications of what he's doing.

Even if the programmer in question is someone like you, who does
know what he's doing, I, as a (hypothetical) maintenance programmer,
won't know that, and when I come across the comparison, I'll feel
obligated to spend the time making sure that the comparison is valid.
That might take a lot of time, and if I can't be 100% sure about it
then I'll feel obligated to "correct" the "problem".

-Sheldon
 
Kevin Goodsell

Sheldon said:
In this case, you are comparing the value in 'i' with a size_t.
You shouldn't be doing this at all unless you *know* that the
value in 'i' is unsigned.

'unsigned' is a property of types, not values. 'i' must be signed
because the interface dictates it. See my reply to pete.

If you know this, then you should make
'i' unsigned (and change the name to something better like 'size').

Changing the type is not an option. As I told pete, 'SomeFunc' is
actually sscanf() with a %n format specifier.

Your problem is really that SomeFunc() wants a pointer to (signed)
int, when the value it is storing there is unsigned.

The value is non-negative, but the interface requires an int. This is
beyond my control. If you want to file a defect report suggesting that
the %n format specifier for the *scanf functions should expect a size_t
instead of an int, be my guest. While you're at it, I believe there are
a few other standard library functions that use ints where size_t would
probably be more appropriate.

Maybe I should have been more clear in my post. I intended for you to
assume the types were dissimilar for some good reason and could not be
easily changed to similar types.

The solution is
to change SomeFunc(). If you can't do that, but you know *for sure*
that SomeFunc() never writes a negative value into 'i', then cast the
argument to SomeFunc().

If, however, you don't know for sure that SomeFunc() behaves properly
(you don't have the source code, for example), then you need to give
SomeFunc() a pointer to int, and then check to make sure that it isn't
negative before comparing it to sizeof(some_type).

So take your pick:

    unsigned int size;
    if (SUCCESS == SomeFunc(&(int)size))

This isn't even legal.

    {
        if (size < sizeof(some_type))
            ...
    }


    int signed_size; /* Bogus, but SomeFunc() author is an idiot */

Yes, in your defect report be sure to include a note stating that Dennis
Ritchie (or whoever defined it) is an idiot for defining *scanf() the
way he did.

    if (SUCCESS == SomeFunc(&signed_size))
    {
        if (signed_size < 0) return ERROR;
        if ((unsigned int)signed_size < sizeof(some_type))

So in short, your answer is "In my opinion, use a cast". Thanks for your
input.

-Kevin
 
Christian Bau

Sheldon Simms said:
    if (SUCCESS == SomeFunc((int *)size))
    =====================================

Nice proof that casts are dangerous and should be avoided when possible.
 
Christian Bau

pete said:
Why does SomeFunc take a pointer to int,
instead of a pointer to size_t?

Wrong logic. What if I write

    if (i > 1 && i < sizeof (some_type)) ...

Would you want two versions of SomeFunc? And maybe SomeFunc()
sometimes produces a negative value?
 
Kevin Goodsell

CBFalconer said:
No, one should understand the data one is working with, and
program accordingly.

And I do understand the data. But the problem persists.

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    int main(void)
    {
        /* in the real program, buffer will be filled at run-time */
        char buffer[] = " 0324 ";
        long value;
        int converted_items, scanned_chars = 0;

        converted_items = sscanf(buffer, "%li %n", &value, &scanned_chars);

        /* check if conversion failed: */
        if (converted_items < 1 ||
            (scanned_chars != 0 && scanned_chars < strlen(buffer)) )
        {
            return EXIT_FAILURE;
        }

        return EXIT_SUCCESS;
    }

Unless I'm missing something, this code is perfectly fine. It should
convert the value from buffer to a long, allowing octal and hex
representations, and fail if the format is incorrect in any way. But the
comparison 'scanned_chars < strlen(buffer)' still causes a warning. The
question I am posing is, what do you consider the best way to handle a
situation like this?

-Kevin
 
Christian Bau

Kevin Goodsell said:
How about because I didn't define the interface? Is that a good enough
reason?

It just so happens that in the specific case that prompted the question,
I was comparing the result of a strlen() call to an int that was used as
the destination variable for a sscanf %n format specifier. If you can
suggest a way to persuade sscanf to use size_t instead of int for %n, or
a way to persuade strlen() to return int, then I suppose your answer
would be useful.

I asked a pretty simple question. Why is everyone answering by
suggesting there's something wrong with the code? Let me clarify: I have
two variables that need to be compared. Assume the types are dictated
by something beyond my control. One is a signed type and one is an
unsigned type. I know the comparison is safe because the signed variable
is non-negative. But the compiler warns about it. I was seeking opinions
on how to handle this. Changing the types is not an option.

For correct, slightly inefficient, warning-free code:

    if (i < 0 || (unsigned int) i < u)

For correct, but not future-proof code:

    if ((unsigned int) i < u)

The second one is not future-proof because you may know today that i
cannot be negative, but the code around this might change and in the
future it might become negative. Slightly more efficient, but I hate
that kind of style:

    if ((unsigned int) i < u || i < 0)

If i is negative, then the value of i is less than the value of u, so
you get the correct result. Since the type unsigned int can hold all
positive values of type int, (unsigned int) i has the same value as i if
i >= 0, and you also get the correct result.
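
Applied to your sscanf example, the first form would read (a sketch; I
cast to size_t here because that is what strlen() returns):

    if (converted_items < 1 ||
        (scanned_chars != 0 &&
         (scanned_chars < 0 ||
          (size_t)scanned_chars < strlen(buffer))))
    {
        return EXIT_FAILURE;
    }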
 
Glen Herrmannsfeldt

CBFalconer said:
Glen Herrmannsfeldt wrote:

(snip including examples of comparing int and size_t)
No, one should understand the data one is working with, and
program accordingly.

Yes. One can only program accordingly if one understands the results of the
comparisons.

-- glen
 
