Clint said:
I just wrote some quick tests and my compiler (gcc) doesn't seem to flag
passing signed arguments to functions expecting unsigned types (and
presumably vice versa). Is there any particular reason why this is not
considered at least something worthy of a warning? I know that some
implementations complain when you don't get the formatting arguments
correct for printf(), but I would think something like this should be
enforced more strictly.
First off, it's perfectly legal C. Function prototypes
don't require that you supply an expression whose type matches
the parameter's; instead, they cause the supplied value to be
converted to the parameter's type. You'll get a diagnostic for
impossible conversions (e.g., sqrt("two")), but sqrt(2) and
sqrt(2.0) are effectively identical.
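To see the conversion in action, here's a tiny program of my
own (an illustration, not anything from the Standard):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* The prototype double sqrt(double) converts the
           int 2 to 2.0 before the call, so the two calls
           below are identical. */
        printf("%f\n", sqrt(2));
        printf("%f\n", sqrt(2.0));
        return 0;
    }

Both lines print 1.414214, and no diagnostic is required for
either call.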
There's a hint in Henry Spencer's "Ten Commandments for C
Programmers" that there may have been debate when prototypes
were being invented about whether they should behave this way.
That is, there may have been a faction that wanted sqrt(2) to
generate an error instead of generating a conversion. If there
was such a debate (I wasn't there), it must have been resolved
the other way, possibly on grounds of convenience and possibly
to avoid breaking old programs.
The convenience and potential breakage would probably have
been large, too. For example, here's a piece of innocent-looking
code that would break if prototypes were "enforceable":
    char buff[20];
    memset (buff, 0, 20);
... because `20' is an `int', and the third argument to memset()
must be a `size_t'. Now, it's easy to argue (and I'd agree)
that `sizeof buff' is a far better way to spell `20', but it's
not always that easy:
right_justify(string, length)
char *string;
int length;
{
    int datalen = strlen(string);

    /* shift the text to the right end of the field, then
       pad the vacated space on the left with blanks */
    memmove (string + length - datalen, string, datalen + 1);
    memset (string, ' ', length - datalen);
}
Again, superior practice would use `size_t' values instead of `int',
but `size_t' didn't even exist when these debates (may have) taken
place; masses and masses of pre-existing code used `int' to count
things, and the acceptance of the then-nascent Standard would have
been adversely affected if all those masses of code suddenly stopped
compiling. "For no good reason," at that. As the Rationale puts it:
    Existing code is important, existing implementations
    are not. A large body of C code exists of considerable
    commercial value. Every attempt has been made to ensure
    that the bulk of this code will be acceptable to any
    implementation conforming to the Standard. The C89
    Committee did not want to force most programmers to
    modify their C programs just to have them accepted by
    a conforming translator.
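(Nowadays the prototype-friendly spelling is easy enough; here
is a sketch of mine, using `size_t' throughout:

    #include <string.h>

    /* Assumes strlen(string) <= length, as the original did. */
    void right_justify(char *string, size_t length)
    {
        size_t datalen = strlen(string);

        memmove(string + length - datalen, string, datalen + 1);
        memset(string, ' ', length - datalen);
    }

But rewriting every such function, and every caller that counts
with `int', would have been exactly the burden the Committee
wanted to avoid.)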
Of course, a decade and a half has elapsed since prototypes
came on the scene, and the landscape may have changed -- Y2K
alone may have put quite a lot of older code to rest. A practice
that was virtually universal in Olden Days may now have become
rare enough to warrant the "suspicious" label today. (On the
other hand, it may have become more common: sqrt(2) is all right
now, but before prototypes it was an error, silently passing an
int where a double was expected.) If you have
the patience, try using gcc with the "-Wconversion" option to
compile some large body of existing code; the big question is
whether the resulting diagnostics will be useful or just noisy.
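For a taste of what -Wconversion flags, consider this fragment
(mine, contrived for the purpose):

    #include <string.h>

    void clear(char *buf, int len)
    {
        /* The prototype converts the int to size_t; a
           reasonably recent gcc with -Wconversion warns that
           the conversion may change the sign of the result. */
        memset(buf, 0, len);
    }

Multiply that one warning by every `int'-counted call in a large
body of code, and you'll have your answer.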