Mike Wahler
Malcolm said:
> size_t is correct, int is traditional and what any C programmer would use.
Not what I'd use, nor do I think would most C programmers.
> Why int? Because int is the natural integer type for the program to use.
Why? IMO context determines what the 'natural' type to use is.
In the case of object sizes or array subscripts, the 'natural'
(and more important, guaranteed to work) type is 'size_t'.
> If a string is so long that an int won't hold its length, then it's
> likely that the program is hopelessly broken anyway.
I fail to see any logic in that assertion.
Is a program that reads e.g. 40,000 bytes from
a file into an array of characters 'broken'?
> Note I said likely.
What makes it likely? Do you for some reason find it
unlikely for an array to have more than 32767 elements?
Why?
> You can of course construct an artificial program which
> uses a single massive string
I suppose 'massive' is a subjective term. But I have
written real production code which used such 'massive'
arrays.
> and laugh when the int i solution fails on it.
Undefined behavior is no laughing matter in real code.
Why not always use 'size_t' and *know* it will *always*
work, rather than using 'int' and having it *possibly*
or *probably* work? There are already too many uncertainties
in life. I'll take the guarantee, thank you very much.
-Mike