Alan Curry wrote:
I assume characters are codes from 1 to 255. Maybe that is a bad assumption,
given the weird logic of assigning a sign to a character code.
So don't do that. If the values are relevant at all, you should be using
unsigned char explicitly, not plain char.
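Something like this shows the trap (a minimal sketch, assuming a platform
where plain char is a signed 8-bit type):

#include <stdio.h>

int main(void)
{
    char c = (char)152;     /* implementation-defined result; typically -104 */
    unsigned char u = 152;  /* always 152 */

    printf("plain char:    %d\n", c);   /* likely prints -104 */
    printf("unsigned char: %d\n", u);   /* prints 152 */
    return 0;
}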
There is a well-established confusion in C between characters (which are
encoded as integers) and integer VALUES.
Indeed, you can get confused if you rely too much on the fact that char is an
integer type.
Besides, when I convert it into a bigger type, I would like to get
152, and not 4294967192.
There's an easy answer for that: never convert plain char to a bigger type.
My rule on plain char is that it should only be used for real characters,
which are things that are read from and/or written to a text stream. If your
char variable is not really a character (i.e. it didn't come from a text
stream and it will never be written to one), it should be declared explicitly
as signed or unsigned.
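To make the 152-versus-4294967192 case concrete, a sketch assuming a signed
8-bit plain char and a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    char c = (char)152;                    /* holds -104 on a signed-char platform */

    unsigned int bad  = (unsigned int)c;   /* sign-extended, then wrapped: 4294967192 */
    unsigned int good = (unsigned char)c;  /* reinterpreted as a byte first: 152 */

    printf("%u\n%u\n", bad, good);
    return 0;
}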
The standard library does add some confusion with the ctype.h functions,
which work on characters as characters but require their argument to be
representable as an unsigned char (or to be EOF). Don't look in ctype.h for
examples of good design.
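The usual workaround looks like this (a sketch; count_alpha is just an
invented name):

#include <ctype.h>
#include <stddef.h>
#include <stdio.h>

/* Count the alphabetic characters in a string.  The cast to unsigned char
   keeps isalpha() defined even when a byte above 127 ends up negative in a
   plain char. */
static size_t count_alpha(const char *s)
{
    size_t n = 0;
    for (; *s != '\0'; s++)
        if (isalpha((unsigned char)*s))
            n++;
    return n;
}

int main(void)
{
    printf("%zu\n", count_alpha("hello, world"));  /* prints 10 */
    return 0;
}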
Since size_t is unsigned, converting to unsigned is a fairly common operation.
How does a character value (which is charset-dependent anyway) become a size?
I can't see how that makes sense.
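I can only guess at the code in question, but the usual way a character
value turns into an index or a size is a per-byte lookup table, something
like this (a sketch; the histogram is my invention, not the original code):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned long counts[UCHAR_MAX + 1] = {0};
    const char *text = "some sample text";

    /* Index by byte value; the cast keeps the index in 0..UCHAR_MAX even
       when plain char is signed and the byte is above 127. */
    for (const char *p = text; *p != '\0'; p++)
        counts[(unsigned char)*p]++;

    printf("'e' appears %lu times\n", counts[(unsigned char)'e']);
    return 0;
}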
warning: "comparison between signed and unsigned".
I see a lot of those when compiling other people's code, and sometimes my own
too. I usually fix it by changing whichever operand was signed to unsigned,
and that is usually an improvement.
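The typical case looks like this (a sketch; strlen() returns size_t, which
is what draws the warning under flags like -Wsign-compare):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "example";

    /* Comparing a signed int against strlen()'s size_t result triggers
       "comparison between signed and unsigned". */
    for (int i = 0; i < strlen(s); i++)
        putchar(s[i]);
    putchar('\n');

    /* Making the counter a size_t removes the warning, and the trap if
       the counter could ever have been negative. */
    for (size_t i = 0; i < strlen(s); i++)
        putchar(s[i]);
    putchar('\n');

    return 0;
}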
I've done that so many times that it makes me think C got the default integer
signedness wrong. If plain int, short, and long had all been unsigned, with
the "signed" keyword required to declare signed variables, there might be
fewer problems.