Plain char is appropriate for textual data.
The text I deal with from day to day is all ISO-8859-1.
Should I consider such text as being 'textual data'?
He means all the characters in the execution character set, which are
all guaranteed to have a value in the range [1, CHAR_MAX].
That limitation applies only to the BASIC execution character set
(6.2.5p3). The extended execution character set is not covered by that
requirement.
There are no valid and invalid characters; however, the value returned
by getchar(), for example, can't be negative (other than EOF).
7.19.2p2 says that, under certain strict conditions, if you write data
to a file and read it back again, the result must compare equal to the
original. Given the way the standard defines fputc() and fgetc(), that
is only possible if

    uc == (unsigned char)(int)uc

for every value uc of type unsigned char.
An implementation is allowed to choose UCHAR_MAX > INT_MAX (this causes
problems for users of character-oriented I/O functions, but none that
can't be dealt with by use of feof() and ferror() - it violates no
requirements imposed by the standard). In this case, the conversion of
unsigned char values > INT_MAX to int is implementation-defined.
However, the requirements of 7.19.2p2 impose significant constraints on
that conversion. Specifically, (int)uc must produce a different value
for every unsigned char value "uc". Since all unsigned char values from
0 to INT_MAX must be converted to the same values in 'int', any unsigned
char value greater than INT_MAX must be converted to a negative value.