William said:
Has it ever been proposed or posited within any C committee to define or
discuss (in a standards document) the transformation of Unicode text I/O
according to a Unicode Normalization Form (assuming a locale which employs
a Unicode representation)? Is such a capability implicit?
This is a complicated algorithm. In a demo normalizer I wrote, my
implementation took 107K of object code. It could likely be optimized,
but I would be surprised if you could get it under 30K or so. (That's
mostly because you need to encode the UnicodeData tables, of course.)
Besides, C does not support Unicode. So there is no reason to expect
such functionality from the language.
In a bizarre twist, however, someone has actually proposed adding some
kind of UTF codecs to the next C standard. I think, of course, that this
is probably meant as a joke to see whether the ANSI C people are utterly
and completely incompetent -- since their actions to date do not make
that clear either way.
UTF encoders are, of course, trivial pieces of code that anyone can write
in a few hours at most (see the sketch below). But the key point is that
they don't achieve any useful functionality if you don't have other
Unicode support, such as a normalizer, as you suggest above. And a
Unicode normalizer is expensive to implement, as I mentioned above. So
either they go whole hog and do a complete Unicode implementation (some
implementations, of course, map wchar_t to Unicode -- so this is
plausible), or they should do nothing. There is no point in half measures
that can't really be used in practice.
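As a rough illustration of the "trivial" half of the job, here is a
minimal UTF-8 encoder sketch. The function name is my own invention, and
it covers only the encoding step -- none of the table-driven work
discussed above.

    #include <stddef.h>

    /* Encode the scalar value cp as UTF-8 into buf.  Returns the
     * number of bytes written, or 0 if cp is not a valid Unicode
     * scalar value (surrogates and values above U+10FFFF). */
    static size_t utf8_encode(unsigned long cp, unsigned char buf[4])
    {
        if (cp < 0x80) {
            buf[0] = (unsigned char)cp;
            return 1;
        } else if (cp < 0x800) {
            buf[0] = (unsigned char)(0xC0 | (cp >> 6));
            buf[1] = (unsigned char)(0x80 | (cp & 0x3F));
            return 2;
        } else if (cp < 0x10000) {
            if (cp >= 0xD800 && cp <= 0xDFFF)
                return 0;  /* surrogates are not scalar values */
            buf[0] = (unsigned char)(0xE0 | (cp >> 12));
            buf[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
            buf[2] = (unsigned char)(0x80 | (cp & 0x3F));
            return 3;
        } else if (cp <= 0x10FFFF) {
            buf[0] = (unsigned char)(0xF0 | (cp >> 18));
            buf[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
            buf[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
            buf[3] = (unsigned char)(0x80 | (cp & 0x3F));
            return 4;
        }
        return 0;  /* out of Unicode range */
    }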
This "depth" of reasoning may very well be beyond the capabilities of
the ANSI C committee, so its not clear to me at all what they will end
up doing.