> Should I comment on the irony of your newsreader having converted
> that to ISO8859-1?
That's a feature, not a bug. Usenet is (except for the binaries
groups) a text medium: the content of a Usenet posting consists of
characters, not bytes. Of course, for transport it has to be encoded
into some sequence of bytes, but as long as the encoding/decoding
process is lossless, the NUA (news user agent) is free to employ any
encoding it likes.
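To illustrate the principle in Python (just the idea, not what any
particular NUA does internally): the same characters survive a round
trip through any charset that can represent them, so the transport
encoding is invisible to the reader.

    text = "naïve café"    # arbitrary example text
    for charset in ("iso-8859-1", "iso-8859-15", "utf-8"):
        # encode for transport, decode on receipt: a lossless round trip
        assert text.encode(charset).decode(charset) == text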
In my case I have configured the following outgoing charsets:
us-ascii,iso-8859-1,iso-8859-15,utf-8
The order is significant: since my posting contained characters which
could not be represented in us-ascii, but could be represented in
iso-8859-1, the latter was used. If I had also used a euro sign, it
would have used iso-8859-15; and if I had used typographical quotes,
it would have used utf-8.
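In Python terms the selection logic amounts to something like the
following sketch (the function and variable names are mine, not taken
from any actual newsreader):

    OUTGOING_CHARSETS = ["us-ascii", "iso-8859-1", "iso-8859-15", "utf-8"]

    def pick_charset(text, charsets=OUTGOING_CHARSETS):
        # Use the first configured charset which can represent every
        # character in the posting.
        for charset in charsets:
            try:
                text.encode(charset)
                return charset
            except UnicodeEncodeError:
                continue    # try the next, more capable charset
        raise ValueError("no configured charset can represent this text")

    print(pick_charset("plain ASCII"))            # us-ascii
    print(pick_charset("Grüße"))                  # iso-8859-1
    print(pick_charset("costs 5 \u20ac"))         # iso-8859-15 (euro)
    print(pick_charset("\u201cquoted\u201d"))     # utf-8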
> (This is why I'm slightly suspicious of the whole idea of non-ASCII
> source code. It's fine as long as it's just in a file, but tends to
> be much less likely to survive diffs/mailing-list posts/&c. without
> being mangled.)
That can usually be avoided by attaching the diffs or code instead of
including them in the main text part. It also makes them easier for
the receiver to handle.
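With Python's standard email package, for example, attaching a patch
looks like this (the file name and addresses are invented):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "charset fix"
    msg["From"] = "sender@example.org"
    msg["To"] = "list@example.org"
    msg.set_content("Patch attached to keep it out of the flowed text.")

    with open("fix.patch", encoding="utf-8") as f:
        # a separate text/x-diff part is not reflowed with the body
        msg.add_attachment(f.read(), subtype="x-diff",
                           filename="fix.patch")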
Also, non-ASCII characters aren't the only ones mangled by common
NUAs/MUAs: many fold long lines, some remove leading whitespace, some
change tabs into spaces, ...
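For instance, once an MUA has expanded tabs, there is no way to tell
which runs of spaces used to be tabs:

    line = "\tif done:\n\t\treturn x"
    expanded = line.expandtabs(4)
    # "    if done:\n        return x" -- indistinguishable from a
    # line that was written with spaces in the first place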
At least an unintended charset conversion can be easily undone with
iconv or similar tools - other changes which MUAs are likely to inflict
on a text are generally not reversible.
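The classic case is UTF-8 read as Latin-1; in Python the repair looks
like this (iconv performs the equivalent byte-level conversion on a
file):

    original = "Schöne Grüße"
    # simulate the damage: UTF-8 bytes decoded as if they were Latin-1
    mangled = original.encode("utf-8").decode("latin-1")
    # undo it by reversing the two steps
    restored = mangled.encode("latin-1").decode("utf-8")
    assert restored == original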
hp