Re: Managing Google Groups headaches

Discussion in 'Python' started by rusi, Nov 28, 2013.

  1. rusi

    rusi Guest

    Here's a 1-click pure python solution.

    As I said, I don't know how to manage errors!

    1. Put it in a file, say cleangg.py, and make it executable
    2. Install it as the 'editor' for the "It's All Text!" Firefox addon
    3. Click the edit button and you should get a cleaned-up post

    ------------------------------
    #!/usr/bin/env python3

    # Collapse the runs of empty quoted lines ("> ") that Google Groups
    # inserts, using the pilcrow as a temporary sentinel.

    from sys import argv
    import re
    from re import sub

    def clean(s):
        s1 = sub("^> *\n> *$", "¶", s, flags=re.M)  # pair of empty quote lines -> sentinel
        s2 = sub("^> *\n", "", s1, flags=re.M)      # drop remaining empty quote lines
        s3 = sub("¶\n", ">\n", s2, flags=re.M)      # sentinel -> a single ">" line
        return s3

    def main():
        print("argv[1] %s" % argv[1])
        with open(argv[1]) as f:
            s = f.read()
        with open(argv[1], "w") as f:
            f.write(clean(s))

    main()
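
    For the curious, here is what the three substitutions do to a toy
    over-quoted fragment (invented input, not a real post); the snippet
    restates the same regexes so it stands alone:

```python
import re
from re import sub

# Same three passes as cleangg.py above, restated for a standalone demo.
def clean(s):
    s1 = sub("^> *\n> *$", "¶", s, flags=re.M)  # pair of empty quote lines -> sentinel
    s2 = sub("^> *\n", "", s1, flags=re.M)      # drop remaining empty quote lines
    s3 = sub("¶\n", ">\n", s2, flags=re.M)      # sentinel -> a single ">" line
    return s3

before = "line one\n> \n> \n> quoted\n"
print(repr(clean(before)))  # 'line one\n>\n> quoted\n'
```

    A run of empty "> " lines collapses to a single bare ">", and a lone
    empty "> " line is simply deleted.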
    rusi, Nov 28, 2013
    #1

  2. (comments from a lurker on python-list)

    - Google "groups" is a disaster. It's extremely poorly-run, and is in
    fact a disservice to Usenet -- which is alive and well, tyvm, and still used
    by many of the most senior and experienced people on the Internet. (While
    some newsgroups are languishing and some have almost no traffic, others
    are thriving. As it should be.) I could catalog the litany of egregious
    mistakes that Google has made, but what's the point? They're clearly
    uninterested in fixing them. Their only interest is in slapping the
    "Google" label on Usenet -- which is far more important in the evolution
    of the Internet than Google will ever be -- so that they can use it
    as a marketing vehicle. Worse, Google has completely failed to control
    outbound abuse from Google groups, which is why many consider it a
    best practice to simply drop all Usenet traffic originating there.

    - That said, there is value in bidirectionally gatewaying mailing lists
    with corresponding Usenet newsgroups. Usenet's propagation properties often
    make it the medium of choice for many people, particularly those in areas
    with slow, expensive, erratic, etc. connectivity. Conversely, delivery
    of Usenet traffic via email is a better solution for others. Software
    like Mailman facilitates this fairly well, even given the impedance
    mismatch between SMTP and NNTP.

    - Mailing lists/Usenet newsgroups remain, as they've been for a very
    long time, the solutions of choice for online discussions. Yes, I'm
    aware of web forums: I've used hundreds of them. They suck. They ALL
    suck, they just all suck differently. I could spend the next several
    thousand lines explaining why, but instead I'll just abbreviate: they
    don't handle threading, they don't let me use my editor of choice,
    they don't let me build my own archive that I can search MY way including
    when I'm offline, they are brittle and highly vulnerable to abuse
    and security breaches, they encourage worst practices in writing
    style (including top-posting and full-quoting), they translate poorly
    to other formats, they are difficult to archive, they're even more
    difficult to migrate (whereas Unix mbox format files from 30 years ago
    are still perfectly usable today), they aren't standardized, they
    aren't easily scalable, they're overly complex, they don't support
    proper quoting, they don't support proper attribution, they can't
    be easily forwarded, they...oh, it just goes on. My point being that
    there's a reason that the IETF and the W3C and NANOG and lots of other
    groups that could use anything they want use mailing lists: they work.

    - That said, they work *if configured properly*, which unfortunately
    these days includes a hefty dose of anti-abuse controls. This list
    (for the most part) isn't particularly targeted, but it is occasionally,
    and in the spirit of trying to help out, I can assist with that. (I think
    it's fair to say I have a little bit of email expertise.) If any of
    the list's owners are reading this and want help, please let me know.

    - They also work well *if used properly*, which means that participants
    should use proper email/news etiquette: line wrap, sane quoting style,
    reasonable editing of followups, preservation of threads, all that stuff.
    The more people do more of that, the smoother things work. On the other
    hand, if nobody does that, the result is impaired communication and
    quite often, a chorus of "mailing lists suck" even though the problem
    is not the mailing lists: it's the bad habits of the users on them.
    (And of course changing mediums won't fix that.)

    - To bring this back around to one of the starting points for this
    discussion: I think the current setup is functioning well, even given
    the sporadic stresses placed on it. I think it would be best to invest
    effort in maintaining/improving it as it stands (which is why I volunteered
    to do so, see above) rather than migrating to something else.

    ---rsk
    Rich Kulawiec, Dec 4, 2013
    #2

  3. On Dec 4, 2013, at 6:52 AM, Rich Kulawiec <> wrote:

    > Yes, I'm
    > aware of web forums: I've used hundreds of them. They suck. They ALL
    > suck, they just all suck differently. I could spend the next several
    > thousand lines explaining why, but instead I'll just abbreviate: they
    > don't handle threading, they don't let me use my editor of choice,
    > they don't let me build my own archive that I can search MY way including
    > when I'm offline, they are brittle and highly vulnerable to abuse
    > and security breaches, they encourage worst practices in writing
    > style (including top-posting and full-quoting), they translate poorly
    > to other formats, they are difficult to archive, they're even more
    > difficult to migrate (whereas Unix mbox format files from 30 years ago
    > are still perfectly usable today), they aren't standardized, they
    > aren't easily scalable, they're overly complex, they don't support
    > proper quoting, they don't support proper attribution, they can't
    > be easily forwarded, they...oh, it just goes on. My point being that
    > there's a reason that the IETF and the W3C and NANOG and lots of other
    > groups that could use anything they want use mailing lists: they work.


    One of the best rants I’ve ever read. Full mental harmonic resonance while I read this. Hope you don’t mind, but I think I’ll be plagiarizing your comments in the future. Maybe I’ll post it on a couple of the web forums I currently have the luxury of regularly hating.
    Travis Griggs, Dec 4, 2013
    #3
  4. Roy Smith

    Roy Smith Guest

    In article <>,
    Rich Kulawiec <> wrote:

    > Yes, I'm
    > aware of web forums: I've used hundreds of them. They suck. They ALL
    > suck, they just all suck differently. I could spend the next several
    > thousand lines explaining why, but instead I'll just abbreviate: they
    > don't handle threading, they don't let me use my editor of choice,
    > they don't let me build my own archive that I can search MY way including
    > when I'm offline, they are brittle and highly vulnerable to abuse
    > and security breaches, they encourage worst practices in writing
    > style (including top-posting and full-quoting), they translate poorly
    > to other formats, they are difficult to archive, they're even more
    > difficult to migrate (whereas Unix mbox format files from 30 years ago
    > are still perfectly usable today), they aren't standardized, they
    > aren't easily scalable, they're overly complex, they don't support
    > proper quoting, they don't support proper attribution, they can't
    > be easily forwarded, they...oh, it just goes on.


    The real problem with web forums is they conflate transport and
    presentation into a single opaque blob, and are pretty much universally
    designed to be a closed system. Mail and usenet were both engineered to
    make a sharp division between transport and presentation, which meant it
    was possible to evolve each at their own pace.

    Mostly that meant people could go off and develop new client
    applications which interoperated with the existing system. But, it also
    meant that transport layers could be switched out (as when NNTP
    gradually, but inexorably, replaced UUCP as the primary usenet transport
    layer).
    Roy Smith, Dec 5, 2013
    #4
  5. rusi

    rusi Guest

    On Thursday, December 5, 2013 6:28:54 AM UTC+5:30, Roy Smith wrote:
    > Rich Kulawiec wrote:


    > > Yes, I'm
    > > aware of web forums: I've used hundreds of them. They suck. They ALL
    > > suck, they just all suck differently. I could spend the next several
    > > thousand lines explaining why, but instead I'll just abbreviate: they
    > > don't handle threading, they don't let me use my editor of choice,
    > > they don't let me build my own archive that I can search MY way including
    > > when I'm offline, they are brittle and highly vulnerable to abuse
    > > and security breaches, they encourage worst practices in writing
    > > style (including top-posting and full-quoting), they translate poorly
    > > to other formats, they are difficult to archive, they're even more
    > > difficult to migrate (whereas Unix mbox format files from 30 years ago
    > > are still perfectly usable today), they aren't standardized, they
    > > aren't easily scalable, they're overly complex, they don't support
    > > proper quoting, they don't support proper attribution, they can't
    > > be easily forwarded, they...oh, it just goes on.


    > The real problem with web forums is they conflate transport and
    > presentation into a single opaque blob, and are pretty much universally
    > designed to be a closed system. Mail and usenet were both engineered to
    > make a sharp division between transport and presentation, which meant it
    > was possible to evolve each at their own pace.


    > Mostly that meant people could go off and develop new client
    > applications which interoperated with the existing system. But, it also
    > meant that transport layers could be switched out (as when NNTP
    > gradually, but inexorably, replaced UUCP as the primary usenet transport
    > layer).


    There is a deep assumption hovering round-about the above -- what I
    will call the 'Unix assumption(s)'. But before that, just a check on
    terminology. By 'presentation' you mean what people normally call
    'mail-clients': thunderbird, mutt etc. And by 'transport' you mean
    sendmail, exim, qmail etc etc -- what normally are called
    'mail-servers.' Right??

    Assuming this is the intended meaning of the terminology (yeah its
    clearer terminology than the usual and yeah Im also a 'Unix-guy'),
    here's the 'Unix-assumption':

    - human communication…
    (is not very different from)
    - machine communication…
    (can be done by)
    - text…
    (for which)
    - ASCII is fine…
    (which is just)
    - bytes…
    (inside/between byte-memory-organized)
    - von Neumann computers

    To the extent that these assumptions are invalid, the 'opaque-blob'
    may well be preferable.
    rusi, Dec 6, 2013
    #5
  6. Roy Smith

    Roy Smith Guest

    In article <>,
    rusi <> wrote:

    > On Thursday, December 5, 2013 6:28:54 AM UTC+5:30, Roy Smith wrote:


    > > The real problem with web forums is they conflate transport and
    > > presentation into a single opaque blob, and are pretty much universally
    > > designed to be a closed system. Mail and usenet were both engineered to
    > > make a sharp division between transport and presentation, which meant it
    > > was possible to evolve each at their own pace.

    >
    > > Mostly that meant people could go off and develop new client
    > > applications which interoperated with the existing system. But, it also
    > > meant that transport layers could be switched out (as when NNTP
    > > gradually, but inexorably, replaced UUCP as the primary usenet transport
    > > layer).

    >
    > There is a deep assumption hovering round-about the above -- what I
    > will call the 'Unix assumption(s)'.


    It has nothing to do with Unix. The separation of transport from
    presentation is just as valid on Windows, Mac, etc.

    > But before that, just a check on
    > terminology. By 'presentation' you mean what people normally call
    > 'mail-clients': thunderbird, mutt etc. And by 'transport' you mean
    > sendmail, exim, qmail etc etc -- what normally are called
    > 'mail-servers.' Right??


    Yes.

    > Assuming this is the intended meaning of the terminology (yeah its
    > clearer terminology than the usual and yeah Im also a 'Unix-guy'),
    > here's the 'Unix-assumption':
    >
    > - human communicationŠ
    > (is not very different from)
    > - machine communicationŠ
    > (can be done by)
    > - textŠ
    > (for which)
    > - ASCII is fineŠ
    > (which is just)
    > - bytesŠ
    > (inside/between byte-memory-organized)
    > - von Neumann computers
    >
    > To the extent that these assumptions are invalid, the 'opaque-blob'
    > may well be preferable.


    I think you're off on the wrong track here. This has nothing to do with
    plain text (ascii or otherwise). It has to do with divorcing how you
    store and transport messages (be they plain text, HTML, or whatever)
    from how a user interacts with them.

    Take something like Wikipedia (by which, I really mean, MediaWiki, which
    is the underlying software package). Most people think of Wikipedia as
    a web site. But, there's another layer below that which lets you get
    access to the contents of articles, navigate all the rich connections
    like category trees, and all sorts of metadata like edit histories.
    Which means, if I wanted to (and many examples of this exist), I can
    write my own client which presents the same information in different
    ways.
    Roy Smith, Dec 6, 2013
    #6
  7. rusi

    rusi Guest

    On Friday, December 6, 2013 1:06:30 PM UTC+5:30, Roy Smith wrote:
    > Rusi wrote:


    > > On Thursday, December 5, 2013 6:28:54 AM UTC+5:30, Roy Smith wrote:


    > > > The real problem with web forums is they conflate transport and
    > > > presentation into a single opaque blob, and are pretty much universally
    > > > designed to be a closed system. Mail and usenet were both engineered to
    > > > make a sharp division between transport and presentation, which meant it
    > > > was possible to evolve each at their own pace.
    > > > Mostly that meant people could go off and develop new client
    > > > applications which interoperated with the existing system. But, it also
    > > > meant that transport layers could be switched out (as when NNTP
    > > > gradually, but inexorably, replaced UUCP as the primary usenet transport
    > > > layer).

    > > There is a deep assumption hovering round-about the above -- what I
    > > will call the 'Unix assumption(s)'.


    > It has nothing to do with Unix. The separation of transport from
    > presentation is just as valid on Windows, Mac, etc.


    > > But before that, just a check on
    > > terminology. By 'presentation' you mean what people normally call
    > > 'mail-clients': thunderbird, mutt etc. And by 'transport' you mean
    > > sendmail, exim, qmail etc etc -- what normally are called
    > > 'mail-servers.' Right??


    > Yes.


    > > Assuming this is the intended meaning of the terminology (yeah its
    > > clearer terminology than the usual and yeah Im also a 'Unix-guy'),
    > > here's the 'Unix-assumption':
    > > - human communication�
    > > (is not very different from)
    > > - machine communication�
    > > (can be done by)
    > > - text�
    > > (for which)
    > > - ASCII is fine�
    > > (which is just)
    > > - bytes�
    > > (inside/between byte-memory-organized)
    > > - von Neumann computers
    > > To the extent that these assumptions are invalid, the 'opaque-blob'
    > > may well be preferable.


    > I think you're off on the wrong track here. This has nothing to do with
    > plain text (ascii or otherwise). It has to do with divorcing how you
    > store and transport messages (be they plain text, HTML, or whatever)
    > from how a user interacts with them.



    Evidently (and completely inadvertently) this exchange has just
    illustrated one of the inadmissable assumptions:

    "unicode as a medium is universal in the same way that ASCII used to be"

    I wrote a number of ellipsis characters ie codepoint 2026 as in:

    - human communication…
    (is not very different from)
    - machine communication…

    Somewhere between my sending and your quoting those ellipses became
    the replacement character FFFD

    > > - human communication�
    > > (is not very different from)
    > > - machine communication�


    Leaving aside whose fault this is (very likely buggy google groups),
    this mojibaking cannot happen if the assumption "All text is ASCII"
    were to uniformly hold.
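
    One plausible failure mode, sketched in Python (a guess that matches
    the symptom, not a diagnosis of what Google Groups actually did):

```python
# '…' goes out as the single CP1252 byte 0x85; some later hop then
# decodes the bytes as UTF-8, where a lone 0x85 is invalid and becomes
# the replacement character U+FFFD.
sent = "\u2026".encode("cp1252")
print(sent)                                    # b'\x85'
print(sent.decode("utf-8", errors="replace"))  # U+FFFD, displayed as �
```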

    Of course with unicode this can also be made to not happen, but that
    is fragile and error-prone. And that is because ASCII (not extended)
    is ONE thing, whereas unicode is a hopelessly motley, inconsistent
    variety.

    With unicode there are in-memory formats, transportation formats eg
    UTF-8, strange beasties like FSR (which then hopelessly and
    inveterately tickle our resident trolls!), multi-layer encodings (in
    html), BOMs and unnecessary/inconsistent BOMs (in microsoft-notepad).
    With ASCII, ASCII is ASCII; ie "ABC" is 65,66,67 whether it's in-core,
    in-file, in-pipe or whatever. Ok, there are a few wrinkles to this,
    eg the null-terminator in C-strings. I think that is the exception to
    the rule that in classic Unix, ASCII is completely inter-operable and
    therefore a universal data-structure for inter-process or inter-machine
    communication.

    It is this universal data structure that makes classic unix pipes and
    filters possible and easy (of which your separation of presentation
    and transportation is just one case).

    Give it up and the composability goes with it.

    Go up from the ASCII -> Unicode level to the plain-text -> hypertext
    (aka html) level and these composability problems hit with redoubled
    force.

    > Take something like Wikipedia (by which, I really mean, MediaWiki, which
    > is the underlying software package). Most people think of Wikipedia as
    > a web site. But, there's another layer below that which lets you get
    > access to the contents of articles, navigate all the rich connections
    > like category trees, and all sorts of metadata like edit histories.
    > Which means, if I wanted to (and many examples of this exist), I can
    > write my own client which presents the same information in different
    > ways.


    Not sure what your point is.
    Html is a universal format -- ok for presentation, bad for
    data-structuring.
    SQL databases (assuming thats the mediawiki backend) is another -- ok for
    data-structuring bad for presentation.

    Mediawiki mediates between the two formats.

    Beyond that I lost you... what are you trying to say??
    rusi, Dec 6, 2013
    #7
  8. On Sat, Dec 7, 2013 at 12:03 AM, rusi <> wrote:
    > SQL databases (assuming thats the mediawiki backend) is another -- ok for
    > data-structuring bad for presentation.


    No, SQL databases don't store structured text. MediaWiki just stores a
    single blob (not in the database sense of that word) of text.

    ChrisA
    Chris Angelico, Dec 6, 2013
    #8
  9. rusi

    rusi Guest

    On Friday, December 6, 2013 6:49:04 PM UTC+5:30, Chris Angelico wrote:
    > On Sat, Dec 7, 2013 at 12:03 AM, rusi wrote:
    > > SQL databases (assuming thats the mediawiki backend) is another -- ok for
    > > data-structuring bad for presentation.


    > No, SQL databases don't store structured text. MediaWiki just stores a
    > single blob (not in the database sense of that word) of text.


    I guess we are using 'structured' in different ways. All I am saying
    is that mediawiki which seems to present as html, actually stores its
    stuff as SQL -- nothing more or less structured than the schemas here:
    http://www.mediawiki.org/wiki/Manual:MediaWiki_architecture#Database_and_text_storage
    rusi, Dec 6, 2013
    #9
  10. On Sat, Dec 7, 2013 at 12:32 AM, rusi <> wrote:
    > I guess we are using 'structured' in different ways. All I am saying
    > is that mediawiki which seems to present as html, actually stores its
    > stuff as SQL -- nothing more or less structured than the schemas here:
    > http://www.mediawiki.org/wiki/Manual:MediaWiki_architecture#Database_and_text_storage


    Yeah, but the structure is all about the metadata. Ultimately, there's
    one single text field containing the entire content as you would see
    it in the page editor: wiki markup in straight text. MediaWiki uses an
    SQL database to store that lump of text, but ultimately the
    relationship is between wikitext and HTML, no SQL involvement.

    Wiki markup is reasonable for text structuring. (Not for generic data
    structuring, but it's decent for text.) Same with reStructuredText,
    used for PEPs. An SQL database is a good way to store mappings of
    "this key, this tuple of data" and retrieve them conveniently,
    including (and this is the bit that's more complicated in a straight
    Python dictionary) using any value out of the tuple as the key, and
    (and this is where a dict *really* can't hack it) storing/retrieving
    more data than fits in memory. The two are orthogonal. Your point is
    better supported by wikitext than by SQL, here, except that there
    aren't fifty other systems that parse and display wikitext. In fact,
    what you're suggesting is a good argument for deprecating HTML email
    in favour of RST email, and using docutils to render the result either
    as HTML (for webmail users) or as some other format. And I wouldn't be
    against that :) But good luck convincing the world that Microsoft
    Outlook is doing the wrong thing.
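
    A toy sqlite3 sketch of that arrangement -- the database indexes the
    metadata while the page body stays one opaque text field (table and
    column names invented for illustration, not MediaWiki's real schema):

```python
import sqlite3

# Hypothetical mini page store in the MediaWiki spirit: metadata columns
# plus one big wikitext blob per page, retrievable by any indexed column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (title TEXT PRIMARY KEY, rev INTEGER, wikitext TEXT)")
conn.execute("INSERT INTO page VALUES (?, ?, ?)",
             ("Python", 1, "'''Python''' is a programming language."))
row = conn.execute("SELECT wikitext FROM page WHERE title = ?", ("Python",)).fetchone()
print(row[0])  # '''Python''' is a programming language.
```

    The SQL layer never looks inside the wikitext; it just hands the lump
    back to whatever renders it.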

    ChrisA
    Chris Angelico, Dec 6, 2013
    #10
  11. rusi

    rusi Guest

    On Friday, December 6, 2013 7:18:19 PM UTC+5:30, Chris Angelico wrote:
    > On Sat, Dec 7, 2013 at 12:32 AM, rusi wrote:
    > > I guess we are using 'structured' in different ways. All I am saying
    > > is that mediawiki which seems to present as html, actually stores its
    > > stuff as SQL -- nothing more or less structured than the schemas here:
    > > http://www.mediawiki.org/wiki/Manual:MediaWiki_architecture#Database_and_text_storage


    > Yeah, but the structure is all about the metadata.


    Ok (I'd drop the 'all')

    > Ultimately, there's one single text field containing the entire content


    Right

    > as you would see it in the page editor: wiki markup in straight text.


    Aha! There you are! It's the 'page editor' here, not the html that
    'view source' (control-u) in a browser would show. And MediaWiki is
    the software that mediates.

    The usual direction (seen by readers of wikipedia) is that MediaWiki
    takes this text, along with the other unrelated (metadata?) material
    seen around it -- sidebar, tabs etc, css settings -- and munges it
    all into html.

    The other direction (seen by editors of wikipedia) is that you edit
    a page, and that page, its history etc will show the changes,
    reflecting the fact that the SQL content has changed.

    > MediaWiki uses an SQL database to store that lump of text, but
    > ultimately the relationship is between wikitext and HTML, no SQL
    > involvement.



    Dunno what you mean. Every time someone browses wikipedia, things are
    getting pulled out of the SQL and munged into the html (s)he sees.
    rusi, Dec 6, 2013
    #11
  12. On Sat, Dec 7, 2013 at 1:11 AM, rusi <> wrote:
    > Aha! There you are! Its 'page editor' here and not the html which
    > 'display source' (control-u) which a browser would show. And wikimedia
    > is the software that mediates.
    >
    > The usual direction (seen by users of wikipedia) is that wikimedia
    > takes this text, along with the other unrelated (metadata?) seen
    > around -- sidebar, tabs etc, css settings and munges it all into html
    >
    > The other direction (seen by editors of wikipedia) is that you edit a
    > page and that page and history etc will show the changes,
    > reflecting the fact that the SQL content has changed.


    MediaWiki is fundamentally very similar to a structure that I'm trying
    to deploy for a community web site that I host, approximately thus:

    * A git repository stores a bunch of RST files
    * A script auto-generates index files based on the presence of certain
    file names, and renders via rst2html
    * The HTML pages are served as static content

    MediaWiki is like this:

    * Each page has a history, represented by a series of state snapshots
    of wikitext
    * On display, the wikitext is converted to HTML and served.

    The main difference is that MediaWiki is optimized for rapid and
    constant editing, where what I'm pushing for is optimized for less
    common edits that might span multiple files. (MW has no facility for
    atomically changing multiple pages, and atomically reverting those
    changes, and so on. Each page stands alone.) They're still broadly
    doing the same thing: storing marked-up text and rendering HTML. The
    fact that one uses an SQL database and the other uses a git repository
    is actually quite insignificant - it's as significant as the choice of
    whether to store your data on a hard disk or an SSD. The system is no
    different.
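
    The index-generation step of that pipeline can be sketched with
    stdlib Python alone (file layout and index format are hypothetical;
    the real script presumably feeds the results to rst2html afterwards):

```python
import os

def build_index(directory):
    # Hypothetical index generator: list every .rst page (except the
    # index itself) as a bullet in index.rst, ready for rendering.
    pages = sorted(f for f in os.listdir(directory)
                   if f.endswith(".rst") and f != "index.rst")
    lines = ["Site index", "==========", ""] + ["- " + p[:-4] for p in pages]
    with open(os.path.join(directory, "index.rst"), "w") as out:
        out.write("\n".join(lines) + "\n")
    return pages
```

    Because the source of truth is plain RST files in git, the whole
    "render" stage is replaceable without touching the content.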

    >> MediaWiki uses an SQL database to store that lump of text, but
    >> ultimately the relationship is between wikitext and HTML, no SQL
    >> involvement.

    >
    > Dunno what you mean. Every time someone browses wikipedia, things are
    > getting pulled out of the SQL and munged into the html (s)he sees.


    Yes, but that's just mechanics. The fact that the PHP scripts to
    operate Wikipedia are being pulled off a file system doesn't mean that
    MediaWiki is an ext3-to-HTML renderer. It's a wikitext-to-HTML
    renderer.

    Anyway. As I said, your point is still mostly there, as long as you
    use wikitext rather than SQL.

    ChrisA
    Chris Angelico, Dec 6, 2013
    #12
  13. ASCII and Unicode [was Re: Managing Google Groups headaches]

    On Fri, 06 Dec 2013 05:03:57 -0800, rusi wrote:

    > Evidently (and completely inadvertently) this exchange has just
    > illustrated one of the inadmissable assumptions:
    >
    > "unicode as a medium is universal in the same way that ASCII used to be"


    Ironically, your post was not Unicode.

    Seriously. I am 100% serious.

    Your post was sent using a legacy encoding, Windows-1252, also known as
    CP-1252, which is most certainly *not* Unicode. Whatever software you
    used to send the message correctly flagged it with a charset header:

    Content-Type: text/plain; charset=windows-1252

    Alas, the software Roy Smith uses, MT-NewsWatcher, does not handle
    encodings correctly (or at all!): it screws up the encoding, then sends a
    reply with no charset line at all. This is one bug that cannot be blamed
    on Google Groups -- or on Unicode.
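
    That header is exactly what a MIME-aware client reads before decoding
    the body, as the stdlib email module shows (message text invented and
    abbreviated here):

```python
from email import message_from_string

# The charset parameter rides in the Content-Type header of the message.
raw = ("Content-Type: text/plain; charset=windows-1252\n"
       "\n"
       "some text\n")
msg = message_from_string(raw)
print(msg.get_content_charset())  # windows-1252
```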


    > I wrote a number of ellipsis characters ie codepoint 2026 as in:


    Actually you didn't. You wrote a number of ellipsis characters, hex byte
    \x85 (decimal 133), in the CP1252 charset. That happens to be mapped to
    code point U+2026 in Unicode, but the two are as distinct as ASCII and
    EBCDIC.
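
    The distinction is easy to see in Python: byte 0x85 means "ellipsis"
    only via the CP1252 mapping table, and the same character is an
    entirely different byte sequence elsewhere:

```python
print(b"\x85".decode("cp1252"))  # … -- U+2026, via the CP1252 table
print("\u2026".encode("utf-8"))  # b'\xe2\x80\xa6' -- same character, different bytes
```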


    > Somewhere between my sending and your quoting those ellipses became the
    > replacement character FFFD


    Yes, it appears that MT-NewsWatcher is *deeply, deeply* confused about
    encodings and character sets. It doesn't just assume things are ASCII,
    but makes a half-hearted attempt to be charset-aware, but badly. I can
    only imagine that it was written back in the Dark Ages where there were a
    lot of different charsets in use but no conventions for specifying which
    charset was in use. Or perhaps the author was smoking crack while coding.


    > Leaving aside whose fault this is (very likely buggy google groups),
    > this mojibaking cannot happen if the assumption "All text is ASCII" were
    > to uniformly hold.


    This is incorrect. People forget that ASCII has evolved since the first
    version of the standard in 1963. There have actually been five versions
    of the ASCII standard, plus one unpublished version. (And that's not
    including the things which are frequently called ASCII but aren't.)

    ASCII-1963 didn't even include lowercase letters. It was also missing some
    graphic characters like braces, and included at least two characters no
    longer used, the up-arrow and left-arrow. The control characters were
    also significantly different from today.

    ASCII-1965 was unpublished and unused. I don't know the details of what
    it changed.

    ASCII-1967 is a lot closer to the ASCII in use today. It made
    considerable changes to the control characters, moving, adding, removing,
    or renaming at least half a dozen control characters. It officially added
    lowercase letters, braces, and some others. It replaced the up-arrow
    character with the caret and the left-arrow with the underscore. It was
    ambiguous, allowing variations and substitutions, e.g.:

    - character 33 was permitted to be either the exclamation
    mark ! or the logical OR symbol |

    - consequently character 124 (vertical bar) was always
    displayed as a broken bar ¦, which explains why even today
    many keyboards show it that way

    - character 35 was permitted to be either the number sign # or
    the pound sign £

    - character 94 could be either a caret ^ or a logical NOT ¬

    Even the humble comma could be pressed into service as a cedilla.

    ASCII-1968 didn't change any characters, but allowed the use of LF on its
    own. Previously, you had to use either LF/CR or CR/LF as newline.

    ASCII-1977 removed the ambiguities from the 1967 standard.

    The most recent version is ASCII-1986 (also known as ANSI X3.4-1986).
    Unfortunately I haven't been able to find out what changes were made -- I
    presume they were minor, and didn't affect the character set.

    So as you can see, even with actual ASCII, you can have mojibake. It's
    just not normally called that. But if you are given an arbitrary ASCII
    file of unknown age, containing code 94, how can you be sure it was
    intended as a caret rather than a logical NOT symbol? You can't.

    Then there are at least 30 official variations of ASCII, strictly
    speaking part of ISO-646. These 7-bit codes were commonly called "ASCII"
    by their users, despite the differences, e.g. replacing the dollar sign $
    with the international currency sign ¤, or replacing the left brace
    { with the letter s with caron š.

    One consequence of this is that the MIME charset for ASCII text is called
    "US-ASCII", despite the redundancy, because many people expect "ASCII"
    alone to mean whatever national variation they are used to.

    But it gets worse: there are proprietary variations on ASCII which are
    commonly called "ASCII" but aren't, including dozens of 8-bit so-called
    "extended ASCII" character sets, which is where the problems *really*
    pile up. Invariably back in the 1980s and early 1990s people used to call
    these "ASCII" no matter that they used 8-bits and contained anything up
    to 256 characters.

    Just because somebody calls something "ASCII", doesn't make it so; even
    if it is ASCII, doesn't mean you know which version of ASCII; even if you
    know which version, doesn't mean you know how to interpret certain codes.
    It simply is *wrong* to think that "good ol' plain ASCII text" is
    unambiguous and devoid of problems.


    > With unicode there are in-memory formats, transportation formats eg
    > UTF-8,


    And the same applies to ASCII.

    ASCII is a *seven-bit code*. It will work fine on computers where the
    word-size is seven bits. If the word-size is eight bits, or more, you
    have to pad the ASCII code. How do you do that? Pad the most-significant
    end or the least significant end? That's a choice there. How do you pad
    it, with a zero or a one? That's another choice. If your word-size is
    more than eight bits, you might even pad *both* ends.

    In C, a char is defined as the smallest addressable unit of the machine
    that can contain the basic character set, not necessarily eight bits.
    Implementations of C and C++ sometimes reserve 8, 9, 16, 32, or 36 bits
    as a "byte" and/or char. Your in-memory representation of ASCII "a" could
    easily end up as bits 001100001 or 0000000001100001.
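    A quick sketch in Python of those two bit patterns (Python makes the
    word-size choice for you, so this only illustrates the padding, zero-filled
    at the most-significant end as one of the possible choices):

```python
# Pad the 7-bit ASCII code for "a" (1100001) into wider machine words,
# zero-filling at the most-significant end.
a = ord("a")                 # 97

print(format(a, "09b"))      # 001100001         (9-bit word)
print(format(a, "016b"))     # 0000000001100001  (16-bit word)
```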

    And then there is the question of whether ASCII characters should be Big
    Endian or Little Endian. I'm referring here to bit endianness, rather
    than bytes: should character 'a' be represented as bits 1100001 (most
    significant bit to the left) or 1000011 (least significant bit to the
    left)? This may be relevant with certain networking protocols. Not all
    networking protocols are big-endian, nor are all processors. The Ada
    programming language even supports both bit orders.

    When transmitting ASCII characters, the networking protocol could include
    various start and stop bits and parity codes. A single 7-bit ASCII
    character might be anything up to 12 bits in length on the wire. It is
    simply naive to imagine that the transmission of ASCII codes is the same
    as the in-memory or on-disk storage of ASCII.

    You're lucky to be active in a time when most common processors have
    standardized on a single bit-order, and when most (but not all) network
    protocols have done the same. But that doesn't mean that these issues
    don't exist for ASCII. If you get a message that purports to be ASCII
    text but looks like this:

    "\tS\x1b\x1b{\x02u{'\x1b\x13B"

    you should suspect strongly that it is "Hello World!" which has been
    accidentally bit-reversed by some rogue piece of hardware.
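    For the curious, that bit-reversal is easy to sketch in Python (the helper
    name is mine, not anything standard):

```python
# Reverse the 7 data bits of each character, mimicking hardware that
# shifts bits out in the wrong order.
def bit_reverse_7(text):
    return "".join(chr(int(format(ord(c), "07b")[::-1], 2)) for c in text)

print(repr(bit_reverse_7("Hello World!")))
# "\tS\x1b\x1b{\x02u{'\x1b\x13B"
```

    Reversing the bits a second time recovers the original text, since the
    operation is its own inverse.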


    --
    Steven
    Steven D'Aprano, Dec 6, 2013
    #13
  14. Gene Heskett

    Gene Heskett Guest

    Re: ASCII and Unicode [was Re: Managing Google Groups headaches]

    On Friday 06 December 2013 14:30:06 Steven D'Aprano did opine:

    > On Fri, 06 Dec 2013 05:03:57 -0800, rusi wrote:
    > > Evidently (and completely inadvertently) this exchange has just
    > > illustrated one of the inadmissable assumptions:
    > >
    > > "unicode as a medium is universal in the same way that ASCII used to
    > > be"

    >
    > Ironically, your post was not Unicode.
    >
    > Seriously. I am 100% serious.
    >
    > [snip]


    You can lay a lot of the ASCII ambiguity on DEC and their VT series
    terminals; anything newer than a VT100 made liberal use of the msbit in a
    character. Having written an emulator for the VT220, I can testify that
    really getting it right was a right pain in the ass. And then I added
    zmodem triggers and detections.

    Cheers, Gene
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author)
    Genes Web page <http://geneslinuxbox.net:6309/gene>

    Mother Earth is not flat!
    A pen in the hand of this president is far more
    dangerous than 200 million guns in the hands of
    law-abiding citizens.
    Gene Heskett, Dec 6, 2013
    #14
  15. Roy Smith

    Roy Smith Guest

    Re: ASCII and Unicode [was Re: Managing Google Groups headaches]

    Steven D'Aprano <steve+comp.lang.python <at> pearwood.info> writes:

    > Yes, it appears that MT-NewsWatcher is *deeply, deeply* confused about
    > encodings and character sets. It doesn't just assume things are ASCII,
    > but makes a half-hearted attempt to be charset-aware, but badly. I can
    > only imagine that it was written back in the Dark Ages


    Indeed. The basic codebase probably goes back 20 years. I'm posting this
    from gmane, just so people don't think I'm a total luddite.

    > When transmitting ASCII characters, the networking protocol could include
    > various start and stop bits and parity codes. A single 7-bit ASCII
    > character might be anything up to 12 bits in length on the wire.


    Not to mention that some really old hardware used 1.5 stop bits!
    Roy Smith, Dec 6, 2013
    #15
  16. rusi wrote:
    > On Friday, December 6, 2013 1:06:30 PM UTC+5:30, Roy Smith wrote:
    >
    >>Which means, if I wanted to (and many examples of this exist), I can
    >>write my own client which presents the same information in different
    >>ways.

    >
    > Not sure whats your point.


    The point is the existence of an alternative interface that's
    designed for use by other programs rather than humans.

    This is what web forums are missing. If it existed, one could
    easily create an alternative client with a newsreader-like
    interface. Without it, such a client would have to be a
    monstrosity that worked by screen-scraping the html.

    It's not about the format of the messages themselves -- that
    could be text, or html, or reST, or bbcode or whatever. It's
    about the *framing* of the messages, and being able to
    query them by their metadata.

    --
    Greg
    Gregory Ewing, Dec 6, 2013
    #16
  17. Re: ASCII and Unicode [was Re: Managing Google Groups headaches]

    On Sat, Dec 7, 2013 at 6:00 AM, Steven D'Aprano
    <> wrote:
    > - character 33 was permitted to be either the exclamation
    > mark ! or the logical OR symbol |
    >
    > - consequently character 124 (vertical bar) was always
    > displayed as a broken bar ¦, which explains why even today
    > many keyboards show it that way
    >
    > - character 35 was permitted to be either the number sign # or
    > the pound sign £
    >
    > - character 94 could be either a caret ^ or a logical NOT ¬


    Yeah, good fun stuff. I first met several of these ambiguities in the
    OS/2 REXX documentation, which detailed the language's operators by
    specifying their byte values as well as their characters - for
    instance, this quote from the docs (yeah, I still have it all here):

    """
    Note: Depending upon your Personal System keyboard and the code page
    you are using, you may not have the solid vertical bar to select. For
    this reason, REXX also recognizes the use of the split vertical bar as
    a logical OR symbol. Some keyboards may have both characters. If so,
    they are not interchangeable; only the character that is equal to the
    ASCII value of 124 works as the logical OR. This type of mismatch can
    also cause the character on your screen to be different from the
    character on your keyboard.
    """
    (The front material on the docs says "(C) Copyright IBM Corp. 1987,
    1994. All Rights Reserved.")

    It says "ASCII value" where on this list we would be more likely to
    call it "byte value", and I'd prefer to say "represented by" rather
    than "equal to", but nonetheless, this is still clearly distinguishing
    characters and bytes. The language spec is on characters, but
    ultimately the interpreter is going to be looking at bytes, so when
    there's a problem, it's byte 124 that's the one defined as logical OR.
    Oh, and note the copyright date. The byte/char distinction isn't new.
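    The distinction is easy to see from Python, where bytes and characters are
    separate types (a small illustration of the point, not anything from the
    REXX docs):

```python
# Byte 124 is the solid vertical bar; the broken bar is a different
# character entirely (U+00A6), whatever the key cap happens to show.
assert b"|"[0] == 124

print(ord("|"))        # 124 (U+007C VERTICAL LINE)
print(ord("\u00a6"))   # 166 (U+00A6 BROKEN BAR)
```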

    ChrisA
    Chris Angelico, Dec 6, 2013
    #17
  18. On 12/6/13 8:03 AM, rusi wrote:
    >> I think you're off on the wrong track here. This has nothing to do with
    >> >plain text (ascii or otherwise). It has to do with divorcing how you
    >> >store and transport messages (be they plain text, HTML, or whatever)
    >> >from how a user interacts with them.

    >
    > Evidently (and completely inadvertently) this exchange has just
    > illustrated one of the inadmissable assumptions:
    >
    > "unicode as a medium is universal in the same way that ASCII used to be"
    >
    > I wrote a number of ellipsis characters ie codepoint 2026 as in:
    >
    > - human communication…
    > (is not very different from)
    > - machine communication…
    >
    > Somewhere between my sending and your quoting those ellipses became
    > the replacement character FFFD
    >
    >>> > > - human communication�
    >>> > >(is not very different from)
    >>> > > - machine communication�

    > Leaving aside whose fault this is (very likely buggy google groups),
    > this mojibaking cannot happen if the assumption "All text is ASCII"
    > were to uniformly hold.
    >
    > Of course with unicode also this can be made to not happen, but that
    > is fragile and error-prone. And that is because ASCII (not extended)
    > is ONE thing in a way that unicode is hopelessly a motley inconsistent
    > variety.


    You seem to be suggesting that we should stick to ASCII. There are of
    course languages that need more than just the Latin alphabet. How would
    you suggest we support them? Or maybe I don't understand?

    --Ned.
    Ned Batchelder, Dec 7, 2013
    #18
  19. rusi

    rusi Guest

    Re: ASCII and Unicode [was Re: Managing Google Groups headaches]

    On Saturday, December 7, 2013 12:30:18 AM UTC+5:30, Steven D'Aprano wrote:
    > On Fri, 06 Dec 2013 05:03:57 -0800, rusi wrote:


    > > Evidently (and completely inadvertently) this exchange has just
    > > illustrated one of the inadmissable assumptions:
    > > "unicode as a medium is universal in the same way that ASCII used to be"


    > Ironically, your post was not Unicode.
    >
    > [snip]

    Oof! That's a lot of data to digest! Thanks anyway.

    There's one thing I want to get into:

    > Your post was sent using a legacy encoding, Windows-1252, also known as
    > CP-1252, which is most certainly *not* Unicode. Whatever software you
    > used to send the message correctly flagged it with a charset header:


    What the hell! I am using Firefox 25.0 on Debian testing and posting via GG.

    $ locale
    shows me:
    LANG=en_US.UTF-8

    and a bunch of other things all en_US.UTF-8.

    For the most part when I point FF at any site and go to view ->
    character-encoding, it says Unicode (UTF-8).

    However when I go to anything in the python archives:
    https://mail.python.org/pipermail/python-list/2013-December/

    FF shows it as Western (Windows-1252)

    That seems to suggest that something is not right with the python
    mailing list config. No??
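    (As an aside, the CP-1252 vs UTF-8 ellipsis mix-up discussed in this thread
    is easy to reproduce in Python; this is just an illustration, not a
    diagnosis of the list config:)

```python
# One code point, two very different byte sequences:
ell = "\u2026"                      # HORIZONTAL ELLIPSIS
print(ell.encode("cp1252"))         # b'\x85'         - one byte in Windows-1252
print(ell.encode("utf-8"))          # b'\xe2\x80\xa6' - three bytes in UTF-8

# A lone CP-1252 ellipsis byte is invalid UTF-8; decoding it with
# errors="replace" yields U+FFFD, the replacement character seen above.
print(b"\x85".decode("utf-8", errors="replace"))   # '\ufffd'
```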
    rusi, Dec 7, 2013
    #19
  20. Re: ASCII and Unicode [was Re: Managing Google Groups headaches]

    On Sat, Dec 7, 2013 at 1:33 PM, rusi <> wrote:
    > That seems to suggest that something is not right with the python
    > mailing list config. No??


    If in doubt, blame someone else, eh?

    I'd first check what your browser's actually sending. Firebug will
    help there. See if your form fill-out is encoded as UTF-8 or CP-1252.
    That's the first step.

    ChrisA
    Chris Angelico, Dec 7, 2013
    #20