catch UnicodeDecodeError

Discussion in 'Python' started by jaroslav.dobrek@gmail.com, Jul 25, 2012.

  1. Guest

    Hello,

    very often I have the following problem: I write a program that processes many files which it assumes to be encoded in utf-8. Then, some day, there is a non-utf-8 character in one of several hundred or thousand (new) files. The program exits with an error message like this:

    UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 60: invalid continuation byte

    I usually solve the problem by moving files around and by recoding them.

    What I really want to do is use something like

    try:
        # open file, read line, or do something else, I don't care
    except UnicodeDecodeError:
        sys.exit("Found a bad char in file " + file + " line " + str(line_number))

    Yet, no matter where I put this try-except, it doesn't work.

    How should I use try-except with UnicodeDecodeError?

    Jaroslav
     
    , Jul 25, 2012
    #1

  2. Andrew Berg Guest

    On 7/25/2012 6:05 AM, wrote:
    > What I really want to do is use something like
    >
    > try:
    >     # open file, read line, or do something else, I don't care
    > except UnicodeDecodeError:
    >     sys.exit("Found a bad char in file " + file + " line " + str(line_number))
    >
    > Yet, no matter where I put this try-except, it doesn't work.
    >
    > How should I use try-except with UnicodeDecodeError?

    The same way you handle any other exception. The traceback will tell you
    the exact line that raised the exception. It helps us help you if you
    include the full traceback and give more detail than "it doesn't work".

    --
    CPython 3.3.0b1 | Windows NT 6.1.7601.17803
     
    Andrew Berg, Jul 25, 2012
    #2

  3. Hi Jaroslav,

    you can catch a UnicodeDecodeError just like any other exception. Can
    you provide a full example program that shows your problem?

    This works fine on my system:


    import sys
    open('tmp', 'wb').write(b'\xff\xff')
    try:
        buf = open('tmp', 'rb').read()
        buf.decode('utf-8')
    except UnicodeDecodeError as ude:
        sys.exit("Found a bad char in file " + "tmp")


    Note that you cannot possibly determine the line number if you don't
    know what encoding the file is in (and what EOL it uses).

    What you can do is count the number of bytes with the value 10 before
    ude.start, like this:

    lineGuess = buf[:ude.start].count(b'\n') + 1
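    The newline-counting trick can be seen end-to-end in a short, self-contained sketch (editor's addition; the sample bytes are invented for illustration):

```python
# Sketch of guessing a line number from UnicodeDecodeError.start:
# count b'\n' bytes before the offending byte offset.
data = b'first line\nsecond li\xffne\n'   # 0xff is not valid UTF-8

try:
    data.decode('utf-8')
except UnicodeDecodeError as ude:
    # ude.start is the byte offset of the first undecodable byte.
    line_guess = data[:ude.start].count(b'\n') + 1
    print("bad byte around line", line_guess)  # bad byte around line 2
```

    As noted above, this is only a guess: it assumes the (unknown) real encoding represents line endings with the byte 0x0a.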

    - Philipp

    On 07/25/2012 01:05 PM, wrote:
    > it doesn't work



     
    Philipp Hagemeister, Jul 25, 2012
    #3
  4. Guest

    On Wednesday, July 25, 2012 1:35:09 PM UTC+2, Philipp Hagemeister wrote:
    > Hi Jaroslav,
    >
    > you can catch a UnicodeDecodeError just like any other exception. Can
    > you provide a full example program that shows your problem?
    >
    > This works fine on my system:
    >
    >
    > import sys
    > open('tmp', 'wb').write(b'\xff\xff')
    > try:
    >     buf = open('tmp', 'rb').read()
    >     buf.decode('utf-8')
    > except UnicodeDecodeError as ude:
    >     sys.exit("Found a bad char in file " + "tmp")
    >


    Thank you. I got it. What I need to do is explicitly decode text.

    But I think trial and error with moving files around will in most cases be faster. Usually, such a problem occurs with some (usually complex) program that I wrote quite a long time ago. I don't like editing old and complex programs that work under all normal circumstances.

    What I am missing (especially for Python3) is something like:

    try:
        for line in sys.stdin:
    except UnicodeDecodeError:
        sys.exit("Encoding problem in line " + str(line_number))

    I got the point that there is no such thing as encoding-independent lines. But if no line ending can be found, then the file simply has one single line.
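    For the record, the wished-for behaviour can be approximated by iterating over the raw byte stream and decoding each line explicitly; `utf8_lines` below is a hypothetical helper name, not anything from the standard library (editor's sketch):

```python
def utf8_lines(byte_lines):
    """Decode an iterable of byte lines as UTF-8.

    Raises ValueError naming the 1-based line number of the first
    line that fails to decode. (Hypothetical helper, editor's sketch.)
    """
    for number, raw in enumerate(byte_lines, 1):
        try:
            yield raw.decode('utf-8')
        except UnicodeDecodeError:
            raise ValueError("Encoding problem in line %d" % number)

# In Python 3, stdin's underlying bytes are reachable as sys.stdin.buffer:
#     for line in utf8_lines(sys.stdin.buffer):
#         ...
```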
     
    , Jul 25, 2012
    #4
  6. Dave Angel Guest

    On 07/25/2012 08:09 AM, wrote:
    > On Wednesday, July 25, 2012 1:35:09 PM UTC+2, Philipp Hagemeister wrote:
    >> Hi Jaroslav,
    >>
    >> you can catch a UnicodeDecodeError just like any other exception. Can
    >> you provide a full example program that shows your problem?
    >>
    >> This works fine on my system:
    >>
    >>
    >> import sys
    >> open('tmp', 'wb').write(b'\xff\xff')
    >> try:
    >>     buf = open('tmp', 'rb').read()
    >>     buf.decode('utf-8')
    >> except UnicodeDecodeError as ude:
    >>     sys.exit("Found a bad char in file " + "tmp")
    >>

    > Thank you. I got it. What I need to do is explicitly decode text.
    >
    > But I think trial and error with moving files around will in most cases be faster. Usually, such a problem occurs with some (usually complex) program that I wrote quite a long time ago. I don't like editing old and complex programs that work under all normal circumstances.
    >
    > What I am missing (especially for Python3) is something like:
    >
    > try:
    >     for line in sys.stdin:
    > except UnicodeDecodeError:
    >     sys.exit("Encoding problem in line " + str(line_number))
    >
    > I got the point that there is no such thing as encoding-independent lines. But if no line ending can be found, then the file simply has one single line.


    I can't understand your question. If the problem is that the system
    doesn't magically produce a variable called line_number, then generate
    it yourself, by counting in the loop.

    Don't forget that you can tell the unicode decoder to ignore bad
    characters, or to convert them to a specified placeholder.
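    This refers to the `errors` argument of `bytes.decode` (and of `open` in Python 3); for example:

```python
raw = b'caf\xe9'  # Latin-1 bytes for "café", invalid as UTF-8

# 'replace' substitutes U+FFFD for undecodable bytes; 'ignore' drops them.
print(raw.decode('utf-8', errors='replace'))  # caf followed by U+FFFD
print(raw.decode('utf-8', errors='ignore'))   # caf
```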



    --

    DaveA
     
    Dave Angel, Jul 25, 2012
    #6
  7. On Jul 25, 8:50 pm, Dave Angel <> wrote:
    > On 07/25/2012 08:09 AM, wrote:
    > > On Wednesday, July 25, 2012 1:35:09 PM UTC+2, Philipp Hagemeister wrote:
    > >> Hi Jaroslav,

    >
    > >> you can catch a UnicodeDecodeError just like any other exception. Can
    > >> you provide a full example program that shows your problem?

    >
    > >> This works fine on my system:

    >
    > >> import sys
    > >> open('tmp', 'wb').write(b'\xff\xff')
    > >> try:
    > >>     buf = open('tmp', 'rb').read()
    > >>     buf.decode('utf-8')
    > >> except UnicodeDecodeError as ude:
    > >>     sys.exit("Found a bad char in file " + "tmp")

    >
    > > Thank you. I got it. What I need to do is explicitly decode text.

    >
    > > But I think trial and error with moving files around will in most cases be faster. Usually, such a problem occurs with some (usually complex) program that I wrote quite a long time ago. I don't like editing old and complex programs that work under all normal circumstances.

    >
    > > What I am missing (especially for Python3) is something like:

    >
    > > try:
    > >     for line in sys.stdin:
    > > except UnicodeDecodeError:
    > >     sys.exit("Encoding problem in line " + str(line_number))

    >
    > > I got the point that there is no such thing as encoding-independent lines. But if no line ending can be found, then the file simply has one single line.

    >
    > i can't understand your question.  if the problem is that the system
    > doesn't magically produce a variable called line_number, then generate
    > it yourself, by counting
    > in the loop.



    That was just a very incomplete and general example.

    My problem is solved. What I need to do is explicitly decode text when
    reading it. Then I can catch exceptions. I might do this in future
    programs.

    What I dislike about this solution is that it complicates most
    programs unnecessarily. In programs that open, read and process many
    files I don't want to explicitly decode and encode characters all the
    time. I just want to write:

    for line in f:

    or something like that. Yet, writing this means to *implicitly* decode
    text. And, because the decoding is implicit, you cannot say

    try:
        for line in f: # here text is decoded implicitly
            do_something()
    except UnicodeDecodeError():
        do_something_different()

    This isn't possible for syntactic reasons.

    The problem is that the vast majority of the thousands of files that I
    process are correctly encoded. But then, suddenly, there is a bad
    character in a new file. (This is so because most files today are
    generated by people who don't know that there is such a thing as
    encodings.) And then I need to rewrite my very complex program just
    because of one single character in one single file.
     
    Jaroslav Dobrek, Jul 26, 2012
    #7
  8. Jaroslav Dobrek, 26.07.2012 09:46:
    > My problem is solved. What I need to do is explicitly decode text when
    > reading it. Then I can catch exceptions. I might do this in future
    > programs.


    Yes, that's the standard procedure. Decode on the way in, encode on the way
    out, use Unicode everywhere in between.


    > I dislike about this solution that it complicates most programs
    > unnecessarily. In programs that open, read and process many files I
    > don't want to explicitly decode and encode characters all the time. I
    > just want to write:
    >
    > for line in f:


    And the cool thing is: you can! :)

    In Python 2.6 and later, the new Py3 open() function is a bit more hidden,
    but it's still available:

    from io import open

    filename = "somefile.txt"
    try:
        with open(filename, encoding="utf-8") as f:
            for line in f:
                process_line(line)  # actually, I'd use "process_file(f)"
    except IOError, e:
        print("Reading file %s failed: %s" % (filename, e))
    except UnicodeDecodeError, e:
        print("Some error occurred decoding file %s: %s" % (filename, e))


    Ok, maybe with a better way to handle the errors than "print" ...

    For older Python versions, you'd use "codecs.open()" instead. That's a bit
    messy, but only because it was finally cleaned up for Python 3.
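    For illustration, the `codecs.open` variant mentioned here might look like this (editor's sketch; the temp file is created only so the example is self-contained):

```python
import codecs
import os
import tempfile

# Write a small UTF-8 file to have something to read.
fd, path = tempfile.mkstemp()
os.write(fd, u'h\xe4llo\n'.encode('utf-8'))
os.close(fd)

# codecs.open decodes transparently while iterating, so the loop
# yields unicode strings and a bad byte raises UnicodeDecodeError.
with codecs.open(path, encoding='utf-8') as f:
    lines = list(f)

os.remove(path)
print(lines)  # one decoded line
```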


    > or something like that. Yet, writing this means to *implicitly* decode
    > text. And, because the decoding is implicit, you cannot say
    >
    > try:
    >     for line in f: # here text is decoded implicitly
    >         do_something()
    > except UnicodeDecodeError():
    >     do_something_different()
    >
    > This isn't possible for syntactic reasons.


    Well, you'd normally want to leave out the parentheses after the exception
    type, but otherwise, that's perfectly valid Python code. That's how these
    things work.


    > The problem is that vast majority of the thousands of files that I
    > process are correctly encoded. But then, suddenly, there is a bad
    > character in a new file. (This is so because most files today are
    > generated by people who don't know that there is such a thing as
    > encodings.) And then I need to rewrite my very complex program just
    > because of one single character in one single file.


    Why would that be the case? The places to change should be very local in
    your code.

    Stefan
     
    Stefan Behnel, Jul 26, 2012
    #8
  9. On Thu, Jul 26, 2012 at 5:46 PM, Jaroslav Dobrek
    <> wrote:
    > My problem is solved. What I need to do is explicitly decode text when
    > reading it. Then I can catch exceptions. I might do this in future
    > programs.


    Apologies if it's already been said (I'm only skimming this thread),
    but ISTM that you want to open the file in binary mode. You'll then
    get back a bytes() instead of a str(), and you can attempt to decode
    it separately. You may then need to do your own division into lines
    that way, though.
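    A minimal sketch of that binary-mode approach, assuming UTF-8 and b'\n' line endings (`decode_report` is an invented name, editor's addition):

```python
def decode_report(data, encoding='utf-8'):
    """Split raw bytes on b'\\n' and decode piece by piece.

    Returns (decoded_lines, bad_line_number), where the second item
    is None if everything decoded cleanly. Editor's sketch.
    """
    lines = []
    for number, raw in enumerate(data.split(b'\n'), 1):
        try:
            lines.append(raw.decode(encoding))
        except UnicodeDecodeError:
            return lines, number
    return lines, None
```

    The division into lines happens on the bytes, so it only works when the real encoding uses the byte 0x0a for newlines, which is exactly the caveat raised earlier in the thread.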

    ChrisA
     
    Chris Angelico, Jul 26, 2012
    #9
  10. Guest

    On Thursday, July 26, 2012 9:46:27 AM UTC+2, Jaroslav Dobrek wrote:
    > On Jul 25, 8:50 pm, Dave Angel <> wrote:
    > > On 07/25/2012 08:09 AM, wrote:
    > > > On Wednesday, July 25, 2012 1:35:09 PM UTC+2, Philipp Hagemeister wrote:
    > > >> Hi Jaroslav,
    > > >> you can catch a UnicodeDecodeError just like any other exception. Can
    > > >> you provide a full example program that shows your problem?
    > > >> This works fine on my system:
    > > >> import sys
    > > >> open('tmp', 'wb').write(b'\xff\xff')
    > > >> try:
    > > >>     buf = open('tmp', 'rb').read()
    > > >>     buf.decode('utf-8')
    > > >> except UnicodeDecodeError as ude:
    > > >>     sys.exit("Found a bad char in file " + "tmp")
    > > > Thank you. I got it. What I need to do is explicitly decode text.
    > > > But I think trial and error with moving files around will in most cases be faster. Usually, such a problem occurs with some (usually complex) program that I wrote quite a long time ago. I don't like editing old and complex programs that work under all normal circumstances.
    > > > What I am missing (especially for Python3) is something like:
    > > > try:
    > > >     for line in sys.stdin:
    > > > except UnicodeDecodeError:
    > > >     sys.exit("Encoding problem in line " + str(line_number))
    > > > I got the point that there is no such thing as encoding-independent lines. But if no line ending can be found, then the file simply has one single line.
    > > i can't understand your question.  if the problem is that the system
    > > doesn't magically produce a variable called line_number, then generate
    > > it yourself, by counting in the loop.
    >
    >
    > That was just a very incomplete and general example.
    >
    > My problem is solved. What I need to do is explicitly decode text when
    > reading it. Then I can catch exceptions. I might do this in future
    > programs.
    >
    > I dislike about this solution that it complicates most programs
    > unnecessarily. In programs that open, read and process many files I
    > don't want to explicitly decode and encode characters all the time. I
    > just want to write:
    >
    > for line in f:
    >
    > or something like that. Yet, writing this means to *implicitly* decode
    > text. And, because the decoding is implicit, you cannot say
    >
    > try:
    > for line in f: # here text is decoded implicitly
    > do_something()
    > except UnicodeDecodeError():
    > do_something_different()
    >
    > This isn't possible for syntactic reasons.
    >
    > The problem is that vast majority of the thousands of files that I
    > process are correctly encoded. But then, suddenly, there is a bad
    > character in a new file. (This is so because most files today are
    > generated by people who don't know that there is such a thing as
    > encodings.) And then I need to rewrite my very complex program just
    > because of one single character in one single file.


    In my mind you are taking the problem the wrong way.

    Basically there is no "real UnicodeDecodeError": you are
    just attempting to read a file with the wrong
    codec. Catching a UnicodeDecodeError will not correct
    the basic problem, it will "only" show that you are using
    a wrong codec.
    There is still the possibility that you have to deal with an
    ill-formed utf-8 coding, but I doubt it is the case.

    Do not forget, a "bit of text" has only a meaning if you
    know its coding.

    In short, all your files are most probably ok, you do not read
    them correctly.

    >>> b'abc\xeadef'.decode('utf-8')
    Traceback (most recent call last):
      File "<eta last command>", line 1, in <module>
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xea in
    position 3: invalid continuation byte
    >>> # but
    >>> b'abc\xeadef'.decode('cp1252')
    'abcêdef'
    >>> b'abc\xeadef'.decode('mac-roman')
    'abcÍdef'
    >>> b'abc\xeadef'.decode('iso-8859-1')
    'abcêdef'
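    This demonstration suggests an obvious, if unreliable, heuristic: try a list of candidate codecs in order of preference. `sniff_decode` below is a hypothetical helper (editor's sketch); a successful decode only proves the bytes are decodable in that codec, not that the guess matches what the file's author intended.

```python
def sniff_decode(data, candidates=('utf-8', 'cp1252', 'iso-8859-1')):
    """Return (text, codec) for the first candidate codec that fits."""
    for codec in candidates:
        try:
            return data.decode(codec), codec
        except UnicodeDecodeError:
            continue
    raise ValueError('none of the candidate codecs fit')
```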

    jmf
     
    , Jul 26, 2012
    #10
  11. > And the cool thing is: you can! :)
    >
    > In Python 2.6 and later, the new Py3 open() function is a bit more hidden,
    > but it's still available:
    >
    >     from io import open
    >
    >     filename = "somefile.txt"
    >     try:
    >         with open(filename, encoding="utf-8") as f:
    >             for line in f:
    >                 process_line(line)  # actually, I'd use "process_file(f)"
    >     except IOError, e:
    >         print("Reading file %s failed: %s" % (filename, e))
    >     except UnicodeDecodeError, e:
    >         print("Some error occurred decoding file %s: %s" % (filename, e))


    Thanks. I might use this in the future.

    > > try:
    > >     for line in f: # here text is decoded implicitly
    > >        do_something()
    > > except UnicodeDecodeError():
    > >     do_something_different()

    >
    > > This isn't possible for syntactic reasons.

    >
    > Well, you'd normally want to leave out the parentheses after the exception
    > type, but otherwise, that's perfectly valid Python code. That's how these
    > things work.


    You are right. Of course this is syntactically possible. I was too
    rash, sorry. I confused it with some other construction I once tried.
    I can't remember it right now.

    But the code above (without the brackets) is semantically bad: the
    exception is not caught.


    > > The problem is that vast majority of the thousands of files that I
    > > process are correctly encoded. But then, suddenly, there is a bad
    > > character in a new file. (This is so because most files today are
    > > generated by people who don't know that there is such a thing as
    > > encodings.) And then I need to rewrite my very complex program just
    > > because of one single character in one single file.

    >
    > Why would that be the case? The places to change should be very local in
    > your code.


    This is the case in a program that has many different functions which
    open and parse different types of files. When I read and parse a
    directory with such different types of files, a program that uses

    for line in f:

    will not exit with any hint as to where the error occurred. It just
    exits with a UnicodeDecodeError. That means I have to look at all
    functions that have some variant of

    for line in f:

    in them. And it is not sufficient to replace the "for line in f" part.
    I would have to transform many functions that work in terms of lines
    into functions that work in terms of decoded bytes.

    That is why I usually solve the problem by moving files around until I
    find the bad file. Then I recode or repair the bad file manually.
     
    Jaroslav Dobrek, Jul 26, 2012
    #11
  12. Jaroslav Dobrek, 26.07.2012 12:51:
    >>> try:
    >>>     for line in f: # here text is decoded implicitly
    >>>         do_something()
    >>> except UnicodeDecodeError():
    >>>     do_something_different()

    >
    > the code above (without the brackets) is semantically bad: The
    > exception is not caught.


    Sure it is. Just to repeat myself: if the above doesn't catch the
    exception, then the exception did not originate from the place where you
    think it did. Again: look at the traceback.


    >>> The problem is that vast majority of the thousands of files that I
    >>> process are correctly encoded. But then, suddenly, there is a bad
    >>> character in a new file. (This is so because most files today are
    >>> generated by people who don't know that there is such a thing as
    >>> encodings.) And then I need to rewrite my very complex program just
    >>> because of one single character in one single file.

    >>
    >> Why would that be the case? The places to change should be very local in
    >> your code.

    >
    > This is the case in a program that has many different functions which
    > open and parse different
    > types of files. When I read and parse a directory with such different
    > types of files, a program that
    > uses
    >
    > for line in f:
    >
    > will not exit with any hint as to where the error occurred. I just
    > exits with a UnicodeDecodeError.


    ... that tells you the exact code line where the error occurred. No need
    to look around.

    Stefan
     
    Stefan Behnel, Jul 26, 2012
    #12
  13. Guest

    > that tells you the exact code line where the error occurred. No need to
    > look around.



    You are right:

    try:
        for line in f:
            do_something()
    except UnicodeDecodeError:
        do_something_different()

    does exactly what one would expect it to do.

    Thank you very much for pointing this out and sorry for all the posts. This is one of the days when nothing seems to work and when I don't seem to be able to read the simplest error message.
     
    , Jul 26, 2012
    #13
  15. On 07/26/2012 01:15 PM, Stefan Behnel wrote:
    >> exits with a UnicodeDecodeError.

    > ... that tells you the exact code line where the error occurred.


    Which property of a UnicodeDecodeError does include that information?

    On cPython 2.7 and 3.2, I see only start and end, both of which refer to
    the number of bytes read so far.

    I used the following test script:

    e = None
    try:
        b'a\xc3\xa4\nb\xff0'.decode('utf-8')
    except UnicodeDecodeError as ude:
        e = ude
    print(e.start) # 5 for this input, 3 for the input b'a\nb\xff0'
    print(dir(e))

    But even if you would somehow determine a line number, this would only
    work if the actual encoding uses 0xa for newline. Most encodings (101
    out of 108 applicable ones in cPython 3.2) do include 0x0a in their
    representation of '\n', but multi-byte encodings routinely include 0x0a
    bytes in their representation of non-newline characters. Therefore, the
    most you can do is calculate an upper bound for the line number.

    - Philipp






     
    Philipp Hagemeister, Jul 26, 2012
    #15
  16. Philipp Hagemeister, 26.07.2012 14:17:
    > On 07/26/2012 01:15 PM, Stefan Behnel wrote:
    >>> exits with a UnicodeDecodeError.

    >> ... that tells you the exact code line where the error occurred.

    >
    > Which property of a UnicodeDecodeError does include that information?
    >
    > On cPython 2.7 and 3.2, I see only start and end, both of which refer to
    > the number of bytes read so far.


    Read again: "*code* line". The OP was apparently failing to see that the
    error did not originate in the source code lines that he had wrapped with a
    try-except statement but somewhere else, thus leading to the misguided
    impression that the exception was not properly caught by the except clause.

    Stefan
     
    Stefan Behnel, Jul 26, 2012
    #16
  17. On 07/26/2012 02:24 PM, Stefan Behnel wrote:
    > Read again: "*code* line". The OP was apparently failing to see that
    > the error did not originate in the source code lines that he had
    > wrapped with a try-except statement but somewhere else, thus leading
    > to the misguided impression that the exception was not properly caught
    > by the except clause.

    Oops, over a dozen posts and I still haven't grasped the OP's problem.
    Sorry! and thanks for noting that.

    - Philipp


     
    Philipp Hagemeister, Jul 26, 2012
    #17
  18. Robert Miles Guest

    On 7/26/2012 5:51 AM, Jaroslav Dobrek wrote:
    >> And the cool thing is: you can! :)
    >>
    >> In Python 2.6 and later, the new Py3 open() function is a bit more hidden,
    >> but it's still available:
    >>
    >> from io import open
    >>
    >> filename = "somefile.txt"
    >> try:
    >>     with open(filename, encoding="utf-8") as f:
    >>         for line in f:
    >>             process_line(line)  # actually, I'd use "process_file(f)"
    >> except IOError, e:
    >>     print("Reading file %s failed: %s" % (filename, e))
    >> except UnicodeDecodeError, e:
    >>     print("Some error occurred decoding file %s: %s" % (filename, e))

    >
    > Thanks. I might use this in the future.
    >
    >>> try:
    >>>     for line in f: # here text is decoded implicitly
    >>>         do_something()
    >>> except UnicodeDecodeError():
    >>>     do_something_different()

    >>
    >>> This isn't possible for syntactic reasons.

    >>
    >> Well, you'd normally want to leave out the parentheses after the exception
    >> type, but otherwise, that's perfectly valid Python code. That's how these
    >> things work.

    >
    > You are right. Of course this is syntactically possible. I was too
    > rash, sorry. In confused
    > it with some other construction I once tried. I can't remember it
    > right now.
    >
    > But the code above (without the brackets) is semantically bad: The
    > exception is not caught.
    >
    >
    >>> The problem is that vast majority of the thousands of files that I
    >>> process are correctly encoded. But then, suddenly, there is a bad
    >>> character in a new file. (This is so because most files today are
    >>> generated by people who don't know that there is such a thing as
    >>> encodings.) And then I need to rewrite my very complex program just
    >>> because of one single character in one single file.

    >>
    >> Why would that be the case? The places to change should be very local in
    >> your code.

    >
    > This is the case in a program that has many different functions which
    > open and parse different
    > types of files. When I read and parse a directory with such different
    > types of files, a program that
    > uses
    >
    > for line in f:
    >
    > will not exit with any hint as to where the error occurred. I just
    > exits with a UnicodeDecodeError. That
    > means I have to look at all functions that have some variant of
    >
    > for line in f:
    >
    > in them. And it is not sufficient to replace the "for line in f" part.
    > I would have to transform many functions that
    > work in terms of lines into functions that work in terms of decoded
    > bytes.
    >
    > That is why I usually solve the problem by moving fles around until I
    > find the bad file. Then I recode or repair
    > the bad file manually.



    Would it be reasonable to use pieces of the old program to write a
    new program that prints the name for an input file, then searches
    that input file for bad characters? If it doesn't find any, it can
    then go on to the next input file, or show a message saying that no
    bad characters were found.
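    Such a checker might look like this (editor's sketch combining the whole-file decode with the earlier newline-counting line guess; `find_bad_files` is an invented name):

```python
import os

def find_bad_files(directory, encoding='utf-8'):
    """Return (filename, line_guess) pairs for files that fail to decode."""
    bad = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        with open(path, 'rb') as f:
            data = f.read()
        try:
            data.decode(encoding)
        except UnicodeDecodeError as ude:
            # Guess the line by counting newline bytes before the error.
            bad.append((name, data[:ude.start].count(b'\n') + 1))
    return bad
```

    Running this over a directory before (or instead of) the main program would name the offending files directly, without any moving of files around.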
     
    Robert Miles, Aug 30, 2012
    #18
