2.3 encoding parsing bug


Edward K. Ream

The documentation for encoding lines at

C:\Python23\Doc\Python-Docs-2.3.1\whatsnew\section-encodings.html

states:

"Encodings are declared by including a specially formatted comment in the
first or second line of the source file."

In fact, contrary to the implication, the Python 2.3 parser does not limit
its search to lines of the form:

# -*- coding: <encoding> -*-

For example, Python improperly scans the following line for an encoding:

#@+leo-ver=4-encoding=iso-8859-1.

and reports that iso-8859-1. (note trailing dot) is an invalid encoding!

The workaround for my app is to precede this line with the following line:

# -*- coding: iso-8859-1 -*-

This makes Python 2.3 happy.
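The false match is easy to reproduce with Python's own re module. The sketch below uses the cookie pattern from PEP 263, with the character class written as [-\w.] (an equivalent spelling of the class quoted in the docs; assumed here, not taken from the thread):

```python
import re

# PEP 263 encoding-cookie pattern, as applied to the first two lines
# of a source file ([-\w.] is an equivalent spelling of [\w-_.]).
COOKIE_RE = re.compile(r"coding[:=]\s*([-\w.]+)")

leo_line = "#@+leo-ver=4-encoding=iso-8859-1."
m = COOKIE_RE.search(leo_line)
print(m.group(1))  # -> iso-8859-1.  (trailing dot captured)
```

The captured group keeps the trailing dot, which is exactly the invalid encoding name Python 2.3 then rejects.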

To make myself perfectly clear: Python has absolutely no right to complain
about comment lines that do not have the form:

# -*- coding: <encoding> -*-

Edward
 

Thomas Heller

Edward K. Ream said:
The documentation for encoding lines at

C:\Python23\Doc\Python-Docs-2.3.1\whatsnew\section-encodings.html

states:

"Encodings are declared by including a specially formatted comment in the
first or second line of the source file."

In fact, contrary to the implication, the Python 2.3 parser does not limit
its search to lines of the form:

# -*- coding: <encoding> -*-

For example, Python improperly scans the following line for an encoding:

#@+leo-ver=4-encoding=iso-8859-1.

and reports that iso-8859-1. (note trailing dot) is an invalid encoding!

The workaround for my app is to precede this line with the following line:

# -*- coding: iso-8859-1 -*-

This makes Python 2.3 happy.

To make myself perfectly clear: Python has absolutely no right to complain
about comment lines that do not have the form:

# -*- coding: <encoding> -*-

You should probably file a bug report for this.

Thomas
 

Edward K. Ream

To make myself perfectly clear: Python has absolutely no right to
complain

It does. Please see

http://www.python.org/doc/current/ref/encodings.html

This is the precise specification; Python looks for a certain regular
expression.

Ah jeez :)

The regular expression 'coding[=:]\s*([\w-_.]+)' matches so much more than
the "recommended" lines,

# -*- coding: <encoding> -*-

and

# vim:fileencoding=<encoding-name>

This is most annoying. It looks like Leo will have to change file formats
to accommodate this. I could hack a special case for .py files, I suppose,
but any such hack still amounts to a change in file format.

Is there any chance of modifying the re to reduce the possibility of
confusion and "false matches"? For example, matching only 'coding' and
'fileencoding' (as words).

Thanks for your clarification of the situation. I suppose I'll have to look
more closely at PEPs in the future. These over-general encoding
declarations seem like a pretty low blow.

Edward

P.S. I just looked at pep 263:

To define a source code encoding, a magic comment must
be placed into the source files either as first or second
line in the file:

#!/usr/bin/python
# -*- coding: <encoding name> -*-

More precise, the first or second line must match the regular
expression "coding[:=]\s*([\w-_]+)".
[end quote]

This was just a really bad idea, put forward in stealth, buried in an re.
Having a _restricted_ kind of special-purpose comment is one thing: having
a way-too-general kind of special-purpose comment is wrong, wrong, wrong.
It needlessly invalidates comments that _should_ have been none of Python's
business.

My guess is that I could have read this many times without having the
slightest hint of danger: the re bears almost no relation to the English
words. I'm gnashing my teeth.

EKR
 

David Bolen

Edward K. Ream said:
The workaround for my app is to precede this line with the following line:

# -*- coding: iso-8859-1 -*-

This makes Python 2.3 happy.

Presumably it would also work if you just included a pair of blank
lines (or perhaps to make it harder to accidentally remove, blank
comment lines), since Python is only going to check the first two
lines of the file.

It's still annoying, but at least you aren't then forced to bother
replicating a Python-matching line that actually contains encoding
information.

-- David
 

Edward K. Ream

Presumably it would also work if you just included a pair of blank lines...

Inserting two blanks lines to defeat the encoding convention would be very
bad programming style. Instead of blank lines I would insert:

# first line to defeat Python's encoding convention.
# second line to defeat Python's encoding convention.

Naturally, this is hardly an improvement over a line that explicitly
specifies the encoding.

Regardless of various workarounds, the essence of this situation is that pep
263 drastically changed the Python language. Indeed, it invalidated a file
format that I as an application designer should have had the right to
define, and _did_ have the right to define until Python 2.3 interfered. I
am not happy.

Edward
 

Martin v. Löwis

Edward said:
Is there any chance of modifying the re to reduce the possibility of
confusion and "false matches"? For example, matching only 'coding' and
'fileencoding' (as words).

Certainly. Propose a change to the specification, and suggest it to
python-dev. If the proposed change is acceptable, and somebody
volunteers to provide an implementation, it will get implemented in
2.4. There is no chance of changing 2.3 in an incompatible way.
And there is, of course, no chance of changing the copies of
Python 2.3 that have already been installed.

Thanks for your clarification of the situation. I suppose I'll have to look
more closely at PEPs in the future. These over-general encoding
declarations seem like a pretty low blow.

I personally would have preferred a proper statement to declare the
encoding, such as

pragma encoding "iso-8859-1"

However, this approach was rejected as too intrusive, and a stealth
declaration in comments was considered more appropriate.

This was just a really bad idea, put forward in stealth, buried in an re.
Having a _restricted_ kind of special-purpose comment is one thing: having
a way-too-general kind of special-purpose comment is wrong, wrong, wrong.
It needlessly invalidates comments that _should_ have been none of Python's
business.

OTOH, LEO _should_ not have come up with its own syntax to specify an
encoding. Instead, LEO should have used established conventions, such
as

-*- coding: <codingname> -*-

Edward said:
My guess is that I could have read this many times without having the
slightest hint of danger: the re bears almost no relation to the English
words.

That is not true. The English language gives specific, recommended
examples. Users (i.e. python programmers) should use the recommended
syntax, instead of coming up with their own syntax that still matches
the regular expression.

The regular expression is introduced with the words "more precisely",
which always should make readers of formal specifications cautious.
In particular, this aspect is directed at developers of tools that
edit Python source, as this is the regular expression they need to
use to determine the encoding of the file. If LEO can read Python
files, this regular expression should have been used ever since
support for coding declarations was implemented.
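The procedure Martin describes for tools could be sketched roughly as follows; this is a hypothetical helper (not Leo's actual code), and a real implementation would also honor a UTF-8 BOM and check that the match occurs in a comment:

```python
import re

# PEP 263 cookie pattern ([-\w.] is an equivalent spelling of [\w-_.]).
COOKIE_RE = re.compile(r"coding[:=]\s*([-\w.]+)")

def detect_declared_encoding(source: str, default: str = "ascii") -> str:
    """Return the encoding declared in the first two lines, if any.

    A sketch of the PEP 263 procedure a source-editing tool would
    follow: only the first two lines are examined.
    """
    for line in source.splitlines()[:2]:
        m = COOKIE_RE.search(line)
        if m:
            return m.group(1)
    return default
```

For example, detect_declared_encoding("#!/usr/bin/python\n# -*- coding: iso-8859-1 -*-\n") returns "iso-8859-1", while a cookie on line three is ignored.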

Regards,
Martin
 

Edward K. Ream

OTOH, LEO _should_ not have come up with its own syntax to specify an
encoding. Instead, LEO should have used established conventions, such
as

-*- coding: <codingname> -*-

Leo came up with its own syntax because the #@+leo line is doing something
very different from most -*- coding lines. Furthermore, Leo's file format
predated Python 2.3. It should have been reasonable to assume that a line
that starts with #@+leo would not ever be interpreted in any way by Python.

The summary at http://www.python.org/doc/2.3/whatsnew/section-encodings.html
completely misstates the truth and is actively misleading. Sure, pep 263 is
strictly accurate, but I probably never looked at 263 before today.

Pep 263 does not discuss the issue that bites Leo: the more general the re
the more likely there is to be unintended consequences. True, one doesn't
want to limit the matches that one _does_ want with encoding lines produced
by other editors. So it's not clear exactly what re to choose.

For me, this discussion is moot. The damage has been done, Leo's file
formats must change, and Leo 4.1 will not ever work as smoothly with Python
2.3 as I would like.

Edward
 

Jeff Epler

Edward,

It's unfortunate that you didn't contribute to the discussion
of PEP 263, which was created in June 2001[1], mentioned on
comp.lang.python.announce/[email protected] as early as August
2001[2], discussed on comp.lang.python/[email protected] back in
February 2002[3], available as a patch in March 2002[4], and present
in the Python CVS around August 2002[5]. Alpha releases of Python
(including binary releases for Windows) with the feature were available
on December 31, 2002[6]. Leo, on the other hand, added support for its
own encoding cookie on January 21, 2002[7]. The fatal (for LEO) dot
in the regular expression was added on February 28, 2002[8]. I didn't
find a thread that explains why this was done, but I believe it was to
support encodings like 'japanese.sjis'[9].

Since dotted encodings reflect a namespace hierarchy, ones with trailing
dots are nonsense. It seems to me that the easiest fix for this problem
would be to ignore a trailing dot, if it is present in the encoding
cookie. I'm at least +1/2 on that idea.
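Jeff's suggested fix could be as small as normalizing the captured name before lookup. A hypothetical sketch (the function name is invented for illustration):

```python
def normalize_cookie(name: str) -> str:
    """Drop a trailing dot from a captured encoding name.

    Sketch of the suggestion above: since dotted encodings reflect a
    namespace hierarchy (e.g. 'japanese.sjis'), a trailing dot is
    nonsense and can be ignored rather than rejected.
    """
    return name[:-1] if name.endswith(".") else name

print(normalize_cookie("iso-8859-1."))    # -> iso-8859-1
print(normalize_cookie("japanese.sjis"))  # -> japanese.sjis (unchanged)
```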

References:
[1] http://www.python.org/peps/pep-0263.html
[2] http://groups.google.com/[email protected]
[3] http://groups.google.com/[email protected]
[4] http://python.org/sf/526840 "Date Submitted"
[5] http://python.org/sf/534304 "Date Closed"
[6] http://www.python.org/2.3/NEWS.txt "What's New in Python 2.3 alpha 1?"
[7] http://cvs.sourceforge.net/viewcvs.py/leo/leo/Attic/leoAtFile.py?r1=1.106&r2=1.107
"Line 540"
[8] http://cvs.sourceforge.net/viewcvs.py/python/python/nondist/peps/pep-0263.txt?r1=1.7&r2=1.8
[9] http://dist.shot.cx/SnapShot/PEP263/pep0263-2.2.1c2-03/sjis_sample.py

Jeff
 

Martin v. Löwis

Edward said:
Pep 263 does not discuss the issue that bites Leo: the more general the re
the more likely there is to be unintended consequences. True, one doesn't
want to limit the matches that one _does_ want with encoding lines produced
by other editors. So it's not clear exactly what re to choose.

AFAIR, this effect is actually *intended*. Originally, only the -*-
syntax was proposed, but then there were objections "what about other
editors?" Then the re expression was designed.

Regards,
Martin
 

Edward K. Ream

First, my apologies to python-dev et al. for my irritable remarks re pep
263, http://www.python.org/peps/pep-0263.html and thanks to Michael Hudson
and Jeff Epler for their warm-hearted and generous responses to my
outbursts. It's so much easier to think now that there is no "vendetta"
going on :)

This morning in the shower I realized that far from being "abused" by pep
263, Leo is, or will be, the beneficiary of pep 263. Indeed, having Python
recognize an encoding field in an #@+leo line is exactly what Leo's users
would want: it saves them from writing their own # -*- coding: <encoding
name> -*- line.

The reason Leo ran afoul of Python 2.3 is that Leo presently terminates the
encoding field with a period. Alas, periods may appear in encoding names.
Leo's convention is just wrong, so regardless of pep 263 Leo's file formats
will have to change in order to properly handle names such as
'japanese.sjis'.

The only remaining question in my mind is this: how likely is it for a user
to "innocently" match the regular expression "coding[:=]\s*([\w-_.]+)" by
mistake? I see now that Leo doesn't refute the assertion that it's not very
likely. Indeed, Leo's syntax _should_ have matched this re: the problem
arose not from any defect in pep 263 but from a very real bug in Leo.

In short, my opinion of pep 263 has undergone an almost 180 degree
turnaround. I like it, Leo's users will benefit from it, and it seems
unlikely that other people's existing code will suffer. Indeed, one would
typically expect an initial line containing "coding:" or "coding=" to be
followed by a valid Unicode encoding.

Two other thoughts:

1. The summaries of pep 263 such as
http://www.python.org/doc/2.3.3/whatsnew/section-encodings.html are not
accurate; that is, they do not reflect what really happens. IMO, it would
make more sense to describe the re in English (as well as give the actual
re) and to give the rationale for making the re fairly general.

2. I wonder if it makes sense to do something besides throwing a syntax
error if the encoding isn't recognized. I suspect this topic has already
been discussed. Can anyone summarize it for me?

Many thanks to all who have responded, publicly or in private, to me on this
subject.

Edward
 
