Code that ought to run fast, but can't due to Python limitations.

John Nagle

As an example of code that really needs to run fast, but is
speed-limited by Python's limitations, see "tokenizer.py" in

http://code.google.com/p/html5lib/

This is a parser for HTML 5, a piece of code that will be needed
in many places and will process large amounts of data. It's written
entirely in Python. Take a look at how much work has to be performed
per character.

This is a good test for Python implementation bottlenecks. Run
that tokenizer on HTML, and see where the time goes.

("It should be written in C" is not an acceptable answer.)

Python doesn't have a "switch" or "case" statement, and when
you need a state machine with many states, that makes for painful,
slow code. There's a comment in the code that it would be useful
to run a few billion lines of HTML through an instrumented version
of the parser to decide in which order the IF statements should be
executed. You shouldn't have to do that.

Yes, I've read PEP 3103. The big problem is the difficulty of figuring
out what's a constant and what might change. If all the cases are constants,
case statements are easy. But in Python, the compiler can't tell.

Parsers have many named compile-time constants. Python doesn't support
named compile-time constants, and this is one of the places where we
have to pay the bill for that limitation.

Something to think about when you need three more racks of servers
because the HTML parser is slow.

John Nagle
 
Benjamin Kaplan

John Nagle wrote:
[ ... ]
Python doesn't have a "switch" or "case" statement, and when
you need a state machine with many states, that makes for painful,
slow code.  There's a comment in the code that it would be useful
to run a few billion lines of HTML through an instrumented version
of the parser to decide in which order the IF statements should be
executed.  You shouldn't have to do that.

If your cases are hashable, just use a dict instead of the if chain.
Then you get constant-time access to it.

def func_a():
    ...

def func_b():
    ...

def func_c():
    ...

case = {"a": func_a, "b": func_b, "c": func_c}

case[value]()
 
Mel

John Nagle wrote:
[ ... ]
Parsers have many named compile-time constants. Python doesn't support
named compile-time constants, and this is one of the places where we
have to pay the bill for that limitation.

Something to think about when you need three more racks of servers
because the HTML parser is slow.

One technique used in such a case is to dispatch different case-handling
functions via a dictionary lookup.

Mel.
 
Paul Rubin

John Nagle said:
Python doesn't have a "switch" or "case" statement, and when
you need a state machine with many states, that makes for painful,
slow code. ...
There's a comment in the code that it would be useful
to run a few billion lines of HTML through an instrumented version
of the parser to decide in which order the IF statements should be
executed. You shouldn't have to do that.

In that particular program it would probably be better to change those
if/elif/elif/else constructs to dictionary lookups. I see the program
already does that for some large tables.
 
John Nagle

Paul said:
In that particular program it would probably be better to change those
if/elif/elif/else constructs to dictionary lookups. I see the program
already does that for some large tables.

A dictionary lookup (actually, several of them) for every
input character is rather expensive. Tokenizers usually index into
a table of character classes, then use the character class index in
a switch statement.
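
For illustration, a minimal sketch of that classic layout in Python (the
class names and values here are made up, not taken from html5lib):

CC_AMP, CC_LT, CC_GT, CC_DASH, CC_SPACE, CC_OTHER = range(6)

# 256-entry character-class table, indexed by ord(ch) for 8-bit input.
# In C these would be named compile-time constants feeding a switch.
char_class = [CC_OTHER] * 256
char_class[ord("&")] = CC_AMP
char_class[ord("<")] = CC_LT
char_class[ord(">")] = CC_GT
char_class[ord("-")] = CC_DASH
for ws in " \t\n\r\f":
    char_class[ord(ws)] = CC_SPACE

def classify(ch):
    # One list index per character; the switch on the result is the
    # part Python has no fast equivalent for.
    o = ord(ch)
    return char_class[o] if o < 256 else CC_OTHER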

This is an issue that comes up whenever you have to parse some formal
structure, from XML/HTML to Pickle to JPEG images to program source.

If Python could figure out what's a constant and what isn't during
compilation, this sort of thing could be much more efficient. In fact,
you don't even need a switch statement at the source level, if the
language is such that the compiler can figure out when "elif" clauses
are mutually exclusive.

The temptation is to write tokenizers in C, but that's an admission
of language design failure.

(A general problem with Python is "hidden dynamism". That is,
changes to variables that can't be found by examining the source.
This is a killer for optimizations. One could take the position that any
module variable with exactly one visible assignment to it is, in fact,
only assigned in one place, and if the right hand side is a constant,
the variable is a constant. This would break some programs doing funny
stuff with "eval", or using some of the backdoor ways to modify variables,
but that's very rare in practice. In return, you get the ability to
hard-compile more of Python into fast code. I'm thinking Shed Skin here,
not yet another attempt at a JIT system.)

On the other hand, trying to do this in Perl, where you can't even index
strings, is far worse.

John Nagle
 
Paul Rubin

John Nagle said:
A dictionary lookup (actually, several of them) for every
input character is rather expensive. Tokenizers usually index into
a table of character classes, then use the character class index in
a switch statement.

Maybe you could use a regexp (and then have -two- problems...) to
find the token boundaries, then a dict to identify the actual token.
Tables of character classes seem a bit less attractive in the Unicode
era than in the old days.
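
Roughly like this (untested sketch, toy token set):

import re

# Find token boundaries with a regexp, then classify each piece
# with a dict lookup instead of an if chain.
token_pat = re.compile(r"[&<>-]|[^&<>-]+")
kind = {"&": "entity", "<": "tagOpen", ">": "tagClose", "-": "dash"}

def rough_tokens(text):
    for m in token_pat.finditer(text):
        piece = m.group()
        yield kind.get(piece, "characters"), piece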
 
Nobody

The temptation is to write tokenizers in C, but that's an admission
of language design failure.

The only part that really needs to be written in C is the DFA loop. The
code to construct the state table from regexps could be written
entirely in Python, but I don't see any advantage to doing so.
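
The loop in question is tiny; a sketch (made-up table layout):

def run_dfa(table, accepting, text):
    # table[state][byte] -> next state; -1 rejects. This per-character
    # loop is the only part that profits from being written in C.
    state = 0
    for ch in text:
        state = table[state][ord(ch) & 0xFF]
        if state == -1:
            return False
    return state in accepting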
 
John Nagle

Paul said:
Maybe you could use a regexp (and then have -two- problems...) to
find the token boundaries, then a dict to identify the actual token.
Tables of character classes seem a bit less attractive in the Unicode
era than in the old days.

I want to see a regular expression that expresses the HTML 5 token
parsing rules, including all the explicitly specified error handling.

Here's some actual code, from "tokenizer.py". This is called once
for each character in an HTML document, when in "data" state (outside
a tag). It's straightforward code, but look at all those
dictionary lookups.

def dataState(self):
    data = self.stream.char()

    # Keep a charbuffer to handle the escapeFlag
    if self.contentModelFlag in\
       (contentModelFlags["CDATA"], contentModelFlags["RCDATA"]):
        if len(self.lastFourChars) == 4:
            self.lastFourChars.pop(0)
        self.lastFourChars.append(data)

    # The rest of the logic
    if data == "&" and self.contentModelFlag in\
       (contentModelFlags["PCDATA"], contentModelFlags["RCDATA"]) and not\
       self.escapeFlag:
        self.state = self.states["entityData"]
    elif data == "-" and self.contentModelFlag in\
       (contentModelFlags["CDATA"], contentModelFlags["RCDATA"]) and not\
       self.escapeFlag and "".join(self.lastFourChars) == "<!--":
        self.escapeFlag = True
        self.tokenQueue.append({"type": "Characters", "data": data})
    elif (data == "<" and (self.contentModelFlag == contentModelFlags["PCDATA"]
                           or (self.contentModelFlag in
                               (contentModelFlags["CDATA"],
                                contentModelFlags["RCDATA"]) and
                               self.escapeFlag == False))):
        self.state = self.states["tagOpen"]
    elif data == ">" and self.contentModelFlag in\
       (contentModelFlags["CDATA"], contentModelFlags["RCDATA"]) and\
       self.escapeFlag and "".join(self.lastFourChars)[1:] == "-->":
        self.escapeFlag = False
        self.tokenQueue.append({"type": "Characters", "data": data})
    elif data == EOF:
        # Tokenization ends.
        return False
    elif data in spaceCharacters:
        # Directly after emitting a token you switch back to the "data
        # state". At that point spaceCharacters are important so they are
        # emitted separately.
        self.tokenQueue.append({"type": "SpaceCharacters", "data":
                                data + self.stream.charsUntil(spaceCharacters, True)})
        # No need to update lastFourChars here, since the first space will
        # have already broken any <!-- or --> sequences
    else:
        chars = self.stream.charsUntil(("&", "<", ">", "-"))
        self.tokenQueue.append({"type": "Characters", "data": data + chars})
        self.lastFourChars += chars[-4:]
        self.lastFourChars = self.lastFourChars[-4:]
    return True



John Nagle
 
Aahz

John Nagle wrote:
Here's some actual code, from "tokenizer.py". This is called once
for each character in an HTML document, when in "data" state (outside
a tag). It's straightforward code, but look at all those
dictionary lookups.
[ ... full dataState() listing quoted above ... ]

Every single "self." is a dictionary lookup. Were you referring to
those? If not, I don't see your point. If yes, well, that's kind of the
whole point of using Python. You do pay a performance penalty. You can
optimize out some lookups, but you need to switch to C for some kinds of
computationally intensive algorithms. In this case, you can probably get
a large boost out of Psyco or Cython or Pyrex.
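
For instance, hoisting the attribute lookups out of the hot loop is the
standard first step (toy sketch, not the html5lib code):

class Tokenizer:
    def __init__(self):
        self.tokenQueue = []

    def run(self, text):
        # Bind the hot method to a local once; a fast local load per
        # character is much cheaper than two attribute lookups.
        append = self.tokenQueue.append
        for c in text:
            append({"type": "Characters", "data": c})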
 
Carl Banks

    The temptation is to write tokenizers in C, but that's an admission
of language design failure.

No it isn't. It's only a failure of Python to be the language that
does everything *you* want.

Carl Banks
 
Hendrik van Rooyen

John Nagle said:
Python doesn't have a "switch" or "case" statement, and when
you need a state machine with many states, that makes for painful,
slow code. There's a comment in the code that it would be useful
to run a few billion lines of HTML through an instrumented version
of the parser to decide in which order the IF statements should be
executed. You shouldn't have to do that.

You do not have to implement a state machine in a case statement,
or in a series of if... elifs.

Python is not C.

Use a dispatch dict, and have each state return the next state.
Then you can use strings representing state names, and
everybody will be able to understand the code.

toy example, not tested, nor completed:

protocol = {"start": initialiser, "hunt": hunter,
            "classify": classifier}   # ...other states

def state_machine():
    next_step = protocol["start"]()
    while True:
        next_step = protocol[next_step]()

Simple, and almost as fast as if you did the same thing
in assembler using pointers.

And each state will have a finite set of reasons to
either stay where it's at, or move on. Not a lot you
can do about that, but test for them one at a time.
But at least you will have split the problem up,
and you won't be doing irrelevant tests.

You can even do away with the dict, by having
each state return the actual next state routine:

next_state = protocol_initialiser()
while True:
    next_state = next_state()
    time.sleep(0.001)  # this prevents thrashing when monitoring real events

If you are using a GUI, and you have access
to an "after" callback, then you can make a
ticking stutter thread to run some monitoring
machine in the background, using the same
"tell me what to do next" technique.

To take the timing thing further, you can do:

wait_time, next_state = protocol_initialiser()
while True:
    if wait_time:
        time.sleep(wait_time)
    wait_time, next_state = next_state()

This gives you control over busy-wait loops,
and lets you speed up when it is needed.

Python really is not C.

- Hendrik
 
Steven D'Aprano

I want to see a regular expression that expresses the HTML 5 token
parsing rules, including all the explicitly specified error handling.

Obviously the regex can't do the error handling. Nor should you expect a
single regex to parse an entire HTML document. But you could (perhaps)
use regexes to parse pieces of the document, as needed.

Have you investigated the pyparsing module? Unless you have some reason
for avoiding it, for any complicated parsing job I'd turn to that before
trying to roll your own.

Here's some actual code, from "tokenizer.py". This is called once
for each character in an HTML document, when in "data" state (outside a
tag). It's straightforward code, but look at all those dictionary
lookups.

Okay, we get it. Parsing HTML 5 is a bitch. What's your point? I don't
see how a case statement would help you here: you're not dispatching on a
value, but running through a series of tests until one passes. There are
languages where you can write something like:

case:
    x > 0:   process_positive(x)
    x < 0:   process_negative(x)
    x == 0:  process_zero(x)

but that's generally just syntactic sugar for the obvious if...elif...
block. (Although clever compilers might recognise that it's the same x in
each expression, and do something clever to optimize the code.)


Nor do I see why you were complaining about Python not having true
constants. I don't see how that would help you... most of your explicit
dict lookups are against string literals e.g. contentModelFlags["RCDATA"].

So while I feel your pain, I'm not sure I understand why you're blaming
this on *Python*.
 
Stefan Behnel

John said:
Python doesn't have a "switch" or "case" statement, and when
you need a state machine with many states, that makes for painful,
slow code.

Cython has a built-in optimisation that maps if-elif-else chains to C's
switch statement if they only test a single int/char variable, even when
you write things like "elif x in [1,5,9,12]". This works in Cython, because
we know that the comparison to a C int/char is side-effect free. It may not
always be side-effect free in Python, so this won't work in general. It
would be perfect for your case, though.
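
For example, a single-variable chain like this (sketch; in Cython you
would declare ch as a C int or char) is the shape that gets mapped to
one C switch:

# Plain Python, but Cython compiles this chain to a C switch
# when ch is a typed int/char.
if ch == 38:             # '&'
    state = "entityData"
elif ch in (60, 62):     # '<', '>'
    state = "markup"
elif ch == 45:           # '-'
    state = "dash"
else:
    state = "data"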

Stefan
 
Paul Rubin

Steven D'Aprano said:
Okay, we get it. Parsing HTML 5 is a bitch. What's your point? I don't
see how a case statement would help you here: you're not dispatching on a
value, but running through a series of tests until one passes.

A case statement switch(x):... into a bunch of constant case labels
would be able to use x as an index into a jump vector, and/or do an
unrolled logarithmic (bisection-like) search through the tests,
instead of a linear search.
 
Stefan Behnel

John said:
Here's some actual code, from "tokenizer.py". This is called once
for each character in an HTML document, when in "data" state (outside
a tag). It's straightforward code, but look at all those
dictionary lookups.

def dataState(self):
    data = self.stream.char()

    # Keep a charbuffer to handle the escapeFlag
    if self.contentModelFlag in\
       (contentModelFlags["CDATA"], contentModelFlags["RCDATA"]):

Is the tuple

(contentModelFlags["CDATA"], contentModelFlags["RCDATA"])

constant? If that is the case, I'd cut it out into a class member (or
module-local variable) first thing in the morning. And I'd definitely keep
the result of the "in" test in a local variable for reuse, seeing how many
times it's used in the rest of the code.
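
Untested sketch, assuming the module-level contentModelFlags dict from
tokenizer.py:

# Hoist the constant tuple out of the method...
CDATA_OR_RCDATA = (contentModelFlags["CDATA"], contentModelFlags["RCDATA"])

def dataState(self):
    data = self.stream.char()
    # ...and do the membership test once, reusing the result below.
    in_cdata = self.contentModelFlag in CDATA_OR_RCDATA
    if in_cdata:
        if len(self.lastFourChars) == 4:
            self.lastFourChars.pop(0)
        self.lastFourChars.append(data)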

Writing inefficient code is not something to blame the language for.

Stefan
 
Stefan Behnel

John said:
A dictionary lookup (actually, several of them) for every
input character is rather expensive.

Did you implement this and prove your claim in benchmarks? Taking a look at
the current implementation, I'm pretty sure a dict-based implementation
would outrun it in your first try.

Stefan
 
Steven D'Aprano

Python is not C.

John Nagle is an old hand at Python. He's perfectly aware of this, and
I'm sure he's not trying to program C in Python.

I'm not entirely sure *what* he is doing, and hopefully he'll speak up
and say, but whatever the problem is it's not going to be as simple as
that.
 
Steven D'Aprano

A case statement switch(x):... into a bunch of constant case labels
would be able to use x as an index into a jump vector, and/or do an
unrolled logarithmic (bisection-like) search through the tests, instead
of a linear search.

Yes, I'm aware of that, but that's not what John's code is doing -- he's
doing a series of if expr ... elif expr tests. I don't think a case
statement can do much to optimize that.
 
Paul Rubin

Steven D'Aprano said:
Yes, I'm aware of that, but that's not what John's code is doing -- he's
doing a series of if expr ... elif expr tests. I don't think a case
statement can do much to optimize that.

The series of tests is written that way because there is no case
statement available. It is essentially switching on a bunch of
character constants and then doing some additional tests in each
branch.

It could be that using ord(c) as an index into a list of functions
might be faster than a dict lookup on c to get a function. I think
John is hoping to avoid a function call and instead get an indexed
jump within the Python bytecode for the big function.
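
Something like this (untested sketch, made-up handler names):

def handle_amp(tok):     return "entityData"
def handle_lt(tok):      return "tagOpen"
def handle_default(tok): return "data"

# 256-entry jump table indexed by ord(c): no hashing, just an index.
handlers = [handle_default] * 256
handlers[ord("&")] = handle_amp
handlers[ord("<")] = handle_lt

def step(tok, c):
    o = ord(c)
    return handlers[o](tok) if o < 256 else handle_default(tok)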
 
Stefan Behnel

Stefan said:
Is the tuple

(contentModelFlags["CDATA"], contentModelFlags["RCDATA"])

constant? If that is the case, I'd cut it out into a class member (or
module-local variable) first thing in the morning.

Ah, and there's also this little trick to make it a (fast) local variable
in that method:

def some_method(self, some_const=(1,2,3,4)):
    ...
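
The default value is evaluated only once, when the "def" statement runs,
and is then read back through a fast local name on every call, instead
of a global or attribute lookup.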

Stefan
 
