Question: How efficient is using generators for coroutine-like problems?

Carlos Ribeiro

As a side track of my latest investigations, I began to rely heavily
on generators for some stuff where I would previously use a more
conventional approach. Whenever I need to process a list, I'm tending
towards the use of generators. One good example is if I want to print
a report, or to work over a list with complex processing for each
item. In both cases, a simple list comprehension can't be used. The
conventional approach involves setting up an empty result list, looping
over the list while appending the results, and finally returning it.
For strings, it's something like this:

def myfunc(items):
    result = []
    result += ["Start of processing"]
    for item in items:
        # do some processing with item
        ...
        result += [str(item)]
    result += ["End of processing"]
    return result

This code is not only ugly to read, but it's inefficient because of
the way list and string concatenations are handled. Other alternatives
involve appending directly to the string, or using cStringIO. But I
think you've got the point. Now, using generators, the function code
gets simpler:

def myfunc(items):
    yield "Start of processing"
    for item in items:
        # do some processing with item
        ...
        yield str(item)
    yield "End of processing"

I can print the results either way:

a) for line in myfunc([...]): print line

b) print "\n".join(myfunc([...]))

And I have other advantages -- no concatenations in the function code,
and the list can be generated as needed. This has the potential to
make the system less resource intensive (especially for large lists)
and more responsive.
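
For instance, here is a made-up sketch of that "generated as needed"
behavior: the consumer stops early, and the generator never computes
the rest.

from itertools import islice

def bigreport():
    yield "Start of processing"
    for i in xrange(10 ** 6):      # a million items, produced on demand
        yield str(i)
    yield "End of processing"

# Only the first five lines are ever computed; the remaining items
# are never produced, let alone stored in memory.
for line in islice(bigreport(), 5):
    print line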

Now, just because I can do it does not mean it's a good idea :) For
particular cases, a measurement can be done. But I'm curious about the
generic case. What is the performance penalty of using generators in
situations like the ones shown above?

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: (e-mail address removed)
mail: (e-mail address removed)
 
Peter Hansen

Carlos said:
Now, just because I can do it does not mean it's a good idea :) For
particular cases, a measurement can be done. But I'm curious about the
generic case. What is the performance penalty of using generators in
situations like the ones shown above?

There shouldn't be a significant performance penalty, since
generators set up the stack frame when they are created and
reuse it, rather than creating a new one for each function
call. They allow you to restructure your code more cleanly
but without taking a big hit from extra function calls.
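
A tiny example (a minimal sketch, nothing from the thread) makes the
frame reuse visible: the generator's locals survive between calls,
where a plain function would start from a fresh frame every time.

def counter():
    n = 0
    while True:
        n += 1            # local state lives on in the suspended frame
        yield n

c = counter()
print c.next()    # -> 1
print c.next()    # -> 2; the same frame is resumed, not rebuilt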

But if you're concerned, there's always the timeit module
to tell you exactly what you're facing...
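
For example, the same kind of measurement can be scripted with
timeit.Timer instead of run from the shell; a quick sketch (the
statement being timed is just an example):

import timeit

t = timeit.Timer('r.append("goo")', setup='r = []')
# total seconds for a million runs == microseconds per single run
print "%.3f usec per loop" % min(t.repeat(repeat=3, number=1000000))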

-Peter
 
Alex Martelli

Carlos Ribeiro said:
result += ["Start of processing"]

I would suggest result.append('Start of processing'). As you're
concerned with performance, see...:

kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]' 'r+=["goo"]'
1000000 loops, best of 3: 1.45 usec per loop
kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]' 'r.append("goo")'
1000000 loops, best of 3: 0.953 usec per loop
kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]; a=r.append' 'a("goo")'
1000000 loops, best of 3: 0.556 usec per loop

i.e. you can get result.append into a local once at the start and end up
almost 3 times faster than with your approach, but even without this
you're still gaining handsomely with good old result.append vs your
preferred approach (where a singleton list gets created each time).
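
Spelled out as complete functions (a sketch, not code from the post),
the three variants timed above look like this:

def plus_list(n=1000000):
    r = []
    for i in xrange(n):
        r += ["goo"]          # builds a throwaway one-element list each time
    return r

def with_append(n=1000000):
    r = []
    for i in xrange(n):
        r.append("goo")       # attribute lookup repeated on every iteration
    return r

def hoisted_append(n=1000000):
    r = []
    a = r.append              # bound method fetched just once
    for i in xrange(n):
        a("goo")
    return r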

Now, just because I can do it does not mean it's a good idea :) For
particular cases, a measurement can be done. But I'm curious about the
generic case. What is the performance penalty of using generators in
situations like the ones shown above?

Sorry, there's no "generic case" that I can think of. Since
implementations of generators, list appends, etc, are allowed to change
and get optimized at any transition 2.3 -> 2.4 -> 2.5 -> ... I see even
conceptually no way to compare performance except on a specific case.

"Generally" I would expect: if you're just looping on the result, a
generator should _gain_ performance wrt making a list. Consider cr.py:

x = map(str, range(333))

def withapp(x=x):
    result = []
    a = result.append
    for item in x: a(item)
    return result

def withgen(x=x):
    for item in x: yield item

kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' 'for x in cr.withapp(): pass'
1000 loops, best of 3: 220 usec per loop
kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' 'for x in cr.withgen(): pass'
1000 loops, best of 3: 200 usec per loop

The difference is more pronounced in 2.3, with the generator clocking in
at 280 usec, the appends at 370 (anybody who's interested in speed has
hopefully already downloaded 2.4 and is busy trying it out -- I can't
see any reason why not... even though one obviously can't yet deliver to
customers stuff based on what's still an alpha release, of course, at
least one can TRY it and pine for its general speedups;-).

A join is different...:

kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' '"\n".join(cr.withgen())'
1000 loops, best of 3: 274 usec per loop
kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' '"\n".join(cr.withapp())'
1000 loops, best of 3: 225 usec per loop

(speed difference was less pronounced in 2.3, 360 vs 350). Yeah, I
know, it's not easy to conceptualize -- if looping is faster why is
joining slower? "Implementation details" of course, and just the kind
of thing that might change any time, if some nice free optimization can
be obtained hither or yon...!
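
One plausible reading (a CPython implementation detail, not a language
guarantee): str.join wants to know all the pieces before allocating
the result, so when handed a generator it first drains it into a
sequence internally -- roughly equivalent to this sketch:

def join_generator(sep, gen):
    items = list(gen)   # intermediate list that a real list argument avoids
    return sep.join(items)

So the generator's win in the bare loop is paid back, and then some,
by the extra intermediate list that join builds.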


Alex
 
Carlos Ribeiro

Alex Martelli said:
i.e. you can get result.append into a local once at the start and end up
almost 3 times faster than with your approach, but even without this
you're still gaining handsomely with good old result.append vs your
preferred approach (where a singleton list gets created each time).

Thanks for the info -- especially regarding the "a=result.append"
trick. It's a good one, and also helps to make the code less
cluttered.

Sorry, there's no "generic case" that I can think of. Since
implementations of generators, list appends, etc, are allowed to change
and get optimized at any transition 2.3 -> 2.4 -> 2.5 -> ... I see even
conceptually no way to compare performance except on a specific case.

"Generally" I would expect: if you're just looping on the result, a
generator should _gain_ performance wrt making a list. Consider cr.py:

That's my basic assumption. BTW, I had an alternative idea. I really
don't need to use "\n".join(). All that I need is to concatenate the
strings yielded by the generator, and the strings themselves should
include the necessary linebreaks. I'm not a timeit wizard, nor is a
slow Windows machine reliable enough for timing tight loops, but
anyway here are my results:

---- timegen.py ----
def mygen():
    for x in range(100):
        yield "%d: this is a test string.\n" % x

def test_genlist():
    return "".join(list(mygen()))

def test_addstr():
    result = ""
    for x in range(100):
        # there is no append method for strings :(
        result = result + "%d: this is a test string.\n" % x
    return result

def test_gen_addstr():
    result = ""
    for x in mygen(): result = result + x
    return result

I've added the "%" operator because most of the time we are going to
work with generated strings, not with constants, and I thought it
would give a better idea of the timing.
python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_genlist()"
1000 loops, best of 3: 698 usec per loop
python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_addstr()"
1000 loops, best of 3: 766 usec per loop
python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_gen_addstr()"
1000 loops, best of 3: 854 usec per loop

The test has shown that adding strings manually is just a tad slower
than joining. But then I've decided to simplify my tests, and to add
small constant strings. The results have surprised me:

---- timegen2.py ----
def mygen():
    for x in range(100):
        yield "."

def test_genlist():
    return "".join(list(mygen()))

def test_addstr():
    result = ""
    for x in range(100):
        # there is no append method for strings :(
        result = result + "."
    return result

def test_gen_addstr():
    result = ""
    for x in mygen(): result = result + x
    return result

python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_genlist()"
1000 loops, best of 3: 368 usec per loop
python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr()"
1000 loops, best of 3: 263 usec per loop
python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_gen_addstr()"
1000 loops, best of 3: 385 usec per loop

Now, it turns out that for this case, adding strings was *faster* than
joining lines. But in all cases, the generator was slower than adding
strings.

Up to this point, the answer to my question is: generators _have_ a
measurable performance penalty, and using a generator to return text
line by line to append later is slower than appending to the return
string line by line. But also, the difference between testbeds 1 and 2
shows that this is not the dominating factor in performance -- simply
adding some string manipulation made the test run much slower.
Finally, I haven't done any testing with 2.4, though, and generators
are supposed to perform better there.

... and I was about to finish here, but I decided to check one more thing.

There's still a catch. Generators were slower, because there is an
implicit function call whenever a new value is requested. So I've
added a new test:

---- timegen2.py ----
def teststr():
    return "."

def test_addstr_funccall():
    result = ""
    for x in range(100):
        result = result + teststr()
    return result

python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr_funccall()"
1000 loops, best of 3: 436 usec per loop

In this case, the code was measurably _slower_ than the generator
version (436 vs 385 usec), and both are adding strings. It only shows
how hard it is to optimize stuff -- depending on details, answers can
be totally different.
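
One way to keep such comparisons honest is to time every variant under
identical conditions from a single harness. A sketch (it assumes the
timegen2 module above is on the path):

import timeit

candidates = ['test_genlist', 'test_addstr',
              'test_gen_addstr', 'test_addstr_funccall']

for name in candidates:
    t = timeit.Timer('timegen2.%s()' % name, setup='import timegen2')
    best = min(t.repeat(repeat=3, number=1000))
    # best is total seconds for 1000 runs; convert to usec per run
    print '%-22s %7.0f usec per loop' % (name, best * 1e6 / 1000)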

My conclusion, as of now, is that using generators in templating
mechanisms is a valuable tool as far as readability is concerned, but
it should not be done solely because of performance concerns. It may
be faster in some cases, but it's slower in the simplest situations.
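
For what it's worth, here is the kind of templating use I have in mind
(a made-up example); the generator reads almost like the document it
produces:

def render_table(rows):
    yield "<table>"
    for row in rows:
        yield "  <tr>"
        for cell in row:
            yield "    <td>%s</td>" % cell
        yield "  </tr>"
    yield "</table>"

print "\n".join(render_table([["spam", "eggs"], ["ham", "bacon"]]))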

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: (e-mail address removed)
mail: (e-mail address removed)
 
