Question: How efficient is using generators for coroutine-like problems?

Discussion in 'Python' started by Carlos Ribeiro, Sep 23, 2004.

  1. As a side track of my latest investigations, I began to rely heavily
    on generators for some stuff where I would previously use a more
    conventional approach. Whenever I need to process a list, I'm tending
    towards the use of generators. One good example is if I want to print
    a report, or to work over a list with complex processing for each
    item. In both cases, a simple list comprehension can't be used. The
    conventional approach involves setting up an empty result list, looping
    over the list while appending the results, and finally returning it.
    For strings, it's something like this:

    def myfunc(items):
        result = []
        result += ["Start of processing"]
        for item in items:
            # do some processing with item
            ...
            result += [str(item)]
        result += ["End of processing"]
        return result

    This code is not only ugly to read, but it's also inefficient because
    of the way list and string concatenations are handled. Other
    alternatives involve appending directly to the string, or using
    cStringIO. But I think you get the point. Now, using generators, the
    function code gets simpler:

    def myfunc(items):
        yield "Start of processing"
        for item in items:
            # do some processing with item
            ...
            yield str(item)
        yield "End of processing"

    I can print the results either way:

    a) for line in myfunc([...]): print line

    b) print "\n".join(myfunc([...]))

    And I have other advantages -- no concatenations in the function code,
    and the list can be generated as needed. This has the potential to
    make the system less resource intensive (especially for large lists)
    and more responsive.
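    [Editorial aside: the laziness mentioned above can be sketched as
    follows. This is a hypothetical example assuming a modern Python
    (`print()` as a function, lazy `range`); only the values actually
    consumed are ever produced, which is what makes the approach cheap
    for large inputs.]

    ```python
    from itertools import islice

    def myfunc(items):
        # Mirrors the report-style generator shown above
        yield "Start of processing"
        for item in items:
            yield str(item)
        yield "End of processing"

    # Only the first three lines are ever produced; the rest of the
    # (potentially huge) input is never touched.
    first_three = list(islice(myfunc(range(10**9)), 3))
    print(first_three)  # -> ['Start of processing', '0', '1']
    ```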

    Now, just because I can do it does not mean it's a good idea :) For
    particular cases, a measurement can be done. But I'm curious about the
    generic case. What is the performance penalty of using generators in
    situations as the ones shown above?

    --
    Carlos Ribeiro
    Consultoria em Projetos
    blog: http://rascunhosrotos.blogspot.com
    blog: http://pythonnotes.blogspot.com
    Carlos Ribeiro, Sep 23, 2004
    #1

  2. Peter Hansen (Guest)

    Carlos Ribeiro wrote:
    > Now, just because I can do it does not mean it's a good idea :) For
    > particular cases, a measurement can be done. But I'm curious about the
    > generic case. What is the performance penalty of using generators in
    > situations as the ones shown above?


    There shouldn't be a significant performance penalty, since
    generators set up the stack frame when they are created and
    reuse it, rather than creating a new one for each function
    call. They allow you to restructure your code more cleanly
    but without taking a big hit from extra function calls.

    But if you're concerned, there's always the timeit module
    to tell you exactly what you're facing...
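    [Editorial aside: driving the timeit module from code, as suggested,
    might look like the following sketch. The function names and sizes
    are hypothetical, and it assumes a modern Python where
    `timeit.timeit()` is available.]

    ```python
    import timeit

    # Hypothetical micro-benchmark: a list-building function vs. its
    # generator equivalent, each fully consumed by a for loop.
    setup = """
    items = [str(n) for n in range(1000)]

    def with_list(items=items):
        result = []
        for item in items:
            result.append(item)
        return result

    def with_gen(items=items):
        for item in items:
            yield item
    """
    # timeit requires the setup source to be dedented
    setup = "\n".join(line[4:] for line in setup.splitlines())

    t_list = timeit.timeit("for x in with_list(): pass", setup=setup, number=1000)
    t_gen = timeit.timeit("for x in with_gen(): pass", setup=setup, number=1000)
    print("list: %.4fs  gen: %.4fs" % (t_list, t_gen))
    ```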

    -Peter
    Peter Hansen, Sep 23, 2004
    #2

  3. Re: Question: How efficient is using generators for coroutine-like problems?

    Carlos Ribeiro <> wrote:
    ...
    > result += ["Start of processing"]


    I would suggest result.append('Start of processing'). As you're
    concerned with performance, see...:

    kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]' 'r+=["goo"]'
    1000000 loops, best of 3: 1.45 usec per loop
    kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]' 'r.append("goo")'
    1000000 loops, best of 3: 0.953 usec per loop
    kallisti:~/cb alex$ python2.4 timeit.py -s'r=[]; a=r.append' 'a("goo")'
    1000000 loops, best of 3: 0.556 usec per loop

    i.e. you can get result.append into a local once at the start and end up
    almost 3 times faster than with your approach, but even without this
    you're still gaining handsomely with good old result.append vs your
    preferred approach (where a singleton list gets created each time).
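    [Editorial aside: in sketch form (hypothetical helper, modern
    Python), the bound-method trick being timed above looks like this.]

    ```python
    def build(items):
        # Bind result.append to a local name once; this avoids repeating
        # the attribute lookup on every iteration, which is why the third
        # timing above is the fastest.
        result = []
        append = result.append
        for item in items:
            append(str(item))
        return result

    print(build(range(3)))  # -> ['0', '1', '2']
    ```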


    > Now, just because I can do it does not mean it's a good idea :) For
    > particular cases, a measurement can be done. But I'm curious about the
    > generic case. What is the performance penalty of using generators in
    > situations as the ones shown above?


    Sorry, there's no "generic case" that I can think of. Since
    implementations of generators, list appends, etc, are allowed to change
    and get optimized at any transition 2.3 -> 2.4 -> 2.5 -> ... I see even
    conceptually no way to compare performance except on a specific case.

    "Generally" I would expect: if you're just looping on the result, a
    generator should _gain_ performance wrt making a list. Consider cr.py:

    x = map(str, range(333))

    def withapp(x=x):
        result = []
        a = result.append
        for item in x: a(item)
        return result

    def withgen(x=x):
        for item in x: yield item

    kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' 'for x in cr.withapp(): pass'
    1000 loops, best of 3: 220 usec per loop
    kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' 'for x in cr.withgen(): pass'
    1000 loops, best of 3: 200 usec per loop

    The difference is more pronounced in 2.3, with the generator clocking in
    at 280 usec, the appends at 370 (anybody who's interested in speed has
    hopefully already downloaded 2.4 and is busy trying it out -- I can't
    see any reason why not... even though one obviously can't yet deliver to
    customers stuff based on what's still an alpha release, of course, at
    least one can TRY it and pine for its general speedups;-).

    A join is different...:

    kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' '"\n".join(cr.withgen())'
    1000 loops, best of 3: 274 usec per loop
    kallisti:~/cb alex$ python2.4 timeit.py -s'import cr' '"\n".join(cr.withapp())'
    1000 loops, best of 3: 225 usec per loop

    (speed difference was less pronounced in 2.3, 360 vs 350). Yeah, I
    know, it's not easy to conceptualize -- if looping is faster why is
    joining slower? "Implementation details" of course, and just the kind
    of thing that might change any time, if some nice free optimization can
    be obtained hither or yon...!


    Alex
    Alex Martelli, Sep 26, 2004
    #3
  4. On Sun, 26 Sep 2004 11:47:03 +0200, Alex Martelli <> wrote:
    > i.e. you can get result.append into a local once at the start and end up
    > almost 3 times faster than with your approach, but even without this
    > you're still gaining handsomely with good old result.append vs your
    > preferred approach (where a singleton list gets created each time).


    Thanks for the info -- especially regarding the "a=result.append"
    trick. It's a good one, and it also helps to make the code less
    cluttered.

    > > Now, just because I can do it does not mean it's a good idea :) For
    > > particular cases, a measurement can be done. But I'm curious about the
    > > generic case. What is the performance penalty of using generators in
    > > situations as the ones shown above?

    >
    > Sorry, there's no "generic case" that I can think of. Since
    > implementations of generators, list appends, etc, are allowed to change
    > and get optimized at any transition 2.3 -> 2.4 -> 2.5 -> ... I see even
    > conceptually no way to compare performance except on a specific case.
    >
    > "Generally" I would expect: if you're just looping on the result, a
    > generator should _gain_ performance wrt making a list. Consider cr.py:


    That's my basic assumption. BTW, I had an alternative idea. I really
    don't need to use "\n".join(). All that I need is to concatenate the
    strings yielded by the generator, and the strings themselves should
    include the necessary line breaks. I'm not a timeit wizard, and a
    slow Windows machine isn't reliable enough for timing tight loops,
    but anyway here are my results:

    ---- timegen.py ----
    def mygen():
        for x in range(100):
            yield "%d: this is a test string.\n" % x

    def test_genlist():
        return "".join(list(mygen()))

    def test_addstr():
        result = ""
        for x in range(40):
            # there is no append method for strings :(
            result = result + "%d: this is a test string.\n" % x
        return result

    def test_gen_addstr():
        result = ""
        for x in mygen(): result = result + x
        return result

    I've added the "%" operator because most of the time we are going to
    work with generated strings, not with constants, and I thought it
    would give a better idea of the timing.

    >python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_genlist()"
    1000 loops, best of 3: 698 usec per loop

    >python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_addstr()"
    1000 loops, best of 3: 766 usec per loop

    >python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_gen_addstr()"
    1000 loops, best of 3: 854 usec per loop

    The test has shown that adding strings manually is just a tad slower
    than joining. But then I've decided to simplify my tests, and to add
    small constant strings. The results have surprised me:

    ---- timegen2.py ----
    def mygen():
        for x in range(100):
            yield "."

    def test_genlist():
        return "".join(list(mygen()))

    def test_addstr():
        result = ""
        for x in range(100):
            # there is no append method for strings :(
            result = result + "."
        return result

    def test_gen_addstr():
        result = ""
        for x in mygen(): result = result + x
        return result

    >python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_genlist()"
    1000 loops, best of 3: 368 usec per loop

    >python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr()"
    1000 loops, best of 3: 263 usec per loop

    >python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_gen_addstr()"
    1000 loops, best of 3: 385 usec per loop

    Now, it turns out that for this case, adding strings was *faster* than
    joining lines. But in all cases, the generator was slower than adding
    strings.

    Up to this point, the answer to my question is: generators _have_ a
    measurable performance penalty, and using a generator to return text
    line by line for later joining is slower than appending to the result
    string line by line. But the difference between testbeds 1 and 2 also
    shows that this is not the dominating factor in performance -- simply
    adding some string formatting made the test run much slower. Finally,
    I haven't done any testing using 2.4 yet, and generators are supposed
    to perform better with 2.4.

    .... and I was about to finish it here, but I decided to check another thing.

    There's still a catch. Generators were slower, because there is an
    implicit function call whenever a new value is requested. So I've
    added a new test:

    ---- timegen2.py ----
    def teststr():
        return "."

    def test_addstr_funccall():
        result = ""
        for x in range(100):
            result = result + teststr()
        return result

    >python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr_funccall()"
    1000 loops, best of 3: 436 usec per loop

    In this case, the code was measurably _slower_ than the generator
    version (436 vs 385 usec), and both are adding strings. It only shows
    how hard it is to optimize stuff -- depending on details, answers can
    be totally different.

    My conclusion, as of now, is that using generators in templating
    mechanisms is a valuable tool as far as readability is concerned, but
    it should not be done solely because of performance concerns. It may
    be faster in some cases, but it's slower in the simplest situations.
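    [Editorial aside: to make the templating idea concrete, here is a
    minimal sketch (hypothetical names, modern Python) of the pattern
    under discussion -- each section is a generator yielding lines,
    sections compose by re-yielding, and the caller joins once at the
    end.]

    ```python
    def header(title):
        # Each section yields its lines, including any decoration
        yield title
        yield "=" * len(title)

    def body(rows):
        for n, row in enumerate(rows):
            yield "%d: %s" % (n, row)

    def report(title, rows):
        # Sections compose by re-yielding their lines
        for line in header(title):
            yield line
        for line in body(rows):
            yield line

    # The caller decides when (and whether) to materialize the text
    text = "\n".join(report("Items", ["a", "b"]))
    print(text)
    ```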

    --
    Carlos Ribeiro
    Consultoria em Projetos
    blog: http://rascunhosrotos.blogspot.com
    blog: http://pythonnotes.blogspot.com
    Carlos Ribeiro, Sep 26, 2004
    #4
