shouldn't list comprehension be faster than for loops?


Carlos Grohmann

Hello all

I am testing my code with list comprehensions against for loops.

the loop:

dipList = [float(val[1]) for val in datalist]
dip1 = []
for dp in dipList:
    if dp == 90:
        dip1.append(dp - 0.01)
    else:
        dip1.append(dp)

listcomp:

dipList=[float(val[1]) for val in datalist]
dip1=[(dp, dp-0.01)[dp==90.0] for dp in dipList]


Testing the time spent by each approach (using time.clock()), with a
file with about 100,000 entries, I get 0.03s for the loop and 0.05s
for the listcomp.
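
(A self-contained way to repeat this measurement with timeit; the random data
below is only a stand-in for the real 100,000-entry file.)

import random
import timeit

dipList = [random.uniform(0.0, 90.0) for _ in range(100000)]

def with_loop():
    dip1 = []
    for dp in dipList:
        if dp == 90:
            dip1.append(dp - 0.01)
        else:
            dip1.append(dp)
    return dip1

def with_listcomp():
    return [(dp, dp - 0.01)[dp == 90.0] for dp in dipList]

# Each call processes the whole list; number=10 keeps the run short.
print(timeit.timeit(with_loop, number=10))
print(timeit.timeit(with_listcomp, number=10))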

thoughts?

TIA
Carlos
 

Alf P. Steinbach

* Carlos Grohmann:
Hello all

I am testing my code with list comprehensions against for loops.

the loop:

dipList = [float(val[1]) for val in datalist]
dip1 = []
for dp in dipList:
    if dp == 90:
        dip1.append(dp - 0.01)
    else:
        dip1.append(dp)

listcomp:

dipList=[float(val[1]) for val in datalist]
dip1=[(dp, dp-0.01)[dp==90.0] for dp in dipList]


Testing the time spent by each approach (using time.clock()), with a
file with about 100,000 entries, I get 0.03s for the loop and 0.05s
for the listcomp.

thoughts?

In the list comprehension you're constructing n tuples that you're not
constructing in the loop.

Have you tried this with

dip1 = [dp - 0.01 if dp == 90 else dp for dp in dipList]

?
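
(A small aside, not in the original post: the tuple-indexing trick always
evaluates both alternatives and builds a tuple, while the conditional
expression evaluates only the branch that is taken.)

def noisy(label, value):
    # Illustrative helper: report when an expression is evaluated.
    print("evaluated %s" % label)
    return value

dp = 45.0

# Tuple indexing: both expressions run, then one element is selected.
r1 = (noisy("dp", dp), noisy("dp - 0.01", dp - 0.01))[dp == 90.0]

# Conditional expression: only the needed branch runs.
r2 = noisy("dp - 0.01", dp - 0.01) if dp == 90 else noisy("dp", dp)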


Cheers & hth.,

- Alf
 

Tobias Weber

Carlos Grohmann said:
thoughts?

Well, for the loop you use an if statement, for the list you create a
tuple, so your benchmark is invalid. Try again.

Also, I wouldn't worry about speed; I'd just use whichever version reads better.
 

sturlamolden

Testing the time spent by each approach (using time.clock()), with a
file with about 100,000 entries, I get 0.03s for the loop and 0.05s
for the listcomp.

thoughts?

Anything else being equal, list comprehensions will be the faster
because they incur fewer name and attribute lookups. It will be the
same as the difference between a for loop and a call to map. A list
comprehension is basically an enhancement of map.
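
(One concrete illustration of the lookup cost, a sketch rather than anything
from the post above: in the explicit loop, dip1.append is looked up on every
iteration, and that lookup can be hoisted out by binding the method once.)

dip1 = []
append = dip1.append          # one attribute lookup instead of one per item
for dp in dipList:            # dipList as defined in the original post
    if dp == 90:
        append(dp - 0.01)
    else:
        append(dp)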
 

sturlamolden

Have you tried this with

   dip1 = [dp - 0.01 if dp == 90 else dp for dp in dipList]

And for comparison with map:

map(lambda dp: dp - 0.01 if dp == 90 else dp, dipList)
 

Carl Banks

Testing the time spent by each approach (using time.clock()), with a
file with about 100,000 entries, I get 0.03s for the loop and 0.05s
for the listcomp.

thoughts?

You shouldn't trust your intuition in things like this. Some features
were added to Python to make writing easier, not to make it run
faster. This time your intuition was correct. Next time, who knows?


Carl Banks
 

sturlamolden

Testing the time spent by each approach (using time.clock()), with a
file with about 100,000 entries, I get 0.03s for the loop and 0.05s
for the listcomp.

thoughts?

Let me ask a rhetorical question:

- How much do you really value 20 ms of CPU time?
 

Ryan Kelly

Anything else being equal, list comprehensions will be the faster
because they incur fewer name and attribute lookups. It will be the
same as the difference between a for loop and a call to map. A list
comprehension is basically an enhancement of map.

Not so. If you use the "dis" module to peek at the bytecode generated
for a list comprehension, you'll see it's very similar to that generated
for an explicit for-loop. The byte-code for a call to map is very
different.

Basically: both a for-loop and a list-comp do the looping in python
bytecode, while a call to map will do the actual looping in C.
>>> import dis
>>> def lister():
...     return [i*2 for i in xrange(10)]
...
>>> dis.dis(lister)
  2           0 BUILD_LIST               0
              3 DUP_TOP
              4 STORE_FAST               0 (_[1])
              7 LOAD_GLOBAL              0 (xrange)
             10 LOAD_CONST               1 (10)
             13 CALL_FUNCTION            1
             16 GET_ITER
        >>   17 FOR_ITER                17 (to 37)
             20 STORE_FAST               1 (i)
             23 LOAD_FAST                0 (_[1])
             26 LOAD_FAST                1 (i)
             29 LOAD_CONST               2 (2)
             32 BINARY_MULTIPLY
             33 LIST_APPEND
             34 JUMP_ABSOLUTE           17
        >>   37 DELETE_FAST              0 (_[1])
             40 RETURN_VALUE
>>> def maper():
...     return map(lambda i: i*2, xrange(10))
...
>>> dis.dis(maper)
  2           0 LOAD_GLOBAL              0 (map)
              3 LOAD_CONST               1 (<code object ...)
              6 MAKE_FUNCTION            0
              9 LOAD_GLOBAL              1 (xrange)
             12 LOAD_CONST               2 (10)
             15 CALL_FUNCTION            1
             18 CALL_FUNCTION            2
             21 RETURN_VALUE


Cheers,

Ryan

--
Ryan Kelly
http://www.rfk.id.au | This message is digitally signed. Please visit
(e-mail address removed) | http://www.rfk.id.au/ramblings/gpg/ for details


 

Gregory Ewing

Ryan said:
Not so. If you use the "dis" module to peek at the bytecode generated
for a list comprehension, you'll see it's very similar to that generated
for an explicit for-loop.

The usual advice is that if you have a built-in function that
does what you want done for each element, then using map() is
probably the fastest way.

However, if you need to create a Python function to pass to
map(), the list comprehension may well be faster, because it
avoids the cost of a Python function call per element.
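
(A rough sketch of both cases with timeit; float as the built-in, a trivial
lambda as the Python-level function. Exact numbers will vary, but the pattern
usually matches the advice above. Python 3 syntax, hence the list() around map.)

import timeit

setup = "data = list(range(1000))"

# Built-in doing the per-element work: map is typically fastest here.
print(timeit.timeit("list(map(float, data))", setup=setup, number=1000))

# Python-level function: the comprehension avoids one function call per element.
print(timeit.timeit("list(map(lambda x: x*2 + 1, data))", setup=setup, number=1000))
print(timeit.timeit("[x*2 + 1 for x in data]", setup=setup, number=1000))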
 

Steven D'Aprano

Not so. If you use the "dis" module to peek at the bytecode generated
for a list comprehension, you'll see it's very similar to that generated
for an explicit for-loop. The byte-code for a call to map is very
different.

"Very similar" and "very different" byte-code mean very little regarding
speed.

Basically: both a for-loop and a list-comp do the looping in python
bytecode, while a call to map will do the actual looping in C.

This is a classic example of the confirmation fallacy -- if you claim that
for-loops and list-comps are very similar, you need to actually check the
byte-code of both, and you haven't. You also need to compare the byte-code of
all three operations, not just two of them, e.g.:

dis.dis(compile("map(f, seq)", '', 'exec'))
dis.dis(compile("[f(x) for x in seq]", '', 'exec'))
dis.dis(compile("L = []\nfor x in seq: L.append(f(x))", '', 'exec'))

But in fact just looking at the byte-code isn't helpful, because it tells
you nothing about the relative speed of each operation. You need to
actually time the operations.
>>> from timeit import Timer
>>> t1 = Timer("map(len, 'abcdefgh')", setup='')
>>> t2 = Timer("[len(c) for c in 'abcdefgh']", setup='')
>>> t3 = Timer("""L = []
... for c in 'abcdefgh':
...     L.append(len(c))
... """, setup='')
>>> t3.timeit()
7.4744069576263428


So, on my PC, with Python 2.5, with this example, a for-loop is about 60%
slower than a list comp and about 90% slower than map; the list comp is
about 20% slower than map.

But that only holds for *that* example. Here's another one:

>>> def f(x):
...     return 1+2*x+3*x**2
...
>>> values = [1,2,3,4,5,6]
>>> t1 = Timer("map(f, values)", setup='from __main__ import f, values')
>>> t2 = Timer("[f(x) for x in values]",
...            setup='from __main__ import f, values')
>>> t3 = Timer("""L = []
... for x in values:
...     L.append(f(x))
... """, setup='from __main__ import f, values')
>>> t3.timeit()
9.1957418918609619


For this example map and the list comp are nearly the same speed, with
map slightly slower; but the for-loop is still significantly worse.

Of course, none of these timing tests are terribly significant. The
actual difference in time is of the order of a millionth of a second per
call to map compared to the list comp or the for-loop, for these small
examples. Most of the time you shouldn't care about time differences of
that magnitude, and write whatever is easiest.
 

sturlamolden

Not so.  If you use the "dis" module to peek at the bytecode generated
for a list comprehension, you'll see it's very similar to that generated
for an explicit for-loop.  The byte-code for a call to map is very
different.

First, you failed to realize that the bytecode is different because
map is doing the work in C.

Second, you did not provide bytecode for the for-loop.
 
