Method much slower than function?


Steven D'Aprano

20004 function calls in 10.214 CPU seconds

This calls the method on the CLASS, instead of an instance. When I try it,
I get this:

TypeError: unbound method readgenome() must be called with bar instance as
first argument (got file instance instead)

So you're running something subtly different than what you think you're
running. Maybe you assigned bar = bar() at some point?

However, having said that, the speed difference does seem to be real: even
when I correct the above issue, I get a large time difference using
either cProfile.run() or profile.run(), and timeit agrees: the function
version completes in about 0.19 seconds (0.1940619945526123), while the
method version takes far longer.

That's a difference of two orders of magnitude, and I can't see why.
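
For illustration, here is a minimal Python 2 sketch of that class-vs-instance
mixup (the class and file names are invented, not the OP's):

class Foo(object):                     # stand-in for the OP's class
    def readgenome(self, filehandle):
        self.s = "".join(line.strip() for line in filehandle)

fh = open("some_data.txt")             # hypothetical input file
# Foo.readgenome(fh)   # TypeError: unbound method readgenome() must be
#                      # called with Foo instance as first argument
Foo().readgenome(fh)   # correct: call the method on an instance
fh.close()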
 

Peter Otten

Chris said:
Your tests (which I have snipped) show attribute access being about 3x
slower than local access, which is consistent with my own tests. The
OP is seeing a speed difference of 2 orders of magnitude. That's far
outside the range that attribute access should account for.

Not if it conspires to defeat an optimization for string concatenation:

$ cat iadd.py
class A(object):
    def add_attr(self):
        self.x = ""
        for i in xrange(10000):
            self.x += " yadda"
    def add_local(self):
        x = ""
        for i in xrange(10000):
            x += " yadda"

add_local = A().add_local
add_attr = A().add_attr
$ python2.5 -m timeit -s'from iadd import add_local' 'add_local()'
100 loops, best of 3: 3.15 msec per loop
$ python2.5 -m timeit -s'from iadd import add_attr' 'add_attr()'
10 loops, best of 3: 83.3 msec per loop

As the length of self.x grows, performance will continue to degrade.
The original test is worthless, as I tried to explain in the section you
snipped.
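
For completeness, a variant not in iadd.py (the class and method names below
are made up) that restores the fast path: concatenate onto a local name inside
the loop and write the attribute only once at the end.

class C(object):
    def add_local_then_store(self):
        x = ""                       # concatenate onto a local name...
        for i in xrange(10000):
            x += " yadda"
        self.x = x                   # ...and store the attribute just once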

Peter
 

Steven D'Aprano

Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.


After further testing, I think I have found the cause of the speed
difference -- and it isn't that the code is a method.

Here's my test code:


def readgenome(filehandle):
    s = ""
    for line in filehandle.xreadlines():
        s += line.strip()

class SlowClass:
    def readgenome(self, filehandle):
        self.s = ""
        for line in filehandle.xreadlines():
            self.s += line.strip()

class FastClass:
    def readgenome(self, filehandle):
        s = ""
        for line in filehandle.xreadlines():
            s += line.strip()
        self.s = s


Now I test them. For brevity, I am leaving out the verbose profiling
output, and just showing the total function calls and CPU time.

readgenome (function):  20005 function calls in 0.071 CPU seconds
SlowClass.readgenome:   20005 function calls in 4.030 CPU seconds
FastClass.readgenome:   20005 function calls in 0.077 CPU seconds


So you can see that the slow-down for calling a method (compared to a
function) is very small.

I think what we're seeing in the SlowClass case is the "normal" speed of
repeated string concatenations. That's REALLY slow. In the function and
FastClass cases, CPython's in-place concatenation optimization can kick in,
because the target of += is a plain local variable; it never applies when
the target is an instance attribute, which is what makes SlowClass so slow.

So, nothing to do with methods vs. functions, and everything to do with
the O(N**2) behaviour of repeated string concatenation.
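
For what it's worth, a sketch (mine, not from the test code above) of a
variant that sidesteps the quadratic behaviour entirely, whether the result
ends up in a local or an attribute, is to collect the pieces and join them
once:

class JoinClass:
    def readgenome(self, filehandle):
        # Accumulate the stripped lines and build the string in one pass;
        # this is O(N) no matter where the result is stored.
        self.s = "".join(line.strip() for line in filehandle)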
 

sjdevnull

On Thu, 14 Jun 2007 01:39:29 -0300, (e-mail address removed)
<[email protected]> wrote:


Ehh. Python 2.5 (and probably some earlier versions) optimize += on
strings pretty well.
a=""
for i in xrange(100000):
a+="a"

a=[]
for i in xrange(100000):
a.append("a")
a="".join(a)
take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.

Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

Are you using an old version of python? I get a fairly small
difference between the 2:

Python 2.5 (r25:51908, Jan 23 2007, 18:42:39)
[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on ELIDED
Type "help", "copyright", "credits" or "license" for more information.
>>> import timeit
>>> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
>>> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
>>> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.91355299949645996, 0.86561012268066406, 0.84371185302734375]
>>> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
[0.94637894630432129, 0.89946198463439941, 1.170320987701416]
 

sjdevnull

You should not rely on using 2.5

I use generator expressions and passed-in values to generators and
other features of 2.5. Whether or not to rely on a new version is
really a judgement call based on how much time/effort/money the new
features save you vs. the cost of losing portability to older
versions.
or even on that optimization staying in CPython.

You also shouldn't count on dicts being O(1) on lookup, or "i in
myDict" being faster than "i in myList". A lot of quality of
implementation issues outside of the language specification have to be
considered when you're worried about running time.

Unlike fast dictionary lookup, the += optimization in CPython is at least
mentioned in the docs (which also note that "".join is greatly preferred
if you're working across different versions and implementations).
Best is to use StringIO or something comparable.

Yes, or the join() variant.
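
For concreteness, a rough sketch of both alternatives (the function names
are illustrative, not from the thread):

from cStringIO import StringIO

def build_with_stringio(lines):
    # Accumulate in a memory buffer and extract the result once.
    buf = StringIO()
    for line in lines:
        buf.write(line.strip())
    return buf.getvalue()

def build_with_join(lines):
    # Collect the stripped pieces and join them in a single pass.
    return "".join(line.strip() for line in lines)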
 

Josiah Carlson

I use generator expressions and passed-in values to generators and
other features of 2.5.

For reference, generator expressions are a 2.4 feature.
You also shouldn't count on dicts being O(1) on lookup, or "i in
myDict" being faster than "i in myList".

Python dictionaries (and most decent hash table implementations) may not
be O(1) technically, but they are expected O(1) and perform O(1) in
practice (at least for the Python implementations). If you have
particular inputs that force Python dictionaries to perform poorly (or
as slow as 'i in lst' for large dictionaries and lists), then you should
post a bug report in the sourceforge tracker.
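
As a rough illustration of that membership-test gap (this snippet is mine,
not from the post), compare an average-case O(1) hash lookup with an O(n)
scan over a list:

import timeit

setup = "d = dict.fromkeys(xrange(100000)); l = range(100000)"
print timeit.Timer("99999 in d", setup).timeit(10000)  # hash lookup: fast
print timeit.Timer("99999 in l", setup).timeit(10000)  # linear scan: far slower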


- Josiah
 

Josiah Carlson

Francesco said:
Gabriel said:
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

I can't confirm this.
[...]

$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop
print timeit.Timer("f2()", "from __main__ import f2").repeat(number
= 1) [0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
print timeit.Timer("f1()", "from __main__ import f1").repeat(number
= 1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

So it seems that there is a rather sizeable difference.
What's the reason for the apparent inconsistency with Peter's test?

It sounds like a platform memory resize difference.


- Josiah
 

Gabriel Genellina

On Thu, 14 Jun 2007 05:54:25 -0300, Francesco Guerrieri wrote:
Gabriel said:
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

I can't confirm this.
[...]

$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop
print timeit.Timer("f2()", "from __main__ import f2").repeat(number =
1) [0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
print timeit.Timer("f1()", "from __main__ import f1").repeat(number =
1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

So it seems that there is a rather sizeable difference.
What's the reason for the apparent inconsistency with Peter's test?

I left the test running and went to sleep. Now, the results:

C:\TEMP>python -m timeit -s "from join import f1" "f1()"
10 loops, best of 3: 47.7 sec per loop

C:\TEMP>python -m timeit -s "from join import f2" "f2()"
10 loops, best of 3: 317 msec per loop

C:\TEMP>python -m timeit -s "from join import f3" "f3()"
10 loops, best of 3: 297 msec per loop

Yes, 47.7 *seconds* to build the string using the += operator.
I don't know what makes the difference: Python version (this is not 2.5.1
final), hardware, OS... but on this PC it is certainly *very* significant.
Memory usage was around 40MB (for a 10MB string) and CPU usage went to 99%
(!).
 
