Method much slower than function?

idoerg

Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

################# START SOURCE #############
# The function

def readgenome(filehandle):
    s = ''
    for line in filehandle.xreadlines():
        if '>' in line:
            continue
        s += line.strip()
    return s

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            self.s += line.strip()

################# END SOURCE ##############
When running the function and the method on a 20,000 line text file, I
get the following:
20004 function calls in 10.214 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1    0.000    0.000   10.214   10.214  <string>:1(<module>)
     1   10.205   10.205   10.214   10.214  reader.py:11(readgenome)
     1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
 19999    0.009    0.000    0.009    0.000  {method 'strip' of 'str' objects}
     1    0.000    0.000    0.000    0.000  {method 'xreadlines' of 'file' objects}
     1    0.000    0.000    0.000    0.000  {open}

20004 function calls in 0.041 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1    0.000    0.000    0.041    0.041  <string>:1(<module>)
     1    0.035    0.035    0.041    0.041  reader.py:2(readgenome)
     1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
 19999    0.007    0.000    0.007    0.000  {method 'strip' of 'str' objects}
     1    0.000    0.000    0.000    0.000  {method 'xreadlines' of 'file' objects}
     1    0.000    0.000    0.000    0.000  {open}


The method takes > 10 seconds, the function call 0.041 seconds!

Yes, I know that I wrote the underlying code rather inefficiently, and
I can streamline it with a single file.read() call instead of an
xreadlines() + strip loop. Still, the differences in performance are
rather staggering! Any comments?

Thanks,

Iddo
 
Neil Cerutti

idoerg said:
I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.
(snip code and profiler output)
The method takes > 10 seconds, the function call 0.041 seconds!

Yes, I know that I wrote the underlying code rather
inefficiently, and I can streamline it with a single
file.read() call instead of an xreadlines() + strip loop.
Still, the differences in performance are rather staggering!
Any comments?

It is likely the repeated attribute lookup, self.s, that's
slowing it down in comparison to the non-method version.

Try the following simple optimization, using a local variable
instead of an attribute to build up the result.

# The method in a class
class bar:
    def readgenome(self, filehandle):
        s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            s += line.strip()
        self.s = s

To further speed things up, think about using the str.join idiom
instead of str.+=, and using a generator expression instead of an
explicit loop.

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''.join(line.strip() for line in filehandle)
 
Gabriel Genellina

idoerg said:
(snip)
# The function

def readgenome(filehandle):
    s = ''
    for line in filehandle.xreadlines():
        if '>' in line:
            continue
        s += line.strip()
    return s

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            self.s += line.strip()
In the function above, s is a local variable, and accessing local
variables is very efficient (locals are stored in an array, and the
compiler statically assigns an index to each one).
Using self.s, on the other hand, requires a name lookup for each access.
The most obvious change would be to use a local variable s, and assign
self.s = s only at the end. This should make the method and the function
almost identical in performance.
In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)
And since you have Python 2.5, you can use the file as its own iterator;
combining all this:

return ''.join(line.strip() for line in filehandle if '>' not in line)
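
A small sketch (not from the thread; the function names are made up for
illustration) showing the bytecode difference Gabriel describes, using the
dis module:

import dis

def use_local():
    s = ''
    s += 'x'            # compiles to LOAD_FAST / STORE_FAST: indexed array access

class Bar(object):
    def use_attr(self):
        self.s = ''
        self.s += 'x'   # compiles to LOAD_ATTR / STORE_ATTR: name lookup per access

dis.dis(use_local)      # shows LOAD_FAST / STORE_FAST for s
dis.dis(Bar.use_attr)   # shows LOAD_ATTR / STORE_ATTR for self.s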
 
Grant Edwards

idoerg said:
The method takes > 10 seconds, the function call 0.041 seconds!

What happens when you run them in the other order?

The first time you read the file, it has to read it from disk.
The second time, it's probably just reading from the buffer
cache in RAM.
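
One way to rule that out (a sketch: reader.py holding both versions comes
from the profile output above, but the data filename here is hypothetical)
is to warm the cache once and then time each variant on a freshly opened
handle:

import timeit

setup = """
from reader import readgenome, bar
open('genome.txt').read()    # warm the OS buffer cache once
"""

# Each statement opens its own fresh handle, so neither variant inherits
# the other's file position or pays the cold-cache penalty alone.
print timeit.Timer("readgenome(open('genome.txt'))", setup).repeat(3, 1)
print timeit.Timer("bar().readgenome(open('genome.txt'))", setup).repeat(3, 1)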
 
sjdevnull

Gabriel said:
In the function above, s is a local variable, and accessing local
variables is very efficient (using an array of local variables, the
compiler assigns statically an index for each one).
Using self.s, on the other hand, requires a name lookup for each access.
The most obvious change would be to use a local variable s, and assign
self.s = s only at the end. This should make both methods almost identical
in performance.

Yeah, that's a big deal and makes a significant difference on my
machine.

Gabriel said:
In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)

Ehh. Python 2.5 (and probably some earlier versions) optimize += on
strings pretty well.

a=""
for i in xrange(100000):
a+="a"

and:

a=[]
for i in xrange(100000):
a.append("a")
a="".join(a)

take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.
Gabriel said:
And since you have Python 2.5, you can use the file as its own iterator;
combining all this:

return ''.join(line.strip() for line in filehandle if '>' not in line)

That's probably pretty good.
 
Paul Rubin

sjdevnull said:
take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue here.

You should not rely on using 2.5 or even on that optimization staying in
CPython. Best is to use StringIO or something comparable.
 
Gabriel Genellina

On Thu, 14 Jun 2007 01:39:29 -0300, (e-mail address removed) wrote:
Gabriel said:
In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)

Ehh. Python 2.5 (and probably some earlier versions) optimize += on
strings pretty well.

a=""
for i in xrange(100000):
a+="a"

and:

a=[]
for i in xrange(100000):
a.append("a")
a="".join(a)

take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.

Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...
 
Leo Kislov

idoerg said:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.
(snip)


20004 function calls in 10.214 CPU seconds

20004 function calls in 0.041 CPU seconds

I suspect open files are cached, so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.
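
A sketch of how that could happen (assuming the OP's profiling script
reused one handle for both calls; the filename is hypothetical):

fh = open('genome.txt')
print len(readgenome(fh))   # first call consumes the file up to EOF
print len(readgenome(fh))   # same handle again: prints 0, nothing left to read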

-- Leo
 
Bruno Desthuilliers

Neil Cerutti wrote:
(snip)
class bar:
    def readgenome(self, filehandle):
        self.s = ''.join(line.strip() for line in filehandle)

=>
        self.s = ''.join(line.strip() for line in filehandle
                         if not '>' in line)
 
Peter Otten

Gabriel said:
(snip)
Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

I can't confirm this.

$ cat join.py
def f1():
    a = ""
    for i in xrange(100000):
        a += str(i)*20

def f2():
    a = []
    for i in xrange(100000):
        a.append(str(i)*20)
    a = "".join(a)

def f3():
    a = []
    append = a.append
    for i in xrange(100000):
        append(str(i)*20)
    a = "".join(a)

$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop

Peter
 
Peter Otten

Leo said:
I suspect open files are cached, so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

Indeed, the effect of attribute access is much smaller than what the OP is
seeing:

$ cat iadd.py
class A(object):
    def add_attr(self):
        self.x = 0
        for i in xrange(100000):
            self.x += 1

    def add_local(self):
        x = 0
        for i in xrange(100000):
            x += 1

add_local = A().add_local
add_attr = A().add_attr
$ python2.5 -m timeit -s 'from iadd import add_local' 'add_local()'
10 loops, best of 3: 21.6 msec per loop
$ python2.5 -m timeit -s 'from iadd import add_attr' 'add_attr()'
10 loops, best of 3: 52.2 msec per loop

Peter
 
Francesco Guerrieri

Gabriel said:
(snip: f2 ran in about 0.43 s; f1 was aborted after a few minutes)

Peter said:
I can't confirm this.
(snip: f1 212 msec, f2 259 msec, f3 236 msec per loop)

On my machine, instead:
print timeit.Timer("f2()", "from __main__ import f2").repeat(number = 1) [0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
print timeit.Timer("f1()", "from __main__ import f1").repeat(number = 1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

So it seems that there is a rather sizable difference.
What's the reason for the apparent inconsistency with Peter's test?

Francesco
 
Christof Winter

Gabriel Genellina wrote:
(snip code and timings: f2 ran in about 0.43 s; f1 was aborted after a few
minutes)

Using

Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2

f1() and f2() also take virtually the same amount of time, although I must
admit that this is quite different from what I expected.

Cheers,
Christof
 
Grant Edwards

Leo said:
I suspect open files are cached

They shouldn't be.

Leo said:
so the second reader picks up where the first one left off: at the
end of the file.

That sounds like a bug. Opening a file a second time should
produce a "new" file object with the file-pointer at the
beginning of the file.
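
That is easy to check with two independent handles (a sketch; the filename
is hypothetical):

f1 = open('genome.txt')
f1.read()                    # leaves f1's file pointer at EOF
f2 = open('genome.txt')      # a second open() returns a new file object
print f2.tell()              # prints 0: the new pointer starts at the beginning
print f1.tell() == f2.tell() # prints False: the two positions are independent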
 
Matimus

Grant said:
The first time you read the file, it has to read it from disk.
The second time, it's probably just reading from the buffer
cache in RAM.

I can verify this type of behavior when reading large files. Opening
the file doesn't take long, but the first read will take a while
(multiple seconds depending on the size of the file). When the file is
opened a second time, the initial read takes significantly less time.

Matt
 
Diez B. Roggisch

Grant said:
They shouldn't be.


That sounds like a bug. Opening a file a second time should
produce a "new" file object with the file-pointer at the
beginning of the file.

It's an OS thing.

Diez
 
Peter Otten

Peter said:
Indeed, the effect of attribute access is much smaller than what the OP is
seeing:
(snip iadd.py and timings)

I have to take that back.

Iddo, adding integers is not a good model for the effect you are seeing.
Caching, while happening on the OS level, isn't either.

As already mentioned in this thread there is a special optimization for

some_string += another_string

in the Python C-source. This optimization works by mutating on the C-level
the string that is immutable on the Python-level, and is limited to cases
where there are no other names referencing some_string. The statement

self.s += t

is executed internally as

tmp = self.s    # [1]
tmp += t        # [2]
self.s = tmp    # [3]

where tmp is not visible from the Python level. Unfortunately after [1]
there are two references to the string in question (tmp and self.s) so the
optimization cannot kick in.
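
A minimal sketch (assuming CPython 2.x; names and sizes are illustrative)
of how a second reference defeats the optimization, analogous to the
self.s case:

import time

def one_ref(n):
    s = ''
    for i in xrange(n):
        s += 'x' * 50   # s has only one reference: += can resize in place
    return s

def two_refs(n):
    s = ''
    for i in xrange(n):
        extra = s       # a second name now references the string
        s += 'x' * 50   # optimization can't kick in: the whole string is copied
    return s

for f in (one_ref, two_refs):
    t0 = time.time()
    f(20000)
    print f.__name__, time.time() - t0

The second loop is the self.s situation in miniature: the extra reference
forces a full copy on every iteration, which is quadratic in the final
string length.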

Peter
 
Chris Mellon

Peter said:
I have to take that back.

Your tests (which I have snipped) show attribute access being about 3x
slower than local access, which is consistent with my own tests. The
OP is seeing a speed difference of 2 orders of magnitude. That's far
outside the range that attribute access should account for.
 
Grant Edwards

However, having said that, the speed difference does seem to be real: even
when I correct the above issue, I get a large time difference using
either cProfile.run() or profile.run(), and timeit agrees:

0.1940619945526123

That's a difference of two orders of magnitude, and I can't see why.

Is it independent of the test order?

What happens when you reverse the order?

What happens if you run the same test twice in a row?
 
