Some optimization tale

Stephan Diehl

A while ago, I posted a recipe about finding the common prefix of a list
of strings. The recipe itself is quite bad (I have to admit), and I
didn't know at the time that this problem was already solved in the
os.path module.
The interesting part can be found in the comments, as this turned out to
be a quest for the most efficient algorithm for this particular problem.
All of this can be found at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252177

What surprised me most was the inefficiency of the trivial solutions
(and that the right algorithm does indeed make a difference).

If you have read this far, you might be interested in the actual algorithms.
(If you are interested in the one-liner versions, please have a look at the
recipe.)
Here they are:

-- os.path.commonprefix --------------------------------------------------

def f1(m):
    "Given a list of pathnames, returns the longest common leading component"
    if not m: return ''
    prefix = m[0]
    for item in m:
        for i in range(len(prefix)):
            if prefix[:i+1] != item[:i+1]:
                prefix = prefix[:i]
                if i == 0:
                    return ''
                break
    return prefix

---------------------------------------------------------------------------
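
(An illustrative call, mine rather than the recipe's:)

    >>> f1(['/usr/local/bin', '/usr/local/lib', '/usr/lib'])
    '/usr/l'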

The problem with this algorithm is the copying of all those small strings.
This can easily be fixed:

--- optimized os.path.commonprefix ----------------------------------------

def f2(m):
    "Given a list of pathnames, returns the longest common leading component"
    if not m: return ''
    if len(m) == 1: return m[0]
    prefix = m[0]
    for i in xrange(len(prefix)):
        for item in m[1:]:
            if prefix[i] != item[i]:
                return prefix[:i]
    return prefix[:i]

---------------------------------------------------------------------------

Now it gets interesting. It turns out that the above algorithm doesn't
scale well.

An anonymous submitter suggested the following:

---- by anonymous ---------------------------------------------------------

def f3(seq):
    if not seq: return ""
    seq.sort()
    s1, s2 = seq[0], seq[-1]
    l = min(len(s1), len(s2))
    if l == 0:
        return ""
    for i in xrange(l):
        if s1[i] != s2[i]:
            return s1[0:i]
    return s1[0:l]

---------------------------------------------------------------------------

It is just not necessary to compare all strings in the list. It is enough to
sort the list first and then compare the first and the last element: every
other string sorts between these two, so any prefix they share is shared by
all. Even though the 'sort' algorithm is coded in C and is therefore quite
fast, the order of growth of the runtime has changed.

Michael Dyck then pointed out that instead of 'sort', 'min' and 'max'
should be used. While tests suggest that this is true, I have no idea why
that should be, since finding a minimum or maximum does some sorting anyway
(unless we have some quantum computer at hand), so my reasoning would be
that sorting once should be faster than computing both the maximum and the
minimum.
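
(As an aside, and not part of the original post: a quick way to compare the
two approaches in modern Python 3 would be something like the following.)

    import timeit

    setup = """
    import random, string
    random.seed(0)
    words = ['prefix_' + ''.join(random.choices(string.ascii_lowercase, k=10))
             for _ in range(1000)]
    """
    # Sorting once versus two linear scans with min/max.
    print(timeit.timeit("s = sorted(words); s[0], s[-1]", setup=setup, number=1000))
    print(timeit.timeit("min(words), max(words)", setup=setup, number=1000))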

You might have realized that the optimization so far was done on the number
of strings. There is still another dimension to optimize in, and that is the
actual string comparison.
Raymond Hettinger suggests using a binary search:

---------------------------------------------------------------------------
def f4(m):
    "Given a list of pathnames, returns the longest common leading component"
    if not m: return ''
    a, b = min(m), max(m)
    lo, hi = 0, min(len(a), len(b))
    while lo < hi:
        mid = (lo+hi)//2 + 1
        if a[lo:mid] == b[lo:mid]:
            lo = mid
        else:
            hi = mid - 1
    return a[:hi]
----------------------------------------------------------------------------
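
(A worked example, mine rather than the thread's: for
m = ['interspecies', 'interstellar', 'interstate'], min and max are
'interspecies' and 'interstellar', and the binary search settles on a shared
length of 6 after three slice comparisons:)

    >>> f4(['interspecies', 'interstellar', 'interstate'])
    'inters'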


To give you some idea of the running times:

    f1          f3          f4          # of strings

    0.131058    0.021471    0.012050      2
    0.214896    0.041648    0.012963      4
    0.401236    0.020444    0.014707      8
    0.841738    0.026415    0.018589     16
    1.670606    0.039348    0.029020     32
    3.184446    0.065657    0.044247     64
    6.257635    0.123510    0.072568    128


Every measurement was repeated 200 times.
Furthermore, the test set consists of only two distinct strings, so the
binary-search part of Raymond's solution comes in only as a constant factor.

Anyway, the fastest solution is up to 100 times faster than the trivial
one.
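
(The timing harness itself wasn't posted; a hypothetical reconstruction in
modern Python 3, using timeit in place of whatever loop was actually used:)

    import timeit

    # Hypothetical reconstruction of the test setup: n strings drawn from
    # only two distinct values sharing a long prefix, each function run
    # 200 times. Assumes f1, f3, f4 are defined as above.
    def make_testset(n):
        return ['commonprefix_' + tail for tail in ('aaaa', 'bbbb')] * (n // 2)

    for n in (2, 4, 8, 16, 32, 64, 128):
        data = make_testset(n)
        times = [timeit.timeit(lambda f=f: f(list(data)), number=200)
                 for f in (f1, f3, f4)]
        print(['%f' % t for t in times], n)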

Cheers

Stephan
 
Terry Reedy

Stephan Diehl said:

> All of this can be found at
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252177
>
> What surprised me most was the inefficiency of the trivial solutions
> (and that the right algorithm does indeed make a difference).

A common surprise. The science of algorithms (including empirical testing)
gives real benefits.
> -- os.path.commonprefix --------------------------------------------------
>
> def f1(m):
>     "Given a list of pathnames, returns the longest common leading component"
>     if not m: return ''
>     prefix = m[0]

    prefix = m.pop() # avoids comparing prefix to itself as the first item

>     for item in m:
>         for i in range(len(prefix)):
>             if prefix[:i+1] != item[:i+1]:

I am 99% sure that the above is a typo and performance bug that should read

    if prefix[i:i+1] != item[i:i+1]:

since all previous chars have been found equal in previous iterations.

>                 prefix = prefix[:i]
>                 if i == 0:
>                     return ''
>                 break
>     return prefix

Perhaps you could retest this with the two suggested changes. It will be
somewhat faster.
> The problem with this algorithm is the copying of all those small
> strings.

Since Python does not have pointers, comparing characters within strings
requires pulling out the chars as separate strings. This is why using
C-coded comparison functions may win even though more comparisons are done.

The reason f1 uses slicing (of length 1 after my correction) instead of
indexing is to avoid exceptions when len(item) < len(prefix). However, all
the +1s have a cost (and I suspect slicing does also), so it may pay to
truncate prefix to the length of item first. The simplest fix for this
(untested) gives

def f9(m): # modified f1 == os.path.commonprefix
    "Given a list of strings, returns the longest common prefix"
    if not m: return ''
    prefix = m.pop()
    for item in m:
        prefix = prefix[:len(item)]
        for i in range(len(prefix)):
            if prefix[i] != item[i]:
                if not i: return ''
                prefix = prefix[:i]
                break
    return prefix
> This can easily be fixed:
>
> --- optimized os.path.commonprefix ----------------------------------------
>
> def f2(m):
>     "Given a list of pathnames, returns the longest common leading component"
>     if not m: return ''
>     if len(m) == 1: return m[0]
>     prefix = m[0]
>     for i in xrange(len(prefix)):
>         for item in m[1:]:
>             if prefix[i] != item[i]:
>                 return prefix[:i]
>     return prefix[:i]


and easily messed up ;-) If len(item) < len(prefix), item[i] throws an
exception. For this approach to work, prefix should be set to the shortest
string of m in a preliminary loop. Also, you reslice m[1:] on every outer
iteration; do it once before the outer loop.
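
(A sketch of f2 with both of Terry's fixes applied; my reconstruction for
illustration, not code from the thread. Note that it also returns the full
prefix when the outer loop completes, where the original's final
return prefix[:i] drops the last character.)

    def f2_fixed(m):
        "Given a list of pathnames, returns the longest common leading component"
        if not m: return ''
        prefix = min(m, key=len)   # shortest string, so item[i] can never raise
        for i in range(len(prefix)):
            for item in m:         # no reslicing; comparing prefix to itself is harmless
                if prefix[i] != item[i]:
                    return prefix[:i]
        return prefix              # the whole shortest string is a common prefix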
> It is just not necessary to compare all strings in the list.

Every string has to be compared to something at least once.
> It is enough to sort the list first and then compare the first and the
> last element.

Sorting compares every string in the list to something at least once, and
most of them more than once.
> Even though the 'sort' algorithm is coded in C and is therefore quite
> fast, the order of growth of the runtime has changed.

The C part is what makes f3 faster. In your timings, 128 strings is not
large enough for the n·log(n) component to be noticeable.
> Michael Dyck then pointed out that instead of 'sort', 'min' and 'max'
> should be used. While tests suggest that this is true, I have no idea
> why that should be, since finding a minimum or maximum does some
> sorting anyway

No. max and min each do a linear scan. No sorting. But each does at
least as many character comparisons as modified f1 or f2. The speedup is
from looping and comparing in C, even though at least twice as many
compares are done.
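
(For illustration only, not the actual C code: min over a list of strings
amounts to a single linear pass.)

    def my_min(seq):
        it = iter(seq)
        best = next(it)      # first element; no sorting anywhere
        for s in it:
            if s < best:     # lexicographic comparison, done in C for str
                best = s
        return best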
> You might have realized that the optimization so far was done on the
> number of strings. There is still another dimension to optimize in, and
> that is the actual string comparison.
> Raymond Hettinger suggests using a binary search:

Since this only affects the final comparison of min and max, and not the n
comparisons done to calculate each, the effect is minimal and constant,
independent of the number of strings.

Since this compares slices rather than chars in each loop, I wonder whether
this is really faster than a linear scan anyway. I would like to see timing
of f5, with the min/max of f4 combined with the linear scan of f3
(basically, f3 with sort removed and min/max added). Since you changed two
parts of f3 to get f4, we cannot be sure that both changes are each an
improvement even though the combination of the two is.

def f5(seq):
    if not seq: return ''
    s1 = min(seq)
    s2 = max(seq)
    n = min(len(s1), len(s2))
    if not n: return ''  # not required since s1[0:0] == ''
    for i in xrange(n):
        if s1[i] != s2[i]:
            return s1[0:i]
    return s1[0:n]

Terry J. Reedy
 
Stephan Diehl

Terry Reedy wrote:

[...]

> and easily messed up ;-) If len(item) < len(prefix), item[i] throws an
> exception. For this approach to work, prefix should be set to the
> shortest string of m in a preliminary loop. Also, you reslice m[1:] on
> every outer iteration; do it once before the outer loop.
>
>> It is just not necessary to compare all strings in the list.
>
> Every string has to be compared to something at least once.


You are right, of course. I have to admit that I've been too sloppy in my
descriptions (and too sloppy in my thinking).

[...]

>> anyway
>
> No. max and min each do a linear scan. No sorting. But each does at
> least as many character comparisons as modified f1 or f2. The speedup
> is from looping and comparing in C, even though at least twice as many
> compares are done.

Makes sense.
>> You might have realized that the optimization so far was done on the
>> number of strings. There is still another dimension to optimize in,
>> and that is the actual string comparison.
>> Raymond Hettinger suggests using a binary search:
>
> Since this only affects the final comparison of min and max, and not
> the n comparisons done to calculate each, the effect is minimal and
> constant, independent of the number of strings.
>
> Since this compares slices rather than chars in each loop, I wonder
> whether this is really faster than a linear scan anyway. I would like
> to see timing of f5, with the min/max of f4 combined with the linear
> scan of f3 (basically, f3 with sort removed and min/max added). Since
> you changed two parts of f3 to get f4, we cannot be sure that both
> changes are each an improvement even though the combination of the two
> is.
>
> def f5(seq):
>     if not seq: return ''
>     s1 = min(seq)
>     s2 = max(seq)
>     n = min(len(s1), len(s2))
>     if not n: return ''  # not required since s1[0:0] == ''
>     for i in xrange(n):
>         if s1[i] != s2[i]:
>             return s1[0:i]
>     return s1[0:n]


Your f5 function runs at virtually the same speed as Raymond's version.
Even with growing string length, there is no discernible difference.

Cheers

Stephan
 
Christos TZOTZIOY Georgiou

> A while ago, I posted a recipe about finding the common prefix of a
> list of strings. The recipe itself is quite bad (I have to admit), and
> I didn't know at the time that this problem was already solved in the
> os.path module.
> The interesting part can be found in the comments, as this turned out
> to be a quest for the most efficient algorithm for this particular
> problem.

You might also want to read:

http://www.python.org/sf/681780
 
Bengt Richter

Terry Reedy wrote:

[... Terry's full reply, quoted above, trimmed ...]
> def f5(seq):
>     if not seq: return ''
>     s1 = min(seq)
>     s2 = max(seq)
>     n = min(len(s1), len(s2))
>     if not n: return ''  # not required since s1[0:0] == ''
>     for i in xrange(n):
>         if s1[i] != s2[i]:
>             return s1[0:i]
>     return s1[0:n]

I wonder about this version for speed (not very tested ;-):

def f6(m):
    "return longest common prefix of strings in seq"
    if not m: return ''
    prefix = m.pop()
    ssw = str.startswith
    for item in m:
        while not ssw(item, prefix): prefix = prefix[:-1]
        if not prefix: return ''
    return prefix

Regards,
Bengt Richter
 
Christos TZOTZIOY Georgiou

> Terry's solution is much faster (at least on my test set) and, as an
> additional benefit, is the easiest to understand.

Yep, I believe clarity is essential in the library (and that is why my
patch was obviously not accepted :) Actually, IIRC (it's been a long time
since), that code never compares prefixes more than once after they have
been found equal, and it does not create slices (it used buffer() and then
switched to startswith() for future compatibility); that is why it's a
little complicated. The main loop runs math.ceil(math.log(N, 2)) times,
where N is min([len(x) for x in argument_list]).
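
(A minimal sketch of that kind of approach in modern Python: binary search
on the prefix length via startswith, so the main loop runs about
ceil(log2(N)) times. My illustration, not the actual SF patch.)

    def common_prefix_bisect(strings):
        if not strings:
            return ''
        shortest = min(strings, key=len)     # upper bound on the answer
        lo, hi = 0, len(shortest)
        while lo < hi:                       # runs ceil(log2(N)) times
            mid = (lo + hi + 1) // 2         # candidate prefix length
            if all(s.startswith(shortest[:mid]) for s in strings):
                lo = mid                     # a common prefix this long exists
            else:
                hi = mid - 1
        return shortest[:lo]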

Anyway, perhaps Terry or you should update the SF patch so that
os.path.commonprefix becomes faster... the point is to benefit the whole
Python community, right? :)
 
Stephan Diehl

Christos said:
> [...]
>
> Anyway, perhaps Terry or you should update the SF patch so that
> os.path.commonprefix becomes faster... the point is to benefit the
> whole Python community, right? :)

In principle, yes :)
I'm not too sure that commonprefix is used often enough, and in a way that
would really make it worth the effort to patch the existing code. (I guess
if there had been a real need, it would have been done a long time ago.)
Another thing is that the speed improvement would be largely due to using
C-implemented functions instead of pure Python code (OK, the lines are
blurry here). In order to understand what's going on, one needs to know
which functions are implemented in C and why, in this particular case, it
is a good idea to use them.
 
