Some optimization tale

Discussion in 'Python' started by Stephan Diehl, Dec 27, 2003.

  1. A while ago, I posted a recipe about finding a common prefix to a list
    of strings. The recipe itself is quite bad (I have to admit), and I
    didn't know at the time that this problem was already solved in the
    os.path module.
    The interesting part can be found in the comments, as this turned out to
    be a quest for the most efficient algorithm to solve this particular
    problem.
    All of this can be found at
    http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252177
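
    (As a quick reminder, the standard library function works character-wise,
    not per path component; this little interactive check is just an
    illustration:)

    >>> import os.path
    >>> os.path.commonprefix(['/usr/lib', '/usr/local/bin', '/usr/lib64'])
    '/usr/l'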

    What I was most surprised at was the inefficiency of the trivial solutions
    (and that the right algorithm really does make a difference).

    If you have read this far, you might be interested in the actual algorithms.
    (If you are interested in the one-liner versions, please have a look at the
    recipe.)
    Here they are:

    -- os.path.commonprefix --------------------------------------------------

    def f1(m):
        "Given a list of pathnames, returns the longest common leading component"
        if not m: return ''
        prefix = m[0]
        for item in m:
            for i in range(len(prefix)):
                if prefix[:i+1] != item[:i+1]:
                    prefix = prefix[:i]
                    if i == 0:
                        return ''
                    break
        return prefix

    ---------------------------------------------------------------------------

    The problem with this algorithm is the copying of all those small strings.
    This can easily be fixed:

    --- optimized os.path.commonprefix ----------------------------------------

    def f2(m):
        "Given a list of pathnames, returns the longest common leading component"
        if not m: return ''
        if len(m) == 1: return m[0]
        prefix = m[0]
        for i in xrange(len(prefix)):
            for item in m[1:]:
                if prefix[i] != item[i]:
                    return prefix[:i]
        return prefix[:i]

    ---------------------------------------------------------------------------

    Now it gets interesting. It turns out that the above algorithm doesn't
    scale well.

    Some anonymous submitter suggested the following

    ---- by anonymous ---------------------------------------------------------

    def f3(seq):
        if not seq: return ""
        seq.sort()
        s1, s2 = seq[0], seq[-1]
        l = min(len(s1), len(s2))
        if l == 0:
            return ""
        for i in xrange(l):
            if s1[i] != s2[i]:
                return s1[0:i]
        return s1[0:l]

    ---------------------------------------------------------------------------

    It is just not necessary to compare all strings in the list. It is enough to
    sort the list first and then compare the first and the last element: after
    sorting, those two are the lexicographic extremes, so any prefix they share
    is shared by every string in between. Even though the 'sort' algorithm is
    coded in C and is therefore quite fast, the order of runtime has changed.

    Michael Dyck then pointed out that instead of using 'sort', 'min' and 'max'
    should be used. While tests suggest that this is true, I have no idea why
    that should be, since finding a minimum or maximum uses some sorting anyway
    (if we don't have some quantum computer at hand), so my reasoning would be
    that sorting once should be faster than computing both the maximum and the
    minimum.
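
    For anyone who wants to check just this point in isolation, here is a rough
    timeit sketch (not part of the recipe; the test data is made up) comparing
    the two ways of picking the candidate pair of strings:

        import timeit

        setup = "data = ['commonprefix%04d' % i for i in range(1000)]"
        # sort once, then take the two ends
        print(timeit.timeit("s = sorted(data); s[0]; s[-1]", setup=setup, number=1000))
        # linear scans for min and max
        print(timeit.timeit("min(data); max(data)", setup=setup, number=1000))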

    You might have realized that the optimization so far was done on the number
    of strings. There is still another dimension to optimize in, and that is the
    actual string comparison.
    Raymond Hettinger suggests using a binary search:

    ---------------------------------------------------------------------------
    def f4(m):
        "Given a list of pathnames, returns the longest common leading component"
        if not m: return ''
        a, b = min(m), max(m)
        lo, hi = 0, min(len(a), len(b))
        while lo < hi:
            mid = (lo+hi)//2 + 1
            if a[lo:mid] == b[lo:mid]:
                lo = mid
            else:
                hi = mid - 1
        return a[:hi]
    ----------------------------------------------------------------------------
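
    Just to illustrate what f4 returns (a toy example of mine, not from the
    recipe):

    >>> f4(['interspecies', 'interstellar', 'interstate'])
    'inters'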


    To give you some idea about the running times:

        f1          f3          f4          # of strings

        0.131058    0.021471    0.012050      2
        0.214896    0.041648    0.012963      4
        0.401236    0.020444    0.014707      8
        0.841738    0.026415    0.018589     16
        1.670606    0.039348    0.029020     32
        3.184446    0.065657    0.044247     64
        6.257635    0.123510    0.072568    128

    Every calculation was done 200 times.
    Furthermore, the test set consists of only two different strings, so the
    binary search part of Raymond's solution only comes in as a constant factor.
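
    The exact timing harness isn't shown here; the following is only a rough
    reconstruction of the kind of loop described above (the test strings and
    the use of time.time() are my own guesses), in case someone wants to rerun
    the comparison:

        import time

        def bench(func, nstrings, repeat=200):
            # nstrings strings drawn from two distinct values, as described above
            data = (['commonprefix_aaaa', 'commonprefix_bbbb'] * nstrings)[:nstrings]
            start = time.time()
            for _ in range(repeat):
                func(list(data))   # pass a copy, since f3 sorts its argument in place
            return time.time() - start

        for n in (2, 4, 8, 16, 32, 64, 128):
            print("%f %f %f %d" % (bench(f1, n), bench(f3, n), bench(f4, n), n))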

    Anyway, the fastest solution is up to 100 times faster than the trivial
    one.

    Cheers

    Stephan
    Stephan Diehl, Dec 27, 2003
    #1

  2. Terry Reedy:

    "Stephan Diehl" <> wrote in message
    news:bskcb0$36n$00$-online.com...
    > All of this can be found at
    > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252177
    >
    > What I was most surprised at was the inefficiency of the trivial solutions
    > (and that the right algorithm really does make a difference).


    A common surprise. The science of algorithms (including empirical testing)
    gives real benefits.

    > -- os.path.commonprefix --------------------------------------------------
    >
    > def f1(m):
    >     "Given a list of pathnames, returns the longest common leading component"
    >     if not m: return ''
    >     prefix = m[0]


    prefix = m.pop() # avoids comparing prefix to itself as first item

    >     for item in m:
    >         for i in range(len(prefix)):
    >             if prefix[:i+1] != item[:i+1]:


    I am 99% sure that the above is a typo and performance bug, and that it
    should read
        if prefix[i:i+1] != item[i:i+1]:
    since all previous chars have been found equal in previous iterations.

    >                 prefix = prefix[:i]
    >                 if i == 0:
    >                     return ''
    >                 break
    >     return prefix


    Perhaps you could retest this with the two suggested changes. It will be
    somewhat faster.
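
    For reference, putting those two suggested changes together gives something
    like the following (just my own assembly of them; the name f1b is made up
    for illustration):

    def f1b(m):
        "Given a list of pathnames, returns the longest common leading component"
        if not m: return ''
        prefix = m.pop()   # avoids comparing prefix to itself as the first item
        for item in m:
            for i in range(len(prefix)):
                # slices of length 1 starting at i, so only the new char is compared
                if prefix[i:i+1] != item[i:i+1]:
                    prefix = prefix[:i]
                    if i == 0:
                        return ''
                    break
        return prefix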

    > The problem with this algorithm is the copying of all those small strings.

    Since Python does not have pointers, comparing characters within strings
    requires pulling out the chars as separate strings. This is why using
    C-coded comparison functions may win even though more comparisons are done.

    The reason f1 uses slicing (of len 1 after my correction) instead of
    indexing is to avoid exceptions when len(item) < len(prefix). However, all
    the +1s have a cost (and I suspect slicing does also), so it may pay to
    truncate prefix to the length of item first. The simplest fix for this
    (untested) gives

    def f9(m): # modified f1 == os.path.commonprefix
        "Given a list of strings, returns the longest common prefix"
        if not m: return ''
        prefix = m.pop()
        for item in m:
            prefix = prefix[:len(item)]
            for i in range(len(prefix)):
                if prefix[i] != item[i]:
                    if not i: return ''
                    prefix = prefix[:i]
                    break
        return prefix

    > This can easily be fixed:
    >
    > --- optimized os.path.commonprefix ----------------------------------------
    >
    > def f2(m):
    >     "Given a list of pathnames, returns the longest common leading component"
    >     if not m: return ''
    >     if len(m) == 1: return m[0]
    >     prefix = m[0]
    >     for i in xrange(len(prefix)):
    >         for item in m[1:]:
    >             if prefix[i] != item[i]:
    >                 return prefix[:i]
    >     return prefix[:i]


    and easily messed up ;-). If len(item) < len(prefix), item[i] throws an
    exception. For this approach to work, prefix should be set to the shortest
    string of m in a preliminary loop. Also, you re-slice m[1:] several times.
    Do it once before the outer loop.

    > It is just not necessary to compare all strings in the list.


    Every string has to be compared to something at least once.

    > It is enough to sort the list first
    > and then compare the first and the last element.


    Sorting compares all strings in the list to something at least once, and
    most more than once.

    > Even though the 'sort' algorithm is coded in C and is therefore quite fast,
    > the order of runtime has changed.


    The C part is what makes f3 faster. In your timings, 128 is not large
    enough for the nlogn component to be noticeable.

    > Michael Dyck then pointed out that instead of using 'sort', 'min' and 'max'
    > should be used. While tests suggest that this is true, I have no idea why
    > that should be, since finding a minimum or maximum uses some sorting anyway

    No. max and min each do a linear scan. No sorting. But each does at
    least as many character comparisons as modified f1 or f2. The speedup is
    from looping and comparing in C, even though at least twice as many
    compares are done.

    > You might have realized that the optimization so far was done on the number
    > of strings. There is still another dimension to optimize in, and that is
    > the actual string comparison.
    > Raymond Hettinger suggests using a binary search:


    Since this only affects the final comparison of min and max, and not the n
    comparisons done to calculate each, the effect is minimal and constant,
    independent of number of strings.

    Since this compares slices rather than chars in each loop, I wonder whether
    this is really faster than linear scan anyway. I would like to see timing
    of f5 with min/max of f4 combined with linear scan of f3. (Basically, f3
    with sort removed and min/max added.) Since you changed two parts of f3 to
    get f4, we cannot be sure that both changes are each an improvement even
    though the combination of the two is.

    def f5(seq):
        if not seq: return ''
        s1 = min(seq)
        s2 = max(seq)
        n = min(len(s1), len(s2))
        if not n: return '' # not required since s1[0:0] == ''
        for i in xrange(n):
            if s1[i] != s2[i]:
                return s1[0:i]
        return s1[0:n]

    Terry J. Reedy
    Terry Reedy, Dec 27, 2003
    #2

  3. Terry Reedy wrote:

    [...]
    >
    > and easily messed up ;-). If len(item) < len(prefix), item[i] throws an
    > exception. For this approach to work, prefix should be set to the shortest
    > string of m in a preliminary loop. Also, you re-slice m[1:] several times.
    > Do it once before the outer loop.
    >
    >> It is just not necessary to compare all strings in the list.

    >
    > Every string has to be compared to something at least once.


    You are right, of course. I have to admit that I've been too sloppy in my
    descriptions (and too sloppy in my thinking).

    >


    [...]

    >> Michael Dyck then pointed out that instead of using 'sort', 'min' and 'max'
    >> should be used. While tests suggest that this is true, I have no idea why
    >> that should be, since finding a minimum or maximum uses some sorting anyway
    >
    > No. max and min each do a linear scan. No sorting. But each does at
    > least as many character comparisons as modified f1 or f2. The speedup is
    > from looping and comparing in C, even though at least twice as many
    > compares are done.


    Makes sense.

    >
    >> You might have realized that the optimization so far was done on the number
    >> of strings. There is still another dimension to optimize in, and that is
    >> the actual string comparison.
    >> Raymond Hettinger suggests using a binary search:
    >
    > Since this only affects the final comparison of min and max, and not the n
    > comparisons done to calculate each, the effect is minimal and constant,
    > independent of number of strings.
    >
    > Since this compares slices rather than chars in each loop, I wonder
    > whether this is really faster than linear scan anyway. I would like to see
    > timing of f5 with min/max of f4 combined with linear scan of f3.
    > (Basically, f3 with sort removed and min/max added.) Since you changed two
    > parts of f3 to get f4, we cannot be sure that both changes are each an
    > improvement even though the combination of the two is.
    >
    > def f5(seq):
    >     if not seq: return ''
    >     s1 = min(seq)
    >     s2 = max(seq)
    >     n = min(len(s1), len(s2))
    >     if not n: return '' # not required since s1[0:0] == ''
    >     for i in xrange(n):
    >         if s1[i] != s2[i]:
    >             return s1[0:i]
    >     return s1[0:n]


    Your f5 function runs at virtually the same speed as Raymond's version.
    Even with growing string length, there is no discernible difference.

    Cheers

    Stephan
    >
    > Terry J. Reedy
    Stephan Diehl, Dec 28, 2003
    #3
  4. On Sat, 27 Dec 2003 17:36:46 +0100, rumours say that Stephan Diehl
    <> might have written:

    >A while ago, I posted a recipe about finding a common prefix to a list
    >of strings. The recipe itself is quite bad (I have to admit), and I
    >didn't know at the time that this problem was already solved in the
    >os.path module.
    >The interesting part can be found in the comments, as this turned out to
    >be a quest for the most efficient algorithm to solve this particular
    >problem.


    You might also want to read:

    http://www.python.org/sf/681780
    --
    TZOTZIOY, I speak England very best,
    Ils sont fous ces Redmontains! --Harddix
    Christos TZOTZIOY Georgiou, Dec 29, 2003
    #4
  5. Christos TZOTZIOY Georgiou wrote:

    > On Sat, 27 Dec 2003 17:36:46 +0100, rumours say that Stephan Diehl
    > <> might have written:
    >
    >>A while ago, I posted a recipe about finding a common prefix to a list
    >>of strings. The recipe itself is quite bad (I have to admit), and I
    >>didn't know at the time that this problem was already solved in the
    >>os.path module.
    >>The interesting part can be found in the comments, as this turned out
    >>to be a quest for the most efficient algorithm to solve this particular
    >>problem.

    >
    > You might also want to read:
    >
    > http://www.python.org/sf/681780


    Terry's solution is much faster (at least on my test set) and, as an
    additional benefit, is the easiest to understand.
    Stephan Diehl, Dec 29, 2003
    #5
  6. On Sat, 27 Dec 2003 15:23:34 -0500, "Terry Reedy" <> wrote:

    >
    >"Stephan Diehl" <> wrote in message
    >news:bskcb0$36n$00$-online.com...

    [...]

    >def f5(seq):
    >    if not seq: return ''
    >    s1 = min(seq)
    >    s2 = max(seq)
    >    n = min(len(s1), len(s2))
    >    if not n: return '' # not required since s1[0:0] == ''
    >    for i in xrange(n):
    >        if s1[i] != s2[i]:
    >            return s1[0:i]
    >    return s1[0:n]
    >

    I wonder about this version for speed (not very tested ;-):

    >>> def f10(m):
    ...     "return longest common prefix of strings in seq"
    ...     if not m: return ''
    ...     prefix = m.pop()
    ...     ssw = str.startswith
    ...     for item in m:
    ...         while not ssw(item, prefix): prefix = prefix[:-1]
    ...         if not prefix: return ''
    ...     return prefix
    ...
    >>> f10('abcd abcd'.split())
    'abcd'
    >>> f10('abcd abce'.split())
    'abc'
    >>> f10('abcd abcd a'.split())
    'a'
    >>> f10('abcd abcd a x'.split())
    ''

    Regards,
    Bengt Richter
    Bengt Richter, Dec 29, 2003
    #6
  7. On Mon, 29 Dec 2003 17:08:39 +0100, rumours say that Stephan Diehl
    <> might have written:

    >> You might also want to read:
    >>
    >> http://www.python.org/sf/681780

    >
    >Terry's solution is much faster (at least on my test set) and, as an
    >additional benefit, is the easiest to understand.


    Yep, I believe clarity is essential in the library (and that is why my
    patch was obviously not accepted :). Actually, IIRC (it's been a while),
    that code never compares prefixes that have already been found equal more
    than once, and it does not create slices (it used buffer() and then
    switched to startswith() for future compatibility); that is why it's a
    little complicated. The main loop runs math.ceil(math.log(N, 2)) times,
    where N is min([len(x) for x in argument_list]).
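
    For the curious, here is a rough sketch of that idea -- my own
    reconstruction, not the actual code of patch 681780 -- which bisects on the
    prefix length and tests candidates with str.startswith (unlike the patch,
    this sketch does slice the shortest string to build each candidate):

    def commonprefix_bisect(strings):
        if not strings:
            return ''
        shortest = min(strings, key=len)
        lo, hi = 0, len(shortest)
        while lo < hi:
            mid = (lo + hi + 1) // 2           # candidate prefix length
            candidate = shortest[:mid]
            if all(s.startswith(candidate) for s in strings):
                lo = mid                       # every string has this prefix
            else:
                hi = mid - 1                   # candidate too long; shrink
        return shortest[:lo]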

    Anyway, perhaps Terry or you should update the SF patch so that
    os.path.commonprefix becomes faster... the point is to benefit the whole
    Python community, right? :)
    --
    TZOTZIOY, I speak England very best,
    Ils sont fous ces Redmontains! --Harddix
    Christos TZOTZIOY Georgiou, Dec 30, 2003
    #7
  8. Christos TZOTZIOY Georgiou wrote:

    > On Mon, 29 Dec 2003 17:08:39 +0100, rumours say that Stephan Diehl
    > <> might have written:
    >
    >>> You might also want to read:
    >>>
    >>> http://www.python.org/sf/681780

    >>
    >>Terry's solution is much faster (at least on my test set) and, as an
    >>additional benefit, is the easiest to understand.

    >
    > Yep, I believe clarity is essential in the library (and that is why my
    > patch was obviously not accepted :). Actually, IIRC (it's been a while),
    > that code never compares prefixes that have already been found equal more
    > than once, and it does not create slices (it used buffer() and then
    > switched to startswith() for future compatibility); that is why it's a
    > little complicated. The main loop runs math.ceil(math.log(N, 2)) times,
    > where N is min([len(x) for x in argument_list]).
    >
    > Anyway, perhaps Terry or you should update the SF patch so that
    > os.path.commonprefix becomes faster... the point is to benefit the whole
    > Python community, right? :)


    In principle, yes :)
    I'm not too sure that commonprefix is used often enough, and in a way that
    would really make it worth the effort to patch the existing code. (I guess
    if there had been a real need, it would have been done a long time ago.)
    Another thing is that the speed improvement would be largely due to using
    C-implemented functions instead of pure Python code (o.k., the lines are
    blurry here). In order to understand what's going on, one needs to know
    which functions are C-implemented and why, in this particular case, it is
    a good idea to use them.
    Stephan Diehl, Dec 30, 2003
    #8
