Efficient String Lookup?

Discussion in 'Python' started by Chris S., Oct 16, 2004.

  1. Chris S.

    Chris S. Guest

    I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    is anything), which I want to match with a test string (e.g 'abcdef').
    What would be the best way for me to store my strings so lookup is as
    fast as possible?
     
    Chris S., Oct 16, 2004
    #1

  2. Chris S. wrote:
    > I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    > is anything), which I want to match with a test string (e.g 'abcdef').
    > What would be the best way for me to store my strings so lookup is as
    > fast as possible?


    If it were me, I would store them as compiled regular expressions.

    See the re module documentation and use re.compile().
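
    For example (a minimal sketch, assuming the '#' wildcard maps onto re's '.'):

    import re

    # translate the '#' wildcard into re's any-single-character '.'
    pattern = re.compile('abc#e#'.replace('#', '.') + '$')
    print pattern.match('abcdef') is not None   # prints True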

    If you want a better solution, it might help if you supply a little more
    information about your problem and why this solution is unsuitable
    (maybe it is :]).
    --
    Michael Hoffman
     
    Michael Hoffman, Oct 16, 2004
    #2

  3. Chris S. wrote:

    > I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    > is anything), which I want to match with a test string (e.g 'abcdef').
    > What would be the best way for me to store my strings so lookup is as
    > fast as possible?


    As a compiled regular expression, I guess - you don't give much info here,
    so maybe there is a better way. But to me it looks like a classic regexp
    thing. Maybe if your wildcards are equivalent to .*, then using subsequent
    string searches like this helps you:

    pattern = 'abc#e#'.split('#')
    s = 'abcdef'
    found = True
    pos = 0
    for p in pattern:
        h = s.find(p, pos)      # search from the end of the last match on
        if h != -1:
            pos = h + len(p)
        else:
            found = False
            break


    That might be faster, if the string.find operation uses something other than
    simple brute force linear searching - but I don't know enough about the
    internals of Python's string implementation to give a definite answer
    here.

    But to be honest: I don't think regexps are easy to beat, unless your
    usecase is modeled in a way that makes it prone to other approaches.

    --
    Regards,

    Diez B. Roggisch
     
    Diez B. Roggisch, Oct 16, 2004
    #3

  4. > I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    > is anything), which I want to match with a test string (e.g 'abcdef').
    > What would be the best way for me to store my strings so lookup is as
    > fast as possible?


    Start with a Trie, and virtually merge branches as necessary.
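
    (A rough sketch of that idea - hypothetical code, assuming '#' stands for
    exactly one character and patterns sit in a nested-dict trie whose None
    keys hold the stored values:)

    def trie_match(trie, s):
        '''Yield the value of every stored pattern that matches s exactly.'''
        if not s:
            if None in trie:              # a pattern ends exactly here
                yield trie[None]
            return
        head, tail = s[0], s[1:]
        if head in trie:                  # follow the literal branch
            for value in trie_match(trie[head], tail):
                yield value
        if '#' in trie:                   # wildcard branch: the "virtual merge"
            for value in trie_match(trie['#'], tail):
                yield value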

    - Josiah
     
    Josiah Carlson, Oct 16, 2004
    #4
  5. Josiah Carlson wrote:
    >>I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    >>is anything), which I want to match with a test string (e.g 'abcdef').
    >>What would be the best way for me to store my strings so lookup is as
    >>fast as possible?

    >
    >
    > Start with a Trie, and virtually merge branches as necessary.
    >
    > - Josiah
    >


    Yup, you might also have a look at the Aho-Corasick algorithm, which can
    match a test string against a large number of strings quite efficiently:

    http://www-sr.informatik.uni-tuebingen.de/~buehler/AC/AC.html

    You'll have to adapt the algorithm so that it can handle your wildcard,
    though. I found it easy to implement the '.' (one-character wildcard),
    but the '.*' (zero or more character wildcard) forces you to have
    backtracking.
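
    (Not Aho-Corasick itself, but a sketch of the simpler set-of-states idea
    for the one-character wildcard, which needs no backtracking: advance every
    live (pattern, position) pair one character at a time.)

    def match_all(patterns, text):
        '''Return the patterns matching text exactly; '#' = any one char.'''
        states = [(p, 0) for p in patterns]     # live (pattern, position) pairs
        for c in text:
            next_states = []
            for p, i in states:
                if i < len(p) and (p[i] == '#' or p[i] == c):
                    next_states.append((p, i + 1))
            states = next_states
        return [p for p, i in states if i == len(p)]

    print match_all(['abc#e#', 'ab##', 'b####f'], 'abcdef')   # ['abc#e#']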

    But if you can use the regexp engine, do not hesitate, it will save you
    a lot of headaches. Unless of course you're a student and your teacher
    asked you this question ;).

    Regards,

    Nicolas
     
    Nicolas Lehuen, Oct 16, 2004
    #5
  6. Chris S.

    Chris S. Guest

    Michael Hoffman wrote:
    > Chris S. wrote:
    >
    >> I have a number of strings, containing wildcards (e.g. 'abc#e#' where
    >> # is anything), which I want to match with a test string (e.g
    >> 'abcdef'). What would be the best way for me to store my strings so
    >> lookup is as fast as possible?

    >
    >
    > If it were me, I would store them as compiled regular expressions.
    >
    > See the re module documentation and use re.compile().
    >
    > If you want a better solution, it might help if you supply a little more
    > information about your problem and why this solution is unsuitable
    > (maybe it is :]).


    The problem is I want to associate some data with my pattern, as in a
    dictionary. Basically, my application consists of a number of
    conditions, represented as strings with wildcards. Associated to each
    condition is arbitrary data explaining "what I must do". My task is to
    find this data by matching a state string against these condition strings.
    Of course, the brute force approach is to just add each pattern to a
    dictionary, and linearly search every key for a match. To improve this, I
    considered a trie, implemented as a special dictionary:

    class Trie(dict):
        '''Implements a traditional Patricia-style Trie.
        Keys must be sequence types. A None key represents a value.'''

        def __init__(self):
            dict.__init__(self)

        def __setitem__(self, key, value):
            assert key, 'invalid key '+str(key)
            d = self
            last = None
            for n in key:
                if n not in d:
                    dict.__setitem__(d, n, {})
                last = (d, n)
                d = dict.__getitem__(d, n)
            # attach the value under the None key of the final node
            (d, n) = last
            dict.__getitem__(d, n)[None] = value

        def __getitem__(self, key):
            d = self
            for n in key:
                assert n in d, 'invalid key '+str(key)
                d = dict.__getitem__(d, n)
            assert None in d, 'missing value for key '+str(key)
            return dict.__getitem__(d, None)

        def __delitem__(self, key):
            previous = []
            d = self
            for n in key:
                assert n in d, 'invalid key '+str(key)
                previous.append((d, n))
                d = dict.__getitem__(d, n)
            assert None in d, 'missing value for key '+str(key)
            # remove value
            dict.__delitem__(d, None)
            # find and remove empty keys
            while len(previous):
                (d, n) = previous.pop()
                if not len(dict.__getitem__(d, n)):
                    dict.__delitem__(d, n)
                else:
                    break

    However, I'm uncertain about the efficiency of this approach. I'd like
    to use regexps, but how would I associate data with each pattern?
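
    (For comparison, the brute-force pairing mentioned above would look
    something like this sketch, with hypothetical helper names:)

    import re

    rules = []   # list of (compiled regexp, data) pairs

    def add_rule(pattern, data):
        rules.append((re.compile(pattern.replace('#', '.') + '$'), data))

    def lookup(state):
        # linear scan: test every pattern against the state string
        return [data for regexp, data in rules if regexp.match(state)]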
     
    Chris S., Oct 16, 2004
    #6
  7. Chris S.

    Chris S. Guest

    Diez B. Roggisch wrote:

    > That might be faster, if the string.find operation uses something other than
    > simple brute force linear searching - but I don't know enough about the
    > internals of Python's string implementation to give a definite answer
    > here.
    >
    > But to be honest: I don't think regexps are easy to beat, unless your
    > usecase is modeled in a way that makes it prone to other approaches.


    The problem is, I want to find all patterns that match my test string,
    so even with re I'd still have to search through every pattern, which is
    what I'm trying to avoid. Something like a trie might be better, but
    they don't seem very efficient when implemented in Python.
     
    Chris S., Oct 16, 2004
    #7
  8. Andrew Dalke

    Andrew Dalke Guest

    Chris S. wrote:
    > The problem is I want to associate some data with my pattern, as in a
    > dictionary. Basically, my application consists of a number of
    > conditions, represented as strings with wildcards. Associated to each
    > condition is arbitrary data explaining "what I must do".

    ...
    > However, I'm uncertain about the efficiency of this approach. I'd like
    > to use regexps, but how would I associate data with each pattern?


    One way is with groups. Make each pattern into a regexp
    pattern then concatenate them as
    (pat1)|(pat2)|(pat3)| ... |(patN)

    Do the match and find which group has the non-None value.

    You may need to tack a "$" on the end of the pattern (in which
    case remember to enclose everything in a () so the $ doesn't
    affect only the last pattern).

    One thing to worry about is that you can only have 99 groups
    in a pattern.

    Here's example code.


    import re

    config_data = [
        ("abc#e#", "Reactor meltdown imminent"),
        ("ab##", "Antimatter containment field breach"),
        ("b####f", "Coffee too strong"),
        ]

    as_regexps = ["(%s)" % pattern.replace("#", ".")
                  for (pattern, text) in config_data]

    full_regexp = "|".join(as_regexps) + "$"
    pat = re.compile(full_regexp)


    input_data = [
        "abadb",
        "abcdef",
        "zxc",
        "abcq",
        "b1234f",
        ]

    for text in input_data:
        m = pat.match(text)
        if not m:
            print "%s? That's okay." % (text,)
        else:
            for i, val in enumerate(m.groups()):
                if val is not None:
                    print "%s? We've got a %r warning!" % (text,
                                                           config_data[i][1],)



    Here's the output I got when I ran it


    abadb? We've got a 'Antimatter containment field breach' warning!
    abcdef? We've got a 'Reactor meltdown imminent' warning!
    zxc? That's okay.
    abcq? We've got a 'Antimatter containment field breach' warning!
    b1234f? We've got a 'Coffee too strong' warning!


    Andrew
     
    Andrew Dalke, Oct 17, 2004
    #8
  9. On Sat, 16 Oct 2004 09:11:37 GMT, "Chris S." <> wrote:

    >I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    >is anything), which I want to match with a test string (e.g 'abcdef').
    >What would be the best way for me to store my strings so lookup is as
    >fast as possible?


    Insufficient info. But 'fast as possible' suggests putting your strings in
    a flex grammar and generating a parser in C. See
    http://www.gnu.org/software/flex/
    Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

    If you want to do it in python, you still need to be more precise...

    - is # a single character? any number of characters?
    - if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
    or match abcdef twice?
    - if one wild card string matches, does that "use up" the test string so other wild card strings
    mustn't match? If so, what has priority? Longest? shortest? Other criterion?
    - etc etc

    Regards,
    Bengt Richter
     
    Bengt Richter, Oct 17, 2004
    #9
  10. Chris S.

    Chris S. Guest

    Bengt Richter wrote:

    > On Sat, 16 Oct 2004 09:11:37 GMT, "Chris S." <> wrote:
    >
    >
    >>I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
    >>is anything), which I want to match with a test string (e.g 'abcdef').
    >>What would be the best way for me to store my strings so lookup is as
    >>fast as possible?

    >
    >
    > Insufficient info. But 'fast as possible' suggests putting your strings in
    > a flex grammar and generating a parser in C. See
    > http://www.gnu.org/software/flex/
    > Defining a grammar is a good exercise in precise definition of the problem anyway ;-)
    >
    > If you want to do it in python, you still need to be more precise...
    >
    > - is # a single character? any number of characters?
    > - if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
    > or match abcdef twice?
    > - if one wild card string matches, does that "use up" the test string so other wild card strings
    > mustn't match? If so, what has priority? Longest? shortest? Other criterion?
    > - etc etc


    Sorry for the ambiguity. My case is actually pretty simple. '#'
    represents any single character, so it's essentially the same as re's
    '.'. The match must be exact, so the string and pattern must be of equal
    lengths. Each wildcard is independent of other wildcards. For example,
    suppose we restricted the possible characters to 1 and 0, then the
    pattern '##' would only match '00', '01', '10', and '11'. This pattern
    would not match '0', '111', etc. I feel that a trie would work well, but
    I'm concerned that for large patterns, the overhead in the Python
    implementation would be too inefficient.
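
    (Given the equal-length requirement, one cheap trick - only a sketch, not
    benchmarked - is to bucket the patterns by length, so a lookup scans just
    the candidates of the right length:)

    by_length = {}   # maps length -> list of (pattern, data) pairs

    def add_pattern(pattern, data):
        by_length.setdefault(len(pattern), []).append((pattern, data))

    def lookup(state):
        matches = []
        for pattern, data in by_length.get(len(state), []):
            for p, c in zip(pattern, state):
                if p != '#' and p != c:
                    break
            else:                        # every position matched
                matches.append(data)
        return matches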
     
    Chris S., Oct 17, 2004
    #10
  11. Chris S.

    Chris S. Guest

    Andrew Dalke wrote:

    > One way is with groups. Make each pattern into a regexp
    > pattern then concatenate them as
    > (pat1)|(pat2)|(pat3)| ... |(patN)
    >
    > Do the match and find which group has the non-None value.
    >
    > ...


    Thanks, that's almost exactly what I'm looking for. The only downside I
    see is that I still need to add and remove patterns, so continually
    recompiling the expression might be expensive.
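
    (One way to soften that cost - a sketch with hypothetical names - is to
    mark the combined regexp dirty on every change and recompile lazily on
    the next lookup, so a burst of adds costs only one compile:)

    import re

    class PatternSet:
        def __init__(self):
            self.patterns = []     # (pattern, data) pairs
            self.compiled = None   # None means "needs recompiling"

        def add(self, pattern, data):
            self.patterns.append((pattern, data))
            self.compiled = None

        def lookup(self, text):
            if self.compiled is None:
                regexps = ["(%s$)" % p.replace("#", ".")
                           for (p, d) in self.patterns]
                self.compiled = re.compile("|".join(regexps))
            m = self.compiled.match(text)
            if m is None:
                return None
            for i, val in enumerate(m.groups()):
                if val is not None:
                    return self.patterns[i][1]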
     
    Chris S., Oct 17, 2004
    #11
  12. Chris S.

    Chris S. Guest

    Andrew Dalke wrote:
    > Here's the output I got when I ran it
    >
    >
    > abadb? We've got a 'Antimatter containment field breach' warning!
    > abcdef? We've got a 'Reactor meltdown imminent' warning!
    > zxc? That's okay.
    > abcq? We've got a 'Antimatter containment field breach' warning!
    > b1234f? We've got a 'Coffee too strong' warning!


    Actually, I've noticed some strange behavior. It seems to match more
    than one character per wild card. For instance, your code matches
    'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
    expert with regexps, but your expression looks correct. What could be
    causing this?
     
    Chris S., Oct 17, 2004
    #12
  13. On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <> wrote:

    >Bengt Richter wrote:
    >
    > ...
    >
    >Sorry for the ambiguity. My case is actually pretty simple. '#'
    >represents any single character, so it's essentially the same as re's
    >'.'. The match must be exact, so the string and pattern must be of equal
    >lengths. Each wildcard is independent of other wildcards. For example,
    >suppose we restricted the possible characters to 1 and 0, then the
    >pattern '##' would only match '00', '01', '10', and '11'. This pattern
    >would not match '0', '111', etc. I feel that a trie would work well, but
    >I'm concerned that for large patterns, the overhead in the Python
    >implementation would be too inefficient.


    So is the set of patterns static, and you want to find which pattern(s!)
    match dynamic input? How many patterns vs. input strings? What max
    length patterns, input strings? Total volume?

    Regards,
    Bengt Richter
     
    Bengt Richter, Oct 17, 2004
    #13
  14. Chris S.

    Chris S. Guest

    Chris S. wrote:

    > Andrew Dalke wrote:
    >
    >> ...
    >
    > Actually, I've noticed some strange behavior. It seems to match more
    > than one character per wild card. For instance, your code matches
    > 'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
    > expert with regexps, but your expression looks correct. What could be
    > causing this?


    Spoke too soon. It seems all you needed was to change:

    full_regexp = "|".join(as_regexps) + "$"

    to:

    full_regexp = "$|".join(as_regexps) + "$"

    However, I noticed re still doesn't return multiple matches. For
    instance, matching 'abc' against the patterns '#bc', 'a#c', and
    'ab#', your code only returns a match for the first pattern '#bc'. Is
    this standard behavior or is it possible to change this?
     
    Chris S., Oct 17, 2004
    #14
  15. pitkali

    pitkali Guest

    Chris S. wrote:

    > 'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
    > expert with regexps, but your expression looks correct. What could be
    > causing this?


    To avoid this, one would have to add \b to the patterns, AFAIR, so that
    they match whole words only.

    Regards,

    --
    * Piotr (pitkali) Kalinowski * mailto: pitkali (at) o2 (dot) pl *
    * Registered Linux User No. 282090 * Powered by Gentoo Linux *
    * Fingerprint: D5BB 27C7 9993 50BB A1D2 33F5 961E FE1E D049 4FCD *
     
    pitkali, Oct 17, 2004
    #15
  16. Chris S.

    Chris S. Guest

    Bengt Richter wrote:

    > On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <> wrote:
    >> ...

    >
    >
    > So is the set of patterns static, and you want to find which pattern(s!)
    > match dynamic input? How many patterns vs. input strings? What max
    > length patterns, input strings? Total volume?


    Patterns and inputs are dynamic, input more so than patterns. The
    number, length, and volume of patterns and strings should be arbitrary.
     
    Chris S., Oct 17, 2004
    #16
  17. On Sun, 17 Oct 2004 07:18:07 GMT, "Chris S." <> wrote:

    >Bengt Richter wrote:
    >
    >> On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <> wrote:
    >>> ...

    >>
    >>
    >> So is the set of patterns static, and you want to find which pattern(s!)
    >> match dynamic input? How many patterns vs. input strings? What max
    >> length patterns, input strings? Total volume?

    >
    >Patterns and inputs are dynamic, input more so than patterns. The
    >number, length, and volume of patterns and strings should be arbitrary.

    Strategies for performance will vary according to volume (small => anything ok)
    and the relative sizes of strings and respective sets of strings, assuming
    enough work to make you notice a difference (patterns >> inputs => walk tables
    based on patterns using inputs, vs. patterns << inputs => walk tables based on
    input sets using strategic walks through patterns). Max length of strings and
    patterns will make some things worthwhile if strings are small, dumb if they
    are thousands of characters. Who can tell what performance tricks you may need
    or not? Why don't you just try (^pattern$|^pat2$|...) for every pattern to
    rescan the whole input for each pattern, and we'll worry about performance
    later ;-)

    Regards,
    Bengt Richter
     
    Bengt Richter, Oct 17, 2004
    #17

  18. > Sorry for the ambiguity. My case is actually pretty simple. '#'
    > represents any single character, so it's essentially the same as re's
    > '.'. The match must be exact, so the string and pattern must be of equal
    > lengths. Each wildcard is independent of other wildcards. For example,
    > suppose we restricted the possible characters to 1 and 0, then the
    > pattern '##' would only match '00', '01', '10', and '11'. This pattern
    > would not match '0', '111', etc. I feel that a trie would work well, but
    > I'm concerned that for large patterns, the overhead in the Python
    > implementation would be too inefficient.


    Having implemented what is known as a burst trie (a trie where you don't
    expand a branch until it has more than 'k' entries) in Python, I found it
    ends up taking much more space, but that isn't really an issue unless you
    have large numbers of strings (millions), or the strings are long
    (kilobytes).

    If you want to make it more efficient (space-wise), write the algorithm
    and structures in pure C, then wrap it with SWIG. Add options for
    inserting and deleting strings, and also querying for strings that match
    a certain pattern.

    Thinking about it, if your dictionary is very restricted, you could just
    toss all strings into a balanced search tree, doing a similar tree
    traversal as in the trie solution. Much less overhead, most of the
    benefits.

    - Josiah
     
    Josiah Carlson, Oct 17, 2004
    #18
  19. Andrew Dalke

    Andrew Dalke Guest

    Chris S. wrote:
    > Actually, I've noticed some strange behavior. It seems to match more
    > than one character per wild card. For instance, your code matches
    > 'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
    expert with regexps, but your expression looks correct. What could be
    > causing this?


    It's matching the prefix. To make it match the string and
    only the string you need a $. Either do

    (pat1$)|(pat2$)| ... |(patN$)

    or do

    ((pat1)|(pat2)| ... |(patN))$

    If you do the last, don't forget to omit group(1) in
    the list of results, or use the non-capturing group
    notation, which I believe is (?: ... ) as in

    (?:(pat1)|(pat2)| ... |(patN))$
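
    Applied to the earlier example, that is a one-line change (sketch):

    full_regexp = "(?:" + "|".join(as_regexps) + ")$"
    pat = re.compile(full_regexp)

    With a non-capturing outer group, m.groups() still lines up with
    config_data by index.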

    Andrew
     
    Andrew Dalke, Oct 17, 2004
    #19
  20. Andrew Dalke

    Andrew Dalke Guest

    Chris S. wrote:
    > Spoke too soon.


    As did I. :)

    > However, I noticed re still doesn't return multiple matches. For
    > instance, matching 'abc' against the patterns '#bc', 'a#c', and
    > 'ab#', your code only returns a match for the first pattern '#bc'. Is
    > this standard behavior or is it possible to change this?


    This is standard behavior. You can't change it. The
    only easy solution along these lines is to have a triangular
    table of

    (pat1)|(pat2)| .... |(patN)
    (pat2)| .... |(patN)
    ...
    (patN)

    and if group i matched at a point x then do another
    search using the (i+1)th entry in that table at that
    point. Repeat until no more matches at x.

    I don't know of any off-the-shelf solution for Python
    for what you want to do, other than the usual "try
    each pattern individually." You'll need to make some
    sort of state table (or trie in your case) and do it
    that way.
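
    (The "try each pattern individually" fallback is short enough to sketch;
    rules is an assumed list of (pattern, data) pairs:)

    import re

    def all_matching(rules, text):
        '''Return every (pattern, data) pair whose pattern matches text.'''
        hits = []
        for pattern, data in rules:
            if re.match(pattern.replace('#', '.') + '$', text):
                hits.append((pattern, data))
        return hits

    The re module caches recently compiled patterns, so the repeated
    re.match calls cost less than they look, though compiling up front
    is still safer.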

    You *can* use Perl's regexps for this sort of thing. That
    regexp language allowed embedded Perl code, so this will
    get you an answer


    % perl -ne 'while (/((.bc)(?{print "Match 1 at ", length($`),
    "\n"})^)|((a.c)(?{print "Match 2 at ", length($`), "\n"})^)|./g){}'
    This is abc acbcb
    Match 1 at 8
    Match 2 at 8
    Match 1 at 13

    Breaking the pattern down I'm using "while (/ ... /g) {}" to
    match everything in the input string ($_), which comes from
    each line of stdin (because of the '-n' command-line flag).

    The pattern is

    ((.bc)(?{print "Match 1 at ", length($`), "\n"})^)
    |((a.c)(?{print "Match 2 at ", length($`), "\n"})^)
    |.

    That is, match ".bc" then execute the corresponding piece
    of embedded Perl code. This prints "Match 1 at " followed
    by the length of the text before the current match, which
    corresponds to the position of the match.

    (If you only need to know that there is a match, you don't need the
    length($`) part. Using $` in Perl gives a performance hit.)

    After it executes, the subgroup matches (embedded executable
    code always passes, and it consumes no characters). But
    then it gets to the '^' test which fails because this is
    never at the beginning of the string.

    So the regexp engine tries the next option, which is the
    "Match 2 at .." test and print. After the print (if
    indeed there is a match 2) it also fails.

    This takes the engine to the last option which is the "."
    character. And that always passes.

    Hence this pattern always consumes one and only one character.
    I could put it inside a (...)* to match all characters,
    but decided instead to use the while(/.../g){} to do the looping.
    Why? Old habit, and no well-determined reason.

    (The while loop works because of the 'g' flag on the pattern.)

    You talk about needing to eke out all the performance you can
    out of the system. Have you tried the brute force approach
    of just doing N regexp tests?

    If you need the performance, it's rather easy to convert
    a simple trie into C code, save the result on the fly
    to a file, compile that into a Python shared library, and
    import that library, to get a function that does the
    tests given a string. Remember to give a new name to
    each shared library as otherwise the importing gets confused.

    Andrew
     
    Andrew Dalke, Oct 17, 2004
    #20
