Efficient String Lookup?

Chris S.

I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g. 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?
 
Michael Hoffman

Chris said:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g. 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

If it were me, I would store them as compiled regular expressions.

See the re module documentation and use re.compile().
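
For instance, assuming '#' stands for exactly one character, each pattern
can be translated and compiled up front (a minimal sketch):

import re

# '#' -> '.' and anchor the end, so the whole string must match
pat = re.compile('abc#e#'.replace('#', '.') + '$')
print pat.match('abcdef') is not None    # -> True
print pat.match('abcdefg') is not None   # -> False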

If you want a better solution, it might help if you supply a little more
information about your problem and why this solution is unsuitable
(maybe it is :]).
 
Diez B. Roggisch

Chris said:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g. 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

As a compiled regular expression, I guess - you don't give much info here,
so maybe there is a better way. But to me it looks like a classic regexp
thing. Maybe if your wildcards are equivalent to .*, then using subsequent
string searches like this helps you:

pattern = 'abc#e#'.split('#')
s = 'abcdef'
found = True
pos = 0
for p in pattern:
    h = s.find(p, pos)      # search from just past the previous hit
    if h != -1:
        pos = h + len(p)
    else:
        found = False
        break


That might be faster, if the string.find operation uses something other
than simple brute-force linear searching - but I don't know enough about
the internals of Python's string implementation to give a definite answer
here.

But to be honest: I don't think regexps are easy to beat, unless your
use case is modeled in a way that makes it amenable to other approaches.
 
Josiah Carlson

Chris said:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g. 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Start with a Trie, and virtually merge branches as necessary.
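
In rough terms: store the patterns (with their '#' edges) in a trie and,
while walking the input string, follow both the literal child and the
wildcard child at each step. A minimal sketch of that traversal, assuming
a dict-of-dicts trie with each pattern's data stored under a None key
(the names here are illustrative):

def trie_matches(node, s, i=0):
    '''Yield the data of every stored pattern that matches s exactly.'''
    if i == len(s):
        if None in node:        # a pattern ends exactly here
            yield node[None]
        return
    c = s[i]
    if c in node:               # literal branch
        for v in trie_matches(node[c], s, i + 1):
            yield v
    if '#' in node:             # wildcard branch, "virtually merged"
        for v in trie_matches(node['#'], s, i + 1):
            yield v

trie = {'a': {'b': {'#': {None: 'ab# data'}}},
        '#': {'b': {'c': {None: '#bc data'}}}}
print list(trie_matches(trie, 'abc'))   # -> ['ab# data', '#bc data']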

- Josiah
 
Nicolas Lehuen

Josiah said:
Start with a Trie, and virtually merge branches as necessary.

- Josiah

Yup, you might also have a look at the Aho-Corasick algorithm, which can
match a test string against a big number of strings quite efficiently:

http://www-sr.informatik.uni-tuebingen.de/~buehler/AC/AC.html

You'll have to adapt the algorithm so that it can handle your wildcards,
though. I found it easy to implement the '.' (one-character wildcard),
but the '.*' (zero-or-more-characters wildcard) forces you to
backtrack.

But if you can use the regexp engine, do not hesitate, it will save you
a lot of headaches. Unless of course you're a student and your teacher
asked you this question ;).

Regards,

Nicolas
 
Chris S.

Michael said:
Chris said:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where
# is anything), which I want to match with a test string (e.g.
'abcdef'). What would be the best way for me to store my strings so
lookup is as fast as possible?


If it were me, I would store them as compiled regular expressions.

See the re module documentation and use re.compile().

If you want a better solution, it might help if you supply a little more
information about your problem and why this solution is unsuitable
(maybe it is :]).

The problem is I want to associate some data with my pattern, as in a
dictionary. Basically, my application consists of a number of
conditions, represented as strings with wildcards. Associated to each
condition is arbitrary data explaining "what I must do". My task is to
find this data by matching a state string against these condition
strings. Of course, the brute force approach is to just add each pattern
to a dictionary and linearly search every key for a match. To improve
this, I considered a trie, implemented as a special dictionary:

class Trie(dict):
    '''Implements a traditional Patricia-style Trie.
    Keys must be sequence types. The None key represents a value.'''

    def __init__(self):
        dict.__init__(self)

    def __setitem__(self, key, value):
        assert key, 'invalid key '+str(key)
        d = self
        # walk down the trie, adding nodes as necessary
        for n in key:
            if n not in d:
                dict.__setitem__(d, n, {})
            d = dict.__getitem__(d, n)
        # the value lives under the None key of the final node
        dict.__setitem__(d, None, value)

    def __getitem__(self, key):
        d = self
        for n in key:
            assert n in d, 'invalid key '+str(key)
            d = dict.__getitem__(d, n)
        assert None in d, 'missing value for key '+str(key)
        return dict.__getitem__(d, None)

    def __delitem__(self, key):
        previous = []
        d = self
        for n in key:
            assert n in d, 'invalid key '+str(key)
            previous.append((d, n))
            d = dict.__getitem__(d, n)
        assert None in d, 'missing value for key '+str(key)
        # remove the value
        dict.__delitem__(d, None)
        # find and remove now-empty nodes, from the bottom up
        while previous:
            (d, n) = previous.pop()
            if not dict.__getitem__(d, n):
                dict.__delitem__(d, n)
            else:
                break
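
A quick illustration of how the class above behaves (the key and value
are just placeholders):

t = Trie()
t['abc'] = 'some data'
print t['abc']   # -> 'some data'
del t['abc']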

However, I'm uncertain about the efficiency of this approach. I'd like
to use regexps, but how would I associate data with each pattern?
 
Chris S.

Diez said:
That might be faster, if the string.find operation uses something else than
simple brute force linear searching - but I don't know enough about the
internals of python's string implementation to give an definite answer
here.

But to be honest: I don't think regexps are easy to beat, unless your
usecase is modeled in a way that makes it prone to other approaches.

The problem is, I want to find all patterns that match my test string,
so even with re I'd still have to search through every pattern, which is
what I'm trying to avoid. Something like a trie might be better, but
they don't seem very efficient when implemented in Python.
 
Andrew Dalke

Chris said:
The problem is I want to associate some data with my pattern, as in a
dictionary. Basically, my application consists of a number of
conditions, represented as strings with wildcards. Associated to each
condition is arbitrary data explaining "what I must do". ...
However, I'm uncertain about the efficiency of this approach. I'd like
to use regexps, but how would I associate data with each pattern?

One way is with groups. Make each pattern into a regexp
pattern, then concatenate them as
(pat1)|(pat2)|(pat3)| ... |(patN)

Do the match and find which group has the non-None value.

You may need to tack a "$" on the end of the string (in which
case remember to enclose everything in a () so the $ doesn't
affect only the last pattern).

One thing to worry about is that you can only have 99 groups
in a pattern.

Here's example code.


import re

config_data = [
    ("abc#e#", "Reactor meltdown imminent"),
    ("ab##", "Antimatter containment field breach"),
    ("b####f", "Coffee too strong"),
]

as_regexps = ["(%s)" % pattern.replace("#", ".")
              for (pattern, text) in config_data]

full_regexp = "|".join(as_regexps) + "$"
pat = re.compile(full_regexp)


input_data = [
    "abadb",
    "abcdef",
    "zxc",
    "abcq",
    "b1234f",
]

for text in input_data:
    m = pat.match(text)
    if not m:
        print "%s? That's okay." % (text,)
    else:
        for i, val in enumerate(m.groups()):
            if val is not None:
                print "%s? We've got a %r warning!" % (text,
                                                       config_data[i][1],)



Here's the output I got when I ran it


abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!


Andrew
 
Bengt Richter

Chris said:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g. 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Insufficient info. But 'fast as possible' suggests putting your strings in
a flex grammar and generating a parser in C. See
http://www.gnu.org/software/flex/
Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

If you want to do it in python, you still need to be more precise...

- is # a single character? any number of characters?
- if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
or match abcdef twice?
- if one wild card string matches, does that "use up" the test string so other wild card strings
mustn't match? If so, what has priority? Longest? shortest? Other criterion?
- etc etc

Regards,
Bengt Richter
 
Chris S.

Bengt said:
Insufficient info. But 'fast as possible' suggests putting your strings in
a flex grammar and generating a parser in C. See
http://www.gnu.org/software/flex/
Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

If you want to do it in python, you still need to be more precise...

- is # a single character? any number of characters?
- if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
or match abcdef twice?
- if one wild card string matches, does that "use up" the test string so other wild card strings
mustn't match? If so, what has priority? Longest? shortest? Other criterion?
- etc etc

Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead of a Python
implementation would be too great.
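
Spelled out in code, the intended semantics amount to this plain
restatement (not a proposed implementation):

def matches(pattern, s):
    '''True if pattern matches s exactly; '#' stands for any one char.'''
    if len(pattern) != len(s):
        return False
    for p, c in zip(pattern, s):
        if p != '#' and p != c:
            return False
    return True

print matches('##', '01')    # -> True
print matches('##', '111')   # -> False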
 
Chris S.

Andrew said:
One way is with groups. Make each pattern into a regexp
pattern, then concatenate them as
(pat1)|(pat2)|(pat3)| ... |(patN)

Do the match and find which group has the non-None value.

You may need to tack a "$" on the end of the string (in which
case remember to enclose everything in a () so the $ doesn't
affect only the last pattern).

One thing to worry about is that you can only have 99 groups
in a pattern.

Here's example code.


import re

config_data = [
    ("abc#e#", "Reactor meltdown imminent"),
    ("ab##", "Antimatter containment field breach"),
    ("b####f", "Coffee too strong"),
]

as_regexps = ["(%s)" % pattern.replace("#", ".")
              for (pattern, text) in config_data]

full_regexp = "|".join(as_regexps) + "$"
pat = re.compile(full_regexp)


input_data = [
    "abadb",
    "abcdef",
    "zxc",
    "abcq",
    "b1234f",
]

for text in input_data:
    m = pat.match(text)
    if not m:
        print "%s? That's okay." % (text,)
    else:
        for i, val in enumerate(m.groups()):
            if val is not None:
                print "%s? We've got a %r warning!" % (text,
                                                       config_data[i][1],)



Here's the output I got when I ran it


abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!


Thanks, that's almost exactly what I'm looking for. The only downside I
see is that I still need to add and remove patterns, so continually
recompiling the expression might be expensive.
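
One way to soften that cost might be to defer recompilation until the
first lookup after a change; a hypothetical sketch (PatternTable and its
methods are made-up names, and each alternative carries its own '$'
anchor):

import re

class PatternTable:
    '''Hypothetical helper: rebuild the combined regexp lazily.'''
    def __init__(self):
        self.entries = []      # list of (pattern, data)
        self.compiled = None   # None means "needs recompiling"

    def add(self, pattern, data):
        self.entries.append((pattern, data))
        self.compiled = None

    def remove(self, pattern):
        self.entries = [(p, d) for (p, d) in self.entries
                        if p != pattern]
        self.compiled = None

    def lookup(self, text):
        '''Return the data of the first matching pattern, or None.'''
        if not self.entries:
            return None
        if self.compiled is None:
            parts = ["(%s$)" % p.replace("#", ".")
                     for (p, d) in self.entries]
            self.compiled = re.compile("|".join(parts))
        m = self.compiled.match(text)
        if m is None:
            return None
        return self.entries[m.lastindex - 1][1]

pt = PatternTable()
pt.add("ab##", "Antimatter containment field breach")
print pt.lookup("abcq")   # -> 'Antimatter containment field breach'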
 
Chris S.

Andrew said:
Here's the output I got when I ran it


abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!

Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?
 
Bengt Richter

Chris said:
Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead of a Python
implementation would be too great.

So is the set of patterns static and you want to find which pattern(s!)
match dynamic input? How many patterns vs. input strings? What max
length patterns, input strings? Total volume?

Regards,
Bengt Richter
 
Chris S.

Chris said:
Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?

Spoke too soon. It seems all that was needed was to change:

full_regexp = "|".join(as_regexps) + "$"

to:

full_regexp = "$|".join(as_regexps) + "$"

However, I noticed re still doesn't return multiple matches. For
instance, matching 'abc' against the patterns '#bc', 'a#c', and
'ab#', your code only returns a match for the first pattern, '#bc'. Is
this standard behavior, or is it possible to change this?
 
pitkali

Chris said:
'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?

To avoid this, one would have to add \b to the patterns, AFAIR, so that
they match whole words only.

Regards,
 
Chris S.

Bengt said:
So is the set of patterns static and you want to find which pattern(s!)
match dynamic input? How many patterns vs inputs strings? What max
length patterns, input strings? Total volume?

Patterns and inputs are dynamic, input more so than patterns. The
number, length, and volume of patterns and strings should be arbitrary.
 
Bengt Richter

Chris said:
Patterns and inputs are dynamic, input more so than patterns. The
number, length, and volume of patterns and strings should be arbitrary.

Strategies for performance will vary according to volume (small =>
anything is ok) and the relative sizes of the pattern and input sets,
assuming there's enough work to make you notice a difference: if
patterns >> inputs, walk tables built from the patterns using the
inputs; if patterns << inputs, walk tables built from the input sets
using strategic walks through the patterns. The maximum lengths of
strings and patterns will make some things worthwhile if strings are
small, and dumb if they are thousands of characters. Who can tell what
performance tricks you may need or not? Why don't you just try
(^pattern$|^pat2$|...) to rescan the whole input for each pattern, and
we'll worry about performance later ;-)

Regards,
Bengt Richter
 
Josiah Carlson

Chris said:
Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead of a Python
implementation would be too great.

Having implemented what is known as a burst trie (a trie where you don't
expand a branch until it has more than 'k' entries) in Python, I can say
it ends up taking much more space, but that isn't really an issue unless
you have large numbers of strings (millions) or the strings are long
(kilobytes).

If you want to make it more efficient (space-wise), write the algorithm
and structures in pure C, then wrap it with SWIG. Add options for
inserting and deleting strings, and also querying for strings that match
a certain pattern.

Thinking about it, if your dictionary is very restricted, you could just
toss all strings into a balanced search tree, doing a tree traversal
similar to the trie solution. Much less overhead, most of the
benefits.

- Josiah
 
Andrew Dalke

Chris said:
Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?

It's matching the prefix. To make it match the string and
only the string, you need a $. Either do

(pat1$)|(pat2$)| ... |(patN$)

or do

((pat1)|(pat2)| ... |(patN))$

If you do the latter, don't forget to omit group(1) in
the list of results, or use the non-capturing group
notation, which I believe is (?: ... ) as in

(?:(pat1)|(pat2)| ... |(patN))$
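
A quick sanity check of the anchored, non-capturing form, using the
patterns from the earlier example:

import re

pat = re.compile("(?:(abc.e.)|(ab..)|(b....f))$")
print pat.match("abaxile")          # -> None; prefixes no longer match
print pat.match("abcq").groups()    # -> (None, 'abcq', None)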

Andrew
 
Andrew Dalke

Chris said:
Spoke too soon.

As did I. :)
However, I noticed re still doesn't return multiple matches. For
instance, matching 'abc' against the patterns '#bc', 'a#c', and
'ab#', your code only returns a match for the first pattern, '#bc'. Is
this standard behavior, or is it possible to change this?

This is standard behavior. You can't change it. The
only easy solution along these lines is to have a triangular
table of

(pat1)|(pat2)| .... |(patN)
(pat2)| .... |(patN)
...
(patN)

and if group i matched at a point x then do another
search using the (i+1)th entry in that table at that
point. Repeat until no more matches at x.

I don't know of any off-the-shelf solution for Python
for what you want to do, other than the usual "try
each pattern individually." You'll need to make some
sort of state table (or trie in your case) and do it
that way.
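
The brute-force version is at least short; a minimal sketch, using the
patterns from the example above:

import re

patterns = ['#bc', 'a#c', 'ab#']
compiled = [(p, re.compile(p.replace('#', '.') + '$'))
            for p in patterns]

def all_matches(text):
    '''Return every pattern that matches text exactly.'''
    return [p for (p, rx) in compiled if rx.match(text)]

print all_matches('abc')   # -> ['#bc', 'a#c', 'ab#']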

You *can* use Perl's regexps for this sort of thing. That
regexp language allows embedded Perl code, so this will
get you an answer:


% perl -ne 'while (/((.bc)(?{print "Match 1 at ", length($`),
"\n"})^)|((a.c)(?{print "Match 2 at ", length($`), "\n"})^)|./g){}'
This is abc acbcb
Match 1 at 8
Match 2 at 8
Match 1 at 13

Breaking the pattern down, I'm using "while (/ ... /g) {}" to
match everything in the input string ($_), which comes from
each line of stdin (because of the '-n' command-line flag).

The pattern is

((.bc)(?{print "Match 1 at ", length($`), "\n"})^)
|((a.c)(?{print "Match 2 at ", length($`), "\n"})^)
|.

That is, match ".bc" then execute the corresponding piece
of embedded Perl code. This prints "Match 1 at " followed
by the length of the text before the current match, which
corresponds to the position of the match.

(If you only need to know that there is a match, you don't need the
length($`) part. Using $` in Perl gives a performance hit.)

After it executes, the subgroup matches (embedded executable
code always passes, and it consumes no characters). But
then it gets to the '^' test which fails because this is
never at the beginning of the string.

So the regexp engine tries the next option, which is the
"Match 2 at .." test and print. After the print (if
indeed there is a match 2) it also fails.

This takes the engine to the last option which is the "."
character. And that always passes.

Hence this pattern always consumes one and only one character.
I could put it inside a (...)* to match all characters,
but decided instead to use the while(/.../g){} to do the looping.
Why? Old habits, for no well-determined reason.

(The while loop works because of the 'g' flag on the pattern.)

You talk about needing to eke all the performance you can
out of the system. Have you tried the brute force approach
of just doing N regexp tests?

If you need the performance, it's rather easy to convert
a simple trie into C code, save the result on the fly
to a file, compile that into a Python shared library, and
import that library to get a function that does the
tests given a string. Remember to give a new name to
each shared library, as otherwise the importing gets confused.

Andrew
 
