Looking for lots of words in lots of files

brad

Just wondering if anyone has ever solved this efficiently... not looking
for specific solutions tho... just ideas.

I have one thousand words and one thousand files. I need to read the
files to see if some of the words are in the files. I can stop reading a
file once I find 10 of the words in it. It's easy for me to do this with
a few dozen words, but a thousand words is too large for an RE and too
inefficient to loop, etc. Any suggestions?

Thanks
 
Diez B. Roggisch

brad said:
Just wondering if anyone has ever solved this efficiently... not looking
for specific solutions tho... just ideas.

I have one thousand words and one thousand files. I need to read the
files to see if some of the words are in the files. I can stop reading a
file once I find 10 of the words in it. It's easy for me to do this with
a few dozen words, but a thousand words is too large for an RE and too
inefficient to loop, etc. Any suggestions?

Use an indexer, like lucene (available as pylucene) or a database that
offers word-indices.

Diez
 
Calvin Spealman

Upload, wait, and google them.

Seriously tho, aside from using a real indexer, I would build a set
of the words I'm looking for, and then loop over each file, looping
over the words and doing quick checks for containment in the set. If a
word is in the set, add it to a dict mapping file names to lists of words
found, until a list hits length 10. I don't think that would be a complicated solution
and it shouldn't be terrible at performance.
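
Something along these lines (an untested sketch; the function name and the
naive whitespace tokenisation are just my assumptions, not from the post):

    def find_words(word_list, filenames, limit=10):
        words = set(word_list)             # the thousand search words
        found = {}                         # file name -> list of distinct words found
        for name in filenames:
            hits = []
            with open(name, errors="ignore") as f:
                for line in f:
                    for token in line.split():      # naive whitespace tokenisation
                        if token in words and token not in hits:
                            hits.append(token)
                    if len(hits) >= limit:
                        break                       # stop reading this file at 10 hits
            found[name] = hits
        return found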

If you need to run this more than once, use an indexer.

If you only need to use it once, use an indexer, so you learn how for
next time.
 
Kris Kennaway

Calvin said:
Upload, wait, and google them.

Seriously tho, aside from using a real indexer, I would build a set of
the words I'm looking for, and then loop over each file, looping over
the words and doing quick checks for containment in the set. If a word is
in the set, add it to a dict mapping file names to lists of words found,
until a list hits length 10. I don't think that would be a complicated
solution and it
shouldn't be terrible at performance.

If you need to run this more than once, use an indexer.

If you only need to use it once, use an indexer, so you learn how for
next time.

If you can't use an indexer, and performance matters, evaluate using
grep and a shell script. Seriously.

grep is a couple of orders of magnitude faster at pattern matching
strings in files (and especially regexps) than python is. Even if you
are invoking grep multiple times it is still likely to be faster than a
"maximally efficient" single pass over the file in python. This
realization was disappointing to me :)

Kris
 
Robert Bossy

brad said:
Just wondering if anyone has ever solved this efficiently... not
looking for specific solutions tho... just ideas.

I have one thousand words and one thousand files. I need to read the
files to see if some of the words are in the files. I can stop reading
a file once I find 10 of the words in it. It's easy for me to do this
with a few dozen words, but a thousand words is too large for an RE
and too inefficient to loop, etc. Any suggestions?
The quick answer would be:
grep -F -f WORDLIST FILE1 FILE2 ... FILE1000
where WORDLIST is a file containing the thousand words, one per line.

The more interesting answers would be to use either a suffix tree or an
Aho-Corasick graph.

- The suffix tree is a representation of the target string (your files)
that allows you to search quickly for a word. Your problem would then be
solved by 1) building a suffix tree for your files, and 2) searching for
each word sequentially in the suffix tree.

- The Aho-Corasick graph is a representation of the query word list that
allows fast scanning for the words over a target string. Your problem would
then be solved by 1) building an Aho-Corasick graph for the list of
words, and 2) scanning each file sequentially.

The preference for one or the other depends on some details of your
problem: the expected size of the target files, the rate of overlap
between the words in your list (are there common prefixes?), whether you
will repeat the operation with another word list or another set of files,
etc. Personally, I'd lean towards Aho-Corasick; that is partly a matter of
taste, and the kinds of applications that come to my mind make it more
practical.

Btw, the `grep -F -f` combo builds an Aho-Corasick graph. Also, you can
find modules for building both data structures in the Python Package Index.
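
For example, with the pyahocorasick module from PyPI (just a sketch, assuming
that package is installed; the function name and 10-hit cutoff handling are mine):

    import ahocorasick

    def scan_files(words, filenames, limit=10):
        # Build the automaton once for the whole word list.
        automaton = ahocorasick.Automaton()
        for word in words:
            automaton.add_word(word, word)
        automaton.make_automaton()

        found = {}
        for name in filenames:
            with open(name, errors="ignore") as f:
                text = f.read()
            hits = set()
            for _end, word in automaton.iter(text):
                hits.add(word)
                if len(hits) >= limit:
                    break          # 10 distinct words found, stop this file
            found[name] = hits
        return found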

Cheers,
RB
 
Martin P. Hellwig

Kris Kennaway wrote:
If you can't use an indexer, and performance matters, evaluate using
grep and a shell script. Seriously.

grep is a couple of orders of magnitude faster at pattern matching
strings in files (and especially regexps) than python is. Even if you
are invoking grep multiple times it is still likely to be faster than a
"maximally efficient" single pass over the file in python. This
realization was disappointing to me :)

Kris

Adding to this:
Then again, there is nothing wrong with wrapping grep from Python and
reverting to a pure Python 'solution' if the system has no grep.
Reinventing the wheel is usually only practical if the existing ones
aren't round :)
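
A rough sketch of such a wrapper (modern Python; the function name, the -l
flag to list matching files, and the crude whitespace-split fallback are my
own assumptions):

    import shutil
    import subprocess

    def files_containing_words(wordlist_path, filenames):
        """Return the names of files containing at least one listed word."""
        if shutil.which("grep"):
            # Let grep do the heavy lifting when it is available.
            result = subprocess.run(
                ["grep", "-F", "-l", "-f", wordlist_path] + list(filenames),
                capture_output=True, text=True)
            return result.stdout.splitlines()
        # Pure Python fallback for systems without grep.
        with open(wordlist_path) as f:
            words = set(f.read().split())
        matches = []
        for name in filenames:
            with open(name, errors="ignore") as f:
                if words & set(f.read().split()):
                    matches.append(name)
        return matches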
 
Jeff McNeil

Use an indexer, like lucene (available as pylucene) or a database that
offers word-indices.

Diez

I've been toying around with Nucular (http://nucular.sourceforge.net/)
a bit recently for some side projects. It's pure Python and seems to
work fairly well for my needs. I haven't pumped all that much data
into it, though.
 
Cong

If you can't use an indexer, and performance matters, evaluate using
grep and a shell script.  Seriously.

grep is a couple of orders of magnitude faster at pattern matching
strings in files (and especially regexps) than python is.  Even if you
are invoking grep multiple times it is still likely to be faster than a
"maximally efficient" single pass over the file in python.  This
realization was disappointing to me :)

Kris

Alternatively, if you don't feel like writing shell scripts, you can
write a Python program which auto-generates the desired shell script
that utilizes grep. E.g. use Python to generate the file list
which is passed to grep as arguments. ;-P
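
Something like this, for instance (a sketch; the file pattern, word list name,
and output script name are all made up):

    import glob
    import shlex

    # Collect the target files in Python, then emit a shell script that runs grep.
    files = sorted(glob.glob("data/*.txt"))
    with open("run_grep.sh", "w") as script:
        script.write("#!/bin/sh\n")
        script.write("grep -F -f wordlist.txt "
                     + " ".join(shlex.quote(name) for name in files) + "\n")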
 
Bruno Desthuilliers

brad wrote:
Just wondering if anyone has ever solved this efficiently... not looking
for specific solutions tho... just ideas.

I have one thousand words and one thousand files. I need to read the
files to see if some of the words are in the files. I can stop reading a
file once I find 10 of the words in it. It's easy for me to do this with
a few dozen words, but a thousand words is too large for an RE and too
inefficient to loop, etc. Any suggestions?

Full text indexing.
 
