External Hashing [was Re: matching strings in a large set of strings]

Helmut Jarausch

I think one could apply an external hashing technique which would require only
very few disk accesses per lookup.
Unfortunately, I'm not aware of an implementation in Python.
Does anybody know about a Python implementation of external hashing?

Thanks,
Helmut.

--
Helmut Jarausch

Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
 
John Bokma

Helmut Jarausch said:
I think one could apply an external hashing technique which would require only
very few disk accesses per lookup.
Unfortunately, I'm not aware of an implementation in Python.
Does anybody know about a Python implementation of external hashing?

SQLite?
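
For a membership test, the standard library's sqlite3 module is enough. A
minimal sketch of that idea (the file, table, and column names here are
illustrative, not from the thread):

    # Using SQLite as an on-disk string set, per the suggestion above.
    import sqlite3

    con = sqlite3.connect('strings.db')
    con.execute('CREATE TABLE IF NOT EXISTS strings (s TEXT PRIMARY KEY)')
    con.execute('INSERT OR IGNORE INTO strings VALUES (?)', ('some string',))
    con.commit()

    # The PRIMARY KEY index keeps each lookup down to a few page reads.
    cur = con.execute('SELECT 1 FROM strings WHERE s = ?', ('some string',))
    print(cur.fetchone() is not None)     # True
    con.close()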
 
Tim Chase

I think one could apply an external hashing technique which would require only
very few disk accesses per lookup.
Unfortunately, I'm not aware of an implementation in Python.
Does anybody know about a Python implementation of external hashing?

While you don't detail what you're hashing, Stephan Behnel
already suggested (in the parent thread) using one of Python's
native dbm modules (I just use anydbm and let it choose). The
underlying implementations should be fairly efficient (assuming
you don't use the dumbdbm last-resort fallback). With the anydbm
interface, you can implement dict/set semantics as long as you
take care that everything is marshalled into and out of strings
for keys/values.
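
A minimal sketch of those set semantics, assuming Python 3's dbm module
(the modern successor to anydbm); the filename is illustrative:

    # Set semantics over a dbm file: a key's presence is the membership test.
    import dbm                            # on Python 2, "import anydbm" instead

    db = dbm.open('strings.db', 'c')      # 'c': create the file if it's missing
    db['some string'] = '1'               # value is a dummy; only the key matters
    print('some string' in db)            # True
    print('missing' in db)                # False
    db.close()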

-tkc
 
Dave Angel

Helmut said:
I think one could apply an external hashing technique which would require only
very few disk accesses per lookup.
Unfortunately, I'm not aware of an implementation in Python.
Does anybody know about a Python implementation of external hashing?

Thanks,
Helmut.
That's what databases are for. But if you're trying to streamline things
beyond what a database offers, you need to give some constraints. In an
earlier thread, you mentioned strings of a constant size. I don't
think you ever actually said that the match string was also of the same
size, but I'll assume it is. The final question is how often you'll be
updating the list versus searching it. If you do both frequently,
stick to a database.

On the other hand, if the list on disk is fixed, you could thoroughly
optimize its makeup. Perhaps the easiest efficient form would be to
have a file in two parts. The first part is the hash table, where each
entry has a seek-offset and a size. And the rest of the file is just a
"list" of items, sorted by hash value.

If you make the hash table large enough that the maximum collision list
is a couple of KB, then two seeks would get any record. First seek to the
hash entry, then use that to decide where to seek for the data and how
much to read. Then you simply search that block in memory.
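
A sketch of that two-seek lookup. The on-disk layout here is assumed, not
from the thread: an 8-byte bucket count, then one (offset, size) pair of
little-endian 8-byte integers per bucket, then the fixed-width records
grouped by bucket. zlib.crc32 stands in for whatever stable hash built
the file:

    import struct
    import zlib

    RECORD_SIZE = 16                              # assumed fixed record width

    def lookup(f, key, num_buckets):
        # key: bytes of length RECORD_SIZE; f: the table file, opened 'rb'
        bucket = zlib.crc32(key) % num_buckets    # stable across runs, unlike hash()
        # Seek 1: read this bucket's (offset, size) entry from the hash table.
        f.seek(8 + bucket * 16)                   # 8-byte header, 16 bytes per entry
        offset, size = struct.unpack('<QQ', f.read(16))
        # Seek 2: read the whole collision block and scan it in memory.
        f.seek(offset)
        block = f.read(size)
        return any(block[i:i + RECORD_SIZE] == key
                   for i in range(0, size, RECORD_SIZE))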

You could also play games with a two-level hash, where each block of
records has its own mini hash table. Doesn't seem worth it for a mere
83 million records.

DaveA
 
