Efficient way of looking up huge hashes


Ram Prasad

Hi,
I am quite a newbie to C programming (not new to programming itself,
though).
I am writing an email blacklist application that will look up huge DB
files like

----------
somedomain.com=emailid1
someotherdomain.com=emailid2
somedomain.com=senderdomain1
....
....

---------------------

There would be anywhere between 10k and 200k such records.
What is the best way of looking up such a list? I want the queries to
return very fast; ideally the memory footprint would be low, but that
is not a limiting factor.

Should I use something like Berkeley DB and read from a DB file, or
should I read everything into memory?


Thanks
Ram

PS:
Note to Spammers: Go ahead, send me spam
(e-mail address removed)
http://ecm.netcore.co.in/spamtrap.html
 

Richard Tobin

Ram Prasad said:
I am writing an email blacklist application that will look up huge DB
files like

----------
somedomain.com=emailid1
someotherdomain.com=emailid2
somedomain.com=senderdomain1
...
...

Even with 200k records that's still only about 5MB, which is hardly
huge. Does a single run of the program just do one lookup? If so,
and it runs infrequently, you could just read the file in and compare
the strings as you go. On the other hand, if it sits there repeatedly
processing addresses, it would be reasonable to use an in-memory hash
table.
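
The read-and-compare version might look roughly like this (a minimal
sketch; the function and file name are illustrative, with one record
per line as in the sample above):

#include <stdio.h>
#include <string.h>

/* one-shot lookup: scan the file line by line, no table needed */
int blacklisted(const char *key, const char *path)
{
    char line[1024];
    FILE *fp = fopen(path, "r");
    int found = 0;

    if (fp == NULL)
        return -1;
    while (!found && fgets(line, sizeof line, fp) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip the newline */
        found = (strcmp(line, key) == 0);
    }
    fclose(fp);
    return found;
}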

If you're doing single lookups but running the program very often (say
once a second), or if you think the list could get much bigger, then
an on-disk hash table such as Berkeley DB would be the way to go. It
has the advantage that it won't read the whole file.
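
A Berkeley DB lookup might look roughly like this (a sketch against the
classic C API; the blacklist.db file name is made up, and the exact
open() signature varies between BDB versions):

#include <stdio.h>
#include <string.h>
#include <db.h>     /* Berkeley DB */

int main(void)
{
    DB *dbp;
    DBT key, data;
    const char *lookup = "somedomain.com=emailid1";

    if (db_create(&dbp, NULL, 0) != 0)
        return 1;

    /* open an existing on-disk hash table; pages are read on demand */
    if (dbp->open(dbp, NULL, "blacklist.db", NULL, DB_HASH, DB_RDONLY, 0) != 0)
        return 1;

    memset(&key, 0, sizeof key);
    memset(&data, 0, sizeof data);
    key.data = (void *)lookup;
    key.size = strlen(lookup);

    /* get() returns 0 on a hit, DB_NOTFOUND when the key is absent */
    printf("%s\n", dbp->get(dbp, NULL, &key, &data, 0) == 0
                       ? "blacklisted" : "ok");

    dbp->close(dbp, 0);
    return 0;
}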

-- Richard
 

Ram Prasad

Richard Tobin said:
Even with 200k records that's still only about 5MB, which is hardly
huge. Does a single run of the program just do one lookup? If so,
and it runs infrequently, you could just read the file in and compare
the strings as you go. On the other hand, if it sits there repeatedly
processing addresses, it would be reasonable to use an in-memory hash
table.

If you're doing single lookups but running the program very often (say
once a second), or if you think the list could get much bigger, then
an on-disk hash table such as Berkeley DB would be the way to go. It
has the advantage that it won't read the whole file.

-- Richard

This will be running in a mail-filter daemon. A single instance
would potentially do thousands of lookups. Since the program would be
a daemon, I could read the entire DB into memory during startup and
use it from there.
Which libraries should I use for such lookups? I don't need a
hash lookup, just an if_key_exists() lookup.
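
One load-once, probe-many sketch of that if_key_exists() table, using
nothing beyond POSIX hcreate()/hsearch() (the blacklist.txt name is
made up; keys are strdup'ed and deliberately never freed, since the
daemon keeps them for its lifetime):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <search.h>     /* POSIX hcreate()/hsearch() */

int main(void)
{
    char line[1024];
    FILE *fp = fopen("blacklist.txt", "r");

    /* size the table generously for up to ~200k records */
    if (fp == NULL || hcreate(400000) == 0)
        return 1;

    /* load every record once at daemon startup */
    while (fgets(line, sizeof line, fp) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        ENTRY e = { strdup(line), NULL };
        hsearch(e, ENTER);
    }
    fclose(fp);

    /* if_key_exists(): FIND returns NULL when the key is absent */
    ENTRY probe = { (char *)"somedomain.com=emailid1", NULL };
    printf("%s\n", hsearch(probe, FIND) != NULL ? "blacklisted" : "ok");

    return 0;
}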
 

Duncan Muirhead

Ram Prasad said:
<snip>
This will be running in a mail-filter daemon. A single instance
would potentially do thousands of lookups. Since the program would be
a daemon, I could read the entire DB into memory during startup and
use it from there.
Which libraries should I use for such lookups? I don't need a
hash lookup, just an if_key_exists() lookup.

http://judy.sourceforge.net/
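
The string-keyed JudySL flavour covers the if_key_exists() case in a
few macros; roughly (a sketch, assuming the library is installed and
linked with -lJudy; the blacklist.txt name is made up):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <Judy.h>

int main(void)
{
    Pvoid_t blacklist = NULL;   /* JudySL array, starts empty */
    PWord_t pvalue;
    char line[1024];
    FILE *fp = fopen("blacklist.txt", "r");

    if (fp == NULL)
        return 1;

    /* insert each record as a key; the stored value is unused */
    while (fgets(line, sizeof line, fp) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        JSLI(pvalue, blacklist, (const uint8_t *)line);
        *pvalue = 1;
    }
    fclose(fp);

    /* membership test: JSLG sets pvalue to NULL when the key is absent */
    JSLG(pvalue, blacklist, (const uint8_t *)"somedomain.com=emailid1");
    printf("%s\n", pvalue != NULL ? "blacklisted" : "ok");

    return 0;
}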
 

Tor Rustad

Ram said:
Should I use something like Berkeley DB and read from a DB file, or
should I read everything into memory?

I'm having a similar problem, and I was thinking about using Berkeley
DB too.

I wouldn't worry too much about the performance issue: if you have the
table in memory, the OS might swap your unused memory pages to disk
anyway. Likewise, if you access some disk location a lot, it will
typically be in the cache.

So, using Berkeley DB is a good idea. Knuth once said:

"premature optimization is the root of all evil"
 

Ram Prasad

Tor Rustad said:
I'm having a similar problem, and I was thinking about using Berkeley
DB too.

I wouldn't worry too much about the performance issue: if you have the
table in memory, the OS might swap your unused memory pages to disk
anyway. Likewise, if you access some disk location a lot, it will
typically be in the cache.

So, using Berkeley DB is a good idea.

I am evaluating multiple options. I think tinycdb looks very
promising:
http://www.corpit.ru/mjt/tinycdb.html

It compiles easily on my machine and the example scripts work without
much fuss.
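
A lookup against a pre-built .cdb file takes only a few calls; roughly
(a sketch based on the tinycdb API; the blacklist.cdb name is made up):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <cdb.h>

int main(void)
{
    struct cdb cdb;
    const char *key = "somedomain.com=emailid1";
    int fd = open("blacklist.cdb", O_RDONLY);

    if (fd < 0 || cdb_init(&cdb, fd) != 0)
        return 1;

    /* cdb_find() returns > 0 when the key exists, 0 when it doesn't */
    printf("%s\n", cdb_find(&cdb, key, strlen(key)) > 0
                       ? "blacklisted" : "ok");

    cdb_free(&cdb);
    close(fd);
    return 0;
}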
Unfortunately, with so many options available there is no single
standard method. How does one make a choice?
 

Tor Rustad

Ram said:
I am evaluating multiple options. I think tinycdb looks very
promising:
http://www.corpit.ru/mjt/tinycdb.html

Nice and simple C API, but rather limited platform support.
Unfortunately, with so many options available there is no single
standard method. How does one make a choice?

Yes, there are pros and cons. If I have a tuning problem, I would
prototype multiple solutions and select the one that gets the job done
with minimal drawbacks. In practice,

* robustness / maturity
* maintainability
* support / documentation
* portability
* fingerprint
* security
* error handling
* simplicity

etc.

might be important design parameters too. I rarely select the fastest
solution.

The simplest solution to compare against would perhaps be holding the
blacklist in memory and using qsort()/bsearch(). That prototype can be
made in no time.
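
For illustration, such a prototype might look roughly like this (a
minimal sketch; a few inline records stand in for the loaded
blacklist):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* comparison callback for an array of string pointers */
static int cmpstr(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
    const char *records[] = {
        "someotherdomain.com=emailid2",
        "somedomain.com=senderdomain1",
        "somedomain.com=emailid1",
    };
    size_t n = sizeof records / sizeof records[0];
    const char *key = "somedomain.com=emailid1";

    qsort(records, n, sizeof records[0], cmpstr);   /* sort once at startup */

    /* bsearch() takes a pointer to the key, matching the element type */
    const char **hit = bsearch(&key, records, n, sizeof records[0], cmpstr);
    printf("%s\n", hit != NULL ? "blacklisted" : "ok");

    return 0;
}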

However, for expert advice, you should instead consult one of the many
database newsgroups.
 

Ram Prasad

Tor Rustad said:
Nice and simple C API, but rather limited platform support.

Yes, there are pros and cons. If I have a tuning problem, I would
prototype multiple solutions and select the one that gets the job done
with minimal drawbacks. In practice,

* robustness / maturity
* maintainability
* support / documentation
* portability
* fingerprint
* security
* error handling
* simplicity

etc.

might be important design parameters too. I rarely select the fastest
solution.

The simplest solution to compare against would perhaps be holding the
blacklist in memory and using qsort()/bsearch(). That prototype can be
made in no time.

Can you please explain "fingerprint"?


Would you suggest that bsearch() over 5MB of data would be better
than a hash-DB lookup?
What are the implications of multiple instances running? Can they
share the same data area?


Thanks
Ram

PS:
Note to Spammers: Go ahead, send me spam
(e-mail address removed)
http://ecm.netcore.co.in/spamtrap.html
 

Tor Rustad

Ram said:
Can you please explain "fingerprint"?

That was a typo for "footprint", i.e. the memory usage.
Would you suggest that bsearch() over 5MB of data would be better
than a hash-DB lookup?

The relevant question for you: is a simple qsort/bsearch sufficient?

What are the implications of multiple instances running? Can they
share the same data area?

Those implications are system-specific; we don't discuss file locks,
mutexes, thread programming, etc. here.
 
