strncmp performance


Sean Kenwrick

pembed2003 said:
"Sean Kenwrick" <[email protected]> wrote in message

Do you mean memcmp() here instead? Yes, I think memcmp() is faster than
strcmp() or strncmp(), but I need to find out which string is longer and pass
in the length of that one. Otherwise, something like:

char* s1 = "auto";
char* s2 = "auto insurance";

memcmp(s1,s2,strlen(s1));

will return 0, which isn't right. I will need to do the extra work like:

int l1 = strlen(s1);
int l2 = strlen(s2);

memcmp(s1,s2,l1 > l2 ? l1 : l2);

Do you think that will be faster than a strcmp or strncmp?

You need to examine your code for data-caching possibilities. What I mean
by this is that you compute a value at some stage which you keep and use
multiple times later on. In this case the important value is the length
of the strings. From a previous post it looks like you are evaluating
hash_keys() prior to posting keys into your lookup table - it seems that
this function is a likely candidate for calculating the length of the string
with little overhead (e.g. save the original pointer and use pointer
arithmetic at the end to calculate the strlen()). You could then store
the length along with the other information in your lookup table.
Then you only need to do a memcmp() if the strings are of equal length, and
you already have the string lengths calculated if you do need to call
memcmp()...
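
A minimal sketch of that idea, assuming a hypothetical table entry that
stores the precomputed key length alongside the key and value (the names
entry_t, key, key_len and value are illustrative, not from the original
code):

#include <string.h>

typedef struct {
    const char *key;      /* stored key */
    size_t      key_len;  /* length cached when the key was inserted */
    const char *value;
} entry_t;

/* Returns non-zero if the lookup string s (whose length s_len is already
   known) matches the entry. memcmp() is only reached when both buffers
   are known to be at least s_len bytes long. */
static int entry_matches(const entry_t *e, const char *s, size_t s_len)
{
    return e->key_len == s_len && memcmp(e->key, s, s_len) == 0;
}

The caller computes the length of the lookup key once (or reuses the count
already made while hashing it) and passes it in, so nothing walks the
strings a second time.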

Sean
 

CBFalconer

Nick said:
pembed2003 wrote:

[ snip ]
I have a very simple hash table where the keys are strings and the
values are also strings. What I want to let people do is:

hash_insert("key","value");

and then later retrieve "value" by saying:

hash_lookup("key");

The hash algorithm is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
    unsigned long int c = 0;
    while(*s){
        c = 131 * c + *s;
        s++;
    }
    return c % MAX_HASH_SIZE;
}

At first glance, that does not look like a very robust
hash algorithm and it MAY be why you are doing so many
strncmp() calls. Have you any data on how many strings
hash into the same hash code? If it's more than 2 or 3
on average, then you should revise the hash algorithm
rather than trying to optimize strcmp().

A good hash algorithm can get down to about 1.5 probes/search.
Try the CRC algorithm for starters.

His function sounds much too dependent on the low-order bits of
the last character hashed.

To experiment with hash functions and immediately see the
probes/search and other statistics, the OP could try using the
hashlib package. It was born out of an investigation into hash
functions. There are some sample string hashing routines, and
references to other methods.

<http://cbfalconer.home.att.net/download/hashlib.zip>
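
One cheap way to gather that data is to tally how many keys land in each
bucket before worrying about the string comparisons. A minimal sketch,
assuming the OP's hash_code() and MAX_HASH_SIZE are visible here (the
placeholder value for MAX_HASH_SIZE is illustrative only):

#include <stdio.h>
#include <stddef.h>

#ifndef MAX_HASH_SIZE
#define MAX_HASH_SIZE 1024   /* placeholder; use the real table size */
#endif

unsigned long hash_code(const char* s);   /* the OP's hash function */

/* Tally how many of the n keys fall into each bucket and report the
   worst case, to see what the collisions really cost. One-shot: the
   static array is not reset between calls. */
static void report_collisions(const char *const *keys, size_t n)
{
    static unsigned long counts[MAX_HASH_SIZE];  /* zero-initialised */
    unsigned long worst = 0;
    size_t i, used = 0;

    for (i = 0; i < n; i++) {
        unsigned long b = hash_code(keys[i]);
        if (counts[b]++ == 0)
            used++;
        if (counts[b] > worst)
            worst = counts[b];
    }
    printf("%lu keys, %lu non-empty buckets, worst bucket holds %lu keys\n",
           (unsigned long)n, (unsigned long)used, worst);
}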
 

Rob Thorpe

I have a very simple hash table where the keys are strings and the values
are also strings. What I want to let people do is:

hash_insert("key","value");

and then later retrieve "value" by saying:

hash_lookup("key");

The hash algorithm is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
    unsigned long int c = 0;
    while(*s){
        c = 131 * c + *s;
        s++;
    }
    return c % MAX_HASH_SIZE;
}

It's possible for two different strings to have the same hash code, so
when I am doing a lookup, I am doing something like:

char* hash_lookup(const char* s){
    unsigned long c = hash_code(s);
    // Note I can't simply return the slot at c because
    // it's possible that another different string is at
    // this spot because of the hash algorithm. So I need to...
    if(strcmp(s,(hash+c)->value) == 0){
        // Found it...
    }else{
        // Do something else
    }
}

So because of my hash algorithm, an extra strcmp is needed.

Since it is a hashing algorithm, I would assume that the chance of the
first character of one string being the same as that of the other is very
high. Only if the hash table becomes extremely full does this change.

If this is the case replace:

if (strcmp (s, (hash + c)->value) == 0) {
    // Found it...
}

with

if (s[0] == *((hash + c)->value)) /* Compare first chars of the two strings. */
{
    if (strcmp (s, (hash + c)->value) == 0) /* Only now compare strings. */
    {
        // Found it...
    }
    else
    {
        // Do something else ..
    }
}
else
{
    // Do something else ..
}

This will kick out the cases where the strings don't match rather
faster. It removes the overhead of a function call and the beginning
of a loop (though neither amounts to much these days).

You may want to consider using a flag that marks a bucket as occupied
rather than comparing the string in the bucket to the key. That way,
the first time the program looks up a key, no comparison is done at
all - that first lookup is the insertion.

You may also want to consider:

* Using a better hash function (read
http://burtleburtle.net/bob/hash/index.html#lookup - a sketch of one such
function follows this list)

* Resizing the hash when it's nearly full

* Using linked lists as buckets.
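
For the first point, a sketch of Bob Jenkins' "one-at-a-time" hash from the
page linked above, adapted as a drop-in replacement for the hash_code()
shown earlier (it assumes the same MAX_HASH_SIZE; the cast to unsigned char
also avoids feeding negative char values into the mix):

unsigned long hash_code(const char *s)
{
    unsigned long h = 0;
    while (*s) {
        h += (unsigned char)*s++;   /* mix in the next byte */
        h += h << 10;
        h ^= h >> 6;
    }
    /* final avalanche so every input byte affects every output bit */
    h += h << 3;
    h ^= h >> 11;
    h += h << 15;
    return h % MAX_HASH_SIZE;
}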
 

pembed2003

[snip]
You need to examine your code for data-caching possibilities. What I mean
by this is that you compute a value at some stage which you keep and use
multiple times later on. In this case the important value is the length
of the strings. From a previous post it looks like you are evaluating
hash_keys() prior to posting keys into your lookup table - it seems that
this function is a likely candidate for calculating the length of the string
with little overhead (e.g. save the original pointer and use pointer
arithmetic at the end to calculate the strlen()). You could then store
the length along with the other information in your lookup table.
Then you only need to do a memcmp() if the strings are of equal length, and
you already have the string lengths calculated if you do need to call
memcmp()...

Sean

In fact, that's exactly what I am doing now. If the lengths aren't the same,
there is no chance for the strings to be the same. If they are the
same length, memcmp can be used. So instead of doing strcmp (or
strncmp) I am doing either strlen and/or memcmp, which should be
faster. Another problem I have now encountered is that the string
passed in to my function is not from a C program; it comes from a PHP
extension (which is written in C). Because of this, I sometimes get a
segfault, which I think is related to PHP not terminating the string with
the '\0' marker. My question is: Does memcmp care whether there is a
'\0' marker somewhere or not? Is there any circumstance where memcmp
might segfault?

Thanks!
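
On the memcmp() question: memcmp() compares exactly the number of bytes it
is told to and pays no attention to '\0' bytes, so if either buffer is
shorter than that count it reads past the end, which is undefined behaviour
and can indeed segfault. A minimal sketch of a comparison that never
overruns either buffer, assuming the caller (here, the PHP extension) can
supply both lengths explicitly; the function name and parameters are
illustrative:

#include <string.h>

/* Compare two counted strings that need not be '\0'-terminated.
   Neither buffer is read beyond its stated length. */
static int counted_equal(const char *a, size_t a_len,
                         const char *b, size_t b_len)
{
    return a_len == b_len && memcmp(a, b, a_len) == 0;
}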
 

Paul Hsieh

I've just been reading this thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/
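
For reference, a minimal sketch of the 32-bit FNV-1a variant described on
that page (the constants are the standard 32-bit offset basis and FNV
prime; the result would still be reduced modulo MAX_HASH_SIZE as in the
original hash_code()):

#include <stdint.h>

uint32_t fnv1a_32(const char *s)
{
    uint32_t h = 2166136261u;        /* FNV-1a 32-bit offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;    /* xor in the next byte ...   */
        h *= 16777619u;              /* ... then multiply by the FNV prime */
    }
    return h;
}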

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/

In this library, the length of each string is predetermined as they
are created or modified (this is very cheap, while leading to massive
performance improvements in some string functionality.) In this way
you can use memcmp() (which has the potential of being implemented to
use block comparisons) directly without incurring the string traversal
costs (in general.) The Better String library also includes its own
string comparison functions, of course, which additionally capture
trivial cases like strings having different lengths and aliased string
pointers in O(1) time.
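
For illustration, a small usage sketch, assuming the usual Bstrlib entry
points bfromcstr(), biseq() and bdestroy() (check the library's header for
the exact signatures):

#include "bstrlib.h"

int demo(void)
{
    bstring a = bfromcstr("auto");
    bstring b = bfromcstr("auto insurance");
    /* biseq() already knows both lengths, so unequal lengths are
       rejected before any byte-by-byte comparison is done. */
    int equal = (biseq(a, b) == 1);
    bdestroy(a);
    bdestroy(b);
    return equal;   /* 0 for these two strings */
}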

Additionally, calling strlen(), or using strcmp, or strncmp, or
whatever else based on the assumption of using raw char * buffers will all
incur an additional O(n) cost no matter how you slice it, which may
be a big factor in what is showing up on your bottom line. Using
libraries like Bstrlib (which essentially has an O(1) strlen) as
described above is really the only way to avoid this cost.

As a point of disclosure, I am the author of Bstrlib -- other
libraries like Vstr (www.and.org/vstr) have comparable mechanisms.
 

Christian Bau

I've just been reading this thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/

In this library, the length of each string is predetermined as they
are created or modified (this is very cheap, while leading to massive
performance improvements in some string functionality.) In this way
you can use memcmp() (which has the potential of being implemented to
use block comparisons) directly without incurring the string traversal
costs (in general.) The Better String library also includes its own
string comparison functions, of course, which additionally capture
trivial cases like strings having different lengths and aliased string
pointers in O(1) time.

If the data is completely under your control, you could make different
changes: Store all the strings in arrays of unsigned long instead of
unsigned char. In the table, end every string with at least one zero and
a 1, with the 1 being the last byte in an unsigned long. In the strings
that you pass in, end every string with at least two zeroes, with the
last zero being the last byte in an unsigned long.

You can now compare one unsigned long at a time. You don't need to
check for the end of the strings because the data you pass in and the
data in your table will be different. After finding the first unsigned
long that is different, the strings are equal if the difference between
the two words is 1 and the last byte of the unsigned long that you
took from the table is 1.
 

pembed2003

I've just been reading this thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/

Thanks for pointing out a better hash algorithm and string library. I will
consider using both in my application. As I have pointed out in
another thread, my problem now seems to be PHP not terminating the
string with '\0', which results in a segfault.
 
