bob_jenkins
The C code
for (hash = seed, i = 0; i < length; ++i)
    hash = tab[(key[i] ^ hash) & 0xff] ^ (hash >> 8);
with the bytes of tab[] being permutations of 0..255 is a nice portable
hash, but not all that fast. Or it's a CRC if you fill the table
appropriately. A CRC is linear and has many nice mathematical properties
that you may want (or may want to avoid). Filling tab[] with
permutations, I've still got to check whether that's linear in some sense,
but I don't think it is. Nonlinear functions make better
approximations of random mappings.
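As a concrete sketch of the one-byte version, here is a self-contained C version. The table fill below is a made-up affine permutation chosen only so each byte lane of tab[] is a permutation of 0..255; a real table would use four independently shuffled permutations.

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t tab[256];

static void init_tab(void)
{
    /* Hypothetical fill: i*167+13 mod 256 is a permutation (167 is odd),
     * so each byte lane of tab[] is a permutation of 0..255.  This is a
     * placeholder, not a recommended table. */
    for (int i = 0; i < 256; ++i) {
        uint8_t p = (uint8_t)(i * 167 + 13);
        tab[i] = (uint32_t)p
               | ((uint32_t)(uint8_t)(p + 91)  << 8)
               | ((uint32_t)(uint8_t)(p + 173) << 16)
               | ((uint32_t)(uint8_t)(p + 222) << 24);
    }
}

/* The one-byte table hash: each step XORs a key byte into the low byte
 * of the hash, looks up a 32-bit table entry, and folds in the shifted
 * previous hash. */
static uint32_t hash1(const uint8_t *key, size_t length, uint32_t seed)
{
    uint32_t hash;
    size_t i;
    for (hash = seed, i = 0; i < length; ++i)
        hash = tab[(key[i] ^ hash) & 0xff] ^ (hash >> 8);
    return hash;
}
```

Because the low-byte lane of tab[] is a permutation, keys differing in their last byte always get different hashes.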
A CRC can be made twice as fast,
for (hash = seed, i = 0; i < length; ++i)
    hash = tab[(key[i] ^ hash) & 0xffff] ^ (hash >> 16);
if key[] has 2-byte entries instead of 1-byte entries, and it can
compute the same hash as the 1-byte version (though you'd need
different tables for different endian machines). In fact you could
wrap the one byte version around the two byte version, using the two
byte version on an aligned subset of the data and the one byte version
on any unaligned head and tail.
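The head/middle/tail wrapping might look like the sketch below. The table fill is hypothetical, and tab16[] is derived from tab[] (using the (x, y) construction worked out next) so that one 2-byte step matches two 1-byte steps under little-endian byte pairing; the 16-bit words are composed portably here, where real code would load aligned words directly.

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t tab[256], tab16[65536];

static void build_tables(void)
{
    for (uint32_t i = 0; i < 256; ++i) {
        uint8_t p = (uint8_t)(i * 167 + 13);   /* placeholder permutation fill */
        tab[i] = (uint32_t)p
               | ((uint32_t)(uint8_t)(p + 91)  << 8)
               | ((uint32_t)(uint8_t)(p + 173) << 16)
               | ((uint32_t)(uint8_t)(p + 222) << 24);
    }
    /* Derive the 64k table so one 2-byte step equals two 1-byte steps. */
    for (uint32_t j = 0; j < 65536; ++j) {
        uint32_t x = j & 0xff;
        uint32_t y = ((j >> 8) ^ (tab[x] & 0xff)) & 0xff;
        tab16[j] = tab[y] ^ (tab[x] >> 8);
    }
}

/* Pure 1-byte version, for comparison. */
static uint32_t hash1(const uint8_t *key, size_t length, uint32_t seed)
{
    uint32_t hash;
    size_t i;
    for (hash = seed, i = 0; i < length; ++i)
        hash = tab[(key[i] ^ hash) & 0xff] ^ (hash >> 8);
    return hash;
}

/* 1-byte steps over any unaligned head and odd tail, 2-byte steps over
 * the aligned middle. */
static uint32_t hash_mixed(const uint8_t *key, size_t length, uint32_t seed)
{
    uint32_t hash = seed;
    size_t i = 0;
    /* unaligned head: 1-byte steps until the pointer is 2-byte aligned */
    while (i < length && ((uintptr_t)(key + i) & 1))
        hash = tab[(key[i++] ^ hash) & 0xff] ^ (hash >> 8);
    /* aligned middle: 2-byte steps */
    for (; i + 1 < length; i += 2) {
        uint16_t w = (uint16_t)(key[i] | (key[i + 1] << 8));
        hash = tab16[(w ^ hash) & 0xffff] ^ (hash >> 16);
    }
    /* odd tail: one final 1-byte step if needed */
    for (; i < length; ++i)
        hash = tab[(key[i] ^ hash) & 0xff] ^ (hash >> 8);
    return hash;
}
```

With tab16[] derived this way, hash_mixed() returns the same value as hash1() regardless of where the input falls relative to alignment boundaries.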
Can the same trick be used where tab[] fills each byte with
permutations, or was that trick possible only due to the linearity of
CRC? Apparently the trick can be used no matter how tab[] is filled.
Looking at individual bytes (call the hash bytes a, b, c, d from high to low, and k the current key byte):
a, b, c, d (let x = d^k)
=> ta[x], tb[x] ^ a, tc[x] ^ b, td[x] ^ c (let y = td[x] ^ (c^k[i+1]))
=> ta[y], tb[y] ^ ta[x], tc[y] ^ tb[x] ^ a, td[y] ^ tc[x] ^ b
All the bytes of the 64k lookup table are well defined, using (x,y) as
index and combinations of ta, tb, tc, td to define the values. Given
d^k, c^k[i+1], and td[] we can get x,y. The ^(hash>>16) would take
care of XORing a,b to the bottom two bytes. One iteration of the two
byte version produces the same result as two iterations of the one byte
version.
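The derivation above can be checked mechanically: using (x, y) as described, fill the 64k table from combinations of ta, tb, tc, td, then confirm that one 2-byte step reproduces two 1-byte steps. A sketch (tab[] is again a made-up permutation fill, and the pairing is little-endian: the first key byte consumed is the low byte of the 16-bit word):

```c
#include <stdint.h>

static uint32_t tab[256], tab16[65536];

static void build_tables(void)
{
    for (uint32_t i = 0; i < 256; ++i) {
        uint8_t p = (uint8_t)(i * 167 + 13);   /* placeholder permutation fill */
        tab[i] = (uint32_t)p
               | ((uint32_t)(uint8_t)(p + 91)  << 8)
               | ((uint32_t)(uint8_t)(p + 173) << 16)
               | ((uint32_t)(uint8_t)(p + 222) << 24);
    }
    for (uint32_t j = 0; j < 65536; ++j) {
        uint32_t x = j & 0xff;                            /* d ^ k                  */
        uint32_t y = ((j >> 8) ^ (tab[x] & 0xff)) & 0xff; /* td[x] ^ (c ^ k[i+1])   */
        /* (ta[y], tb[y]^ta[x], tc[y]^tb[x], td[y]^tc[x]); the ^(hash>>16)
         * in the hash loop XORs a,b into the bottom two bytes. */
        tab16[j] = tab[y] ^ (tab[x] >> 8);
    }
}

static uint32_t step1(uint32_t h, uint8_t k)   /* one 1-byte iteration */
{
    return tab[(k ^ h) & 0xff] ^ (h >> 8);
}

static uint32_t step2(uint32_t h, uint16_t w)  /* one 2-byte iteration */
{
    return tab16[(w ^ h) & 0xffff] ^ (h >> 16);
}

/* One 2-byte step on (k0,k1) should equal two 1-byte steps. */
static int equivalent(uint32_t h, uint8_t k0, uint8_t k1)
{
    return step1(step1(h, k0), k1) == step2(h, (uint16_t)(k0 | (k1 << 8)));
}
```

Algebraically: two 1-byte steps give tab[y] ^ (tab[x] >> 8) ^ (h >> 16), which is exactly tab16[j] ^ (h >> 16) with j = (w ^ h) & 0xffff, so the equivalence holds for every h, k0, k1, however tab[] is filled.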
I've got to do some more tuning and timings and tests, but this strikes
me as a winner for Perl's internal hash function. It can be fast for
short strings (using the 1-byte version), fast for long strings (mostly
using the 2-byte version), doesn't care about alignment, and can give
the same results on both big endian and little endian machines. The
most likely thing to kill it would be if the 64k array for the two-byte
version gets paged out often enough that the cost of loading it is more
than the benefit of using it. Or if it's linear in GF(2^8) with
unfortunate properties or something like that. I've also got to time
it against Paul Hsieh's hash.