Generate unique ID for URL


Richard

Hello,

I want to create a URL-safe unique ID for URLs.
Currently I use:
url_id = base64.urlsafe_b64encode(url)
'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'

I would prefer more concise IDs.
What do you recommend? Compression?

Richard
 

John Gordon

Richard said:
I want to create a URL-safe unique ID for URLs.
Currently I use:
url_id = base64.urlsafe_b64encode(url)

I would prefer more concise IDs.
What do you recommend? Compression?

Does the ID need to contain all the information necessary to recreate the
original URL?
 

Richard

Good point - one way encoding would be fine.

Also this is performed millions of times so ideally efficient.
 

Miki Tebeka

Richard said:
I want to create a URL-safe unique ID for URLs.
What do you recommend? Compression?
You can use base62 with a running counter, but then you'll need a (semi) centralized entity to come up with the next id.

You can see one implementation at http://bit.ly/PSJkHS (AppEngine environment).
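For illustration, a minimal base62 encoder for such a counter might look like this (the alphabet order is an arbitrary choice and may differ from the linked implementation):

import string

# 62 URL-safe characters: 0-9, a-z, A-Z
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62(n):
    """Encode a non-negative counter value as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return ''.join(reversed(digits))

print(base62(1000000000))  # '15FTGg' -- the billionth ID is still only 6 characters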
 

Chris Kaynor

One option would be using a hash: Python's built-in hash, a 32-bit
CRC, 128-bit MD5, 256-bit SHA, or one of the many others that exist,
depending on the needs. Higher bit counts will reduce the odds of
accidental collisions; use a cryptographically secure one if outside
attacks matter. In such a case, you'd have to roll your own means of
mapping the hash back to the original string if you ever need it for
debugging, and there is always the possibility of collisions. A
similar solution would be using a pseudo-random GUID with the URL as
the seed.

You could use a counter if all IDs are generated by a single process
(and even in other cases with some work).

If you want to be able to go both ways, using base64 encoding is
probably your best bet, though you might get benefits by using
compression.
Chris
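A rough sketch of a few of those options, assuming Python 3 (so the URL has to be encoded to bytes first); zlib and hashlib are in the standard library:

import base64
import hashlib
import zlib

url = 'docs.python.org/library/uuid.html'
data = url.encode('utf-8')

# 32-bit CRC: very fast, but collisions become likely once you have many URLs
crc_id = '%08x' % zlib.crc32(data)

# 128-bit MD5, base64-encoded and truncated for a shorter URL-safe string
md5_id = base64.urlsafe_b64encode(hashlib.md5(data).digest()).decode('ascii')[:12]

# Built-in hash(): fastest, but since Python 3.3 it is randomized per process
# unless PYTHONHASHSEED is set, so it is not stable across runs
builtin_id = '%x' % (hash(url) & 0xffffffffffffffff)

print(crc_id, md5_id, builtin_id)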
 

Richard Baron Penman

I found the MD5 and SHA hashes slow to calculate.
The builtin hash is fast but I was concerned about collisions. What
rate of collisions could I expect?

Outside attacks are not an issue, and multiple processes would be used.
 

Richard

These URL IDs would just be used internally for quick lookups, not exposed publicly in a web application.

Ideally I would want to avoid collisions altogether. But if that means significant extra CPU time then 1 collision in 10 million hashes would be tolerable.
 

Richard

I found md5 / sha 4-5 times slower than hash. And base64 a lot slower.

No database or else I would just use their ID.
 

Roy Smith

Richard said:
Hello,

I want to create a URL-safe unique ID for URLs.
Currently I use:
url_id = base64.urlsafe_b64encode(url)

'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'

I would prefer more concise IDs.
What do you recommend? Compression?

If you're generating random id strings, there are only two ways to make
them shorter. Either encode fewer bits of information, or encode them
more compactly.

Let's start with the second one. You're already using base64, so you're
getting 6 bits per character. You can do a little better than that, but
not much. The set of URL-safe characters is the 96-ish printable ascii
set, minus a few pieces of punctuation. Maybe you could get it up to
6.3 or 6.4 bits per character, but that's about it. For the complexity
this would add it's probably not worth it.

The next step is to reduce the number of bits you are encoding. You
said in another post that "1 collision in 10 million hashes would be
tolerable". So you need:
23.25349666421154

24 bits worth of key. Base64 encoded, that's only 4 characters.
Actually, I probably just proved that I don't really understand how
probabilities work, so maybe what you really need is 32 or 48 or 64
bits. Certainly not the 264 bits you're encoding with your example
above.

So, something like:

import base64, hashlib

digest = hashlib.md5('docs.python.org/library/uuid.html').digest()
hash64 = base64.urlsafe_b64encode(digest)
url_id = hash64[:8]  # or 12, or whatever

But, I still don't really understand your use case. You've already
mentioned the following requirements:

"just be used internally for quick lookups, not exposed publicly"
"URL-safe"
"unique"
"1 collision in 10 million hashes would be tolerable"
"one way encoding would be fine"
"performed millions of times so ideally efficient"

but haven't really explained what it is that you're trying to do.

If they're not going to be exposed publicly, why do you care if they're
URL-safe?

What's wrong with just using the URLs directly as dictionary keys and
not worrying about it until you've got some hard data showing that this
is not sufficient?
 

Steven D'Aprano

Miki Tebeka said:
You can use base62 with a running counter, but then you'll need a (semi)
centralized entity to come up with the next id.

You can see one implementation at http://bit.ly/PSJkHS (AppEngine
environment).

Perhaps this is a silly question, but if you're using a running counter,
why bother with base64? Decimal or hex digits are URL safe. If there are
no concerns about predictability, why not just use the counter directly?

You can encode a billion IDs in 8 hex digits compared to 16 base64
characters:


py> base64.urlsafe_b64encode('1000000000')
'MTAwMDAwMDAwMA=='
py> "%x" % 1000000000
'3b9aca00'


Short and sweet and easy: no base64 calculation, no hash function, no
database lookup, just a trivial int to string conversion.
 

Steve Howell

Steven D'Aprano said:
Perhaps this is a silly question, but if you're using a running counter,
why bother with base64? Decimal or hex digits are URL safe. If there are
no concerns about predictability, why not just use the counter directly?

You can encode a billion IDs in 8 hex digits compared to 16 base64
characters:

py> base64.urlsafe_b64encode('1000000000')
'MTAwMDAwMDAwMA=='
py> "%x" % 1000000000
'3b9aca00'

Short and sweet and easy: no base64 calculation, no hash function, no
database lookup, just a trivial int to string conversion.

If you're dealing entirely with integers, then this works too:

import base64

def encode(n):
    s = ''
    while n > 0:
        s += chr(n % 256)
        n //= 256
    return base64.urlsafe_b64encode(s)

def test():
    seen = set()
    for i in range(999900000, 1000000000):
        s = encode(i)
        if s in seen:
            raise Exception('non-unique encoding')
        seen.add(s)
    print encode(1000000000)

test()

It prints this for 1000000000:

AMqaOw==
 

Richard

So the use case - I'm storing webpages on disk and want a quick retrieval system based on URL.
I can't store the files in a single directory because of OS limitations, so I have been using a sub-folder structure.
For example, to store data at URL "abc": a/b/c/index.html
This data is also viewed locally through a web app.

If you can suggest a better approach I would welcome it.
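For reference, a minimal sketch of the hashed variant of this layout that the thread converges on (md5 of the URL, with leading hex digits used as nested sub-folders; the two-level split and the 'cache' root are illustrative assumptions):

import hashlib
import os

def path_for_url(url, root='cache'):
    # 128-bit md5 of the URL, rendered as 32 hex digits
    digest = hashlib.md5(url.encode('utf-8')).hexdigest()
    # Shard into sub-folders so no single directory holds too many entries,
    # e.g. cache/d2/9b/d29b.../index.html
    return os.path.join(root, digest[:2], digest[2:4], digest, 'index.html')

print(path_for_url('docs.python.org/library/uuid.html'))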
 

Richard

Roy Smith said:
The next step is to reduce the number of bits you are encoding. You
said in another post that "1 collision in 10 million hashes would be
tolerable". So you need log2(10**7) = 23.25349666421154, call it 24 bits worth of key.

I think a difficulty would be finding a hash algorithm that maps evenly across those bits.
 

Roy Smith

Richard said:
So the use case - I'm storing webpages on disk and want a quick retrieval
system based on URL.
I can't store the files in a single directory because of OS limitations so
have been using a sub folder structure.
For example to store data at URL "abc": a/b/c/index.html
This data is also viewed locally through a web app.

If you can suggest a better approach I would welcome it.

Ah, so basically, you're reinventing Varnish?

Maybe do what Varnish (and MongoDB, and a few other things) do? Bypass
the file system entirely. Just mmap() a chunk of memory large enough to
hold everything and let the OS figure out how to page things to disk.
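A rough sketch of that idea in Python, under some big assumptions (a fixed-size file as the arena and a plain dict mapping each URL to an offset and length; a real cache would also have to persist the index and handle growth and eviction):

import mmap
import os

ARENA = 'cache.bin'
SIZE = 256 * 1024 * 1024  # pre-sized arena; the OS pages it in and out as needed

if not os.path.exists(ARENA):
    with open(ARENA, 'wb') as f:
        f.truncate(SIZE)

arena = open(ARENA, 'r+b')
mem = mmap.mmap(arena.fileno(), SIZE)

index = {}      # url -> (offset, length); would need persisting in practice
write_pos = 0   # next free byte in the arena

def store(url, body):
    global write_pos
    index[url] = (write_pos, len(body))
    mem[write_pos:write_pos + len(body)] = body
    write_pos += len(body)

def fetch(url):
    offset, length = index[url]
    return mem[offset:offset + length]

store('docs.python.org/library/uuid.html', b'<html>...</html>')
print(fetch('docs.python.org/library/uuid.html'))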
 

Richard

Thanks for the pointer to Varnish.

I found MongoDB had a lot of size overhead, so it ended up using 4x the size of the data stored.
 

Chris Angelico

Richard said:
So the use case - I'm storing webpages on disk and want a quick retrieval system based on URL.
I can't store the files in a single directory because of OS limitations so have been using a sub folder structure.
For example to store data at URL "abc": a/b/c/index.html
This data is also viewed locally through a web app.

If you can suggest a better approach I would welcome it.

The cost of a crypto hash on the URL will be completely dwarfed by the
cost of storing/retrieving on disk. You could probably do some
arithmetic and figure out exactly how many URLs (at an average length
of, say, 100 bytes) you can hash in the time of one disk seek.

ChrisA
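A quick back-of-the-envelope check along those lines (figures will vary by machine; the ~100-byte URL and the ~10 ms figure for a spinning-disk seek are assumptions):

import hashlib
import timeit

url = ('http://example.com/some/fairly/long/path/to/a/page'
       '?with=query&and=parameters&padding=xxxxxxxxxxxxxxxxxx')  # roughly 100 bytes
data = url.encode('utf-8')

n = 100000
seconds = timeit.timeit(lambda: hashlib.md5(data).digest(), number=n)
per_hash = seconds / n

print('%.2f microseconds per md5' % (per_hash * 1e6))
# md5 of a short string is on the order of a microsecond in CPython, so a
# single ~10 ms disk seek costs about as much as thousands of hashes.
print('hashes per 10 ms seek: %d' % (0.010 / per_hash))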
 

Richard

yeah good point - I have gone with md5 for now.

The cost of a crypto hash on the URL will be completely dwarfed by the
cost of storing/retrieving on disk. You could probably do some
arithmetic and figure out exactly how many URLs (at an average length
of, say, 100 bytes) you can hash in the time of one disk seek.

ChrisA
 

Johannes Bauer

I found the MD5 and SHA hashes slow to calculate.

Slow? For URLs? Are you kidding? How many URLs per second do you want to
calculate?
The builtin hash is fast but I was concerned about collisions. What
rate of collisions could I expect?

MD5 has 16 bytes (128 bit), SHA1 has 20 bytes (160 bit). Utilizing the
birthday paradox and some approximations, I can tell you that when using
the full MD5 you'd need around 2.609e16 hashes in the same namespace to
get a one in a million chance of a collision. That is, 26090000000000000
filenames.

For SHA1 this number rises even further: you'd need around 1.71e21 or
1710000000000000000000 hashes in one namespace for the one-in-a-million.
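Those figures can be reproduced with the usual square-root approximation of the birthday bound, n ≈ sqrt(2·N·p), where N is the number of possible hash values and p the target collision probability:

import math

def hashes_for_collision_prob(bits, p):
    # Approximate number of hashes before the collision probability reaches p
    n_values = 2.0 ** bits
    return math.sqrt(2.0 * n_values * p)

for name, bits in [('MD5', 128), ('SHA1', 160)]:
    print('%s: about %.3e hashes for a one-in-a-million collision'
          % (name, hashes_for_collision_prob(bits, 1e-6)))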

I really have no clue about how many URLs you want to hash, and it seems
to be LOTS, since the speed of MD5 seems to be an issue for you. Let me
estimate that you'd want to calculate a million hashes per second; then,
using MD5, you'd have about 827 years to fill the namespace up enough to
get a one-in-a-million.

If you need even more hashes (say a million million per second), I'd
suggest you go with SHA-1, giving you 54 years to get the one-in-a-million.

Then again, if you went for a million million hashes per second, Python
would probably not be the language of your choice.

Best regards,
Johannes

--
At least not publicly!
Ah, the newest and to this day most ingenious trick of our great
cosmologists: the secret prediction.
- Karl Kaos about Rüdiger Thomas in dsa <[email protected]>
 
