Generate unique ID for URL

Discussion in 'Python' started by Richard, Nov 13, 2012.

  1. Richard

    Richard Guest

    Hello,

    I want to create a URL-safe unique ID for URL's.
    Currently I use:
    url_id = base64.urlsafe_b64encode(url)

    >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')
    'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'

    I would prefer more concise ID's.
    What do you recommend? - Compression?
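    For what it's worth, compression tends not to shorten single short URLs: zlib adds a 2-byte header, a 4-byte checksum, and block overhead that usually outweighs any savings at this length. A quick sketch (Python 3; the sample URL is the one from the post):

```python
import base64
import zlib

url = b'docs.python.org/library/uuid.html'
compressed = zlib.compress(url, 9)

# For text this short with no repeated substrings, the compressed
# form typically comes out longer than the input.
print(len(url), len(compressed))
print(base64.urlsafe_b64encode(compressed).decode('ascii'))
```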

    Richard
     
    Richard, Nov 13, 2012
    #1

  2. John Gordon

    John Gordon Guest

    In <> Richard <> writes:

    > I want to create a URL-safe unique ID for URL's.
    > Currently I use:
    > url_id = base64.urlsafe_b64encode(url)
    >
    > >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')
    > 'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'
    >
    > I would prefer more concise ID's.
    > What do you recommend? - Compression?


    Does the ID need to contain all the information necessary to recreate the
    original URL?

    --
    John Gordon A is for Amy, who fell down the stairs
    B is for Basil, assaulted by bears
    -- Edward Gorey, "The Gashlycrumb Tinies"
     
    John Gordon, Nov 13, 2012
    #2

  3. Richard

    Richard Guest

    Good point - one way encoding would be fine.

    Also this is performed millions of times so ideally efficient.


    On Wednesday, November 14, 2012 10:34:03 AM UTC+11, John Gordon wrote:
    > In <> Richard <> writes:
    >
    > > I want to create a URL-safe unique ID for URL's.
    > > Currently I use:
    > > url_id = base64.urlsafe_b64encode(url)
    > >
    > > >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')
    > > 'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'
    > >
    > > I would prefer more concise ID's.
    > > What do you recommend? - Compression?
    >
    > Does the ID need to contain all the information necessary to recreate the
    > original URL?
    >
    > --
    > John Gordon A is for Amy, who fell down the stairs
    > B is for Basil, assaulted by bears
    > -- Edward Gorey, "The Gashlycrumb Tinies"
     
    Richard, Nov 13, 2012
    #3
  4. Miki Tebeka

    Miki Tebeka Guest

    > I want to create a URL-safe unique ID for URL's.
    > What do you recommend? - Compression?

    You can use base62 with a running counter, but then you'll need a (semi) centralized entity to come up with the next id.

    You can see one implementation at http://bit.ly/PSJkHS (AppEngine environment).
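    A minimal counter-to-base62 encoder might look like this (a sketch, not the code behind that link; the alphabet order is an arbitrary choice):

```python
ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'

def base62_encode(n):
    """Encode a non-negative integer in base62 (URL-safe by construction)."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return ''.join(reversed(digits))

print(base62_encode(1000000000))  # a billion fits in 6 characters
```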
     
    Miki Tebeka, Nov 14, 2012
    #4
  5. Chris Kaynor

    Chris Kaynor Guest

    One option would be using a hash. Python's built-in hash, a 32-bit
    CRC, 128-bit MD5, 256-bit SHA or one of the many others that exist,
    depending on the needs. Higher bit counts will reduce the odds of
    accidental collisions; cryptographically secure ones if outside
    attacks matter. In such a case, you'd have to roll your own means of
    converting the hash back into the string if you ever need it for
    debugging, and there is always the possibility of collisions. A
    similar solution would be using a pseudo-random GUID using the url as
    the seed.

    You could use a counter if all IDs are generated by a single process
    (and even in other cases with some work).

    If you want to be able to go both ways, using base64 encoding is
    probably your best bet, though you might get benefits by using
    compression.
    Chris
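    In Python 3 terms, the hash options above might be sketched as follows (hashlib and zlib; the 8-character truncation is an illustrative choice, not a recommendation):

```python
import base64
import hashlib
import zlib

def md5_id(url, length=8):
    """One-way, URL-safe ID from a truncated MD5 digest."""
    digest = hashlib.md5(url.encode('utf-8')).digest()
    return base64.urlsafe_b64encode(digest).decode('ascii')[:length]

def crc32_id(url):
    """32-bit CRC as 8 hex digits: fast, but far more collision-prone."""
    return '%08x' % (zlib.crc32(url.encode('utf-8')) & 0xffffffff)

print(md5_id('docs.python.org/library/uuid.html'))
print(crc32_id('docs.python.org/library/uuid.html'))
```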


    On Tue, Nov 13, 2012 at 3:56 PM, Richard <> wrote:
    > Good point - one way encoding would be fine.
    >
    > Also this is performed millions of times so ideally efficient.
    >
    >
    > On Wednesday, November 14, 2012 10:34:03 AM UTC+11, John Gordon wrote:
    >> In <> Richard <> writes:
    >>
    >> > I want to create a URL-safe unique ID for URL's.
    >> > Currently I use:
    >> > url_id = base64.urlsafe_b64encode(url)
    >> >
    >> > >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')
    >> > 'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'
    >> >
    >> > I would prefer more concise ID's.
    >> > What do you recommend? - Compression?
    >>
    >> Does the ID need to contain all the information necessary to recreate the
    >> original URL?
    >>
    >> --
    >> John Gordon A is for Amy, who fell down the stairs
    >> B is for Basil, assaulted by bears
    >> -- Edward Gorey, "The Gashlycrumb Tinies"

    >
    > --
    > http://mail.python.org/mailman/listinfo/python-list
     
    Chris Kaynor, Nov 14, 2012
    #5
  6. I found the MD5 and SHA hashes slow to calculate.
    The builtin hash is fast but I was concerned about collisions. What
    rate of collisions could I expect?

    Outside attacks not an issue and multiple processes would be used.


    On Wed, Nov 14, 2012 at 11:26 AM, Chris Kaynor <> wrote:
    > One option would be using a hash. Python's built-in hash, a 32-bit
    > CRC, 128-bit MD5, 256-bit SHA or one of the many others that exist,
    > depending on the needs. Higher bit counts will reduce the odds of
    > accidental collisions; cryptographically secure ones if outside
    > attacks matter. In such a case, you'd have to roll your own means of
    > converting the hash back into the string if you ever need it for
    > debugging, and there is always the possibility of collisions. A
    > similar solution would be using a pseudo-random GUID using the url as
    > the seed.
    >
    > You could use a counter if all IDs are generated by a single process
    > (and even in other cases with some work).
    >
    > If you want to be able to go both ways, using base64 encoding is
    > probably your best bet, though you might get benefits by using
    > compression.
    > Chris
    >
    >
    > On Tue, Nov 13, 2012 at 3:56 PM, Richard <> wrote:
    >> Good point - one way encoding would be fine.
    >>
    >> Also this is performed millions of times so ideally efficient.
    >>
    >>
    >> On Wednesday, November 14, 2012 10:34:03 AM UTC+11, John Gordon wrote:
    >>> In <> Richard <> writes:
    >>>
    >>> > I want to create a URL-safe unique ID for URL's.
    >>> > Currently I use:
    >>> > url_id = base64.urlsafe_b64encode(url)
    >>> >
    >>> > >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')
    >>> > 'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'
    >>> >
    >>> > I would prefer more concise ID's.
    >>> > What do you recommend? - Compression?
    >>>
    >>> Does the ID need to contain all the information necessary to recreate the
    >>> original URL?
    >>>
    >>> --
    >>> John Gordon A is for Amy, who fell down the stairs
    >>> B is for Basil, assaulted by bears
    >>> -- Edward Gorey, "The Gashlycrumb Tinies"

    >>
    >> --
    >> http://mail.python.org/mailman/listinfo/python-list
     
    Richard Baron Penman, Nov 14, 2012
    #6
  7. Richard

    Richard Guest

    These URL ID's would just be used internally for quick lookups, not exposed publicly in a web application.

    Ideally I would want to avoid collisions altogether. But if that means significant extra CPU time then 1 collision in 10 million hashes would be tolerable.
     
    Richard, Nov 14, 2012
    #7
  8. Richard

    Richard Guest

    I found md5 / sha 4-5 times slower than the builtin hash, and base64 a lot slower.

    No database or else I would just use their ID.


    On Wednesday, November 14, 2012 11:59:55 AM UTC+11, Christian Heimes wrote:
    > On 14.11.2012 01:41, Richard Baron Penman wrote:
    >
    > > I found the MD5 and SHA hashes slow to calculate.
    > > The builtin hash is fast but I was concerned about collisions. What
    > > rate of collisions could I expect?
    >
    > Seriously? It takes about 1-5 msec to sha1() one MB of data on a modern
    > CPU, 1.5 on my box. The openssl variants of Python's hash code release
    > the GIL so you can use the power of all cores.
     
    Richard, Nov 14, 2012
    #8
  9. Roy Smith

    Roy Smith Guest

    In article <>,
    Richard <> wrote:

    > Hello,
    >
    > I want to create a URL-safe unique ID for URL's.
    > Currently I use:
    > url_id = base64.urlsafe_b64encode(url)
    >
    > >>> base64.urlsafe_b64encode('docs.python.org/library/uuid.html')

    > 'ZG9jcy5weXRob24ub3JnL2xpYnJhcnkvdXVpZC5odG1s'
    >
    > I would prefer more concise ID's.
    > What do you recommend? - Compression?


    If you're generating random id strings, there are only two ways to make
    them shorter. Either encode fewer bits of information, or encode them
    more compactly.

    Let's start with the second one. You're already using base64, so you're
    getting 6 bits per character. You can do a little better than that, but
    not much. The set of URL-safe characters is the 96-ish printable ascii
    set, minus a few pieces of punctuation. Maybe you could get it up to
    6.3 or 6.4 bits per character, but that's about it. For the complexity
    this would add it's probably not worth it.

    The next step is to reduce the number of bits you are encoding. You
    said in another post that "1 collision in 10 million hashes would be
    tolerable". So you need:

    >>> math.log(10*1000*1000, 2)
    23.25349666421154

    24 bits worth of key. Base64 encoded, that's only 4 characters.
    Actually, I probably just proved that I don't really understand how
    probabilities work, so maybe what you really need is 32 or 48 or 64
    bits. Certainly not the 264 bits you're encoding with your example
    above.

    So, something like:

    hash = hashlib.md5('docs.python.org/library/uuid.html').digest()
    hash64 = base64.urlsafe_b64encode(hash)
    id = hash64[:8] # or 12, or whatever

    But, I still don't really understand your use case. You've already
    mentioned the following requirements:

    "just be used internally for quick lookups, not exposed publicly"
    "URL-safe"
    "unique"
    "1 collision in 10 million hashes would be tolerable"
    "one way encoding would be fine"
    "performed millions of times so ideally efficient"

    but haven't really explained what it is that you're trying to do.

    If they're not going to be exposed publicly, why do you care if they're
    URL-safe?

    What's wrong with just using the URLs directly as dictionary keys and
    not worrying about it until you've got some hard data showing that this
    is not sufficient?
     
    Roy Smith, Nov 14, 2012
    #9
  10. On Tue, 13 Nov 2012 16:13:58 -0800, Miki Tebeka wrote:

    >> I want to create a URL-safe unique ID for URL's. What do you recommend?
    >> - Compression?

    > You can use base62 with a running counter, but then you'll need a (semi)
    > centralized entity to come up with the next id.
    >
    > You can see one implementation at http://bit.ly/PSJkHS (AppEngine
    > environment).


    Perhaps this is a silly question, but if you're using a running counter,
    why bother with base64? Decimal or hex digits are URL safe. If there are
    no concerns about predictability, why not just use the counter directly?

    You can encode a billion IDs in 8 hex digits compared to 16 base64
    characters:


    py> base64.urlsafe_b64encode('1000000000')
    'MTAwMDAwMDAwMA=='
    py> "%x" % 1000000000
    '3b9aca00'


    Short and sweet and easy: no base64 calculation, no hash function, no
    database lookup, just a trivial int to string conversion.



    --
    Steven
     
    Steven D'Aprano, Nov 14, 2012
    #10
  11. Steve Howell

    Steve Howell Guest

    On Nov 13, 6:04 pm, Steven D'Aprano <steve> wrote:
    > On Tue, 13 Nov 2012 16:13:58 -0800, Miki Tebeka wrote:
    > >> I want to create a URL-safe unique ID for URL's. What do you recommend?
    > >> - Compression?
    >
    > > You can use base62 with a running counter, but then you'll need a (semi)
    > > centralized entity to come up with the next id.
    >
    > > You can see one implementation at http://bit.ly/PSJkHS (AppEngine
    > > environment).
    >
    > Perhaps this is a silly question, but if you're using a running counter,
    > why bother with base64? Decimal or hex digits are URL safe. If there are
    > no concerns about predictability, why not just use the counter directly?
    >
    > You can encode a billion IDs in 8 hex digits compared to 16 base64
    > characters:
    >
    > py> base64.urlsafe_b64encode('1000000000')
    > 'MTAwMDAwMDAwMA=='
    > py> "%x" % 1000000000
    > '3b9aca00'
    >
    > Short and sweet and easy: no base64 calculation, no hash function, no
    > database lookup, just a trivial int to string conversion.
    >
    > --
    > Steven


    If you're dealing entirely with integers, then this works too:

    import base64

    def encode(n):
        s = ''
        while n > 0:
            s += chr(n % 256)
            n //= 256
        return base64.urlsafe_b64encode(s)

    def test():
        seen = set()
        for i in range(999900000, 1000000000):
            s = encode(i)
            if s in seen:
                raise Exception('non-unique encoding')
            seen.add(s)
        print encode(1000000000)

    test()

    It prints this for 1000000000:

    AMqaOw==
     
    Steve Howell, Nov 14, 2012
    #11
  12. Richard

    Richard Guest

    I am dealing with URL's rather than integers
     
    Richard, Nov 14, 2012
    #12
  13. Richard

    Richard Guest

    So the use case - I'm storing webpages on disk and want a quick retrieval system based on URL.
    I can't store the files in a single directory because of OS limitations so have been using a sub folder structure.
    For example to store data at URL "abc": a/b/c/index.html
    This data is also viewed locally through a web app.

    If you can suggest a better approach I would welcome it.
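    One common layout for this kind of on-disk cache is to fan out on a hash of the URL rather than one directory per character (a hypothetical sketch; names and the `.html` suffix are illustrative):

```python
import hashlib
import os

def cache_path(url, root='cache'):
    """Map a URL to a nested on-disk path via its MD5 hex digest."""
    digest = hashlib.md5(url.encode('utf-8')).hexdigest()
    # Two 2-hex-digit levels give 256*256 buckets, keeping every
    # directory well under typical OS limits.
    return os.path.join(root, digest[:2], digest[2:4], digest[4:] + '.html')

# On POSIX: cache/90/01/50983cd24fb0d6963f7d28e17f72.html
print(cache_path('abc'))
```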
     
    Richard, Nov 14, 2012
    #13
  14. Richard

    Richard Guest

    > The next step is to reduce the number of bits you are encoding. You
    >
    > said in another post that "1 collision in 10 million hashes would be
    >
    > tolerable". So you need:
    >
    >
    >
    > >>> math.log(10*1000*1000, 2)

    >
    > 23.25349666421154



    I think a difficulty would be finding a hash algorithm that maps evenly across those bits.
     
    Richard, Nov 14, 2012
    #14
  15. Roy Smith

    Roy Smith Guest

    In article <>,
    Richard <> wrote:

    > So the use case - I'm storing webpages on disk and want a quick retrieval
    > system based on URL.
    > I can't store the files in a single directory because of OS limitations so
    > have been using a sub folder structure.
    > For example to store data at URL "abc": a/b/c/index.html
    > This data is also viewed locally through a web app.
    >
    > If you can suggest a better approach I would welcome it.


    Ah, so basically, you're reinventing Varnish?

    Maybe do what Varnish (and MongoDB, and a few other things) do? Bypass
    the file system entirely. Just mmap() a chunk of memory large enough to
    hold everything and let the OS figure out how to page things to disk.
     
    Roy Smith, Nov 14, 2012
    #15
  16. Richard

    Richard Guest

    thanks for pointer to Varnish.

    I found MongoDB had a lot of size overhead, so it ended up using 4x the data stored.
     
    Richard, Nov 14, 2012
    #16
  17. On Wed, Nov 14, 2012 at 2:25 PM, Richard <> wrote:
    > So the use case - I'm storing webpages on disk and want a quick retrieval system based on URL.
    > I can't store the files in a single directory because of OS limitations so have been using a sub folder structure.
    > For example to store data at URL "abc": a/b/c/index.html
    > This data is also viewed locally through a web app.
    >
    > If you can suggest a better approach I would welcome it.


    The cost of a crypto hash on the URL will be completely dwarfed by the
    cost of storing/retrieving on disk. You could probably do some
    arithmetic and figure out exactly how many URLs (at an average length
    of, say, 100 bytes) you can hash in the time of one disk seek.
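    That back-of-envelope arithmetic can be sketched with timeit (numbers vary by machine, and the 10 ms figure for a spinning-disk seek is an assumption):

```python
import hashlib
import timeit

url = ('docs.python.org/library/uuid.html' * 3).encode('utf-8')  # ~100 bytes
n = 100000
seconds = timeit.timeit(lambda: hashlib.md5(url).digest(), number=n)
per_hash = seconds / n
# MD5 of ~100 bytes is typically well under a microsecond, so tens of
# thousands of hashes fit in a single ~10 ms seek.
print('%.2e s per hash, ~%d hashes per 10 ms seek'
      % (per_hash, 0.01 / per_hash))
```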

    ChrisA
     
    Chris Angelico, Nov 14, 2012
    #17
  18. Richard

    Richard Guest

    yeah good point - I have gone with md5 for now.


    On Wednesday, November 14, 2012 3:06:18 PM UTC+11, Chris Angelico wrote:
    > On Wed, Nov 14, 2012 at 2:25 PM, Richard <> wrote:
    > > So the use case - I'm storing webpages on disk and want a quick retrieval system based on URL.
    > > I can't store the files in a single directory because of OS limitations so have been using a sub folder structure.
    > > For example to store data at URL "abc": a/b/c/index.html
    > > This data is also viewed locally through a web app.
    > >
    > > If you can suggest a better approach I would welcome it.
    >
    > The cost of a crypto hash on the URL will be completely dwarfed by the
    > cost of storing/retrieving on disk. You could probably do some
    > arithmetic and figure out exactly how many URLs (at an average length
    > of, say, 100 bytes) you can hash in the time of one disk seek.
    >
    > ChrisA
     
    Richard, Nov 14, 2012
    #18
  20. On 14.11.2012 01:41, Richard Baron Penman wrote:
    > I found the MD5 and SHA hashes slow to calculate.


    Slow? For URLs? Are you kidding? How many URLs per second do you want to
    calculate?

    > The builtin hash is fast but I was concerned about collisions. What
    > rate of collisions could I expect?


    MD5 has 16 bytes (128 bit), SHA1 has 20 bytes (160 bit). Utilizing the
    birthday paradox and some approximations, I can tell you that when using
    the full MD5 you'd need around 2.609e16 hashes in the same namespace to
    get a one in a million chance of a collision. That is, 26090000000000000
    filenames.

    For SHA-1, this number rises even further: you'd need around 1.71e21 or
    1710000000000000000000 hashes in one namespace for the one-in-a-million.

    I really have no clue about how many URLs you want to hash, and it seems
    to be LOTS since the speed of MD5 seems to be an issue for you. Let me
    estimate that you'd want to calculate a million hashes per second then
    when you use MD5, you'd have about 827 years to fill the namespace up
    enough to get a one-in-a-million.

    If you need even more hashes (say a million million per second), I'd
    suggest you go with SHA-1, giving you 54 years to get the one-in-a-million.

    Then again, if you went for a million million hashes per second, Python
    would probably not be the language of your choice.
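    These figures follow from the usual birthday-bound approximation p ≈ n²/2^(b+1), i.e. n ≈ sqrt(2 · 2^b · p). A quick check (Python 3):

```python
import math

def hashes_for_collision_prob(bits, p):
    """Approximate number of random `bits`-bit hashes at which the
    probability of any collision reaches roughly p (birthday bound)."""
    return math.sqrt(2 * (2 ** bits) * p)

print('%.3e' % hashes_for_collision_prob(128, 1e-6))  # MD5: ~2.609e+16
print('%.3e' % hashes_for_collision_prob(160, 1e-6))  # SHA-1: ~1.710e+21
```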

    Best regards,
    Johannes

    --
    >> Where exactly did you predict the quake again?

    > At least not publicly!

    Ah, the latest and to date most ingenious feat of our great
    cosmologists: the secret prediction.
    - Karl Kaos on Rüdiger Thomas in dsa <hidbv3$om2$>
     
    Johannes Bauer, Nov 14, 2012
    #20
