Async Client with 1K connections?

Discussion in 'Python' started by William Chang, Feb 10, 2004.

  1. Before I take the plunge, I'd appreciate any advice on the feasibility
    and degree of difficulty of the following...

    I need extremely efficient and robust _client_ software for some
    common protocols like HTTP and POP3, supporting 1,000 simultaneous
    independent connections and commensurate network throughput. The data
    get written to files or sockets, so no GUI needed.

    I am not a Python programmer :-( but I am a "fan" :) and I have been
    reading about asyncore/Medusa/Twisted -- which would be my best bet?

    Any advantage to using a particular unix/version -- Linux 32/64bit?
    FreeBSD 4/5? Solaris Sun/Intel?

    If anyone who is expert in this area may be available, please contact
    me at "w c h a n g at a f f i n i dot com". (I'm in the SF Bay Area.)

    My background is C -- I was the principal author of Infoseek (RIP),
    including the Python modularization that was the core of Ultraseek aka
    Inktomi Enterprise Search aka Verity. (For those of you old enough to
    remember!) Unfortunately, I moved upstairs and never did much Python.

    Thanks in advance, --William
     
    William Chang, Feb 10, 2004
    #1

  2. William Chang

    Paul Rubin Guest

    (William Chang) writes:
    > I need extremely efficient and robust _client_ software for some
    > common protocols like HTTP and POP3, supporting 1,000 simultaneous
    > independent connections and commensurate network throughput. The data
    > get written to files or sockets, so no GUI needed.


    You're writing a monstrous web spider in Python?

    > I am not a Python programmer :-( but I am a "fan" :) and I have been
    > reading about asyncore/Medusa/Twisted -- which would be my best bet?


    With enough hardware, you can do practically anything. Some Apache
    servers fork off that many processes.
     
    Paul Rubin, Feb 10, 2004
    #2

  3. William Chang

    Paul Rubin Guest

    (William Chang) writes:
    > I need extremely efficient and robust _client_ software for some
    > common protocols like HTTP and POP3, supporting 1,000 simultaneous
    > independent connections and commensurate network throughput. The data
    > get written to files or sockets, so no GUI needed.
    >
    > I am not a Python programmer :-( but I am a "fan" :) and I have been
    > reading about asyncore/Medusa/Twisted -- which would be my best bet?


    Seriously, I'd probably use asyncore since it's the simplest. Twisted
    is more flexible but maybe you don't need that.

    Why do you want to write this client in Python? What is it doing?

    Rather than going crazy tuning the software, you can parallelize it
    and run it on multiple boxes. Does that work for you?

    > Any advantage to using a particular unix/version -- Linux 32/64bit?
    > FreeBSD 4/5? Solaris Sun/Intel?


    Google has something like 8000 servers in its farm, running 32-bit
    Linux, so they're probably onto something. Solaris is a lot slower.
    64-bit Linux is maybe too new to deploy in some big production system.
     
    Paul Rubin, Feb 10, 2004
    #3
  4. Hi !

    See Erlang: the sample web server can serve more than 50,000 connections
    on one standard CPU.
     
    Michel Claveau/Hamster, Feb 10, 2004
    #4
  5. William Chang

    Bill Scherer Guest

    [P&M]

    William Chang wrote:

    >Before I take the plunge, I'd appreciate any advice on the feasibility
    >and degree of difficulty of the following...
    >
    >I need extremely efficient and robust _client_ software for some
    >common protocols like HTTP and POP3, supporting 1,000 simultaneous
    >independent connections
    >

    I've got an httpd stress tool that uses asyncore. I can run up 1020
    independent simulated clients on my RH9 box (1x3GHz CPU, 1GB RAM),
    driving at over 600 requests per second against a modest (2x1GHz)
    webserver, just pulling a static page.
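
    A minimal sketch of what one such asyncore-based simulated client might
    look like (the host, path and client count below are placeholders, not
    the actual tool):

        # Sketch only: one asyncore HTTP client; multiply by N for a stress run.
        import asyncore, socket

        class HTTPClient(asyncore.dispatcher):
            def __init__(self, host, path):
                asyncore.dispatcher.__init__(self)
                self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
                self.connect((host, 80))
                self.request = 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host)
                self.data = []

            def handle_connect(self):
                pass

            def writable(self):
                return len(self.request) > 0

            def handle_write(self):
                sent = self.send(self.request)
                self.request = self.request[sent:]

            def handle_read(self):
                self.data.append(self.recv(8192))

            def handle_close(self):
                self.close()

        # e.g. 1000 simultaneous simulated clients sharing one event loop
        clients = [HTTPClient('www.example.com', '/') for i in range(1000)]
        asyncore.loop()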

    >and commensurate network throughput.
    >

    That could vary a lot, couldn't it?

    >The data get written to files or sockets, so no GUI needed.
    >

    Writing to files could slow you down a lot, depending on how much needs
    to be written, how fast your disks are, how you go about getting the
    data from the async client to the file, etc. Much the same goes for
    sockets, too.

    >I am not a Python programmer :-( but I am a "fan" :) and I have been
    >reading about asyncore/Medusa/Twisted -- which would be my best bet?
    >

    I should think all can do the job for you, depending on the details
    which you haven't told us.

    >Any advantage to using a particular unix/version -- Linux 32/64bit?
    >FreeBSD 4/5? Solaris Sun/Intel?
    >
    >If anyone who is expert in this area may be available, please contact
    >me at "w c h a n g at a f f i n i dot com". (I'm in the SF Bay Area.)
    >
    >My background is C -- I was the principal author of Infoseek (RIP),
    >including the Python modularization that was the core of Ultraseek aka
    >Inktomi Enterprise Search aka Verity. (For those of you old enough to
    >remember!) Unfortunately, I moved upstairs and never did much Python.
    >
    >Thanks in advance, --William
    >
    >
     
    Bill Scherer, Feb 10, 2004
    #5
  6. William Chang

    Dave Brueck Guest

    > Before I take the plunge, I'd appreciate any advice on the feasibility
    > and degree of difficulty of the following...
    >
    > I need extremely efficient and robust _client_ software for some
    > common protocols like HTTP and POP3, supporting 1,000 simultaneous
    > independent connections and commensurate network throughput. The data
    > get written to files or sockets, so no GUI needed.


    1000+ connections is not a problem, although (on Linux at least, and probably
    others) you'll need to make sure your process is allowed to have more file
    descriptors open, especially if you're turning around and writing data to
    disk (since that uses file descriptors too). This is OS-specific and has
    nothing to do with Python, but IIRC you can do something like
    os.sysconf(os.sysconf_names['SC_OPEN_MAX']) to see how many fd's your process
    can have open.
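
    For what it's worth, here is a small sketch of checking (and, where the
    hard limit permits, raising) that ceiling with the stdlib resource module;
    the 4096 target below is just an illustrative number:

        # Sketch: inspect and raise the per-process fd limit on Linux/Unix
        import os, resource

        print os.sysconf(os.sysconf_names['SC_OPEN_MAX'])    # current ceiling

        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))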

    > I am not a Python programmer :-( but I am a "fan" :) and I have been
    > reading about asyncore/Medusa/Twisted -- which would be my best bet?


    You're probably going to be ok either way, but what are your throughput
    requirements exactly? Are these connections pulling down HTML pages and small
    images or are they big, multi-megabyte downloads? How big is your connection?
    For 99% of uses asyncore or Twisted will be fine - but if you need very high
    numbers of new connections per second (hundreds) or throughput (hundreds of
    Mbps) then you might need to modify the framework or build your own - still in
    Python but more tailored to your specific needs - in order to get those levels
    of performance.

    -Dave
     
    Dave Brueck, Feb 10, 2004
    #6
  7. William Chang

    Peter Hansen Guest

    Paul Rubin wrote:
    >
    > (William Chang) writes:
    > > I need extremely efficient and robust _client_ software for some
    > > common protocols like HTTP and POP3, supporting 1,000 simultaneous
    > > independent connections and commensurate network throughput. The data
    > > get written to files or sockets, so no GUI needed.
    > >
    > > I am not a Python programmer :-( but I am a "fan" :) and I have been
    > > reading about asyncore/Medusa/Twisted -- which would be my best bet?

    >
    > Seriously, I'd probably use asyncore since it's the simplest. Twisted
    > is more flexible but maybe you don't need that.


    I agree Twisted is more flexible, but having tried both I'd argue that
    it is also simpler. I was able to get farther, faster, just by following
    the simple examples (e.g. http://www.twistedmatrix.com/documents/howto/clients)
    on the web site than I was with asyncore. I also found the source
    _much_ cleaner and more readable when it came time to look there as well.
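
    For reference, a client along the lines of those howto examples looks
    roughly like this (the host and request are placeholders; this is a
    sketch of the 2004-era Twisted API, not Peter's code):

        # Sketch: minimal Twisted TCP client fetching one page
        from twisted.internet import reactor, protocol

        class PageGetter(protocol.Protocol):
            def connectionMade(self):
                self.transport.write('GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n')

            def dataReceived(self, data):
                pass    # hand the bytes to a file or parser here

            def connectionLost(self, reason):
                pass

        class PageGetterFactory(protocol.ClientFactory):
            protocol = PageGetter

        reactor.connectTCP('www.example.com', 80, PageGetterFactory())
        reactor.run()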

    -Peter
     
    Peter Hansen, Feb 10, 2004
    #7
  8. William Chang

    Paul Rubin Guest

    Bill Scherer <> writes:
    > >The data get written to files or sockets, so no GUI needed.
    > >

    > Writing to files could slow you down a lot, depending on how much
    > needs to be written, how fast your disks are, how you go about
    > getting the data from the async client to the file, etc.. Much of the
    > same goes for sockets, too.


    That's a good point, you should put everything into one file serially,
    then sort it afterwards to separate out data from individual
    connections.
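
    A minimal sketch of that "one file serially, sort afterwards" idea (the
    filename, tab separator and base64 encoding are arbitrary choices):

        # Sketch: tag each received chunk with its connection id, demux later
        import binascii

        spool = open('spool.log', 'a')

        def record(conn_id, chunk):
            # base64 keeps binary payloads line-safe for a later sort
            spool.write('%08d\t%s\n' % (conn_id, binascii.b2a_base64(chunk).strip()))

        # afterwards, a plain "sort spool.log" groups each connection's data together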
     
    Paul Rubin, Feb 10, 2004
    #8
  9. Thank you all for the discussion! Some additional information:

    One of the intended uses is indeed a next-gen web spider. I did the
    math, and yes I will need about 10 cutting-edge PCs to spider like
    you-know-who. But I shouldn't need 100 -- and would rather not
    spend money unnecessarily... Throughput per PC would be on
    the order of 1MB/s assuming 200x5KB downloads/sec using 1-2000
    simultaneous connections. (That's 17M pages per day per PC.)
    My search & content engine can index and store at such a rate,
    but can the spider initiate (at least) 200 new requests per second,
    assuming each request lasts 5-10 seconds?
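
    (For anyone checking the arithmetic behind those figures:

        200 * 5            # KB/sec, i.e. roughly 1 MB/s per PC
        200 * 86400        # = 17,280,000 pages/day per PC
        200 * 5, 200 * 10  # requests in flight at 5-10 s each: 1,000-2,000

    which is where the 1MB/s, 17M/day and connection-count numbers come from.)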

    Of course, that assumes the spider algorithm/coordinator is pretty
    intelligent and well-engineered, and that the hardware stays up, etc.
    Managing storage is certainly nontrivial; at such a scale nothing is
    to be taken for granted!

    Nevertheless, it shouldn't cost millions. Maybe $100K :)

    Time for a sanity check? --William
     
    William Chang, Feb 11, 2004
    #9
  10. William Chang

    Paul Rubin Guest

    "William Chang" <> writes:
    > Thank you all for the discussion! Some additional information:
    >
    > One of the intended uses is indeed a next-gen web spider. I did the
    > math, and yes I will need about 10 cutting-edge PCs to spider like
    > you-know-who. But I shouldn't need 100 -- and would rather not
    > spend money unnecessarily... Throughput per PC would be on
    > the order of 1MB/s assuming 200x5KB downloads/sec using 1-2000
    > simultaneous connections. (That's 17M pages per day per PC.)


    That's orders of magnitude less than you-know-who. Also, don't forget
    how many queries you have to take from users, and the number of disk seeks
    needed for each one.

    > Nevertheless, it shouldn't cost millions. Maybe $100K :)


    10 MB of internet connectivity is at least a few K$/month all by itself.
     
    Paul Rubin, Feb 11, 2004
    #10
  11. William Chang

    Aahz Guest

    In article <>,
    William Chang <> wrote:
    >
    >One of the intended uses is indeed a next-gen web spider. I did the
    >math, and yes I will need about 10 cutting-edge PCs to spider like
    >you-know-who.


    Note that while you-know-who makes extensive use of Python, I don't
    think they're using it for spidering/searching. I do have some
    background writing a spider in Python, using Verity's engine for
    indexing/retrieval, but we were using threading rather than
    asyncore-style operations.
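
    For contrast with the asyncore sketch earlier in the thread, the threaded
    approach Aahz describes looks roughly like this (the URL list, thread
    count and plain urllib usage are illustrative, not his actual spider):

        # Sketch: a pool of worker threads, each doing blocking fetches
        import threading, urllib, Queue

        work = Queue.Queue()
        for u in ['http://www.example.com/']:       # placeholder work list
            work.put(u)

        def worker():
            while True:
                try:
                    url = work.get_nowait()
                except Queue.Empty:
                    return
                data = urllib.urlopen(url).read()   # one blocking fetch at a time
                # ... hand data off to the indexer here ...

        threads = [threading.Thread(target=worker) for i in range(50)]
        for t in threads: t.start()
        for t in threads: t.join()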
    --
    Aahz () <*> http://www.pythoncraft.com/

    "Argue for your limitations, and sure enough they're yours." --Richard Bach
     
    Aahz, Feb 11, 2004
    #11
  12. (Aahz) wrote:
    > Note that while you-know-who makes extensive use of Python, I don't
    > think they're using it for spidering/searching. I do have some
    > background writing a spider in Python, using Verity's engine for
    > indexing/retrieval, but we were using threading rather than
    > asyncore-style operations.


    Interesting, did you try maxing out the number of threads/connections?
    On an UltraSparc with hardware thread/lwp support, a thousand threads
    can co-exist reliably, at least for computations and disk I/O. Linux
    is another matter entirely.

    --William
     
    William Chang, Feb 13, 2004
    #12
  13. Paul Rubin <http://> wrote:
    > "William Chang" <> writes:
    > > ... Throughput per PC would be on
    > > the order of 1MB/s assuming 200x5KB downloads/sec using 1-2000
    > > simultaneous connections. (That's 17M pages per day per PC.)

    >
    > That's orders of magnitude less than you-know-who.


    Do you know how frequently you-know-who refreshes its entire index? A year
    ago things were pretty dire, easily over 10% dead links, if I recall correctly.
    10 PCs at 17M/day each will refresh 3B pages in 18 days, easily world-class.
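
    (That refresh estimate checks out: 3e9 pages / (10 PCs x 17e6 pages/day)
    is about 17.6 days.)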

    > ... Also, don't forget
    > how many queries you have to take from users, and the number of disk seeks
    > needed for each one.


    Sure, that's what I do. However, spidering and querying are independent tasks,
    generally speaking.

    > 10 MB of internet connectivity is at least a few K$/month all by itself.


    Yes, $2500 to be specific.

    There's no reason to be intimidated (if I may use that word) by you-know-who's
    marketing message (80,000 machines). Back in '96 Infoseek could handle 10M
    queries per day on a single Sun E4000 with 8 CPUs (<200 MHz), 4GB, 20x4GB RAID.
    Sure the WWW is much bigger now, but so are the disk drives!

    -- William
     
    William Chang, Feb 13, 2004
    #13
