Async Client with 1K connections?

William Chang

Before I take the plunge, I'd appreciate any advice on the feasibility
and degree of difficulty of the following...

I need extremely efficient and robust _client_ software for some
common protocols like HTTP and POP3, supporting 1,000 simultaneous
independent connections and commensurate network throughput. The data
get written to files or sockets, so no GUI needed.

I am not a Python programmer :-( but I am a "fan" :) and I have been
reading about asyncore/Medusa/Twisted -- which would be my best bet?

Any advantage to using a particular unix/version -- Linux 32/64bit?
FreeBSD 4/5? Solaris Sun/Intel?

If anyone who is expert in this area may be available, please contact
me at "w c h a n g at a f f i n i dot com". (I'm in the SF Bay Area.)

My background is C -- I was the principal author of Infoseek (RIP),
including the Python modularization that was the core of Ultraseek aka
Inktomi Enterprise Search aka Verity. (For those of you old enough to
remember!) Unfortunately, I moved upstairs and never did much Python.

Thanks in advance, --William
 
Paul Rubin

William Chang said:
I need extremely efficient and robust _client_ software for some
common protocols like HTTP and POP3, supporting 1,000 simultaneous
independent connections and commensurate network throughput. The data
get written to files or sockets, so no GUI needed.

You're writing a monstrous web spider in Python?

William Chang said:
I am not a Python programmer :-( but I am a "fan" :) and I have been
reading about asyncore/Medusa/Twisted -- which would be my best bet?

With enough hardware, you can do practically anything. Some Apache
servers fork off that many processes.
 
Paul Rubin

William Chang said:
I need extremely efficient and robust _client_ software for some
common protocols like HTTP and POP3, supporting 1,000 simultaneous
independent connections and commensurate network throughput. The data
get written to files or sockets, so no GUI needed.

William Chang said:
I am not a Python programmer :-( but I am a "fan" :) and I have been
reading about asyncore/Medusa/Twisted -- which would be my best bet?

Seriously, I'd probably use asyncore since it's the simplest. Twisted
is more flexible but maybe you don't need that.
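
For the curious, the basic shape of an asyncore client is roughly this
(a minimal sketch, not production code; example.com and the paths are
placeholders, and a real spider would add timeouts and error handling):

import asyncore, socket

class HTTPClient(asyncore.dispatcher):
    # one dispatcher per connection; a single asyncore.loop()
    # multiplexes all of them over select()
    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 80))
        self.request = ('GET %s HTTP/1.0\r\nHost: %s\r\n\r\n'
                        % (path, host)).encode('ascii')
        self.chunks = []

    def handle_connect(self):
        pass

    def writable(self):
        # ask for write events only while the request is unsent
        return len(self.request) > 0

    def handle_write(self):
        sent = self.send(self.request)
        self.request = self.request[sent:]

    def handle_read(self):
        self.chunks.append(self.recv(8192))

    def handle_close(self):
        self.close()

for i in range(1000):
    HTTPClient('example.com', '/page%d' % i)
asyncore.loop()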

Why do you want to write this client in Python? What is it doing?

Rather than going crazy tuning the software, you can parallelize it
and run it on multiple boxes. Does that work for you?

William Chang said:
Any advantage to using a particular unix/version -- Linux 32/64bit?
FreeBSD 4/5? Solaris Sun/Intel?

Google has something like 8000 servers in its farm, running 32-bit
Linux, so they're probably onto something. Solaris is a lot slower.
64-bit Linux is maybe still too new to deploy in a big production system.
 
Michel Claveau/Hamster

Hi!

See Erlang: its sample web server can serve more than 50,000 connections
on one standard CPU.
 
Bill Scherer

[P&M]

William said:
Before I take the plunge, I'd appreciate any advice on the feasibility
and degree of difficulty of the following...

I need extremely efficient and robust _client_ software for some
common protocols like HTTP and POP3, supporting 1,000 simultaneous
independent connections

I've got an httpd stress tool that uses asyncore. I can run up 1020
independent simulated clients on my RH9 box (1x3GHz CPU, 1GB RAM),
driving at over 600 requests per second against a modest (2x1GHz)
webserver, just pulling a static page.

William said:
and commensurate network throughput.

That could vary a lot, couldn't it?

William said:
The data get written to files or sockets, so no GUI needed.

Writing to files could slow you down a lot, depending on how much needs
to be written, how fast your disks are, how you go about getting the
data from the async client to the file, etc. Much of the same goes for
sockets, too.

William said:
I am not a Python programmer :-( but I am a "fan" :) and I have been
reading about asyncore/Medusa/Twisted -- which would be my best bet?

I should think all can do the job for you, depending on the details
which you haven't told us.
 
Dave Brueck

William Chang said:
Before I take the plunge, I'd appreciate any advice on the feasibility
and degree of difficulty of the following...

I need extremely efficient and robust _client_ software for some
common protocols like HTTP and POP3, supporting 1,000 simultaneous
independent connections and commensurate network throughput. The data
get written to files or sockets, so no GUI needed.

1000+ connections is not a problem, although (on Linux at least, and probably
others) you'll need to make sure your process is allowed to have more file
descriptors open, especially if you're turning around and writing data to
disk (since that uses file descriptors too). This is OS-specific and has
nothing to do with Python, but IIRC you can do something like
os.sysconf(os.sysconf_names['SC_OPEN_MAX']) to see how many fd's your process
can have open.
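
For example (a sketch; the resource module is Unix-only, and the 4096
target is arbitrary):

import os, resource

# what the OS will let this process have open
print(os.sysconf(os.sysconf_names['SC_OPEN_MAX']))

# the same limit via the resource module, plus the hard ceiling
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# 1000 sockets plus output files won't fit under the common default
# of 1024, so raise the soft limit toward the hard one
if hard == resource.RLIM_INFINITY:
    target = 4096
else:
    target = min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
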
William Chang said:
I am not a Python programmer :-( but I am a "fan" :) and I have been
reading about asyncore/Medusa/Twisted -- which would be my best bet?

You're probably going to be ok either way, but what are your throughput
requirements exactly? Are these connections pulling down HTML pages and small
images or are they big, multi-megabyte downloads? How big is your connection?
For 99% of uses asyncore or Twisted will be fine - but if you need very high
numbers of new connections per second (hundreds) or throughput (hundreds of
Mbps) then you might need to modify the framework or build your own - still in
Python but more tailored to your specific needs - in order to get those levels
of performance.

-Dave
 
Peter Hansen

Paul said:
Seriously, I'd probably use asyncore since it's the simplest. Twisted
is more flexible but maybe you don't need that.

I agree Twisted is more flexible, but having tried both I'd argue that
it is also simpler. I was able to get farther, faster, just by following
the simple examples (e.g. http://www.twistedmatrix.com/documents/howto/clients)
on the web site than I was with asyncore. I also found the source
_much_ cleaner and more readable when it came time to look there as well.
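
The skeleton from those client examples is roughly the following (a
sketch; the host and output filename are placeholders):

from twisted.internet import reactor, protocol

class PageGetter(protocol.Protocol):
    def connectionMade(self):
        # self.factory is attached by ClientFactory.buildProtocol
        self.transport.write(('GET / HTTP/1.0\r\nHost: %s\r\n\r\n'
                              % self.factory.host).encode('ascii'))

    def dataReceived(self, data):
        # stream straight to disk -- no GUI needed
        self.factory.out.write(data)

    def connectionLost(self, reason):
        self.factory.out.close()
        reactor.stop()

class PageGetterFactory(protocol.ClientFactory):
    protocol = PageGetter

    def __init__(self, host, outpath):
        self.host = host
        self.out = open(outpath, 'wb')

reactor.connectTCP('example.com', 80,
                   PageGetterFactory('example.com', 'page.html'))
reactor.run()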

-Peter
 
Paul Rubin

Bill Scherer said:
Writing to files could slow you down a lot, depending on how much
needs to be written, how fast your disks are, how you go about
getting the data from the async client to the file, etc.. Much of the
same goes for sockets, too.

That's a good point: you should put everything into one file serially,
then sort it afterward to separate out the data from individual
connections.
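
For instance (a sketch -- the record format here is made up): tag each
chunk with its connection id as it arrives, append everything through
one sequential writer, and demultiplex offline:

import struct

def append_chunk(log, conn_id, data):
    # fixed header (connection id, payload length), then the payload;
    # a single appending writer keeps the disk from seeking
    log.write(struct.pack('!II', conn_id, len(data)))
    log.write(data)

def read_chunks(log):
    # offline pass: yield (conn_id, data) pairs, ready to sort by id
    while True:
        header = log.read(8)
        if len(header) < 8:
            return
        conn_id, length = struct.unpack('!II', header)
        yield conn_id, log.read(length)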
 
William Chang

Thank you all for the discussion! Some additional information:

One of the intended uses is indeed a next-gen web spider. I did the
math, and yes I will need about 10 cutting-edge PCs to spider like
you-know-who. But I shouldn't need 100 -- and would rather not
spend money unnecessarily... Throughput per PC would be on
the order of 1MB/s, assuming 200 x 5KB downloads/sec over 1,000-2,000
simultaneous connections. (That's 17M pages per day per PC.)
My search & content engine can index and store at such a rate,
but can the spider initiate (at least) 200 new requests per second,
assuming each request lasts 5-10 seconds?
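
A quick sanity check on those numbers (simultaneous connections =
request rate x request lifetime):

rate = 200                 # new requests per second
page = 5 * 1024            # bytes per page
print(rate * page)         # 1,024,000 bytes/s, i.e. ~1 MB/s sustained
print(rate * 86400)        # 17,280,000 pages per day per PC
print((rate * 5, rate * 10))  # (1000, 2000) in flight at 5-10s each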

Of course, that assumes the spider algorithm/coordinator is pretty
intelligent and well-engineered. And the hardware stays up, etc.
Managing storage is certainly nontrivial; at such a scale nothing is
to be taken for granted!

Nevertheless, it shouldn't cost millions. Maybe $100K :)

Time for a sanity check? --William
 
Paul Rubin

William Chang said:
Thank you all for the discussion! Some additional information:

One of the intended uses is indeed a next-gen web spider. I did the
math, and yes I will need about 10 cutting-edge PCs to spider like
you-know-who. But I shouldn't need 100 -- and would rather not
spend money unnecessarily... Throughput per PC would be on
the order of 1MB/s assuming 200x5KB downloads/sec using 1-2000
simultaneous connections. (That's 17M pages per day per PC.)

That's orders of magnitude less than you-know-who. Also, don't forget
how many queries you have to take from users, and the number of disk
seeks needed for each one.

William Chang said:
Nevertheless, it shouldn't cost millions. Maybe $100K :)

10 MB/s of internet connectivity is at least a few K$/month all by itself.
 
Aahz

William Chang said:
One of the intended uses is indeed a next-gen web spider. I did the
math, and yes I will need about 10 cutting-edge PCs to spider like
you-know-who.

Note that while you-know-who makes extensive use of Python, I don't
think they're using it for spidering/searching. I do have some
background writing a spider in Python, using Verity's engine for
indexing/retrieval, but we were using threading rather than
asyncore-style operations.
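
The threaded shape is simple enough, for what it's worth -- a minimal
sketch using the stdlib modules of the day (the worker count and URLs
are placeholders):

import threading, urllib2
from Queue import Queue

def worker(urls, results):
    # each thread just blocks on I/O; the OS does the multiplexing
    while True:
        url = urls.get()
        if url is None:
            return
        try:
            results.put((url, urllib2.urlopen(url).read()))
        except IOError:
            results.put((url, None))

urls, results = Queue(), Queue()
threads = [threading.Thread(target=worker, args=(urls, results))
           for i in range(100)]
for t in threads:
    t.start()
for i in range(1000):
    urls.put('http://example.com/%d' % i)
for t in threads:
    urls.put(None)   # one sentinel per thread shuts it down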
 
William Chang

Aahz said:
Note that while you-know-who makes extensive use of Python, I don't
think they're using it for spidering/searching. I do have some
background writing a spider in Python, using Verity's engine for
indexing/retrieval, but we were using threading rather than
asyncore-style operations.

Interesting -- did you try maxing out the number of threads/connections?
On an UltraSparc with hardware thread/LWP support, a thousand threads
can co-exist reliably, at least for computations and disk I/O. Linux
is another matter entirely.

--William
 
William Chang

Paul Rubin said:
That's orders of magnitude less than you-know-who.

Do you know how frequently you-know-who refreshes its entire index? A year
ago things were pretty dire, easily over 10% dead links, if I recall correctly.
10 PCs at 17M/day each will refresh 3B pages in 18 days, easily world-class.

Paul Rubin said:
... Also, don't forget how many queries you have to take from users,
and the number of disk seeks needed for each one.

Sure, that's what I do. However, spidering and querying are independent tasks,
generally speaking.

Paul Rubin said:
10 MB/s of internet connectivity is at least a few K$/month all by itself.

Yes, $2500 to be specific.

There's no reason to be intimidated (if I may use that word) by you-know-who's
marketing message (80,000 machines). Back in '96 Infoseek could handle 10M
queries per day on a single Sun E4000 with 8 CPUs (<200MHz), 4GB RAM, and
20x4GB RAID.
Sure the WWW is much bigger now, but so are the disk drives!

-- William
 
