Michael said:
It'll basically be a front end NNTP server. Network I/O will be intense.
About 300 copies of the script will be running at any time.
NNTP? Sounds like a strange contest. Is this a coding contest or a
contest contest? SMTP, POP, IMAP, HTTP and maybe an IM protocol I can
see, but NNTP, huh? All right. Are there even 300 regular users of NNTP
left?

Okay, okay. No matter.
Regardless of what protocol you're building this on top of, "running 300
copies of the script" simultaneously is an incredibly bad idea. Doing so
with an interpreted language like Perl, Python, Tcl, VBScript, LISP,
whatever, will certainly need heavy-duty hardware, but not for the
actual work of your script. The CPU, the memory, and probably the disks
too will all be occupied initializing and tearing down all those
interpreters. The actual meat of what your script does will use only a
tiny fraction of the resources, and you'll need many times the memory
and CPU you would otherwise.
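You can see that start-up tax directly by timing interpreter launches
against the same (empty) work done inside one running process. A quick
sketch in Python for illustration, since it's easy to time portably; the
same experiment with `perl -e 1` shows the same effect:

```python
import subprocess
import sys
import time

N = 50  # number of "script launches" to simulate

# Cost of spawning a fresh interpreter N times: pure start-up and teardown.
start = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_cost = time.perf_counter() - start

# Cost of doing the same (empty) work N times inside one live interpreter.
start = time.perf_counter()
for _ in range(N):
    pass
in_process_cost = time.perf_counter() - start

print(f"{N} interpreter launches: {spawn_cost:.3f}s")
print(f"same work in-process:    {in_process_cost:.6f}s")
```

The launch loop will come out orders of magnitude slower, and that gap
is overhead your 300 copies would pay over and over.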
Perl is fine for what you want to do, but you want to do it
*multithreaded*, with one instance of the script running per box as a
daemon. That ought to be plenty fast given adequate hardware, and your
choice of CPU architecture, compiler and compiler optimizations should
be mostly irrelevant. Throw an adequate number of MIPS at it, and focus
more on writing properly optimized Perl and on making sure your disk and
network I/O are up to the job.
If you or someone else insists on implementing this with hundreds of
separate processes, then it's hard to imagine a worse design decision
than spawning hundreds of interpreters simultaneously with tens or
hundreds of thousands of launches and teardowns per day. If you want
separate processes, you should be using C/C++ or something else that
compiles to tight binaries, period.
Splitting hairs over Opterons vs. Xeons vs. supercooled Athlons or
mastodons is pointless. You should have hardware capable of handling
your largest realistically-projected traffic spikes with a good amount
of headroom remaining, period. You can do it with Celerons if you really
want to: just pick hardware that gives you the SPECmarks, the I/O and
the uptime you need at a good price. If you're running it on one box
rather than load-balancing a few smaller ones, and have nothing
"collecting" requests during downtime, then redundant and/or
hot-swappable components (especially disk and power supplies) become
more important. Trying to get a "perfect fit" with little or no headroom
is completely wrong.
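The sizing arithmetic is simple enough to do on a napkin. Every number
below is hypothetical — plug in your own measured per-request cost and
projected peak:

```python
# Back-of-envelope capacity sizing with headroom (all numbers hypothetical).
peak_requests_per_sec = 400      # largest realistically-projected spike
cost_per_request_ms = 20         # measured CPU time per request, per core
headroom_factor = 2.0            # provision 2x the projected peak

cores_needed = (peak_requests_per_sec
                * (cost_per_request_ms / 1000.0)
                * headroom_factor)
print(f"cores needed: {cores_needed:.1f}")
```

Whether those cores come from Opterons, Xeons, or Celerons is exactly
the part that doesn't matter, so long as the I/O and uptime numbers hold
up too.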
Indeed, if availability is critical, you should either design the system
to be load balanced across 1..n machines with no single point of
failure, or you should be holding your contest over a protocol that can
store-and-forward gracefully, unlike private NNTP. With something like
HTTP you can load-balance straightforwardly and run a few smaller and
more disposable servers instead of one big high-availability one. With
an email-based contest, on the other hand, you can have dumb
store-and-forward MX hosts specified in DNS that will catch inbound
entries and pass them along automatically in the event of downtime on
your "collector", effectively offloading the job of inbound-message
fault tolerance to your ISP.
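In DNS terms, that email fallback is just a second MX record at a lower
priority. A hypothetical zone fragment (all names made up) would look
like:

```
; Primary collector, plus the ISP's store-and-forward host as backup.
; Lower preference value = tried first.
contest.example.com.  IN  MX  10  collector.contest.example.com.
contest.example.com.  IN  MX  20  backup-mx.isp.example.net.
```

When the collector is down, sending MTAs automatically fall back to the
preference-20 host, which queues the entries and retries delivery —
fault tolerance you get essentially for free from the mail
infrastructure.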