[ANN] Thin v0.5.2 Cheezburger released

Marc-André Cournoyer

Hey all,

Version 0.5.2 (codename Cheezburger) of the fastest Ruby server is out!

Thin is a Ruby web server that glues together 3 of the best Ruby
libraries in web history:
* the Mongrel parser: the root of Mongrel's speed and security
* EventMachine: a network I/O library with extremely high
scalability, performance and stability
* Rack: a minimal interface between web servers and Ruby frameworks

This makes it, with all humility, the most secure, stable, fast and
extensible Ruby web server, bundled in an easy-to-use gem for your
own pleasure.

http://code.macournoyer.com/thin/
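
To give a feel for the Rack interface mentioned above, here is a
minimal Rack application (the HelloApp class and config.ru filename
are only illustrative, not part of Thin); any Rack-compliant server
can run an object like this:

# config.ru -- a minimal Rack application
class HelloApp
  def call(env)
    # A Rack app returns [status, headers, body]; body must respond to #each.
    [200, { 'Content-Type' => 'text/plain' }, ['Hello from Rack!']]
  end
end

run HelloApp.new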

== What's new?

* Add cluster support through the -s option in the thin script; start
3 Thins like this (see the example after this list):
  thin start -s3 -p3000
3 Thin servers will be started on ports 3000, 3001 and 3002, and the
port number will be injected into the pid and log filenames.
* Fix IOError when writing to the logger when starting the server as a daemon.
* Really change directory when the -c option is specified.
* Add a restart command to the thin script.
* Fix typos in the thin script usage message and expand the chdir path.
* Rename thin script options to match the mongrel_rails script
[thronedrk]:
  -o --host  => -a --address
  --log-file => --log
  --pid-file => --pid
  --env      => --environment
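
For instance (the resulting file names below are an assumption based
on the description above, shown only to illustrate the port injection):

thin start -s3 -p3000
# assumed output files:
#   thin.3000.pid  thin.3001.pid  thin.3002.pid
#   thin.3000.log  thin.3001.log  thin.3002.log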

win32 support is coming soon!

== Get it!

sudo gem install thin

It might take some time for the gem mirrors to be updated; if it
doesn't work, try adding --source http://code.macournoyer.com to the
command.
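
For example:

sudo gem install thin --source http://code.macournoyer.com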

If you installed a previous alpha version (if you already have 0.5.2
installed), uninstall it first: sudo gem uninstall thin

WARNING: Thin is still alpha software; if you use it on your server,
make sure you understand the risks involved.

== Contribute

If you're using Thin, let me know and I'll put your site on http://code.macournoyer.com/thin/users/

Thin is driven by an active community of passionate coders and
benchmarkers. Please join us, contribute, or share some ideas in the
Thin Google Group: http://groups.google.com/group/thin-ruby/topics

Also on IRC: #thin on freenode

Thanks to all the people who contributed to Thin, EventMachine, Rack
and Mongrel.

Marc-Andre Cournoyer
http://macournoyer.com
 

Thomas Hurst

* Marc-André Cournoyer ([email protected]) said:
* Add cluster support through the -s option in the thin script; start
3 Thins like this:
thin start -s3 -p3000
3 Thin servers will be started on ports 3000, 3001 and 3002, and the
port number will be injected into the pid and log filenames.

Is it infeasible to support preforking after the listen socket's been
opened, so many processes serve off the same port? I've done this for a
while now with FastCGI and other Ruby servers.
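
Roughly, the pattern in question (a minimal sketch; handle_client is
just a stand-in for real request handling):

require 'socket'

# Stand-in for real request handling.
def handle_client(client)
  client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
end

# Open the listen socket once in the parent...
server = TCPServer.new('0.0.0.0', 3000)

# ...then fork workers that all accept() off the same descriptor.
3.times do
  fork do
    loop do
      client = server.accept     # each child blocks in accept() concurrently
      handle_client(client)
      client.close
    end
  end
end

Process.waitall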

When I questioned Zed about it re Mongrel a few years ago, he mentioned
something about IO streams getting confused, which I've never seen;
perhaps it's OS-specific, on systems where concurrent accept() on a
socket isn't supported?

Be nice to have a multiprocess server that didn't require a fancy load
balancer, at least on "safe" platforms, if possible.
 

Francis Cianfrocca


Is it infeasible to support preforking after the listen socket's been
opened, so many processes serve off the same port? I've done this for a
while now with FastCGI and other Ruby servers.


EventMachine doesn't currently support forking after starting up TCP
listeners, but that's mostly because it can't be requested with the current
Ruby glue. (There's also a minor issue with CLOEXEC on Unix platforms.)
We're working on some significant API changes in EventMachine that in fact
will allow preforking among many other things.
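
The CLOEXEC wrinkle, for what it's worth: a descriptor flagged
close-on-exec won't survive an exec() in a worker. A minimal sketch of
clearing that flag on a Ruby socket, purely for illustration:

require 'socket'
require 'fcntl'

server = TCPServer.new('0.0.0.0', 3000)

# Clear FD_CLOEXEC so the listening descriptor survives an exec in a child.
flags = server.fcntl(Fcntl::F_GETFD)
server.fcntl(Fcntl::F_SETFD, flags & ~Fcntl::FD_CLOEXEC)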

The usual issue with preforking is that it doesn't work well on every
platform. From the light testing I've done myself, it's reasonably fair on
Linux. It's not good at all on Windows, unless they've changed something
since the last time I tested it. What's your experience been?
 

khaines

Is it infeasible to support preforking after the listen socket's been
opened, so many processes serve off the same port? I've done this for a
while now with FastCGI and other Ruby servers.

When I questioned Zed about it re mongrel a few years ago he mentioned
something about IO streams getting confused, which I've never seen;
perhaps OS-specific, on systems where concurrent accept() on a socket
isn't supported?

The problem with this approach is really just that its viability is an
accident. It's not described in any spec I have ever seen of how
Linux networking should behave. It's not guaranteed to continue working,
and there's no guarantee that it will continue to be implemented in a way
that fairly distributes requests.

So yes, on some platforms (Linux), it works, and has worked for quite a
while. But IMHO, using it for anything important is risky, and if one's
app gets enough traffic that it needs a cluster, it would seem to me that
it's probably important enough to warrant using something that's actually
designed to operate as a load balancer and that is guaranteed to be a load
balancer in the future.


Kirk Haines
 

Thomas Hurst

* Francis Cianfrocca ([email protected]) said:
EventMachine doesn't currently support forking after starting up TCP
listeners, but that's mostly because it can't be requested with the
current Ruby glue. (There's also a minor issue with CLOEXEC on Unix
platforms.) We're working on some significant API changes in
EventMachine that in fact will allow preforking among many other
things.

Excellent, thanks.

The usual issue with preforking is that it doesn't work well on every
platform. From the light testing I've done myself, it's reasonably
fair on Linux. It's not good at all on Windows, unless they've changed
something since the last time I tested it. What's your experience
been?

I have next to no experience with Ruby on Windows; I don't think it
really matters. Nobody who matters deploys live applications on it, and
those poor deranged people who do can make do without the capability
just as they do now.

On OSes that don't support concurrent accept()s, you can just do what a
preforking Apache does on those systems: use a lockfile. I'm quite sure
no OS is going to get away with breaking that.
 

Jeremy Evans

Thomas said:
Is it infeasible to support preforking after the listen socket's been
opened, so many processes serve off the same port? I've done this for a
while now with FastCGI and other Ruby servers.

When I questioned Zed about it re mongrel a few years ago he mentioned
something about IO streams getting confused, which I've never seen;
perhaps OS-specific, on systems where concurrent accept() on a socket
isn't supported?

Be nice to have a multiprocess server that didn't require a fancy load
balancer, at least on "safe" platforms, if possible.

ruby-style [1] supports what you want, serving Rails via Mongrel or
SCGI. It starts a supervising process that opens the port(s) and then
forks children to listen on those port(s). You can have multiple
children listening on the same port. It's primarily designed to support
restarting the application without losing requests (since the listening
socket is never closed), but it works for load balancing as well.
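
A rough sketch of that supervisor pattern (not ruby-style's actual
code; the request handling is a stand-in):

require 'socket'

# The supervisor owns the listening socket and never closes it, so a
# worker can be replaced without refusing connections.
server = TCPServer.new('0.0.0.0', 3000)

start_worker = lambda do
  fork do
    loop do
      client = server.accept
      client.close            # stand-in for handing the request to the app
    end
  end
end

worker = start_worker.call

# A "restart": bring up a fresh worker on the same socket, then retire the old one.
start_worker.call
Process.kill('TERM', worker)

Process.waitall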

I've tried making it work with evented Mongrel, and I couldn't get it to
work, probably for reasons Francis mentioned.

[1] https://rubyforge.org/projects/ruby-style/

Jeremy
 

ara.t.howard

ruby-style [1] supports what you want, serving Rails via Mongrel or
SCGI. It starts a supervising process that opens the port(s) and then
forks children to listen on those port(s). You can have multiple
children listening on the same port. It's primarily designed to support
restarting the application without losing requests (since the listening
socket is never closed), but it works for load balancing as well.

fcgi support?

a @ http://codeforpeople.com/
 

Thomas Hurst

The problem with this approach is really just that its viability is
an accident. It's not described in any spec I have ever seen of how
Linux networking should behave. It's not guaranteed to continue
working, and there's no guarantee that it will continue to be
implemented in a way that fairly distributes requests.

I'm quite sure that calling accept() on a filehandle shared across
processes is valid by POSIX, though it may not define it as having
to be safe to call concurrently. If it isn't on your particular OS,
just:

# lockfile: a File every worker has open; server: the shared listening socket
loop do
  if lockfile.flock(File::LOCK_EX)   # serialize accept() across workers
    client = server.accept
    lockfile.flock(File::LOCK_UN)    # release the lock before doing the work
    process(client)                  # hand the connection to the application
  end
end

If your OS can't support concurrent accept() or make something like the
above work, it can't run, among many other things, Apache with any of
the Unix MPMs. I don't know about you, but those OSes are not
interesting to me as deployment platforms, if at all.

Standards-wise, with preforking and the proliferation of multithreaded
Ruby servers, and the introduction of Ruby 1.9 with native threading,
I'd be more concerned about people forking after spawning threads; POSIX
doesn't define the behavior of applications which call
non-async-signal-safe functions in such an event, and it's actually
known to cause problems on systems people may actually like to use.

So yes, on some platforms (Linux), it works, and has worked for quite
a while. But IMHO, using it for anything important is risky, and if
one's app gets enough traffic that it needs a cluster,

You don't need much traffic to require a "cluster" (which I normally
associate with needing to scale out to multiple systems), you just need
a few slow-running actions which can't run concurrently. And if you do
need to scale out at short notice, it's probably nice to not have to
immediately run out and find an appropriate load balancer.

it would seem to me that it's probably important enough to warrant
using something that's actually designed to operate as a load balancer
and that is guaranteed to be a load balancer in the future.

Load balancers are pretty poor for this sort of thing though; if you
have one server that's running slowly, most load balancers will still
quite happily hand off clients to it and be oblivious to them queuing up
waiting for it to actually accept(). After years of using these things,
there's what, one project to try to teach a single load balancer about
this sort of thing?

With multiple processes accepting off the same socket, even if one or
two are very busy, the queued up connections can be handed off to
another process; you could even dynamically spawn off and kill child
processes listening on the socket without having to mess about with port
allocations and load balancer configurations.
 

ara.t.howard

Load balancers are pretty poor for this sort of thing though; if you
have one server that's running slowly, most load balancers will still
quite happily hand off clients to it and be oblivious to them queuing up
waiting for it to actually accept(). After years of using these things,
there's what, one project to try to teach a single load balancer about
this sort of thing?



HAProxy makes this easy to prevent: you can configure it to forward
requests only to non-busy backends, queueing all other requests
internally.
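
A minimal sketch of that kind of setup (the names, ports and timeouts
here are assumptions, not a drop-in config):

listen thin_cluster
    bind *:80
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # maxconn 1 per server: HAProxy queues surplus requests itself
    # instead of handing them to an already-busy Thin
    server thin0 127.0.0.1:3000 maxconn 1
    server thin1 127.0.0.1:3001 maxconn 1
    server thin2 127.0.0.1:3002 maxconn 1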

a @ http://drawohara.com/
 
