[ANN] Multiplexer - linear non-blocking I/O


Robert Klemme

Mikael Brockman said:
I agree. What I find inelegant is threads with non-blocking I/O and
kludges like splitting the data into several system calls.

Definitely.

Your example is quite special. When writing servers that serve huge
chunks of data (like HTTP servers that also serve binary content, e.g.
for download), the usual (and proper) approach is to copy the file in
chunks. Nobody writes a server that reads a 1 GB file into memory
before sending it over the line. So IMHO your test case is a bit
artificial.
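
I.e. something along these lines (just a sketch; the chunk size and the
names `path` and `sock` are only illustrative):

  # Stream a file to the client socket in fixed-size chunks instead of
  # slurping the whole file into memory first.
  CHUNK_SIZE = 16 * 1024
  File.open(path, "rb") do |file|
    while chunk = file.read(CHUNK_SIZE)
      sock.write(chunk)
    end
  end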

Or, to put it the other way round: if there were a need for select-like IO
handling in Ruby, I'd assume someone would have come up with a similar
solution already. I don't know of such a solution - so probably nobody has
felt the need yet.

Kind regards

robert
 

Mikael Brockman

Robert Klemme said:
Your example is quite special. When writing servers that serve huge
chunks of data (like HTTP servers that also serve binary content, e.g.
for download), the usual (and proper) approach is to copy the file in
chunks. Nobody writes a server that reads a 1 GB file into memory
before sending it over the line.

True. The files I'm sending are only a couple of megabytes. Still
takes a long time to send to, say, someone on 56K.

Robert Klemme said:
Or, to put it the other way round: if there were a need for select-like
IO handling in Ruby, I'd assume someone would have come up with a
similar solution already. I don't know of such a solution - so probably
nobody has felt the need yet.

Don't I count? :) There's another project called IO::Reactor[1], but
it doesn't do the callcc jig.

Footnotes:
[1] http://www.deveiate.org/code/IO-Reactor.html
 

Tanaka Akira

Robert Klemme said:
Your example is quite special. When writing servers that serve huge
chunks of data (like HTTP servers that also serve binary content, e.g.
for download), the usual (and proper) approach is to copy the file in
chunks. Nobody writes a server that reads a 1 GB file into memory
before sending it over the line.

Mikael Brockman said:
True. The files I'm sending are only a couple of megabytes. Still
takes a long time to send to, say, someone on 56K.

An evil user may connect to your server without ever reading.

% ruby -rsocket -e 'TCPSocket.open("your-server", 12345); sleep'

If the server writes to such a client and the TCP window fills up, the
server process (including other threads) may block.

So I don't think this case is negligible.
I think this problem should be solved with the nonblocking flag.

I'm not sure that the nonblocking flag is easy enough to use in Ruby,
though.
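
For what it's worth, doing it by hand looks roughly like this (only a
sketch, untested; `sock` and `data` are placeholders and error handling
is omitted):

  require 'fcntl'

  # Put the socket's descriptor into nonblocking mode.
  flags = sock.fcntl(Fcntl::F_GETFL)
  sock.fcntl(Fcntl::F_SETFL, flags | Fcntl::O_NONBLOCK)

  # Drain a buffer without ever blocking the whole process: on a
  # nonblocking descriptor, syswrite writes what it can and raises
  # Errno::EAGAIN when the kernel buffer is full.
  buffer = data.dup
  until buffer.empty?
    begin
      written = sock.syswrite(buffer)
      buffer = buffer[written..-1]
    rescue Errno::EAGAIN, Errno::EWOULDBLOCK
      IO.select(nil, [sock])   # wait until the socket is writable again
    end
  end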
 

Mikael Brockman

Tanaka Akira said:
Mikael Brockman said:
True. The files I'm sending are only a couple of megabytes. Still
takes a long time to send to, say, someone on 56K.

An evil user may connect to your server without ever reading.

% ruby -rsocket -e 'TCPSocket.open("your-server", 12345); sleep'

If the server writes to such a client and the TCP window fills up, the
server process (including other threads) may block.

So I don't think this case is negligible.
I think this problem should be solved with the nonblocking flag.

I'm not sure that the nonblocking flag is easy enough to use in Ruby,
though.

It is with Multiplexer. :)
 

Tanaka Akira

Mikael Brockman said:
It is with Multiplexer. :)

I see two advantages of Multiplexer.

* sockets are in nonblocking mode by default
* fewer concurrency issues

Are there other advantages?

I think Ruby's IO methods should behave as if in blocking mode even if the
underlying file descriptor is in nonblocking mode, except that other
threads should keep running. The current implementation may have some
problems, though (e.g. IO#read behaves differently in nonblocking mode).
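
A small sketch of the behavior I mean (illustrative only; `sock` is a
placeholder):

  # Desired: this read blocks only its own thread; the other thread
  # keeps running, even if the underlying descriptor is nonblocking.
  reader = Thread.new { data = sock.read(1024) }   # may block in read
  ticker = Thread.new { 5.times { puts "still responsive"; sleep 1 } }
  ticker.join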
 

Robert Klemme

Mikael Brockman said:
True. The files I'm sending are only a couple of megabytes. Still
takes a long time to send to, say, someone on 56K.

I'm sorry, what do you mean by this? Typical buffer sizes are usually much
smaller than "a couple of megabytes", so these files would be sent in
chunks, too.

Mikael Brockman said:
Don't I count? :)

Well, *apart* from yours of course.

Mikael Brockman said:
There's another project called IO::Reactor[1], but
it doesn't do the callcc jig.

Footnotes:
[1] http://www.deveiate.org/code/IO-Reactor.html

Ah, didn't know that one. Thanks for the pointer! As far as I can see, that
solution also works with chunks - and usage looks quite complicated.
Did you find any information about performance (or other) comparisons of
IO::Reactor vs. threaded IO?

Btw, if the only problem is blocking while sending huge chunks of data, then
what *I* would do is this: I'd override send() (and others that might be
necessary) to do just that. Then one can still use the simple threaded
approach and does not have to care about thread blocking. (Maybe this should
even be part of the std lib's socket implementation?)
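
Roughly along these lines (untested sketch; the chunk size and the choice
of methods to override are only illustrative):

  require 'socket'

  class TCPSocket
    CHUNK = 16 * 1024   # arbitrary chunk size

    alias_method :write_whole, :write

    # Send data in smallish pieces so the thread scheduler gets a chance
    # to run other threads between the individual writes.
    def write(data)
      written = 0
      while written < data.length
        written += write_whole(data[written, CHUNK])
        Thread.pass
      end
      written
    end
  end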

Kind regards

robert
 

Kalaky

Do yourself a favor and take some time to read:

http://citeseer.ist.psu.edu/schmidt95using.html

And think again about Threaded/Non-blocking I/O.

Keep in mind the "Four Horsemen of Poor Performance":

1. Data copies
2. Context switches
3. Memory allocation
4. Lock contention


 

Mikael Brockman

Robert Klemme said:
I'm sorry, what do you mean by this? Typical buffer sizes are usually
much smaller than "a couple of megabytes" so these files would be sent
in chunks, too.

Yeah, you're right. But sending any data to a high-latency or timed-out
connection takes a potentially long time.

Robert Klemme said:
Btw, if the only problem is blocking while sending huge chunks of
data, then what *I* would do is this: I'd override send() (and others
that might be necessary) to do just that. Then one can still use the
simple threaded approach and does not have to care about thread
blocking. (Maybe this should even be part of the std lib's socket
implementation?)

If blocking on read() is a problem, too, then Multiplexer is essentially
what you get when you solve it and refactor away the (in my case)
redundant multi-threading.
 

David G. Andersen

Robert Klemme said:
Your example is quite special. When writing servers that serve huge
chunks of data (like HTTP servers that also serve binary content, e.g.
for download), the usual (and proper) approach is to copy the file in
chunks. Nobody writes a server that reads a 1 GB file into memory
before sending it over the line. So IMHO your test case is a bit
artificial.

Actually, this is an interesting special case -- most high-performance
web servers and FTP servers do exactly what you've described, using
the sendfile() system call or by mmap()ing the file and then
sending from it. The reason to use sendfile() is to reduce the
number of data copies (zero-copy write):

fd = open("file", O_RDONLY);
sock = accept(...);   /* accept a client connection */

/* argument order varies by OS: FreeBSD takes (fd, socket, ...),
   Linux takes (out_socket, in_fd, offset, count) */
sendfile(fd, sock, ...);

Unfortunately, sendfile doesn't seem to have particularly friendly
nonblocking IO semantics or an asynchronous IO implementation.
You _can_ call sendfile in a select loop, but that still greatly
bumps up your system call count. Kernel threads are pretty much
the only way to get around it.

Sendfile is a very nice trick for high-performance servers.
It could be, for instance, a convenient tiny C component to
add to WEBrick or other HTTP server frameworks to reduce some of
the overhead of file transmission by taking all of the
data touches out of Ruby and putting them in the OS.
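
From the Ruby side that could eventually look as simple as this (sketch
only; it assumes a Ruby that provides IO.copy_stream, which can hand the
copy over to the OS - e.g. via sendfile(2) - on some platforms):

  # Let the runtime/OS move the bytes instead of copying chunks
  # through Ruby strings.
  File.open(path, "rb") do |file|
    IO.copy_stream(file, sock)
  end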

-Dave
 

nobu.nokada

Hi,

At Sun, 28 Nov 2004 18:10:40 +0900,
Tanaka Akira wrote in [ruby-talk:121646]:
I think Ruby's IO methods should behave as if in blocking mode even if the
underlying file descriptor is in nonblocking mode, except that other
threads should keep running. The current implementation may have some
problems, though (e.g. IO#read behaves differently in nonblocking mode).

Line-oriented methods (gets, puts, etc.) should behave
similarly in both modes.
 

Robert Klemme

Mikael Brockman said:
If blocking on read() is a problem, too, then Multiplexer is essentially
what you get when you solve it and refactor away the (in my case)
redundant multi-threading.

:) I also forgot about the problems of bidirectional communication, which
make threaded communication much more complicated. The threaded solution
is really simple and most elegant *only* if communication is one-way or
if the protocol always makes it clear who is the sender and who is the
receiver at any given point in time.

Thanks for taking the time to sort these things out!

Kind regards

robert
 

Robert Klemme

Kalaky said:
Do yourself a favor and take some time to read:

http://citeseer.ist.psu.edu/schmidt95using.html

Thanks for that link! I'll read it in the evening.

Kalaky said:
And think again about Threaded/Non-blocking I/O.

Keep in mind the "Four Horsemen of Poor Performance":

In the context of our discussion, I'd say there's not too much difference
between the two - considering that we are not talking about a specific
application:

Kalaky said:
1. Data copies

No difference.

Kalaky said:
2. Context switches

These make a difference, although OTOH, given the way continuations are
implemented in Ruby, the context switches may not differ much.

Kalaky said:
3. Memory allocation

The threaded solution needs additional memory for stacks etc., but the
same remark as for context switches applies.

Kalaky said:
4. Lock contention

If shared resources are to be accessed safely, then with a non-threaded
solution you will have no lock contention (i.e. no threads waiting for a
lock to become available), but you might nevertheless have to employ some
form of locking to ensure data integrity, resulting in similar behavior.

Lock contention has to be tackled on a higher, more abstract level: you
have to design with concurrency in mind to minimize access to shared
resources and thus lock contention - regardless of whether you use threads
or other means to perform concurrent work. Here's another good source of
information on the matter:

http://www.amazon.com/exec/obidos/tg/detail/-/0201310090/

Kind regards

robert
 
