Blocking IO thread-per-connection model: possible to avoid polling?



Giovanni Azua

Hello,

I'm first implementing this "thread-per-connection" model on the Server
component, where one thread is responsible for reading the requests and
sending results back to the client; it is not responsible for actually
processing the requests, though. I can gracefully stop (and clean up) the
thread by calling socket.getChannel().close(), see the snippet below.
However, in order to send data, I also need to interrupt the thread while
it is blocked waiting for input. Apparently the only way to do this without
closing the channel as a side effect is to poll?

TIA,
Best regards,
Giovanni

ObjectInputStream in = null;
try {
    in = new ObjectInputStream(clientSocket.getInputStream());
    while (true) {
        try {
            // >>> it is blocked here <<<
            MessageData data = (MessageData) in.readObject();

            // add requests to the BlockingQueue for processing
            requestQueue.add(new Request(data, this));

            // >>> send stuff here <<<
            // if (resultAvailable()) {
            //     out.writeObject(result);
            // }
        }
        catch (ClosedChannelException exception) {
            // stop requested via socket.getChannel().close()
            break;
        }
    }
}
catch (IOException exception) {
    exception.printStackTrace();
    throw new RuntimeException(exception);
}
catch (ClassNotFoundException exception) {
    exception.printStackTrace();
    throw new RuntimeException(exception);
}
finally {
    // guard against the constructor having thrown before "in" was assigned
    if (in != null) {
        try {
            in.close();
        }
        catch (IOException exception) {
            exception.printStackTrace();
        }
    }
}
 
Robert Klemme

Why do you need to interrupt the thread in order to send data? You
should be able to just get the output stream from the socket when you
create it, and then use that any time you want to send data.

Absolutely agree.
The thread that reads from the socket shouldn't need to be responsible
for sending at all (except possibly as an optimization in the case where
it knows right away it has something to send as a response to something
it's just read).

One should only be aware that this might impact the achievable read
throughput. That of course depends on buffers, message size, message rate,
and probably also the CPU load generated by processing the read data.

Kind regards

robert
 

Giovanni Azua

Hello Pete,

Please note I am using classic Socket and Blocking IO and not NIO.

Why do you need to interrupt the thread in order to send data? You
should be able to just get the output stream from the socket when you
create it, and then use that any time you want to send data.
Any time I want? Even if it means writing to the OutputStream from a
different thread than the one receiving data? It is not clear from the
documentation that I can do this safely on a Socket. I think it is not
possible unless I get the underlying SocketChannel, or?
The thread that reads from the socket shouldn't need to be responsible
for sending at all (except possibly as an optimization in the case where
it knows right away it has something to send as a response to something
it's just read).
I would not like to have my "Worker Threads" be IO bound in any way, so I
would prefer not to have them responsible for sending data. The other idea
is to have a two-threads-per-connection model, one thread for receiving and
one for sending ... but this is not the model I was trying to implement in
my OP.

TIA,
Best regards,
Giovanni
 

Daniel Pitts

I agree that the documentation is not clear on this point. However, it
is a fundamental criterion of BSD sockets and any API inherited from
them that sockets be thread-safe and full duplex. Java sockets are the
same.

You would not want to use the same InputStream simultaneously from
multiple threads, nor the same OutputStream simultaneously from multiple
threads, but reading from one thread and writing from another is fully
supported. The Java sockets API would be broken if it weren't.
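To illustrate the point, here is a minimal, self-contained sketch over a loopback connection (the class name and messages are made up): one thread blocks reading from a Socket while a second thread writes to the very same Socket, and both proceed without interfering.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FullDuplexDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket peer = server.accept()) {

            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Reader thread: blocks on the client socket's InputStream.
            Future<String> reply = pool.submit(() -> {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                return in.readLine();
            });

            // Writer thread: writes to the SAME socket from a different thread.
            pool.submit(() -> {
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                out.println("ping");
                return null;
            });

            // The peer receives "ping" and answers "pong".
            BufferedReader peerIn = new BufferedReader(
                    new InputStreamReader(peer.getInputStream()));
            PrintWriter peerOut = new PrintWriter(peer.getOutputStream(), true);
            System.out.println("peer got: " + peerIn.readLine());
            peerOut.println("pong");

            System.out.println("client got: " + reply.get(5, TimeUnit.SECONDS));
            pool.shutdown();
        }
    }
}
```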


You will need to do performance measurements to determine the
best-performing architecture. However, I will point out that your i/o
threads are all i/o bound on the same resource: your network adapter.
There is overhead in handing work off to other threads from a main
"traffic cop" thread (such as your worker threads waiting on received
data) and it's entirely possible that overall latency would be _better_
if you avoided that overhead by simply having the main worker threads
handling at least some of the i/o (i.e. that i/o which can easily be
determined immediately, rather than requiring some lengthy processing).

That said, your first concern should be correctness, and it's likely the
design is easier to implement if each thread has a clear and simple duty
to perform. Your goal of not having the worker threads send any data at
all is consistent with that approach and so is probably better to pursue
at least initially. You can always investigate potential optimizations
later.

Pete


I've seen one approach for this kind of work, especially when multiple
"messages" can be sent over the wire in any order:

Reader thread: Reads and parses the incoming data, dispatches to be
worked on. Work goes either to worker thread pool or is executed inline.
You can easily create an interface which lets you plug in either approach.

Writer thread: Reads from a Queue (often a BlockingQueue, maybe even
priority queue), for messages to send. Sends message over the wire.

This works well enough for most streams, and can even be used in NIO to
have fewer threads than streams.
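A minimal sketch of the writer-thread half of that pattern (the second queue here merely stands in for the socket's output stream, and all names are made up): worker or reader threads enqueue outgoing messages, and a single dedicated writer thread drains the queue, so only one thread ever touches the wire.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReaderWriterSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> sendQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> wire = new LinkedBlockingQueue<>(); // stand-in for the socket

        // Writer thread: the only thread that "sends over the wire".
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String msg = sendQueue.take(); // blocks until a message is queued
                    if (msg.equals("POISON")) {
                        break; // sentinel for orderly shutdown
                    }
                    wire.put(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        // Any thread (reader, worker pool, ...) can safely enqueue a result:
        sendQueue.put("result-1");
        sendQueue.put("result-2");
        sendQueue.put("POISON");

        writer.join();
        System.out.println("sent: " + wire);
    }
}
```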
 

Giovanni Azua

Hi Peter,

I agree that the documentation is not clear on this point. However, it
is a fundamental criterion of BSD sockets and any API inherited from
them that sockets be thread-safe and full duplex. Java sockets are the
same.

You would not want to use the same InputStream simultaneously from
multiple threads, nor the same OutputStream simultaneously from multiple
threads, but reading from one thread and writing from another is fully
supported. The Java sockets API would be broken if it weren't.
Thank you! Yes, I found out about the full-duplex support in Java Sockets
after researching a bit :)

I finished creating the remoting support for my project based on the
"one-thread-per-connection" model. Actually, in order to have a stable and
predictable middleware load we were strongly advised to write blocking
clients (send request and wait for response), so things got really simple,
as only one thread per connection is needed on the Middleware side: read
the request, block until it is processed, and send back the response. A
very tricky part was to unit test the whole remoting solution ... sigh.

I will be doing the NIO version soon.

Best regards,
Giovanni

PS: thank you all for the help on these questions ... your answers were
pretty enlightening.
 
