Network programming

SS

Has anyone come across this before:

I am pacing out a load of traffic from multiple clients to a
multi-threaded server (one thread/client), and back again.

As I add more clients, at some point I *think* I reach the limit of
the network bandwidth. Anyway, at this point the server and clients
all eventually enter OutputStream.write() (where the OutputStream is
the stream attached to the relevant socket) and never return. No
exceptions are thrown either; everything just sits there.

Each client is only active while it sends a specific amount of data
(e.g. 10MB), then it ceases execution. I would have thought, therefore,
that if the bandwidth limit were exceeded, the individual transmission
speeds would be reduced until clients start to finish their
transmissions and the bandwidth is freed up again. The transmissions
should ultimately be successful, assuming the network buffers were not
flooded (in which case I would expect an exception from write()).

But, it never finishes, no matter how long I leave it.

If I then start to kill individual clients, exceptions are thrown in
the server as the sockets are closed, and these exceptions can be seen
to come from OutputStream.write(), indicating that the server had
entered write(), and never left until the client died.

It seems to me that write() should either throw an exception or return
pretty much immediately, but it does neither, thereby locking up the
server.

Anyone know what this problem is?

Thanks a lot,
SS


It's a Linux platform btw, using the Kaffe and Sun VMs and the Sun compiler.
 
Pat Ryan

I'm no expert, but it sounds like it could be a buffer deadlock - a call to
write to a socket will block if the write buffer is full. So if reading and
writing happen in the same thread, and both client and server are blocked on
a write, that would be a deadlock...
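
For illustration, a minimal sketch of the pattern that can deadlock - a client that writes its whole payload before reading anything back, while the server echoes on the same thread. The host, port and sizes here are made up:

// Sketch only: a client that writes its entire payload before reading
// any reply. If the server echoes data back on the same thread, both
// TCP send buffers can fill and both sides block in write() forever.
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class DeadlockProneClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000); // hypothetical host/port
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        byte[] chunk = new byte[8192];
        for (int sent = 0; sent < 10 * 1024 * 1024; sent += chunk.length) {
            out.write(chunk); // blocks once the send buffer fills...
        }
        // ...so this read, which would have drained the echoed reply, is never reached.
        byte[] reply = new byte[8192];
        while (in.read(reply) != -1) { /* discard echoed data */ }
        socket.close();
    }
}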
 
SS

Pat Ryan said:
I'm no expert, but it sounds like it could be a buffer deadlock - a call to
write to a socket will block if the write buffer is full. So if reading and
writing happen in the same thread, and both client and server are blocked on
a write, that would be a deadlock...


Yeah, that sounds about right. I was assuming that if the write buffer
was full then I'd get an IOException from write(), but if indeed it
does block then deadlock would occur as you described, which is what
seems to be happening.
Thanks for your help.
SS
 
SS

Pat Ryan said:
I'm no expert, but it sounds like it could be a buffer deadlock - a call to
write to a socket will block if the write buffer is full. So if reading and
writing happen in the same thread, and both client and server are blocked on
a write, that would be a deadlock...

Ok, so this leads me to another problem then.
Basically I need write() to be non-blocking. You can make read()
non-blocking using setSoTimeout(), but it doesn't seem to affect
write().
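
(For reference, the read-side timeout looks roughly like this - a sketch only, with an illustrative endpoint and a made-up 5-second timeout; on 1.3 the timeout surfaces as an InterruptedIOException. There is no equivalent call for write() on a plain Socket.)

import java.io.InputStream;
import java.io.InterruptedIOException;
import java.net.Socket;

public class ReadTimeoutSketch {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000); // hypothetical endpoint
        socket.setSoTimeout(5000); // applies to read() only, not write()
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[8192];
        try {
            int n = in.read(buf); // returns data, or times out after 5 seconds
        } catch (InterruptedIOException e) {
            // read timed out; the socket itself is still open and usable
        }
        socket.close();
    }
}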

For example, suppose I am streaming data from a server to a client,
and the client for some reason stops reading, but leaves the socket
open.
The server eventually fills the write buffers, then gets stuck in
write().
I would want to be able to throw away any data that could not be
written and timeout the server if write failure occurs for some period
of time.
However, as long as the client is alive, this is impossible because
control of the server is never passed back from write().

Anyone have any ideas on this?

Thanks a lot.
 
BarryNL

SS said:
Ok, so this leads me to another problem then.
Basically I need write() to be non-blocking. You can make read()
non-blocking using setSoTimeout(), but it doesn't seem to affect
write().

For example, suppose I am streaming data from a server to a client,
and the client for some reason stops reading, but leaves the socket
open.
The server eventually fills the write buffers, then gets stuck in
write().
I would want to be able to throw away any data that could not be
written and timeout the server if write failure occurs for some period
of time.
However, as long as the client is alive, this is impossible because
control of the server is never passed back from write().

Anyone have any ideas on this?

Thanks a lot.

It's been a while since I played with this, but I seem to remember that
the java.nio.channels.SocketChannel class can do this.
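
Something along these lines, perhaps - a rough sketch using a non-blocking SocketChannel, where write() returns immediately having written however many bytes fit in the send buffer (possibly zero). The endpoint and buffer size are illustrative:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingWriteSketch {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open();
        channel.connect(new InetSocketAddress("localhost", 9000)); // illustrative endpoint
        channel.configureBlocking(false); // writes no longer block

        ByteBuffer buffer = ByteBuffer.wrap(new byte[8192]);
        int written = channel.write(buffer); // returns at once; may be 0 bytes
        if (buffer.hasRemaining()) {
            // Send buffer is full: retry later (e.g. via a Selector) or discard the rest.
        }
        channel.close();
    }
}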
 
Thomas Schodt

SS said:
Basically I need write() to be non-blocking. You can make read()
non-blocking using setSoTimeout(), but it doesn't seem to affect
write().

Switch to UDP ?
 
Steve Horsley

SS said:
Ok, so this leads me to another problem then.
Basically I need write() to be non-blocking. You can make read()
non-blocking using setSoTimeout(), but it doesn't seem to affect
write().

For example, suppose I am streaming data from a server to a client,
and the client for some reason stops reading, but leaves the socket
open.
The server eventually fills the write buffers, then gets stuck in
write().
I would want to be able to throw away any data that could not be
written and timeout the server if write failure occurs for some period
of time.
However, as long as the client is alive, this is impossible because
control of the server is never passed back from write().

Anyone have any ideas on this?

Thanks a lot.

Implement your own FIFO with a size limit that throws an Exception or
just discards the excess (your choice). The FIFO needs to be
synchronized and needs to block read() calls when empty.

Start a thread that loops reading the FIFO and writing the socket. It
will spend most of its time blocked in either read() or write().

Now just write to the FIFO rather than direct to the socket.
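
A rough sketch of what this might look like (pre-1.4 Java, so plain wait/notify; the class name and queue limit are made up - one way to do it, not the only way):

import java.io.OutputStream;
import java.util.LinkedList;

public class BoundedSendQueue implements Runnable {
    private final LinkedList queue = new LinkedList(); // holds byte[] chunks
    private final int maxChunks;
    private final OutputStream out;

    public BoundedSendQueue(OutputStream out, int maxChunks) {
        this.out = out;
        this.maxChunks = maxChunks;
    }

    /** Called by the application thread; never blocks on the socket. */
    public synchronized boolean offer(byte[] chunk) {
        if (queue.size() >= maxChunks) {
            return false; // or throw an exception instead - your choice
        }
        queue.addLast(chunk);
        notify(); // wake the writer thread
        return true;
    }

    private synchronized byte[] take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // block when the FIFO is empty
        }
        return (byte[]) queue.removeFirst();
    }

    /** Writer thread: loops reading the FIFO and writing the socket. */
    public void run() {
        try {
            while (true) {
                out.write(take()); // may block, but only this thread stalls
            }
        } catch (Exception e) {
            // socket closed or thread interrupted - just stop
        }
    }
}

Start run() in its own Thread and call offer() with the data you would otherwise have passed straight to the socket's write().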

Steve
 
SS

Thomas Schodt said:
Switch to UDP ?

Can't do that unfortunately - the idea is to test a TCP connection
under stress conditions.
I'm surprised at this blocking limitation - I will have a go with
java.nio.channels.SocketChannel though...
 
SS

Steve Horsley said:
Implement your own FIFO with a size limit that throws an Exception or
just discards the excess (your choice). The FIFO needs to be
synchronized and needs to block read() calls when empty.

Start a thread that loops reading the FIFO and writing the socket. It
will spend most of its time blocked in either read() or write().

Now just write to the FIFO rather than direct to the socket.

Steve

I'm not sure that I completely understand you, but it doesn't seem
like it will make any difference to me. You could use a queue to pace
out data I suppose - is that what you are implying?

However the socket has effectively got an unknown bandwidth because it
depends on the performance of the client. I want to know if I can
write to the socket or not, so I really need an indication of whether
the buffers are full or not. The whole point is not to end up blocking
in write().

I had a look at SocketChannel and this does actually seem to do what I
want (albeit in a somewhat obtuse way); unfortunately, however, it only
exists in version 1.4, and I am stuck with version 1.3!

I'm still surprised at the behaviour of write(). Because it always
blocks, there is always the possibility that the server could stall
forever if the client fails to read() and remains connected - which is
completely unacceptable.

Has nobody come across this problem themselves?

Cheers,
SS
 
