Kerry Shetline
Problems cleanly breaking a socket connection
The problem I'm having is this: My server sends my client the message
"BYE", signaling that the connection between the two is about to
close. The server makes sure that the socket's output stream is
flushed, and then closes the socket.
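For concreteness, the send-flush-close sequence I mean looks roughly like this (a minimal loopback sketch, not my actual code -- class and variable names here are made up for illustration):

```java
import java.io.*;
import java.net.*;

public class ByeDemo {
    public static void main(String[] args) throws Exception {
        // Listen on an ephemeral port so the sketch is self-contained.
        ServerSocket listener = new ServerSocket(0);
        int port = listener.getLocalPort();

        Thread server = new Thread(() -> {
            try (Socket s = listener.accept();
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("BYE"); // the farewell message
                out.flush();        // make sure it leaves the stream buffers
                // socket is closed here when the try block exits
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: connect and read the last message.
        try (Socket s = new Socket("127.0.0.1", port);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            System.out.println("client got: " + in.readLine());
        }
        server.join();
        listener.close();
    }
}
```

In this stripped-down form, with the client sending nothing back, the BYE arrives reliably; the trouble shows up in the fuller back-and-forth exchange described below.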
Sometimes this works fine. The client gets the BYE message, closes the
socket from its side, and both the client and server are happy. But
quite often, the client never gets this BYE message, but instead fails
with this exception:
"Software caused connection abort: recv failed"
If I do something which shouldn't be necessary -- sleep for one second
after sending the BYE message, and before closing the socket on the
server side -- the client always gets the BYE message, and everything
goes smoothly.
How come flushing the output stream isn't enough to ensure that the
client (on the same computer -- not even remote at this point) gets
the last message I sent before I close the socket? If I'm supposed to
wait a bit, rather than pulling an arbitrary time out of the air like
one second, how would I know how long to wait? (Waiting for a response
from the client to the BYE message would only move this same problem
over to the client -- *someone* has to be the last one to send a
message and close their side of the socket first.)
I'm using a simple pair of client/server apps, using code roughly
based on this Sun sample code:
http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html
The chief differences are:
(1) I'm not doing knock-knock jokes. (I hope that isn't a crucial
part of the technology.)
(2) For test purposes, I've automated the client responses -- there
is no waiting for keyboard input.
(3) The server can accept multiple clients, as per the method
suggested at the bottom of the above web page.
(4) On the server side, I'm putting outgoing messages into a queue on
a separate thread. The end result is that both the client and the
server can be waiting for input from each other at the same time, a
situation that never arises in the knock-knock example. Disabling this
queue, however, didn't fix the problem, so I don't think it's a
crucial factor.
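The queue arrangement in (4) is along these lines (again a sketch with made-up names, using a StringWriter to stand in for the socket's output stream -- not my real code):

```java
import java.io.*;
import java.util.concurrent.*;

public class OutQueueSketch {
    public static void main(String[] args) throws Exception {
        // Outgoing messages go into a queue; a separate writer thread
        // drains it, so the main server thread never blocks on the socket.
        BlockingQueue<String> outbox = new LinkedBlockingQueue<>();
        StringWriter sink = new StringWriter(); // stand-in for the socket stream
        PrintWriter out = new PrintWriter(sink, true);

        Thread writer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = outbox.take()).equals("BYE")) {
                    out.println(msg);
                }
                out.println("BYE"); // last message, then the writer exits
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();

        outbox.put("HELLO");
        outbox.put("BYE");
        writer.join();
        System.out.print(sink);
    }
}
```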
Anyone have any ideas what I could be doing wrong? I'm hoping that
this problem sounds familiar enough to someone to get an "Oh, yeah,
THAT!" kind of response without having to see the specifics of my
code. Is the half-assed solution of adding ad-hoc pauses something
other people have adopted, not finding a better way?