Socket is still connected after Server-Side socket termination.


Thomas Schodt

pek said:
Basic Idea
In the client side I have a Thread that loops through
the socket's inputstream and prints out any server response. I also
print out the socket's connection status (socket.isConnected()).

You misunderstand what isConnected() reports.

It only reports the local status of the socket.


boolean Socket.isConnected()
what the javadoc should say:
Indicates whether connect() has been called on this socket.
Initially this method returns false. After a connection is
established, this method returns true. It will never change back to
false for any reason (such as the connection failing).
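Here is a minimal, runnable sketch of that behaviour (the class name and the loopback setup are mine, not from the thread): the server closes its end, yet the client's isConnected() still reports true; the close is only visible because read() returns -1 (EOF).

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class IsConnectedDemo {
    public static void main(String[] args) throws Exception {
        // Server: accept one connection, then close it immediately.
        ServerSocket server = new ServerSocket(0);   // ephemeral port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept()) {
                // closed on exit of try-with-resources
            } catch (IOException ignored) {}
        });
        t.start();

        Socket client = new Socket("localhost", server.getLocalPort());
        t.join();   // server thread has closed its end by now

        // Still true: isConnected() only records that connect()
        // succeeded on this socket at some point.
        System.out.println("isConnected: " + client.isConnected());

        // The remote close is only observable by reading: EOF (-1).
        InputStream in = client.getInputStream();
        System.out.println("read: " + in.read());

        client.close();
        server.close();
    }
}
```

Running it prints `isConnected: true` followed by `read: -1`.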
 

Esmond Pitt

1. read up on TCP/IP protocol to learn just how it handles close, and
if both ends are even supposed to be immediately notified of the other
end's close.

They aren't. The only way you can find out about a TCP connection close
or abort is to try to read or write. This is well known and readily
available information that's posted here about a thousand times a year.
2. use Wireshark or other protocol sniffer
3. watch packets in your case.

A totally pointless waste of time & money given the above.
4. read the fine print of what isConnected is supposed to tell you.

IOW whether or not you ever connected your Socket in your own JVM.
Nothing about the connection. Nothing else either.
5. If it turns out TCP/IP is not designed to tell you of disconnect
sufficiently quickly,

It isn't. It wasn't.
The protocol also had heartbeat "are you still alive" packets. It may
be that Sun added that support to basic TCP/IP, not EOF.

Socket.setKeepAlive(true). This is an optional part of TCP/IP. Nothing
to do with Sun. Or EOF.
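To illustrate the point about setKeepAlive (class name is mine): SO_KEEPALIVE is a standard TCP option, and enabling it from Java is one line. Note the probe interval (often two hours before the first probe) is set by the OS, not by Java.

```java
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket();
        // SO_KEEPALIVE is part of TCP itself, nothing Sun added and
        // nothing to do with EOF: the OS periodically probes an idle
        // connection and aborts it if the peer stops answering.
        s.setKeepAlive(true);
        System.out.println("keepAlive: " + s.getKeepAlive());
        s.close();
    }
}
```

This prints `keepAlive: true`; the option can be set before the socket is connected.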
 

pek

I think this is a poor design. The server should:
- wait for incoming connections from clients
- when a connection is received set up context (if needed)
for the client
- process requests from the client, generating at least one response
per request
- when the connection closes discard any client-specific
context.
- wait for the next connection.

The client should:
- open a connection to the server
- send requests to the server
- read server response(s) to each request
- close the connection when it's done.

IMO the server should NEVER intentionally close a connection: as you've
seen this can cause problems. The method I outlined is much cleaner:
- the client knows when it has finished talking to the server and so
can close the connection.
- in this scheme any connection closures seen by the client are
ALWAYS an error.
- the logic of this scheme means that the server will be waiting for
a new request from the client when the connection is closed and so
will be ready to handle it or close the connection without needing
to disentangle incomplete processing.
- designing the protocol so that every client request generates at
least one server response makes error checking easy (the client
always gets a response or sees the connection close due to an error).
Since a simple ACK response is short, the overheads are minimal.
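The whole scheme fits in a short runnable sketch (the class name and the one-line "PING"/"PONG" exchange are invented for illustration): the server answers every request with exactly one response and never closes first; the client closes the connection when it is done, which the server sees as readLine() returning null.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class RequestResponseDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);  // ephemeral port
        Thread t = new Thread(() -> serveOne(server));
        t.start();

        // Client: open, send a request, read its response, close.
        try (Socket sock = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            out.println("PING");
            String reply = in.readLine();
            if (reply == null) {
                // in this scheme a close seen here is ALWAYS an error
                System.err.println("unexpected close");
            } else {
                System.out.println("reply: " + reply);
            }
        } // the client closes the connection when it's done

        t.join();
        server.close();
    }

    // Server: one response per request; readLine() == null means the
    // client closed, so discard any client-specific context and return.
    static void serveOne(ServerSocket server) {
        try (Socket sock = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()));
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true)) {
            String req;
            while ((req = in.readLine()) != null) {
                out.println(req.equals("PING") ? "PONG" : "OK");
            }
        } catch (IOException ignored) {}
    }
}
```

Running it prints `reply: PONG`; a real server would wrap serveOne in an accept loop to handle the next connection.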

If you design the protocol so that the messages contain text then use of
a packet sniffer is a lot easier. If you add debugging code that prints
all messages sent and received then you don't need a packet sniffer and
debugging process-to-process connections within a single computer is
simple. Specifying the protocol in terms of formatted records and using
record buffers to handle the messages rather than raw
streams also simplifies the protocol logic and helps a lot with
debugging. I normally design messages as length-delimited records
containing comma separated fields, but ymmv.
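Length-delimited records with comma-separated fields can be sketched like this (the class name and the 4-byte length prefix are my choices, not from the thread): the prefix lets the reader pull exactly one record off the stream no matter how TCP fragments it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class RecordCodec {
    static void writeRecord(DataOutputStream out, String... fields)
            throws IOException {
        byte[] body = String.join(",", fields)
                            .getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length);  // 4-byte length prefix
        out.write(body);
        out.flush();
    }

    static String[] readRecord(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] body = new byte[len];
        in.readFully(body);  // blocks until the whole record arrives
        // -1 keeps trailing empty fields
        return new String(body, StandardCharsets.UTF_8).split(",", -1);
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through an in-memory stream for demonstration;
        // on the wire these would be socket streams instead.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeRecord(new DataOutputStream(buf), "STOP", "client-42");
        String[] fields = readRecord(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(fields[0] + " " + fields[1]);
    }
}
```

This prints `STOP client-42`; reading and writing whole record buffers rather than raw streams keeps the protocol logic out of the I/O code, which is the point made above.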

Lastly, the read/write loop on your client is probably a bad idea.
Unless your client is quite unusual this just adds complexity without
improving throughput. It also chews up CPU with its polling loop. To me
it smacks of inappropriate optimization: queues and scan loops should
only be introduced if monitoring code in the client shows that simple
"write - wait for response - read" logic is positively identified as a
bottleneck.


Why? A server should be written to service multiple clients which can
connect and disconnect while the server continues to run. If you want to
stop it, use a dead simple client that connects, sends STOP, waits for
OK and then disconnects. Besides, such a client is often useful for
seeing what the server is doing, getting statistics, etc.


Disagree! How many client/server designs has your adviser implemented
successfully?


Sockets work fine for both C and Java if your message exchange protocol
is a clean design.

You probably didn't understand (at all) what my problem is. I know
this isn't the best way. I am not using this. This is an example. I
purposely want to close the connection from the server side to see if
the client can catch the closure. The message that I send to the
server (as I mentioned before) is FAKE. That means I'm only using it
for the example and not in the final version.
 

Roedy Green

I am currently developing on localhost. Windows (as Wireshark makes
clear) doesn't support monitoring localhost, since there is no
physical interface (modem etc.).

Hmm. I guess Wireshark ties in only at the LAN driver level. It might
be fairly easy to round up some old beater of a machine to run on a
LAN so you can monitor traffic. Perhaps one of the other sniffers will
intercept local traffic. See http://mindprod.com/jgloss/sniffer.html

I remember years ago working with a Norton virus monitor. It set up a
proxy mail server on your machine that simply relayed traffic through
to the real mail server. A protocol sniffer might work like that,
forcing you to talk to a dummy address which then talks to the real
address.
 
