Perhaps the statement in the subject is a bit strong (never say never), but
anyway: I'm having a bit of a problem with a co-worker who created a custom
TCP socket-based client-server application (the protocol is proprietary).
This application processes over 15 million transactions (short-lived,
averaging less than 1 second each) per day, concentrated over a few hours and
with a fairly high level of concurrency. It is mission-critical to our
company. The client side of this application creates a new socket
connection to the server for *every* transaction, and closes it upon
completion. Correspondingly, the server accepts each new connection and
then discards it at the end of each transaction.
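Schematically, the client side does something like this (the class and
method names here are hypothetical; the real protocol is proprietary):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    // Hypothetical reconstruction of the current per-transaction pattern:
    // a brand-new TCP connection (and its three-way handshake) for every
    // transaction, followed by an immediate close.
    public class PerTransactionClient {
        public byte[] execute(String host, int port, byte[] request)
                throws IOException {
            try (Socket socket = new Socket(host, port)) { // new connection every time
                OutputStream out = socket.getOutputStream();
                out.write(request);
                out.flush();
                return socket.getInputStream().readAllBytes(); // read reply until server closes
            } // socket, streams, and buffers all become garbage here
        }
    }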
My co-worker's rationale is that creating a new connection represents just a
fraction of the total time it takes to process each transaction. He also
states that this "saves memory because the application does not have to keep
these pooled connections in memory". I completely, vehemently disagree.
First, although the connection time may be small relative to the time it
takes to process a transaction, the overall effect is amplified by the sheer
number of transactions: even a few milliseconds of TCP handshake per
transaction adds up to hours of cumulative overhead across 15 million
transactions a day. Secondly, the amount of garbage being created - on both
the client and the server - by constantly creating and discarding connections
has to be far worse than the memory needed to pool a hundred or so
connections. Not to mention system resources - the sockets themselves, which
linger in TIME_WAIT after every close - that must be reclaimed by the system.
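For comparison, a pooled design takes very little code. Here is a minimal
sketch, assuming the protocol allows more than one transaction per connection
(all names are hypothetical):

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal fixed-size connection pool: the TCP handshake is paid once
    // per pooled socket, not once per transaction.
    public class SocketPool {
        private final BlockingQueue<Socket> idle;

        public SocketPool(String host, int port, int size) throws IOException {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                idle.add(new Socket(host, port)); // connect up front, reuse thereafter
            }
        }

        public Socket borrow() throws InterruptedException {
            return idle.take(); // blocks until a connection is free
        }

        public void release(Socket socket) {
            idle.offer(socket); // return the live connection for reuse
        }
    }

Each transaction then becomes borrow() / write / read / release(), ideally in
a try/finally, so the same hundred or so sockets serve all 15 million
transactions.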
I'm looking for any comments that either debunk my thinking or that I can
use as ammunition to debunk my co-worker. It's really rather important, so I
would greatly appreciate any help here.
Thanks,
Mike