That does not answer the OP's question. Are you sure someone here would
know how to correct that reply? It's much more likely that such a
correction would happen in comp.unix.programmer.
this was meant more as an example than as a complete or comprehensive
answer to the OP, which would have required actually writing something
more than a quoted expression.
well, the general answer to "ok, the server stopped sending stuff, how
do I deal with blocking?..." is "set the socket to non-blocking mode,
and just use non-blocking IO instead" (normally non-blocking IO is what
is more useful anyways, as blocking on socket IO is rarely a useful
behavior...).
another possible answer is "select()", which can report whether there is
any data waiting on the socket, but IMO non-blocking IO is generally
more useful anyways (otherwise, how does one know the complete message
has arrived before trying to read it?...).
actually, more generally, it makes sense to sit around waiting for
messages to arrive on a socket, and then to re-dispatch them in an
event-driven manner once they are complete and parsed (say, using
callbacks or similar).
Some answers might lead the OP astray. Almost by definition, the OP
does not know what the right answer is, and a wrong one that goes
uncorrected might lead to all sorts of unnecessary convolutions in the
code.
whether or not the answer is ideal generally is not the issue; it is
more a matter of people declaring most everything OT here (and it is
not like people are "smarter" elsewhere, as most places on usenet are
flaming and trolls anyways...).
the usual answer to most things is "hell, it works for now, good
enough". and if, sometime later, it doesn't work, well, either one can
fix it then, or be like "hell, not my problem". after all, most code is
released with a no-warranty clause, and it is people's own fault if
they use code in a way which compromises something without due
diligence, as that clause is there for a reason...
say, if code is given out with a no-warranty clause, and someone else
uses it in a place where (for example) it risks compromising people's
health or safety, it is the fault of the person who used the code in
such a context, and not the responsibility of the original developer.
granted, a person developing code for such a situation is likely
obligated to put in their best effort to ensure correctness and
reliability, but most people are not developing for such systems.
like, the goal is usually something like "throw something together, and
make it work", and the faster the work gets done, the happier most
people are (provided it works acceptably, as it looks bad to the bosses
or customers or similar if the code endlessly blows up in people's
faces, and may make the developer look bad...).
so, one saves time, and one saves money, thus hopefully avoiding the
problem of projects running over-schedule and over-budget.
granted, yes, a lot may come down to priority weighting though: if it
is more important that the code work reliably, one may exercise more
diligence, but for "run of the mill" stuff, it is more a "who cares"
scenario, except if the bugs are bad enough to get really annoying or
impede usability.
so, one can trade off between saving time and reliability/..., depending
on which is more of a priority.
also, I guess a certain level of effort needs to go into writing
"acceptable" code, since if the code is sufficiently poor as to
compromise maintainability or one's ability to get the project done
effectively (within budget and schedule), then this may also risk
eating up time and money as well (again, looking bad to bosses, to
customers, or to investors, or similar...).
or such...