But please note that you do not invent your own JSON parser
either - you use something already done. In Java there are json.org,
Gson, etc.
in Java, yes, one may use the libraries.
on the C end, one may choose to throw one together, or use a JavaScript
VM if one is available, ...
it all depends.
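for example, with Gson (assuming the library is on the classpath; the JSON string here is made up for illustration), decoding into a generic map is about a one-liner:

```java
import com.google.gson.Gson;
import java.util.Map;

public class JsonDemo {
    public static void main(String[] args) {
        String json = "{\"name\":\"player1\",\"score\":42}";
        // decode into a generic Map (no custom classes needed)
        Map<?, ?> obj = new Gson().fromJson(json, Map.class);
        System.out.println(obj.get("name"));  // player1
        System.out.println(obj.get("score")); // 42.0 (Gson maps JSON numbers to Double)
        // and back to a JSON string
        System.out.println(new Gson().toJson(obj));
    }
}
```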
I cannot follow your way of thinking. With multiple interactions
in parallel there is no strict correlation between latency and
throughput.
there is a rough correlation though.
for the part about TCP, this was related to how TCP worked (in its
traditional form), namely the existence of a 64kB maximum window size.
apparently, this is out of date, as there is a feature known as TCP
window scaling (RFC 1323), enabled by default on Windows Vista and
newer, which allows a larger TCP window.
http://tools.ietf.org/html/rfc1323
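the reason the window size matters: a single TCP stream can never have more than one window's worth of data in flight, so its throughput is capped at (window / round-trip time) no matter how fast the link is. a quick sketch of the arithmetic (the 100ms ping figure here is just an assumed example):

```java
public class TcpWindowLimit {
    // a TCP stream can have at most one window's worth of data "in flight",
    // so its throughput is capped at window / round-trip-time
    static long maxBytesPerSec(long windowBytes, long rttMillis) {
        return windowBytes * 1000 / rttMillis;
    }

    public static void main(String[] args) {
        // classic 64kB window over a 100ms ping: ~640kB/s ceiling
        System.out.println(maxBytesPerSec(64 * 1024, 100));   // 655360
        // with RFC 1323 window scaling, e.g. a 1MB window: ~10MB/s on the same path
        System.out.println(maxBytesPerSec(1024 * 1024, 100)); // 10485760
    }
}
```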
for the part about moderating kB/s, this has a lot more to do with a
user's internet connection.
say, hypothetically, a user has dial-up.
now, what if the data being sent does not fit over dial-up (one is
trying to send 10kB/s, but a 56k modem can only handle ~6.5kB/s or so)?
well, then, the connection will backlog (the connection will send at the
rate it can send, and anything else will have to wait).
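the arithmetic behind those figures (the 10kB/s send rate and ~6.5kB/s link rate are the numbers from above):

```java
public class DialupMath {
    // bits/s to bytes/s: a 56kbit modem moves 7000 bytes/s raw,
    // and protocol overhead eats that down to roughly 6.5kB/s usable
    static int bytesPerSec(int bitsPerSec) {
        return bitsPerSec / 8;
    }

    // how fast the un-sendable excess piles up, per second
    static int backlogGrowthPerSec(int sendBytesPerSec, int linkBytesPerSec) {
        return Math.max(0, sendBytesPerSec - linkBytesPerSec);
    }

    public static void main(String[] args) {
        System.out.println(bytesPerSec(56000));               // 7000 (raw)
        System.out.println(backlogGrowthPerSec(10000, 6500)); // 3500 bytes/s of backlog
    }
}
```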
similar limits may exist over the internet, but in a less direct form:
consider, the internet is prone to occasionally drop a packet here or there.
so, a stream is going over the internet, and a (single) packet drops;
what happens:
well, all the data up to the dropped packet reaches the other end, the
other end may send a packet back indicating the point received;
the sender will start resending data from that point;
the receiver will start receiving data again.
this results in essentially a ping-time delay in which no data can be sent.
if the sender is sending messages at a fixed rate, what happens?
well then, the messages will pile up, waiting to be sent;
after transmission resumes, several updates worth of data need to be sent;
if all of the updates fit within the bandwidth of the connection
(end-to-end), then there may be no obvious stall (updates can all be
sent at full speed);
if enough data backlogs so as to exceed the bandwidth available,
then it has to wait to be sent, and if the sender just keeps naively
sending updates, then essentially one gets a stall (and the data being
received by the receiver will start becoming progressively more
out-of-date).
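this backlog behavior can be sketched with a toy simulation (the tick counts here are arbitrary; one update is enqueued per tick, and the link drains one per tick when it is not stalled):

```java
import java.util.ArrayDeque;

public class FixedRateBacklog {
    // returns how stale (in ticks) the update delivered on the final tick is,
    // after a mid-stream stall of stallTicks ticks
    static int lastDeliveredAge(int totalTicks, int stallStart, int stallTicks) {
        ArrayDeque<Integer> queue = new ArrayDeque<>(); // timestamps of queued updates
        int lastDelivered = -1;
        for (int t = 0; t < totalTicks; t++) {
            queue.addLast(t); // the sender naively enqueues one update per tick
            boolean stalled = (t >= stallStart && t < stallStart + stallTicks);
            if (!stalled) {
                lastDelivered = queue.removeFirst(); // the link drains one per tick
            }
        }
        return (totalTicks - 1) - lastDelivered;
    }

    public static void main(String[] args) {
        // send rate equals link rate, so a 5-tick stall never drains:
        // everything the receiver sees is 5 ticks stale from then on
        System.out.println(lastDeliveredAge(100, 50, 5)); // 5
        System.out.println(lastDeliveredAge(100, 50, 0)); // 0 (no stall, no lag)
    }
}
```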
these properties can be observed with things like internet radio and
video streaming (if the connection is fast enough, playback happens in
real-time without obvious stalls or re-buffering, even though the rate
at which the data comes over the internet is often very irregular).
similar also applies to internet telephony as well.
if one tries to operate within a fixed-bandwidth window, similar to
internet radio, most minor stalls can be glossed over (this limit being
a bit lower than the end-to-end transfer rate of the connection). going
lower is better, since the lower one goes, the more room for error
there is.
the main issue is, namely, that the data being sent has to be able to
fit within these bandwidth limits (hence, why data compression is highly
desirable in this case).
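as a rough illustration of what compression buys here, java.util.zip's Deflater applied to a made-up (and conveniently repetitive) update payload:

```java
import java.util.zip.Deflater;

public class UpdateCompression {
    // deflate the payload and return the compressed size in bytes
    static int compressedSize(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 64];
        int n = d.deflate(buf);
        d.end();
        return n;
    }

    public static void main(String[] args) {
        // hypothetical world-update payload; repeated structure compresses well
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append("entity ").append(i % 10).append(" x=1 y=2;");
        }
        byte[] update = sb.toString().getBytes();
        // compressed form stands a much better chance of fitting the bandwidth budget
        System.out.println(update.length + " -> " + compressedSize(update));
    }
}
```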
an online game basically amounts to a bidirectional stream between the
client and server, with the server sending out a stream of updates
(typically, everything going on in the immediate view of the client),
and the client sending a stream of their attempted actions (in
response to what they see on screen).
if everything is working well, then the delays and irregularities of
their internet connection are mostly hidden, and to them it all seems
like they are interacting with the world in real-time (usually there is
a lot of trickery here as well, mostly based around linear extrapolation
and so on).
side note: each end may transmit time-stamps as part of their updates,
and the other end may transmit the last-received timestamps, partly so
that the timing delays can be estimated and partially compensated for.
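a minimal sketch of this timestamp trick (it assumes the echoed stamp comes back promptly; real links can be asymmetric, so halving the round trip is only an approximation):

```java
public class PingEstimate {
    // each end stamps its updates; the other end echoes the last stamp it saw.
    // when the echo comes back, (now - echoedStamp) approximates the round trip.
    static long rttMillis(long echoedStamp, long nowMillis) {
        return nowMillis - echoedStamp;
    }

    // one-way delay is commonly approximated as half the round trip
    static long oneWayMillis(long rttMillis) {
        return rttMillis / 2;
    }

    public static void main(String[] args) {
        long rtt = rttMillis(1000, 1090); // stamp sent at t=1000ms, echo seen at t=1090ms
        System.out.println(rtt);               // 90
        System.out.println(oneWayMillis(rtt)); // 45
    }
}
```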
another similar concept, for players playing games, is that of
"leading", where a person will take aim at a moving enemy, estimate the
speed of the projectile and where the enemy will be at the time, and aim
and fire at that location instead (then the enemy will essentially "run
into" the traveling projectile). note that if a player always aims at
where the enemy is "right now", very often they will miss (as by the
time the projectile reaches the destination, the enemy has already moved
out of the way).
so, the game does similar in an attempt to hide the "travel time" that
is the internet.
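the "leading" math can be written down directly: solve for the time t at which a projectile of speed s, fired from the origin, meets a target at position p moving with velocity v, i.e. |p + v*t| = s*t (the numbers in main are made up for illustration):

```java
public class LeadTarget {
    // solve |p + v*t| = s*t for t; squaring both sides gives a quadratic in t:
    // (v.v - s^2)*t^2 + 2*(p.v)*t + p.p = 0
    static double interceptTime(double px, double py,
                                double vx, double vy, double s) {
        double a = vx * vx + vy * vy - s * s;
        double b = 2 * (px * vx + py * vy);
        double c = px * px + py * py;
        double disc = b * b - 4 * a * c;
        if (disc < 0) return -1; // the target can outrun the projectile
        double t = (-b - Math.sqrt(disc)) / (2 * a);
        if (t < 0) t = (-b + Math.sqrt(disc)) / (2 * a);
        return t;
    }

    public static void main(String[] args) {
        // target 100 units ahead, strafing sideways at 10 units/s,
        // projectile speed 50 units/s
        double t = interceptTime(100, 0, 0, 10, 50);
        // aim where the target WILL be, not where it is now
        System.out.println("lead time: " + t);
        System.out.println("aim at: (" + (100 + 0 * t) + ", " + (0 + 10 * t) + ")");
    }
}
```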
or such...