I was a bit unhappy to read this, because what you describe here is
just what I tried yesterday in my test game with two balls; if I had
pointed that out earlier, you wouldn't have had to spell it all out again.
I'll confess that I may have missed some intervening comments.
(Hmmm, I see you've dropped the snakes -- I must have missed that; most
of the following is still snake-oriented.)
Also, please take into account that my code samples are
pseudo-code, not real Python.
Now, that's *not* how I intend to do it; it's what I was thinking
about while trying the turn-based approach and describing it here.
Christopher Subich asked how it was done initially, so I dragged my
hanging system out of its shameful corner again and added something I'd
been thinking about to improve it, even though I already realize, from
your previous comprehensive posts, that it's not the way to go.
I think (today at least, ask me tomorrow and the answer might
change -- after all, my missive yesterday morning ran so long I was late
getting to work! That doesn't improve one's mood <G>).... Uh... I think
your initial effort probably should be just to get the separation of
functionality working regardless of speed.
	On that thought, I have a few non-essential musings: Is your
"world" open or bounded (i.e., can you go off one side and come in from
the opposite, or do you bounce)? Do your "shots" have a lifetime, or
(related to boundedness) do they end on impact with anything? And for
initial simplification, I'd consider a "hit" to be a shot intersecting
only the "head" -- let shots and bodies cross each other (this lets the
server worry only about head position/vector and shot position/vector --
no overhead for tracking all the tail segments for impact).
As stated above, that's how I'm trying it right now. Still, if doing
it turn-based, I would have to create a new thread every time.
If doing turn-based, you don't use threads -- nothing happens
/until/ an input is received.
I have some other questions though - please see below.
DLB> Also, recommend you use threading, not thread as the module.
Surely will -- but why?
It's a higher level interface, with fancier support. "thread"
just has the start operation, and a single type of lock. "threading" has
the ability to set a thread to daemon status (it goes away when all
non-daemon threads end, no need to explicitly tell it to stop), "join"
(wait for thread to finish), basic lock, reentrant lock, condition
variables, semaphores, and events.
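	For example, a minimal sketch of those "threading" conveniences
(the worker function and its lock here are purely illustrative):

    import threading
    import time

    def worker(lock):
        # acquire() blocks until the lock is free -- no polling needed
        lock.acquire()
        try:
            time.sleep(0.1)     # pretend to touch shared state
        finally:
            lock.release()

    lock = threading.Lock()
    t = threading.Thread(target=worker, args=(lock,))
    t.daemon = True     # goes away when all non-daemon threads end
    t.start()
    t.join()            # wait for the thread to finish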
DLB> SERVER
DLB> An input thread would look something like:
DLB> while True:
DLB> data = socket.read() #assumes both clients write to same socket
DLB> #otherwise use a select(client_socket_list)
DLB> #followed by socket.read for the flagged socket(s)
DLB> parse data
DLB> lock global
DLB> save in global state variables
DLB> unlock global
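	In real Python, that quoted input thread might be sketched as
follows; parse() and the world object are placeholders for your own
code, and a single UDP socket is assumed:

    import select

    def input_thread(server_sock, world, world_lock):
        while True:
            # block until at least one socket has data
            readable, _, _ = select.select([server_sock], [], [])
            for s in readable:
                data, addr = s.recvfrom(128)
                update = parse(data)        # your own parser
                world_lock.acquire()        # lock global
                world.apply(update)         # save in global state variables
                world_lock.release()        # unlock global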
Now, a few questions. Do I need to time.sleep(0.xxx) in any of these
while True: loops, so as not to overwhelm the CPU? I can measure the
time at the beginning and end of each iteration to make things happen a
fixed number of times per second, but should I?
	The input thread, if used (since I can visualize variations of
the server logic that don't need threads), should block on the
socket read. If anything, it is the /clients/ that should have sleep()s
to avoid overloading the server.
But (I did warn you my views can change from day to day) if the
server does not have to perform periodic computations: ie, it computes
effects based on the last received input, and sends those results to the
clients, it can then wait until a client reacts to those results. In
/this/ case, the entire server is just
    while True:
        data = socket.read()
        compute new world state (include time of next "impact")
        for c in client_list:
            send world state to c, including "now" and "impact" time
        #if a client hasn't responded by "impact" time
        #the impact takes place and the client(s) should reflect
        #the damage
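	As runnable Python, that might be sketched like so;
compute_new_state(), encode_state(), and the client address list are
placeholders for your own logic:

    import time

    def turn_based_server(server_sock, world, clients):
        while True:
            # blocks: nothing happens until some client sends input
            data, addr = server_sock.recvfrom(128)
            world, impact_time = compute_new_state(world, data)
            packet = encode_state(world, time.time(), impact_time)
            for c in clients:
                server_sock.sendto(packet, c)   # c is an (ip, port) pair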
	The periodic computation mode gets closer to a true simulation
engine. The server will need two threads: an input thread that blocks
until it gets asynchronous client data, and a computation thread
synchronized to the simulation clock (say, 4 updates per second).
And another question: do I get it right that instead of "lock global"
you mean something like:

    while global.locked:
        pass    # poll until the flag clears, then set it
"lock global" means acquiring some predefined lock object that
is used to prevent overlapping access to the global data.
NO!, the lock itself does the blocking -- you don't poll.
Still using pseudo-code, and assuming a single socket is used by
all clients (otherwise you need a select and indexed read into the
correct socket).
#####
import threading
import time

World_Lock = threading.Lock()
..... #initialize world state
def inputThread():
    while True:
        data = socket.read()
        #assumes data identifies the action type,
        #the client, and the object ID
        World_Lock.acquire()    #block if needed
        #update world state with data
        World_Lock.release()
def updateThread():
    Tnext = time.time()     #or some low-level time value
                            # maybe time.clock()
    while True:
        Tnext = Tnext + 0.25    #sec -- four updates/second
        World_Lock.acquire()    #block if input is in action
        #compute state for this time increment
        # note: this time I'm doing everything inside
        # the lock!
        for c in client_list:
            send world state to client "c"
        # alternate is to do a deep copy inside the lock
        # and then compute/send the update outside
        # the lock (lets input run more often)
        World_Lock.release()
        time.sleep(max(0.0, Tnext - time.time()))
        # this adjusts for the processing time inside the lock
        # (the max() guards against a negative sleep if an
        # update ever runs longer than the 0.25s quantum)
UT = threading.Thread(target=updateThread)
IT = threading.Thread(target=inputThread)
UT.start()
IT.start()
# wait for game end? Do not use a polling loop!
# (join the threads, or wait on a threading.Event)
And I also wonder how I make sure that two threads don't pass this
"while" test simultaneously and both try locking the global. There is
some probability of that, no?
Read the manual for the threading module and the various locks,
events, etc.
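	The short version: Lock.acquire() is atomic -- if two threads
call it at the same instant, exactly one gets the lock and the other
blocks inside acquire() until release(). There is no window to slip
through, unlike a hand-rolled flag. A tiny (purely illustrative)
demonstration:

    import threading

    counter = [0]
    lock = threading.Lock()

    def bump():
        for _ in range(100000):
            lock.acquire()      # only one thread at a time gets past here
            counter[0] += 1
            lock.release()

    threads = [threading.Thread(target=bump) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # counter[0] is reliably 200000; with a polled flag instead of a
    # real lock, the updates could interleave and come up short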
In yesterday's experiment, I have a separate thread for each of the two
clients, and what I do there is:
def thr_send_status(player_sock):
    while 1:
        t, sub_addr = player_sock.recvfrom(128)  #player ready to accept
        player_sock.sendto(encode_status(g.get_status()), sub_addr)
Problem here is that you've again tied things together -- you
are only sending an update to a client when you receive data from the
same client... If a player walks away from the keyboard, he'll never
update.
Computing the world update AND SENDING it to ALL clients has to
be separate from receiving input from ANY client.
I'm reading one byte from the client every time before sending a new
update to him. OK, OK, I know that's not good.
	Too much overhead... You should be receiving packets that say
something like "client 2 object 1 is at (x, y) moving (dx, dy)" --
packed as a short string of bytes, not that long text description:
(2, 1, x, y, dx, dy). Or: "client 2 object 2 (a tail 'bullet') created
at (x, y) moving (dx, dy)".
Assuming dx, dy are, say, pixels/second, and the server is
throttled to 4 updates/second, each update will compute positions as
x, y = x + dx/4, y + dy/4
	See if any objects intersect (assuming a circular "object"):

    if sqrt((x1 - x2)**2 + (y1 - y2)**2) < radius:
        set world state to indicate a collision of objects
        (on next update, remove flagged objects?)
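	Put together, one server update step might look like the
following sketch (objects as dicts, and the "r" radius field, are just
for illustration):

    import math

    UPDATES_PER_SEC = 4

    def step(objects):
        # advance every object by one quantum
        for o in objects:
            o["x"] += o["dx"] / UPDATES_PER_SEC
            o["y"] += o["dy"] / UPDATES_PER_SEC
        # flag any pair of objects that now overlap
        for i in range(len(objects)):
            for j in range(i + 1, len(objects)):
                a, b = objects[i], objects[j]
                if math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < a["r"] + b["r"]:
                    a["hit"] = b["hit"] = True   # remove on the next update?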
Now, I like your idea much more, where you say we first measure the
processing speed of each client and send data to every client only as
often as he can process it:
	To simplify, you'll notice that "today's" installment has the
server throttled to just four updates/second... Even WfW3.11 on a
40MHz i386 should handle that <G>. Part of my idea is to simplify to get
the logic correct, then add speed-ups.
Yes, this is great, but what if the speed changes? Should I re-test it
every 10 seconds or so? I mean, the server sending too much data to a
client is OK for the server, but very bad for the client, since it
starts hanging and loses synchronization -- unless I use a timestamp to
throw away stale updates.
	Time-stamping all transmissions is probably going to be needed
too -- it would let you detect lag: when the client notices that
received packets are stale (client_time - packet_time > allowable_lag),
it sends a special packet to the server asking it to throttle down...
But I don't have any easy way to throttle back up if the situation
improves (actually, the server could also check incoming packets for
lag and automatically apply the throttle to that client).
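	On the client, that check might be as small as this (the
constant and the "throttle" packet format are whatever you define):

    import time

    ALLOWABLE_LAG = 0.5     # seconds; tune to taste

    def check_lag(packet_time, sock, server_addr):
        if time.time() - packet_time > ALLOWABLE_LAG:
            # ask the server to throttle down for this client
            sock.sendto(b"THROTTLE", server_addr)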
So they simply exchange data all the time in separate threads, even
through separate sockets (so as not to have to dispatch the
confirmations from different players into the corresponding threads via
global flags), which makes the "ready" thing OK from a speed point of
view. But I prefer your way, quoted above, so my question is still
valid: do I re-measure speed?
You're adding your packet confirmations on top of the TCP
overhead (TCP doesn't consider a packet to have been received until the
low-level protocol has acknowledged it). That may be why you received
one suggestion to use UDP sockets -- UDP is a "send and forget". The
send just sends and never expects a response -- next update interval,
send a new packet. The assumption is that it is not critical if a packet
is lost in transit, as the next packet should carry all the data anyway.
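	A UDP socket in Python takes only a couple of lines (no
connection setup at all):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9999))       # server side: listen on port 9999

    # send and forget -- no acknowledgement, no retransmission
    sock.sendto(b"world state ...", ("127.0.0.1", 9999))
    data, addr = sock.recvfrom(128)     # receive one datagram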
I'm also almost sure it's wrong to have separate sockets and threads
for each player; you say I should select()/dispatch instead, but I'm
afraid that some *thing* is wrong with select() on Windows.
The Windows problem with select is that it ONLY works for
sockets. UNIX/Linux select treats sockets and file I/O identically, so
you can use ONE thread to select on both the inbound socket AND the
keyboard. Windows needs a separate thread to handle the keyboard input.
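	So on Windows the usual pattern is select() over the sockets
only, with keyboard input in its own thread. A sketch, assuming a list
of connected client sockets:

    import select

    def socket_loop(client_sockets):
        while True:
            # blocks until at least one client socket is readable;
            # on Windows, only sockets may appear in these lists
            readable, _, _ = select.select(client_sockets, [], [])
            for s in readable:
                data = s.recv(128)
                # ...dispatch based on which client "s" belongs to...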
Now, you see, sending user motions to the server appears to be faster
(or at least not slower) than getting new status, so as a result I have
the following picture: I place my pointer somewhere, and the ball runs
to it, and then runs a bit beyond it, because by the moment the server
knows the ball is already at my pointer, the client still thinks it isn't.
You have two programs both trying to compute the updates, I
suspect. If you notice, in all my examples, client "input" goes directly
to the server. The client only renders the screen based on what the
server sends back. That way, both clients will be rendering based upon
identical information (especially with the above 4-updates/second sample).
Several seconds later, this "a bit beyond" becomes "a lot beyond",
and finally both balls run helplessly off the game field; it seems
like the I/O thread is much more productive than the send_status one?
No idea about the "run away" -- what I'd have expected is
"nervousness" -- the ball jittering around a non-moving mouse pointer
(whoops, I went past, need to back up... whoops too much, go the other
way).
This all was when I had *no* time.sleeps in the while 1: loops.
So I added time.sleep(0.005) to the server's send_status loop and to
the client's send_user_action loop. Things became better, but the balls
are still spinning around the mouse pointer instead of running under
it. Every time the client tries moving towards the mouse, its ball is
already at another place on the server, and it runs in the wrong
direction.
Well, if there are no blocking I/O calls anywhere, a thread will
suck up all the CPU the OS will give it, then move to another thread.
(Or, in Python, something like 100 byte-codes, I think).
As I said above, it sounds like you have multiple programs EACH
computing the position of the ball. My rough scheme puts all knowledge
of object position and motion in one program (the server), and the
clients only render the display based on what comes from the server.
The client only sends /changes in motion/ (or, for this apparent
magnetic mouse pointer, the mouse position) to the server. The server
computes, from the previous object position and motion, where the object
is at the new quantum (time step), and tells the clients that this is
where to draw the object.
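	In other words, the client's main loop shrinks to something like
this (mouse_position(), encode_mouse(), decode_state(), and render()
stand in for your UI toolkit and wire format):

    def client_loop(sock, server_addr):
        last_mouse = None
        while True:
            # send input only when it changes -- no local physics
            mouse = mouse_position()
            if mouse != last_mouse:
                sock.sendto(encode_mouse(mouse), server_addr)
                last_mouse = mouse
            # draw whatever the server says the world looks like
            state, _ = sock.recvfrom(1024)
            render(decode_state(state))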
I need to get to work, running late again...
--