socket function that loops AND returns something


Brad Tilley

I have a function that starts a socket server looping continuously
listening for connections. It works. Clients can connect to it and send
data.

The problem is this: I want to get the data that the clients send out of
the loop but at the same time keep the loop going so it can continue
listening for connections. If I return, I exit the function and the loop
stops. How might I handle this?

import socket

def listen(ip_param, port_param):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((ip_param, port_param))   # bind to the requested address
    while 1:
        print "\nWaiting for new connection...\n"
        s.listen(1)
        conn, addr = s.accept()
        print "Client", addr[0], "connected and was directed to port", addr[1]
        data = conn.recv(1024)
        # I want to get 'data' out of the loop, but keep the loop going
        print "Client sent this message:", data
        conn.close()
 

Steve Holden

Brad said:
I have a function that starts a socket server looping continuously
listening for connections. It works. Clients can connect to it and send
data.

The problem is this: I want to get the data that the clients send out of
the loop but at the same time keep the loop going so it can continue
listening for connections. If I return, I exit the function and the loop
stops. How might I handle this?

[code snipped]

The classic solution to this problem is to create a new thread or fork a
new process to handle each connection, and have the original server
immediately loop back to await a new connection.

This is what the threading and forking versions of the SocketServer
classes do. Take a look at the ForkingMixIn and ThreadingMixIn classes
of the SocketServer.py library to see how to turn your synchronous
design into something asynchronous.
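
For readers following along, a minimal sketch of that approach (Python 2
era; the class names here are illustrative, and the listen() signature is
reused from Brad's post):

import SocketServer

class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # each connection is handled in its own thread
        data = self.request.recv(1024)
        print "Client sent this message:", data

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

def listen(ip_param, port_param):
    server = ThreadedTCPServer((ip_param, port_param), RequestHandler)
    server.serve_forever()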

regards
Steve
 

Donn Cave

Quoth Brad Tilley <[email protected]>:
| I have a function that starts a socket server looping continuously
| listening for connections. It works. Clients can connect to it and send
| data.
|
| The problem is this: I want to get the data that the clients send out of
| the loop but at the same time keep the loop going so it can continue
| listening for connections. If I return, I exit the function and the loop
| stops. How might I handle this?
|
| [code snipped]

Well, some people would address this problem by spawning a thread
(and then they'd have two problems! as the saying goes.)

It depends on what you want. Your main options:

- store whatever state you need to resume, and come back
  and accept a new connection when you're done with the data -

      class SocketConnection:
          def __init__(self, ip_param, port_param):
              ...
          def recv(self):
              ... accept
              ... recv
              ... close
              return data

- turn it upside down, call the processing function

      def listen(ip_param, port_param, data_function):
          ...
          data = conn.recv(1024)
          data = data_function(data)
          if data:
              conn.send(data)
          conn.close()
          ...

- fork a new process (UNIX oriented.)

      ...
      pid = os.fork()
      if pid:
          wait_for_these_pids_eventually.append(pid)
      else:
          try:
              conn, addr = s.accept()
              s.close()
              conn_function(conn)
          finally:
              os._exit(0)

- spawn a new thread (similar idea)

Processes and threads will allow your service to remain responsive
while performing tasks that take a long time, because those tasks
run concurrently. They do bring their own issues, though, and
they aren't worth it unless they're worth it, which only you can tell.
It takes a fairly lengthy computation before a separate thread or
process is really called for.

If the inverted, functional approach suits your application, note
that you can use a Python class instance here to keep state between
connections - like,
    class ConnectionReceiver:
        ...
        def data_function(self, data):
            self.count = self.count + 1
            print >> sys.stderr, 'datum', self.count
            ...
    ...
    receiver = ConnectionReceiver()
    listen(ip_param, port_param, receiver.data_function)
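
Filling in the elided parts, a complete version of that inverted approach
might look like this (the bind/listen setup, the count initialisation and
the port number are my own additions, not from the fragments above):

import socket
import sys

def listen(ip_param, port_param, data_function):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((ip_param, port_param))
    s.listen(1)
    while 1:
        conn, addr = s.accept()
        data = conn.recv(1024)
        data = data_function(data)
        if data:
            conn.send(data)
        conn.close()

class ConnectionReceiver:
    def __init__(self):
        self.count = 0
    def data_function(self, data):
        self.count = self.count + 1
        print >> sys.stderr, 'datum', self.count
        return data            # echo the message back as the reply

receiver = ConnectionReceiver()
listen('', 12345, receiver.data_function)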

Donn Cave, (e-mail address removed)
 

Bryan Olson

Brad said:
> [...] I want to get the data that the clients send out of
> the loop but at the same time keep the loop going so it can
> continue listening for connections. [...]

I'd classify the ways to do it somewhat differently than the
other responses:

- Start multiple lines of execution
  - using threads
  - using processes (which, in Python, is less portable)

- Wait for action on multiple sockets within a single thread
  - using select.select
  - using some less-portable asynchronous I/O facility

Those are the low-level choices. Higher level facilities, such
as Asyncore and Twisted, are themselves based on one or more of
those.
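
As a concrete illustration of the select-based option (not from the
original posts; the port number and names are arbitrary), a single thread
can wait on the listening socket and every open connection at once:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 12345))
server.listen(5)

sockets = [server]
while 1:
    readable, writable, errored = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            conn, addr = server.accept()     # new client
            sockets.append(conn)
        else:
            data = s.recv(1024)
            if data:
                print "Client sent this message:", data
            else:                            # client closed the connection
                sockets.remove(s)
                s.close()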

Different servers have different needs, but when in doubt use
threads. Threading on the popular operating systems has
improved vastly in the last several years. Running a thousand
simultaneous threads is perfectly reasonable. Programmers using
threads have to be aware of things like race conditions, but
when threads are handling separate connections, most of their
operations are independent of other threads.
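
A bare-bones thread-per-connection version of Brad's loop might look like
this (again only a sketch; the port is arbitrary and error handling is
omitted):

import socket
import threading

def handle(conn, addr):
    data = conn.recv(1024)
    print "Client", addr[0], "sent this message:", data
    conn.close()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 12345))
s.listen(5)
while 1:
    conn, addr = s.accept()
    threading.Thread(target=handle, args=(conn, addr)).start()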
 

Michael Sparks

Bryan Olson wrote:
> [...]
> Different servers have different needs, but when in doubt use
> threads. Threading on the popular operating systems has
> improved vastly in the last several years. Running a thousand
> simultaneous threads is perfectly reasonable.

If you want code to be portable this is false, and I'm amazed to see
this claim on c.l.p, to be honest. It's a fairly good way to kill a
fair number of OSes that are still in wide use.

Just because a handful of OSes handle threading well these days does
not mean that you will end up with portable code this way. (Portable
in the sense that you get the same overall behaviour, not merely that
the code runs.)

_Small_ numbers of threads are very portable, I would agree, but not
thousands.

Best Regards,


Michael.
--
(e-mail address removed)
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

 

Cameron Laird

.
.
.
Those are the low-level choices. Higher level facilities, such
as Asyncore and Twisted, are themselves based on one or more of
those.

Different servers have different needs, but when in doubt use
threads. Threading on the popular operating systems has
improved vastly in the last several years. Running a thousand
simultaneous threads is perfectly reasonable. Programmers using
threads have to be aware of things like race conditions, but
when threads are handling separate connections, most of their
operations are independent of other threads.
.
.
.
Are you advising Python programmers to use only thread-based
approaches, and to judge Asyncore and Twisted on that basis?
Do you intend that readers believe that it "is perfectly
reasonable" to design in terms of a single Python process
which manages up to "a thousand simultaneous *Python* threads"?
 

Bryan Olson

Cameron Laird wrote:
[...]
>
> Are you advising Python programmers to use only thread-based
> approaches, and to judge Asyncore and Twisted on that basis?
No.

> Do you intend that readers believe that it "is perfectly
> reasonable" to design in terms of a single Python process
> which manages up to "a thousand simultaneous *Python*
> threads"?

Yes.
 

Cameron Laird

.
.
.
Thank you for this and your other unambiguous clarifications.

My daily world includes several Win* boxes running on 100-200 MHz
*86 processors, with memory ranging from 32 MB up. Perhaps I
should make time during the next month to write and run a few
benchmarks applicable to my needs; I confess I haven't done so for
the case of a thousand simultaneous Python threads.
.
.
.
 

Cameron Laird

.
.
.
Do
http://groups.google.com/groups?th=181172231bcfb4a
and especially
http://groups.google.com/groups?frame=left&th=4e2e83a9ad69f788
make my hesitation about reliance on kilothreads on older platforms
more understandable?
 
