Event Driven programming - Doubts

Kottiyath

Hi,
I have been looking at Twisted and, lately, Circuits as examples of
event-driven programming in Python.
Even though I understand how to write code with them and what a
deferred is, I have not yet understood how a deferred is implemented.
I went through a lot of tutorials, but I guess in most places they
expect that the reader already understands how events are generated.
The tutorials mention that there are no more threads once Twisted is
used.

My question is as follows:
I have not understood how the callbacks are invoked without (a)
blocking the code or (b) creating new threads.

The usual example given is that of a program waiting for data coming
through a socket. In the tutorials, it is mentioned that "in an
event-driven program, we schedule the code to run when the remote
server gets back to us".
Now, my question is: somebody still has to wait on that socket, check
whether the data has been received, and once all the data is received,
call the appropriate callbacks.

Is Twisted creating a micro-thread which just waits on the socket and,
once the data is received, calls callFromThread for it to run on the
main loop?

If so, even though data locking etc. is not a problem, are we not still
using threads? Will that not still cause scalability problems under
high traffic?
If not, could somebody let me know how it is done?
 
David Stanek

If so, even though data locking etc. is not a problem, are we not still
using threads? Will that not still cause scalability problems under
high traffic?
If not, could somebody let me know how it is done?

This somewhat depends on the application: is it I/O bound or CPU bound?
From what I understand about Twisted, you will only have one process
and one thread, which means you will only be using one CPU.
 
James Mills

Hi,
I have been looking at Twisted and, lately, Circuits as examples of
event-driven programming in Python.

Wonderful! :) "circuits" that is :)
Even though I understand how to write code with them and what a
deferred is, I have not yet understood how a deferred is implemented.
I went through a lot of tutorials, but I guess in most places they
expect that the reader already understands how events are generated.
The tutorials mention that there are no more threads once Twisted is
used.

A deferred, to me, is nothing more than
scheduling "things" to happen "later" -
i.e. timers, or timed events.

I'm clearly biased here, as I'm the developer
of circuits, so I can't comment on Twisted's
design or architecture, but the way to
achieve Twisted's so-called deferreds (events?)
in circuits is to use the Timer component, i.e. timed events.
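
To make that concrete, here is a minimal sketch of the idea - not
circuits' or Twisted's actual implementation, just the general shape:
keep a heap of (deadline, callback) entries and, on every pass of the
single main loop, fire whatever is due.

import heapq
import itertools
import time

# Minimal "call later" scheduler (illustration only, not a real framework's
# code): a heap of (deadline, sequence number, callback) entries that the
# main loop checks on every pass.
_timers = []
_counter = itertools.count()  # tie-breaker so callbacks never get compared

def call_later(delay, callback):
    """Schedule callback() to run roughly `delay` seconds from now."""
    heapq.heappush(_timers, (time.time() + delay, next(_counter), callback))

def run_due_timers():
    """Fire every timer whose deadline has already passed."""
    now = time.time()
    while _timers and _timers[0][0] <= now:
        _, _, callback = heapq.heappop(_timers)
        callback()

# The single main loop would interleave this with socket polling, e.g.:
#   while True:
#       run_due_timers()
#       poll_sockets()   # hypothetical; see the select() discussion below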
My question is as follows:
I have not understood how the callbacks are invoked without (a)
blocking the code or (b) creating new threads.

Again speaking in terms of circuits,
but also in the event-driven paradigm:

a) Event handlers _can_ block, but ideally shouldn't (imho) - see the
sketch below this list.
b) Forking new threads/processes per event is mostly not necessary.
(There are exceptions.)
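
Here is a toy example of why (a) matters. The dispatcher and handler
names are made up for illustration (they are not circuits' API): because
everything runs on the one loop, a handler that blocks holds up every
other event behind it.

import time

# Toy single-threaded dispatcher: every handler runs inline on the one
# and only loop, so a slow handler delays everything queued behind it.
handlers = {"tick": [], "read": []}

def fire(event, *args):
    for handler in handlers[event]:
        handler(*args)  # runs inline; nothing else happens until it returns

def slow_read_handler(data):
    time.sleep(5)       # blocks the whole loop: no ticks, no other sockets

def tick_handler():
    print("tick at", time.strftime("%H:%M:%S"))

handlers["read"].append(slow_read_handler)
handlers["tick"].append(tick_handler)

fire("tick")            # prints immediately
fire("read", b"hello")  # everything stalls here for 5 seconds
fire("tick")            # this tick arrives 5 seconds late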
The usual example given is that of a program waiting for data coming
through a socket. In the tutorials, it is mentioned that "in an
event-driven program, we schedule the code to run when the remote
server gets back to us".
Now, my question is: somebody still has to wait on that socket, check
whether the data has been received, and once all the data is received,
call the appropriate callbacks.

In the case of circuits' TCPServer component
you simply poll it in your main event-loop.
The pattern looks like this:

from circuits.lib.sockets import TCPServer

server = TCPServer(8000)
while True:
    server.poll()
    server.flush()

The TCPServer component has various built-in
event handlers that do all the work for you
and exposes other, more useful events to
the application (a rough sketch of consuming
them follows this list), such as:
* connect
* disconnect
* read
* error
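
A framework-agnostic sketch of what handling those events might look
like; the subscribe() call and handler signatures below are hypothetical,
not circuits' actual registration API, so check the circuits docs for the
real mechanism.

# Application-side callbacks for the events listed above.
def on_connect(sock, host, port):
    print("client connected from", host, port)

def on_read(sock, data):
    print("received", len(data), "bytes")
    sock.send(data)  # for example, echo it straight back

def on_disconnect(sock):
    print("client went away")

def on_error(sock, error):
    print("socket error:", error)

# Hypothetical wiring - the server component would fire these callbacks
# as the corresponding socket activity happens:
#   server.subscribe("connect", on_connect)
#   server.subscribe("read", on_read)
#   server.subscribe("disconnect", on_disconnect)
#   server.subscribe("error", on_error)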
Is Twisted creating a micro-thread which just waits on the socket and,
once the data is received, calls callFromThread for it to run on the
main loop?

I can't comment - I tend to avoid threads
wherever possible.
If so, even though data locking etc. is not a problem, are we not still
using threads? Will that not still cause scalability problems under
high traffic?

Regarding scalability, btw (in case you're interested):
circuits comes with circuits.lib.web and several
sets of Web Components. A lot of code was
borrowed from BaseHTTPServer in the Python
standard library, plus bits and pieces
from CherryPy and the cgi module. In terms
of performance, it manages a "raw" throughput of
~3k req/s on good hardware.

Do have fun in your endeavours :)

cheers
James

Circuits: http://trac.softcircuit.com.au/circuits/
 
Bryan Olson

Kottiyath said:
[...] I have not yet understood how a deferred is
implemented. I went through a lot of tutorials, but I guess in most
places they expect that the reader already understands how events are
generated. The tutorials mention that there are no more threads once
Twisted is used.

My question is as follows:
I have not understood how the callbacks are invoked without (a)
blocking the code or (b) creating new threads.

The usual example given is that of a program waiting for data coming
through a socket. In the tutorials, it is mentioned that "in an
event-driven program, we schedule the code to run when the remote
server gets back to us".
Now, my question is: somebody still has to wait on that socket, check
whether the data has been received, and once all the data is received,
call the appropriate callbacks.

Is Twisted creating a micro-thread which just waits on the socket and,
once the data is received, calls callFromThread for it to run on the
main loop?

No. Event-driven frameworks are built upon some system call that waits
for events from multiple sources at once. The Python standard library
exposes several such calls, the most portable of which is select(), in
the module also named "select".

http://docs.python.org/library/select.html

So how is something like a deferred implemented? When you schedule an
action on your socket, you are telling the framework to include your
socket in subsequent calls to select(), until the socket selects as ready.

The framework keeps a single global list of sockets on which clients are
waiting, and from this list it builds the select() parameters. A single
call to select() waits on behalf of all the callbacks. The select() call
returns a list of sockets ready for I/O; the framework iterates over the
ready list, invoking the corresponding callbacks one by one. After the
last callback returns, the framework loops back to select() again.

select() is not the only call to do multi-source I/O, and I'm not an
expert on these frameworks, so take the above as a simplified general
description.
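
That description maps almost line for line onto a short sketch in plain
Python. The names below (wait_for_read, run_event_loop) and the echo
behaviour are made up for illustration - this is not Twisted's or
circuits' actual code - but it shows how a single select() call waits on
behalf of every registered callback, with no extra threads.

import select
import socket

# One global table mapping sockets to callbacks; select() waits on all
# of them at once, and the loop dispatches whatever becomes ready.
read_callbacks = {}

def wait_for_read(sock, callback):
    """'Defer' callback until sock has data (or a connection) to read."""
    read_callbacks[sock] = callback

def run_event_loop():
    while read_callbacks:
        # A single select() call waits on behalf of all the callbacks.
        ready, _, _ = select.select(list(read_callbacks), [], [])
        for sock in ready:
            callback = read_callbacks.pop(sock)
            callback(sock)          # invoke the callbacks one by one
        # ...then loop back to select() again.

# Example: echo whatever clients send to a listening socket.
def on_client_data(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
        wait_for_read(conn, on_client_data)      # re-register for more data
    else:
        conn.close()

def on_new_connection(listener):
    conn, _addr = listener.accept()
    wait_for_read(conn, on_client_data)
    wait_for_read(listener, on_new_connection)   # keep accepting

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("localhost", 8000))
listener.listen(5)
wait_for_read(listener, on_new_connection)
run_event_loop()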
 
