thread. question


Tim Wintle

Hi,
This is my first post here - Google hasn't helped much, so sorry if this
has been asked before.

I've been wondering about some of the technicalities of locks in python
(2.4 and 2.5 mainly).

I'm using the "old" thread module because (1) I prefer its methods and
(2) it should be a tiny bit faster.

As far as I can see, the thread module is pretty much a direct wrapper
around "PyThread_allocate_lock" etc., defined in
"thread_pthread.h" (took me ages to find the definitions in a header
file!), so it looks like the implementation is very closely wrapped
around the system's threading implementation.

Does that mean that every python thread is actually a real separate
thread (for some reason I thought they were implemented in the
interpreter)?

Does that also mean that there is very little extra interpreter overhead
in using locks (assuming they block only very infrequently)? If you use
the thread module directly, the code doesn't seem to leave C during
acquire() etc. - or am I missing something important?
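
For concreteness, the kind of direct use I mean is roughly this (Python
2.x; the comments are just my reading of what happens underneath):

import thread

lock = thread.allocate_lock()   # as I understand it, a thin wrapper
                                # over PyThread_allocate_lock()

lock.acquire()                  # blocks until the lock is available
try:
    pass                        # critical section goes here
finally:
    lock.release()

# Non-blocking attempt: acquire(0) returns immediately,
# true if the lock was obtained and false otherwise.
if lock.acquire(0):
    try:
        pass                    # critical section goes here
    finally:
        lock.release()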

Thanks,

Tim Wintle
 

bieffe62

> Hi,
> This is my first post here - Google hasn't helped much, so sorry if this
> has been asked before.
>
> I've been wondering about some of the technicalities of locks in python
> (2.4 and 2.5 mainly).
>
> I'm using the "old" thread module because (1) I prefer its methods and
> (2) it should be a tiny bit faster.
>
> As far as I can see, the thread module is pretty much a direct wrapper
> around "PyThread_allocate_lock" etc., defined in
> "thread_pthread.h" (took me ages to find the definitions in a header
> file!), so it looks like the implementation is very closely wrapped
> around the system's threading implementation.
>
> Does that mean that every python thread is actually a real separate
> thread (for some reason I thought they were implemented in the
> interpreter)?

I've never seen the C implementation of Python threads, but I know that
Python threads are real OS threads, not 'green threads' inside the
interpreter.
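
If you want to convince yourself, a rough check (assuming Linux and a
CPython built on pthreads) is to start a few threads and look at the
process from the outside while it runs:

import thread, time

def sleeper():
    time.sleep(30)

for _ in range(4):
    thread.start_new_thread(sleeper, ())

# While this sleeps, something like 'ps -eLf | grep python' or 'top -H'
# in another terminal should show one entry per thread, i.e. real
# OS-level threads rather than something simulated by the interpreter.
time.sleep(30)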

> Does that also mean that there is very little extra interpreter overhead
> in using locks (assuming they block only very infrequently)? If you use
> the thread module directly, the code doesn't seem to leave C during
> acquire() etc. - or am I missing something important?

One significant difference from doing threads in lower-level languages
is that in Python individual interpreter instructions (bytecodes) are
atomic. This makes things a bit safer, but also less responsive. It is
especially bad if you use C extension modules with functions that take
a long time (but AFAIK most of the standard I/O modules have
workarounds for this), because your program will not switch threads
until the C function returns.
The reason - or one of the reasons - for this is that Python threads
share the interpreter, which a thread must hold while executing
bytecode and releases periodically and around blocking operations
(this is the 'Global Interpreter Lock', or GIL, to which you probably
found many references if you googled for Python threads). Among other
things, this seems to mean that a multi-threaded Python application
cannot take full advantage of multi-core CPUs (but the detailed reason
for that escapes me...).
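
A small sketch of what this means in practice (Python 2.x, using the
thread module as above): each single bytecode is atomic, but a statement
like 'counter += 1' compiles to several bytecodes, so without a lock
updates can still be lost when the interpreter switches threads in
between:

import thread, time

counter = 0                         # shared, deliberately unprotected
finished = 0
finished_lock = thread.allocate_lock()
N_THREADS, N_LOOPS = 4, 100000

def worker():
    global counter, finished
    for _ in xrange(N_LOOPS):
        # LOAD / ADD / STORE: a thread switch can happen in between
        counter += 1
    finished_lock.acquire()
    finished += 1
    finished_lock.release()

for _ in range(N_THREADS):
    thread.start_new_thread(worker, ())

while True:                         # crude join
    finished_lock.acquire()
    done = finished
    finished_lock.release()
    if done == N_THREADS:
        break
    time.sleep(0.01)

print "expected", N_THREADS * N_LOOPS, "got", counter

On most runs 'got' will come out lower than 'expected', even though no
single bytecode was ever interrupted.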

For these reasons, many on this list suggest that if you have real-
time requirements you should probably consider using sub-processes
instead of threads.
Personally I have used threads a few times, although always with
very soft real-time requirements, and I have never been bitten by the
GIL (meaning that I was still able to meet my timing requirements).
So, to add another acronym to my post: YMMV.
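
For example, here is a minimal sub-process sketch with the
multiprocessing module (in the standard library from Python 2.6; for
2.4/2.5 roughly the same API existed as the third-party 'processing'
package, with a different import name). Each worker is a separate OS
process with its own interpreter and its own GIL:

import multiprocessing

def work(n):
    # CPU-bound work; runs in a separate process, so the GIL of the
    # parent interpreter is not involved at all
    return sum(i * i for i in xrange(n))

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    print pool.map(work, [10 ** 6] * 4)
    pool.close()
    pool.join()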

> Thanks,
>
> Tim Wintle

Ciao
 
