Threading - Why Not Lock Objects Rather Than Lock the Interpreter?

Fuzzyman

Looking at threading code, it seems that the Global Interpreter Lock is
a bit of a 'brute force' way of preventing two threads from accessing
sensitive objects simultaneously...

Why not have a new type of object with a 'lock' facility? If an object
is locked, then any thread trying to access the object other than the
one holding the lock is blocked until the lock is released... rather
than blocking *all* threads until one thread has finished with the
object...

It could be implemented as a new attribute of existing objects... or as
new types of objects...

Fuzzy
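
A minimal sketch of what such a 'lockable object' might look like at the Python level, using threading.RLock. The LockedObject class and its names are hypothetical illustrations of the proposal, not an actual CPython facility:

```python
import threading

class LockedObject:
    # Hypothetical wrapper sketching the proposed per-object lock:
    # any thread other than the lock holder blocks on entry until
    # the holder releases the lock.
    def __init__(self, value):
        self._value = value
        self._lock = threading.RLock()  # re-entrant, so the holder can nest

    def __enter__(self):
        self._lock.acquire()
        return self._value

    def __exit__(self, *exc_info):
        self._lock.release()
        return False

shared = LockedObject([1, 2, 3])
with shared as items:   # this thread now holds the object's lock
    items.append(4)     # any other thread entering `with shared` would block
```

The replies below explain why CPython does not do this for ordinary objects.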
 
Christopher A. Craig

Fuzzyman said:
Looking at threading code, it seems that the Global Interpreter Lock is
a bit of a 'brute force' way of preventing two threads from accessing
sensitive objects simultaneously...

Why not have a new type of object with a 'lock' facility? If an object
is locked, then any thread trying to access the object other than the
one holding the lock is blocked until the lock is released... rather
than blocking *all* threads until one thread has finished with the
object...

It could be implemented as a new attribute of existing objects... or as
new types of objects...

It would slow down the interpreter drastically. The important thing
here is that for the above to be true, you'd need to define
"sensitive objects" as "objects with a reference count" (i.e. all
of them). Because of that, if you did this you'd spend about as much
time locking and unlocking as executing code. For example, calling the
following function would require at least 2 lock/unlock pairs (there
may be more, I'm not thinking too hard about it):

def donothing(): pass
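
To make the point concrete: even passing an object into a function touches its reference count, and under the proposal each such touch would mean a lock round-trip. A sketch using sys.getrefcount (exact counts vary between CPython versions, so only the direction of change is checked):

```python
import sys

obj = object()
before = sys.getrefcount(obj)   # includes the temporary reference
                                # created by the getrefcount call itself

def touch(x):
    # merely binding the argument to x added another reference
    return sys.getrefcount(x)

during = touch(obj)
after = sys.getrefcount(obj)

# the count rose while the call was live and fell back afterwards;
# every such bump would need its own lock/unlock under per-object locking
```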
 
Daniel Dittmar

Fuzzyman said:
Looking at threading code, it seems that the Global Interpreter Lock is
a bit of a 'brute force' way of preventing two threads from accessing
sensitive objects simultaneously...

Why not have a new type of object with a 'lock' facility? If an object
is locked, then any thread trying to access the object other than the
one holding the lock is blocked until the lock is released... rather
than blocking *all* threads until one thread has finished with the
object...

Because the reference counter of objects has to be synchronized.

Daniel
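
A sketch of what synchronizing a reference-count-style counter would entail, with a threading.Lock acquired on every update. The SyncCounter class is illustrative only; it is not how CPython actually stores reference counts:

```python
import threading

class SyncCounter:
    # Illustrative stand-in for a reference counter that must be
    # updated under a lock, as per-object locking would require.
    def __init__(self):
        self._n = 0
        self._lock = threading.Lock()

    def incref(self):
        with self._lock:        # one lock round-trip per reference
            self._n += 1

    def decref(self):
        with self._lock:
            self._n -= 1

c = SyncCounter()
workers = [
    threading.Thread(target=lambda: [c.incref() for _ in range(10_000)])
    for _ in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
# all 40,000 increments survive, but each one paid a lock acquire/release
```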
 
Andrew MacIntyre

Christopher A. Craig said:
It would slow down the interpreter drastically. The important thing
here is that for the above to be true, you'd need to define
"sensitive objects" as "objects with a reference count" (i.e. all
of them). Because of that, if you did this you'd spend about as much
time locking and unlocking as executing code. For example, calling the
following function would require at least 2 lock/unlock pairs (there
may be more, I'm not thinking too hard about it):

def donothing(): pass

As an example of what can happen, I played around with building a
pre-2.3alpha CVS Python with OpenWatcom on OS/2.

Building the Python core as a DLL resulted in a Pystone rating 25% lower
than that of a statically linked Python.exe.

This was traced to the OpenWatcom malloc() implementation always using
thread locks when linked to code in a DLL, even when extra threads are
never created (i.e. only the primary thread is running). Enabling PyMalloc
fixed that.

While Python does many malloc()s and free()s, it does many, many more
object accesses.

So the performance degradation from an object locking strategy can be
savage.
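
The cost is easy to glimpse with timeit: compare a bare increment against one guarded by a threading.Lock. Absolute numbers are platform-dependent; the ratio is the point:

```python
import threading
import timeit

lock = threading.Lock()
counter = [0]

def plain():
    counter[0] += 1

def locked():
    with lock:              # acquire + release on every single access
        counter[0] += 1

t_plain = timeit.timeit(plain, number=100_000)
t_locked = timeit.timeit(locked, number=100_000)
# the locked version is typically several times slower, and this is the
# uncontended case; contention from other threads makes it far worse
```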
 
