@synchronized decorator &c.

C

castironpi

To whoso has a serious interest in multi-threading:

What advanced thread techniques does Python support?

1) @synchronized

@synchronized
def function( arg ):
   behavior()

Synchronized prevents usage from more than one caller at once: look up
the function in a hash, acquire its lock, and call.

from _thread import allocate_lock

_synchlock= allocate_lock()
_synchhash= {}

def synchronized( func ):
   def presynch( *ar, **kwar ):
      with _synchlock:
         lock= _synchhash.setdefault( func, allocate_lock() )
         with lock:
            return func( *ar, **kwar )
   return presynch

2) trylock:

trylock.acquire() returns False if the thread that owns the lock is
itself blocked on a lock that the calling thread holds. The "with
trylock:" form instead throws an exception. Implementation pending.
If a timeout is specified, acquire() returns one of three values:
Success, Failure, or Deadlock.
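Since the implementation is pending, here is one way the no-timeout
case might be sketched in current Python. The class name TryLock and
its bookkeeping tables are hypothetical; per footnote 2, acquire()
walks the owner/waiter graph and returns False rather than block when
waiting would close a cycle:

```python
import threading

class TryLock:
    """Hypothetical sketch of trylock (no-timeout case only):
    acquire() returns False instead of blocking when waiting
    would close a cycle in the owner/waiter graph."""
    _registry = threading.Lock()   # guards the two tables below
    _owner = {}                    # lock -> id of thread holding it
    _waiting = {}                  # thread id -> lock it is blocked on

    def __init__(self):
        self._lock = threading.Lock()

    def _would_deadlock(self, me):
        # Follow lock -> owner -> lock-that-owner-waits-on; a path
        # back to the calling thread means acquiring would deadlock.
        lock, seen = self, set()
        while lock is not None and lock not in seen:
            seen.add(lock)
            holder = TryLock._owner.get(lock)
            if holder == me:
                return True
            lock = TryLock._waiting.get(holder)
        return False

    def acquire(self):
        me = threading.get_ident()
        with TryLock._registry:
            if self._would_deadlock(me):
                return False
            TryLock._waiting[me] = self
        self._lock.acquire()
        with TryLock._registry:
            TryLock._owner[self] = me
            del TryLock._waiting[me]
        return True

    def release(self):
        with TryLock._registry:
            del TryLock._owner[self]
        self._lock.release()
```

Note that re-acquiring a lock you already hold is the smallest cycle
of all, so it also comes back False instead of hanging.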

3) upon_acquiring( lockA, lockB )( function, *ar, **kwar )

upon_acquiring spawns a new thread upon acquiring locks A and B. An
optional UponAcquirer( *locks ) instance can guarantee they are always
acquired in the same order, similar to the strategy of acquiring locks
in order of ID, but without relying on the implementation detail of
having them: just acquire them in the order with which the instance
was initialized.
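A minimal sketch of that idea in current Python (UponAcquirer and
upon_acquiring are the hypothetical names above; the fixed acquisition
order recorded at construction is the whole trick):

```python
import threading

class UponAcquirer:
    """Hypothetical sketch: remembers a fixed order for a group of
    locks so every caller acquires them consistently, avoiding the
    classic lock-ordering deadlock."""
    def __init__(self, *locks):
        self._locks = locks  # canonical acquisition order

    def upon_acquiring(self, func, *args, **kwargs):
        # Spawn a thread that takes every lock in the canonical
        # order, calls func, then releases in reverse order.
        def runner():
            for lock in self._locks:
                lock.acquire()
            try:
                func(*args, **kwargs)
            finally:
                for lock in reversed(self._locks):
                    lock.release()
        t = threading.Thread(target=runner)
        t.start()
        return t
```

Every call that goes through the same UponAcquirer instance takes
lockA before lockB, no matter how the caller listed them.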

The similar construction:

while 1:
   lockA.acquire()
   lockB.acquire()

Is likewise efficient (non-polling), except in the corner case of a
small number of locks and a large number of free-acquire pairs, such
as with large numbers of lock clients, specifically threads.

4) @with_lockarg

with_lockarg wraps an acquisition call, as in 2 or 3, and passes a
lock group to the function as its first parameter: yes, even
superseding the object instance parameter.

def function( locks, self, *ar, **kwar ):
   behavior_in_lock()
   locks.release()
   more_behavior()

function is called with the locks already held, so, sadly, the "with
locks:" idiom is not applicable.
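A rough sketch of how with_lockarg might look (LockGroup and the
decorator signature are hypothetical; the locks here are plain
threading locks acquired up front, then handed to the function for an
early release):

```python
import threading

class LockGroup:
    """Hypothetical helper: holds already-acquired locks and
    releases them all at once from inside the function."""
    def __init__(self, *locks):
        self._locks = locks

    def release(self):
        for lock in reversed(self._locks):
            lock.release()

def with_lockarg(*locks):
    """Sketch of the decorator described above: acquire the locks,
    then call the function with the LockGroup prepended to its
    arguments, ahead of any instance parameter."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for lock in locks:
                lock.acquire()
            return func(LockGroup(*locks), *args, **kwargs)
        return wrapper
    return decorator
```

The function then decides when in its body the critical section ends,
which is exactly what the "with locks:" idiom cannot express here.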

5) groupjoin

for j in range( len( strats ) ):
   for k in range( j+ 1, len( strats ) ):
      branch:
         i.matches[ j,k ]= Match( strats[j], strats[k] )
         #Match instances may not be initialized until...
joinallthese()

This ideal may be implemented in current syntax as follows:

thg= ThreadGroup()
for j in range( len( strats ) ):
   for k in range( j+ 1, len( strats ) ):
      @thg.branch( freeze( j ), freeze( k ) )
      def anonfunc( j, k ):
         i.matches[ j,k ]= Match( strats[j], strats[k] )
         #Match instances may not be initialized until...
thg.groupjoin()
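A minimal ThreadGroup that makes a snippet of that shape run
(ThreadGroup, branch, and groupjoin are the hypothetical names above;
passing the loop variables through branch() plays the role of
freeze(), since the thread gets its own copies as arguments):

```python
import threading

class ThreadGroup:
    """Hypothetical sketch: branch() runs the decorated function in
    a new thread with the given (frozen) arguments; groupjoin()
    waits for every branch to finish."""
    def __init__(self):
        self._threads = []

    def branch(self, *args, **kwargs):
        # Decorator factory: capture the arguments now, start the
        # thread immediately, and record it for the group join.
        def decorator(func):
            t = threading.Thread(target=func, args=args, kwargs=kwargs)
            self._threads.append(t)
            t.start()
            return func
        return decorator

    def groupjoin(self):
        for t in self._threads:
            t.join()
```

Because the arguments are bound at decoration time, each branch sees
the j and k of its own iteration, not whatever the loop ends on.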

Footnotes:

2: trylock actually checks a graph for cyclicity, not merely whether
the individual callee is already waiting on the caller.
3: For upon_acquiring, as usual, a parameter can be passed to tell
the framework to preserve calling order, rather than allowing a "with
lockC" block to run prior to a series of threads which only use lockA
and lockB.
4: x87 hardware supports memory block pairs and cache pairs, which set
a reverse-bus bit upon truth of rudimentary comparisons, alleviating
the instruction stack of checking them every time through a loop;
merely jump to address when match completes. Fortunately, the blender
doubles as a circuit-board printer after hours, so production can
begin at once.
 
C

castironpi

Corrections.

Typographical error in the implementation of #1.

def synchronized( func ):
   def presynch( *ar, **kwar ):
      with _synchlock:
         lock= _synchhash.setdefault( func, allocate_lock() )
      with lock:
         return func( *ar, **kwar )
   return presynch

On footnote #4, one might need a vector of jump addresses, one for
each context in which the word might be modified. Yes, this involves
a miniature "many small" design in actual hardware memory, and
definitely opens some doors to faster processing. As such, it may not
be the best "first" structural element in parallel off-loading, but
it's a good one. And yes, it's specialty RAM, for which RAM may not
even be the right place. If a few KB of it is enough, just bump it up
next to the cache, which may make for shorter cycles on the jump-back
later. You probably don't want to be setting the instruction pointer
from a KB's worth of addresses, so there's probably an extra cycle
involved in setting the jump register, halting the IP, and signalling
a jump. Interrupts may be suited too. Does the OS need to provide an
API before a compiler can make use of it?

On #4, the choice between the signatures func( self, locks ) and
func( locks, self ) is open: if you sometimes want locks to be the
second parameter, and other times the first, as for non-class-member
functions, there will either be two methods, or a parameter to signal
the difference.
 
P

Paul McGuire


No need for a global _synchhash, just hang the function lock on the
function itself:

def synch(f):
    f._synchLock = Lock()
    def synchedFn(*args, **kwargs):
        with f._synchLock:
            return f(*args, **kwargs)
    return synchedFn

You might also look at the PythonDecoratorLibrary page of the Python
wiki; there is a synchronization decorator there that allows the
function caller to supply its own lock object (in case a single lock
should be shared across multiple functions).
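For illustration, a minimal caller-supplied-lock decorator in that
spirit (the name synchronized_with is made up here; see the wiki page
for the real one):

```python
import threading

def synchronized_with(lock):
    """Sketch of a decorator whose lock the caller supplies, so one
    lock can guard several functions at once."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            with lock:
                return func(*args, **kwargs)
        return wrapper
    return decorator
```

Two functions decorated with the same lock object then exclude each
other, not just concurrent calls to themselves.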

-- Paul
 
C

castironpi


Why not just:

def synched( f ):
   l= Lock()
   def presynched( *a, **kwa ):
      with l:
         return f( *a, **kwa )
   return presynched

It's not like the lock is ever used anywhere else. Besides, if it is,
isn't the correct spelling:

class Asynched:
   def __init__( self, func ):
      self.func, self.lock= func, Lock()
   def __call__( self, *a, **kwa ):
      with self.lock:
         return self.func( *a, **kwa )

and

def synched( func ):
   return Asynched( func )

or even

synched= Asynched

?
 
C

castironpi


So, you live around here? Where'd you park? ;)
 