Python 2.6's multiprocessing lock not working on second use?


Frédéric Sagnes

Hi,

I ran a few tests on the new Python 2.6 multiprocessing module before
migrating some threaded code, and found that the locking does not
work as expected. In this test, a pool of 5 processes runs; each one
acquires the lock and releases it after waiting 0.2 seconds (the
action is repeated twice). It looks like the multiprocessing lock
allows multiple acquisitions after the second pass. Running the exact
same code with threads works correctly.

The test code is further down. The output looks right when running with
threads (the lock/unlock sequence is correct), but it gets mixed up
(multiple locks in a row) when running with processes.

My setup is: Mac OS X 10.5 running Python 2.6.1 from MacPython.

Did I do something wrong, or is there a limitation of multiprocessing
locks that I am not aware of?

Thank you for your help!

-- Fred

-------------------------------

#!/usr/bin/python
# -*- coding: utf-8 -*-

from multiprocessing import Process, Queue, Lock
from Queue import Empty
from threading import Thread
import time

class test_lock_process(object):
    def __init__(self, lock, id, queue):
        self.lock = lock
        self.id = id
        self.queue = queue
        self.read_lock()

    def read_lock(self):
        for i in xrange(2):
            self.lock.acquire()
            self.queue.put('[proc%d] Got lock' % self.id)
            time.sleep(.2)
            self.queue.put('[proc%d] Released lock' % self.id)
            self.lock.release()

def test_lock(processes=10, lock=Lock(), process=True, queue=None):
    print_result = False
    if queue is None:
        print_result = True
        queue = Queue()

    threads = []
    for i in xrange(processes):
        if process:
            threads.append(Process(target=test_lock_process, args=(lock, i, queue)))
        else:
            threads.append(Thread(target=test_lock_process, args=(lock, i, queue)))

    for t in threads:
        t.start()

    for t in threads:
        t.join()

    if print_result:
        try:
            while True:
                print queue.get(block=False)
        except Empty:
            pass

if __name__ == "__main__":
    #test_lock(processes=5, process=True)
    test_lock(processes=5)
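
For reference, the process flag in test_lock() above selects between the
two modes being compared; a minimal driver for trying both, purely as an
illustration:

    # Illustrative only: compare both modes via the process flag of test_lock()
    test_lock(processes=5, process=True)   # run the workers as separate processes
    test_lock(processes=5, process=False)  # run the same workers as threads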
 

Gabriel Genellina

On Fri, 16 Jan 2009 14:41:21 -0200, you wrote in the group
gmane.comp.python.general:
I ran a few tests on the new Python 2.6 multiprocessing module before
migrating some threaded code, and found that the locking does not
work as expected. In this test, a pool of 5 processes runs; each one
acquires the lock and releases it after waiting 0.2 seconds (the
action is repeated twice). It looks like the multiprocessing lock
allows multiple acquisitions after the second pass. Running the exact
same code with threads works correctly.

I've tested your code on Windows and I think the problem is in the Queue
class. If you replace the Queue with some print statements or write to a
log file, the lock/release sequence is OK.
You should file a bug report on http://bugs.python.org/
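
If I understand the implementation right, the reordering comes from
multiprocessing.Queue itself: put() hands each item to a background feeder
thread, which pickles it and writes it to a pipe later, so messages put by
different processes can reach the parent in a different order than the put()
calls, even though the lock is honoured. Below is a minimal sketch of the
queue-free check suggested above, with illustrative names (test_lock_direct
is not from the original code); it reports straight from each worker with
unbuffered writes to stderr:

    # Sketch: same test as the original, but reporting via os.write() to stderr
    # instead of a multiprocessing.Queue, so no feeder-thread buffering is involved.
    import os
    import time
    from multiprocessing import Process, Lock

    class test_lock_direct(object):
        def __init__(self, lock, id):
            self.lock = lock
            self.id = id
            self.read_lock()

        def read_lock(self):
            for i in xrange(2):
                self.lock.acquire()
                # short single writes to fd 2 (stderr) are flushed immediately
                os.write(2, '[proc%d] Got lock\n' % self.id)
                time.sleep(.2)
                os.write(2, '[proc%d] Released lock\n' % self.id)
                self.lock.release()

    if __name__ == '__main__':
        lock = Lock()
        workers = [Process(target=test_lock_direct, args=(lock, i)) for i in xrange(5)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

With this variant the Got/Released lines should strictly alternate, matching
what the lock is actually doing.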
 

Frédéric Sagnes

Gabriel Genellina said:
I've tested your code on Windows and I think the problem is in the Queue
class. If you replace the Queue with some print statements or write to a
log file, the lock/release sequence is OK.
You should file a bug report on http://bugs.python.org/

Thanks for your help, Gabriel. I just tested it without the queue and
it works! I'll file a bug about the queues.

Fred

For those interested, the code that works (well, it always did, but
this shows the real result):

import logging
import time
from multiprocessing import Process, Lock

class test_lock_process(object):
    def __init__(self, lock):
        self.lock = lock
        self.read_lock()

    def read_lock(self):
        for i in xrange(5):
            self.lock.acquire()
            logging.info('Got lock')
            time.sleep(.2)
            logging.info('Released lock')
            self.lock.release()

if __name__ == "__main__":
    logging.basicConfig(format='[%(process)04d@%(relativeCreated)04d] %(message)s',
                        level=logging.DEBUG)

    lock = Lock()

    processes = []
    for i in xrange(2):
        processes.append(Process(target=test_lock_process, args=(lock,)))

    for t in processes:
        t.start()

    for t in processes:
        t.join()
 

Frédéric Sagnes


Opened issue #4999 [http://bugs.python.org/issue4999] on the matter,
referencing this thread.
 

Jesse Noller


Frédéric Sagnes said:
Opened issue #4999 [http://bugs.python.org/issue4999] on the matter,
referencing this thread.

Thanks, I've assigned it to myself. Hopefully I can get a fix put
together soonish, time permitting.
-jesse
 

Jesse Noller

Jesse Noller said:
Thanks, I've assigned it to myself. Hopefully I can get a fix put
together soonish, time permitting.

Sounds like it might be hard or impossible to fix to me. I'd love to
be proved wrong though!

If you were thinking of passing time.time() /
clock_gettime(CLOCK_MONOTONIC) along in the Queue too, then you'll
want to know that it can differ by significant amounts on different
processors :-(

Good luck!

Consider my parade rained on. And after looking at it this morning,
yes - this is going to be hard, and should be fixed for a FIFO queue
:\

-jesse
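
For test code like this, one possible workaround (just a sketch, not the fix
tracked in issue #4999, and the names below are illustrative): take a ticket
number from a shared multiprocessing.Value counter while the lock is still
held, attach it to every message, and let the parent sort by ticket. That
reconstructs the real acquisition order without depending on queue delivery
order or on clocks being comparable across CPUs.

    # Sketch of a ticket-based workaround; illustrative names, Python 2 style.
    import time
    from multiprocessing import Process, Queue, Lock, Value

    def worker(lock, counter, queue, id):
        for i in xrange(2):
            lock.acquire()
            counter.value += 1           # safe here: mutated only while holding `lock`
            ticket = counter.value
            queue.put((ticket, '[proc%d] Got lock' % id))
            time.sleep(.2)
            queue.put((ticket, '[proc%d] Released lock' % id))
            lock.release()

    if __name__ == '__main__':
        lock, counter, queue = Lock(), Value('i', 0), Queue()
        procs = [Process(target=worker, args=(lock, counter, queue, i)) for i in xrange(5)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

        messages = [queue.get() for _ in xrange(5 * 2 * 2)]   # 5 workers, 2 loops, 2 messages each
        for ticket, text in sorted(messages):                 # ticket order == acquire order
            print text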
 
