Multiprocessing.Array bug / shared numpy array

Discussion in 'Python' started by Felix, Oct 8, 2009.

  1. Felix

    Felix Guest

    Hi,

    The documentation for multiprocessing.Array says:

    multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)

    ....
    If lock is False then access to the returned object will not be
    automatically protected by a lock, so it will not necessarily be
    “process-safe”.
    ....

    However:
    In [48]: mp.Array('i', 1, lock=False)
    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/__init__.pyc in Array(typecode_or_type, size_or_initializer, **kwds)
        252     '''
        253     from multiprocessing.sharedctypes import Array
    --> 254     return Array(typecode_or_type, size_or_initializer, **kwds)
        255
        256 #

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/sharedctypes.pyc in Array(typecode_or_type, size_or_initializer, **kwds)
         85     if lock is None:
         86         lock = RLock()
    ---> 87     assert hasattr(lock, 'acquire')
         88     return synchronized(obj, lock)
         89

    AssertionError:

    -------
    I.e. it looks like lock=False is not actually supported: False is not
    None, so the "if lock is None" branch never fires and
    hasattr(False, 'acquire') fails the assertion. Or am I reading this
    wrong? If not, I can submit a bug report.
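
    For reference, a minimal workaround sketch: multiprocessing.RawArray
    allocates the same shared-memory ctypes array but skips the
    synchronization wrapper entirely, which is effectively what the
    documented lock=False behavior is supposed to provide (the variable
    names here are just for illustration):

    import multiprocessing as mp

    # RawArray returns the bare ctypes array with no lock wrapper,
    # so access is unsynchronized by construction.
    raw = mp.RawArray('i', 1000)
    raw[0] = 42              # indexable like any ctypes array
    assert len(raw) == 1000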


    I am trying to create a shared, read-only numpy.ndarray between
    several processes. After some googling the basic idea is:

    sarr = mp.Array('i',1000)
    ndarr = scipy.frombuffer(sarr._obj,dtype='int32')

    Since it will be read-only (after being filled once in a single
    process), I don't think I need any locking mechanism. However, is
    this really true given garbage collection, reference counting, and
    other implicit things going on?
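
    For illustration, a sketch of that zero-copy pattern using
    numpy.frombuffer directly (rather than the scipy alias) and RawArray
    (which avoids reaching into the private _obj attribute); the function
    and variable names are mine:

    import multiprocessing as mp
    import numpy

    def reader(shared):
        # Re-wrap the shared buffer in the child; no data is copied.
        arr = numpy.frombuffer(shared, dtype='int32')
        print(arr[:5])

    if __name__ == '__main__':
        shared = mp.RawArray('i', 1000)                # shared memory, no lock
        arr = numpy.frombuffer(shared, dtype='int32')  # zero-copy numpy view
        arr[:] = numpy.arange(1000)                    # fill once in the parent
        p = mp.Process(target=reader, args=(shared,))
        p.start()
        p.join()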

    Or is there a recommended better way to do this?

    Thanks
    Felix, Oct 8, 2009
    #1

  2. Robert Kern

    Robert Kern Guest

    On 2009-10-08 15:14, Felix wrote:

    > I am trying to create a shared, read-only numpy.ndarray between
    > several processes. After some googling the basic idea is:
    >
    > sarr = mp.Array('i',1000)
    > ndarr = scipy.frombuffer(sarr._obj,dtype='int32')
    >
    > Since it will be read only (after being filled once in a single
    > process) I don't think I need any locking mechanism. However is this
    > really true given garbage collection, reference counts and other
    > implicit things going on?
    >
    > Or is there a recommended better way to do this?


    I recommend using memory-mapped arrays for such a purpose.
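
    For example, a rough numpy.memmap sketch (the filename, dtype, and
    shape are placeholders): the writer creates and fills a file-backed
    array once, and each reader maps the same file read-only, so the
    operating system shares the physical pages between processes.

    import numpy

    # Writer: create the file-backed array and fill it once.
    data = numpy.memmap('shared.dat', dtype='int32', mode='w+', shape=(1000,))
    data[:] = numpy.arange(1000)
    data.flush()                 # make sure the contents hit the file

    # Readers (in other processes): map the same file read-only.
    view = numpy.memmap('shared.dat', dtype='int32', mode='r', shape=(1000,))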

    You will want to ask further numpy questions on the numpy mailing list:

    http://www.scipy.org/Mailing_Lists

    --
    Robert Kern

    "I have come to believe that the whole world is an enigma, a harmless enigma
    that is made terrible by our own mad attempt to interpret it as though it had
    an underlying truth."
    -- Umberto Eco
    Robert Kern, Oct 8, 2009
    #2
