WeakValueDict and threadsafety


Darren Dale

I am using a WeakValueDictionary in a way that is nearly identical to the
example at the end of
http://docs.python.org/library/weakref.html?highlight=weakref#example,
where "an application can use object IDs to retrieve objects that it has
seen before. The IDs of the objects can then be used in other data
structures without forcing the objects to remain alive, but the objects
can still be retrieved by ID if they do." My program is multithreaded, so
I added the necessary check for liveness that is discussed at
http://docs.python.org/library/weakref.html?highlight=weakref#weak-reference-objects.
Basically, I have:

import threading
import weakref

registry = weakref.WeakValueDictionary()
reglock = threading.Lock()

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data()
            registry[id(data)] = data
    return data

I'm concerned that this is not actually thread-safe. When I no longer
hold strong references to an instance of data, at some point the
garbage collector will kick in and remove that entry from my registry.
How can I ensure the garbage collection process does not modify the
registry while I'm holding the lock?

Thanks,
Darren
 

Darren Dale

You can't, but it shouldn't matter.

So long as you have a strong reference in 'data' that particular object
will continue to exist. Other entries in 'registry' might disappear while
you are holding your lock but that shouldn't matter to you.

What is concerning though is that you are using `id(data)` as the key and
then presumably storing that separately as your `oid` value. If the
lifetime of the value stored as `oid` exceeds the lifetime of the strong
references to `data` then you might get a new data value created with the
same id as some previous value.
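
For instance (CPython-specific, and only an illustration, since whether an
id actually gets reused depends on the allocator):

class Data(object):
    pass

a = Data()
old_id = id(a)
del a                   # the registry entry keyed on old_id would vanish here
b = Data()              # b may land in the same memory block...
print(id(b) == old_id)  # ...so this can print True, silently aliasing the old key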

In other words I think there's a problem here, but nothing to do with the
lock.

Thank you for the considered response. In reality, I am not using
id(data). I took that from the example in the documentation at
python.org in order to illustrate the basic approach, but it looks
like I introduced an error in the code. It should read:

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data(oid)
            registry[oid] = data
    return data

Does that look better? I am actually working on the h5py project (Python
bindings to HDF5), and the oid is an HDF5 object identifier.
make_data(oid) creates a proxy object that stores a strong reference
to oid.
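
Roughly, the idea in plain Python terms (just an illustration; the real
proxy is a Cython class) is:

class DataProxy(object):
    # stand-in for the real proxy: it simply keeps a strong reference
    # to the HDF5 identifier it wraps
    def __init__(self, oid):
        self.oid = oid

def make_data(oid):
    return DataProxy(oid)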

My concern is that the garbage collector is modifying the dictionary
underlying WeakValueDictionary at the same time that my multithreaded
code is trying to access it, producing a race condition. This morning
I wrote a synchronized version of WeakValueDictionary (actually
implemented in Cython):

from weakref import ref, KeyedRef

class _Registry:

    def __cinit__(self):
        def remove(wr, selfref=ref(self)):
            self = selfref()
            if self is not None:
                self._delitem(wr.key)
        self._remove = remove
        self._data = {}
        self._lock = FastRLock()   # re-entrant lock defined elsewhere in the module

    __hash__ = None

    def __setitem__(self, key, val):
        with self._lock:
            self._data[key] = KeyedRef(val, self._remove, key)

    def _delitem(self, key):
        with self._lock:
            del self._data[key]

    def get(self, key, default=None):
        with self._lock:
            try:
                wr = self._data[key]
            except KeyError:
                return default
            else:
                o = wr()
                if o is None:
                    return default
                else:
                    return o

Now that I am using this _Registry class instead of
WeakValueDictionary, my test scripts and my actual program are no
longer producing segfaults.
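
To give an idea of the access pattern being exercised, here is a
stripped-down, pure-Python stand-in (Data and make_data are just
placeholders for the real Cython proxies):

import threading
import weakref

registry = weakref.WeakValueDictionary()   # or an instance of the synchronized class above
reglock = threading.Lock()

class Data(object):
    pass

def make_data(oid):
    return Data()

def worker(oid):
    # repeatedly create and drop values for the same oid, so that weakref
    # callbacks fire while other threads are reading the mapping
    for _ in range(100000):
        with reglock:
            data = registry.get(oid, None)
            if data is None:
                data = make_data(oid)
                registry[oid] = data
        del data   # drop the strong reference immediately

threads = [threading.Thread(target=worker, args=(n % 4,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()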
 

88888 Dihedral

I'd prefer to get iterators and iterables that can accept a global signal
called a clock to replace this critical-section mess.
 

Darren Dale

Darren Dale <[email protected]> wrote:
def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data(oid)
            registry[oid] = data
    return data
Does that look better? I am actually working on the h5py project
(bindings to hdf5), and the oid is an hdf5 object identifier.
make_data(oid) creates a proxy object that stores a strong reference
to oid.

Yes, that looks better.


Now that I am using this _Registry class instead of
WeakValueDictionary, my test scripts and my actual program are no
longer producing segfaults.

I think that so far as multi-thread race conditions are concerned Python
usually tries to guarantee that you won't get seg faults. So if you were
getting seg faults my guess would be that either you've found a bug in the
WeakValueDictionary implementation or you've got a bug in some of your code
outside Python.

Have you seen Alex Martelli's answer at
http://stackoverflow.com/questions/3358770/python-dictionary-is-thread-safe
? The way I read it, it seems pretty clear that deleting items from
a dict can lead to crashes in threaded code. (Well, he says that as long
as you aren't performing an assignment or a deletion in threaded code
there may be issues, but at least it shouldn't crash.)
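
Either way, the compound check-then-insert in get_data needs reglock even
if individual dict operations are atomic; without the lock:

data = registry.get(oid, None)   # threads A and B can both see None here
if data is None:
    data = make_data(oid)        # ...so both create a proxy for the same oid
    registry[oid] = data         # and one registration silently overwrites the other
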
For example if your proxy object has a __del__ method to clean up the
object it is proxying then you could be creating a new object with the same
oid as one that is in the process of being destroyed (the object disappears
from the WeakValueDictionary before the __del__ method is actually called).

Without knowing anything about HDF5 I don't know if that's a problem but I
could imagine you could end up creating a new proxy object that references
something in the HDF5 library which you then destroy as part of cleaning up
a previous incarnation of the object but continue to access through the new
proxy.

We started having problems when HDF5 began recycling oids as soon as
their reference count went to zero, which was why we began using
IDProxy and the registry. The IDProxy implementation below does have a
__dealloc__ method, which we use to decrease HDF5's internal reference
count for the oid. Adding these proxies and the registry dealt with the
issue of creating a new proxy that references an old oid (even in
non-threaded code), but it created a rare (though common enough)
segfault in multithreaded code. This synchronized registry is the best
I have been able to do, and it seems to address the problem. Could you
suggest another approach?

cdef IDProxy getproxy(hid_t oid):
    # Retrieve an IDProxy object appropriate for the given object identifier
    cdef IDProxy proxy
    proxy = registry.get(oid, None)
    if proxy is None:
        proxy = IDProxy(oid)
        registry[oid] = proxy

    return proxy


cdef class IDProxy:

    property valid:
        def __get__(self):
            return H5Iget_type(self.id) > 0

    def __cinit__(self, id):
        self.id = id
        self.locked = 0

    def __dealloc__(self):
        if (self.id > 0 and (not self.locked) and H5Iget_type(self.id)
                and H5Iget_type(self.id) != H5I_FILE):
            H5Idec_ref(self.id)
 
