I guess this might be overkill then...
That depends on your target. For the *current* CPython implementation,
yes, because it has an internal lock. But other versions (like Jython or
IronPython) may not behave that way.
class MyList(list):
    def __init__(self):
        self.l = threading.Lock()
Better to use an RLock, and a more descriptive name than "l":

    self.lock = threading.RLock()

(One method may call another, and a plain Lock() won't let the same
thread acquire it a second time.)
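To make the re-entrancy point concrete, here is a minimal sketch (my own
example, not code from this thread) where one locked method calls another;
with a plain Lock() the extend() call would deadlock on its own append():

```python
import threading

class MyList(list):
    def __init__(self):
        list.__init__(self)
        self.lock = threading.RLock()  # re-entrant: the owning thread may re-acquire

    def append(self, val):
        self.lock.acquire()
        try:
            list.append(self, val)
        finally:
            self.lock.release()

    def extend(self, iterable):
        self.lock.acquire()            # first acquisition
        try:
            for val in iterable:
                self.append(val)       # re-acquires the same lock; a plain
                                       # Lock() would deadlock right here
        finally:
            self.lock.release()

ml = MyList()
ml.extend([1, 2, 3])
print(list(ml))   # prints [1, 2, 3]
```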
    def append(self, val):
        try:
            self.l.acquire()
            list.append(self, val)
        finally:
            if self.l.locked():
                self.l.release()
I'd write it as:
    def append(self, val):
        self.lock.acquire()
        try:
            list.append(self, val)
        finally:
            self.lock.release()
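The same acquire/try/finally pattern can also be written with a "with"
statement, since locks are context managers; a small self-contained sketch
(not part of the original post):

```python
import threading

class MyList(list):
    def __init__(self):
        list.__init__(self)
        self.lock = threading.RLock()

    def append(self, val):
        # Equivalent to acquire()/try/finally/release(), but shorter
        # and impossible to get the release path wrong.
        with self.lock:
            list.append(self, val)

ml = MyList()
ml.append(42)
print(ml)   # prints [42]
```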
...performing the same locking/unlocking in the other mutating methods
(e.g. remove, extend, etc.).
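Rather than hand-writing the same wrapper for every method, one could
generate the wrappers in a loop; a sketch of that idea (the method list
below is my own selection, not exhaustive):

```python
import threading
import functools

def _locked(name):
    method = getattr(list, name)
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        with self.lock:
            return method(self, *args, **kwargs)
    return wrapper

class MyList(list):
    def __init__(self, *args):
        list.__init__(self, *args)
        self.lock = threading.RLock()

# Wrap each mutating list method with the locking version.
for _name in ('append', 'remove', 'extend', 'insert', 'pop', 'sort', 'reverse'):
    setattr(MyList, _name, _locked(_name))

ml = MyList()
ml.extend([3, 1, 2])
ml.sort()
ml.remove(2)
print(list(ml))   # prints [1, 3]
```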
Note that even if you wrap *all* methods, operations like mylist += other
are still unsafe.
py> def f(self): self.mylist += other
....
py> import dis; dis.dis(f)
1 0 LOAD_FAST 0 (self)
3 DUP_TOP
4 LOAD_ATTR 0 (mylist)
7 LOAD_GLOBAL 1 (other)
10 INPLACE_ADD
11 ROT_TWO
12 STORE_ATTR 0 (mylist)
15 LOAD_CONST 0 (None)
18 RETURN_VALUE
INPLACE_ADD would call MyList.__iadd__, which you have wrapped. But you
have a race condition between that call and the following STORE_ATTR: a
context switch may happen in between.
It may not be possible to create an absolutely thread-safe list without
some help on the client side. (Comments, someone?)
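One form that client-side help can take: expose the lock and require
callers to hold it around compound statements like +=, so the whole
read-modify-write (including the STORE_ATTR) runs under the lock. A
sketch of the usage pattern only; the names (Holder, etc.) are my own,
and a single-threaded demo of course cannot exercise the race itself:

```python
import threading

class MyList(list):
    def __init__(self, *args):
        list.__init__(self, *args)
        self.lock = threading.RLock()

class Holder(object):
    def __init__(self):
        self.mylist = MyList()

holder = Holder()
other = [1, 2]

# += spans several bytecodes (LOAD_ATTR, INPLACE_ADD, STORE_ATTR), so the
# *caller* must hold the lock for the whole statement:
with holder.mylist.lock:
    holder.mylist += other   # __iadd__ mutates in place, STORE_ATTR rebinds

print(list(holder.mylist))   # prints [1, 2]
```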