Dmitriy Vyukov said:
> Maybe it's better to use only 1 table of mutexes for acquire/release
> and for copy/swap?
Well, there _is_ a very important caveat with hashed locking that basically
prohibits any one thread from taking two locks from the same table at any
one time:
http://groups.google.com/group/microsoft.public.win32.programmer.kernel/msg/f7297761027a9459
I believe that two tables are necessary here; otherwise, I would have to use an
analog of two-phase locking on the copy operation:
http://groups.google.com/group/comp.programming.threads/browse_frm/thread/e0c011baf08844c4
That can arrange a locking order in a way that completely prevents any
thread from deadlocking. I was thinking about using it to do an STM that is
compatible with C + PThreads...
> This will increase the possibility of contention. But the table must be
> large enough anyway to keep contention to a minimum.
Indeed.
> But the copy operation will have to acquire/release only one mutex instead
> of two. I think that copy is a relatively frequently issued operation.
IMO, the "basic" problem is that copy needs to prevent any other thread
from performing any mutations on the shared pointer location (e.g., a swap
operation) _and_ on the reference counter. The shared location and the counter
are essentially "non-related" in the sense that if you are going to
serialize access to both locations, you need multi-word interlocked
operations, STM, two-phase locking, the method I use in the mostly
lock-free refcount, or of course you can rely on the good ol' "big-global"
locking scheme... Pointers to shared locations _and_ pointers to the actual
refcount objects which are bound to the same mutex table could potentially
map into it in a way that's prone to deadlock... This is due to a thread
violating the "static" locking hierarchy implied by the mutex table, which
ends up being a full-blown race condition. IMO, the only "positive" point
in that case is probably the fact that the race condition will cause a
deadlock rather than silently corrupt some important data...
;^0