Locking of nodes in tree for concurrent access


oliver789

Hello,

I'm working on a tree where the node classes store their child nodes
in a TreeMap. Now I don't want to lock the entire tree when elements
are added or removed. So I'm thinking of giving each node its
own ReentrantReadWriteLock. When I traverse down the tree I close each
node's own read write lock when looking for the appropriate child node
to pick to continue the traversal and open the lock once I have picked
the right child node and continue the traversal to the next deeper
level in the same way. Because there is a context switch possible
while jumping to the next lower node (previous lock was opened, new
lock of lower level not yet acquired), the class that holds the root
node has an AtomicInteger that counts the visitors of the tree. If
visitorCount >= 1 no thread must add or remove any nodes in the tree.
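To make the idea more concrete, here is a minimal sketch of the per-node
locking I mean (class and method names are just made up for illustration):

```java
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: each node carries its own
// ReentrantReadWriteLock and a TreeMap of children.
class Node {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final TreeMap<String, Node> children = new TreeMap<>();

    // Descend one level: hold this node's read lock only while picking
    // the child, then release it before moving on. Between the unlock
    // here and locking the next node there is the unprotected window
    // the visitor counter is meant to cover.
    Node descend(String key) {
        lock.readLock().lock();
        try {
            return children.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }
}

public class PerNodeLockDemo {
    public static void main(String[] args) {
        Node root = new Node();
        Node child = new Node();
        root.children.put("abc", child);
        System.out.println(root.descend("abc") == child);
        System.out.println(root.descend("xyz") == null);
    }
}
```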

So far I see no logical error in that approach. My concern is that
this approach might not be very efficient under heavy load (large
number of concurrent accesses of the tree), because for every visited
node in the tree the node's lock has to be opened and closed. That is
quite a bit of locking that is going on there. Does anybody have a
better idea or knows some articles that describe better approaches?

Thanks, Oliver Holp
 

Lew

So far I see no logical error in that approach. My concern is that
this approach might not be very efficient under heavy load (large
number of concurrent accesses of the tree), because for every visited
node in the tree the node's lock has to be opened and closed. That is

For a good understanding of Java concurrency in practice, an excellent
resource is /Java Concurrency in Practice/ by Brian Goetz, et al.

Java 6 has much more efficient handling of locks than earlier versions.
Still, your approach does seem very inefficient at first blush.

You also have to be very, very careful about deadlock. Without seeing an
SSCCE it's hard to be sure, but it looks from your description as though you
might acquire locks in different orders, a recipe for disaster.
 

oliver789

Hi Lew,

thanks for the answer. At least someone was able to understand what I
was trying to explain about what I was thinking of .. ;-). I should
have mentioned that I don't store the data in the leaf nodes as this
is usually the case but in the respective node somewhere within the
tree. Some item is categorized by N criterias (a criteria is the value
of a basic data type such as String, int, etc. including null). N is
typically rarely larger than, say, 5. Iterating over the list of
criterias, for every next criteria a new sub node is created.
You also have to be very, very careful about deadlock.
... but it looks from your description as though you
might acquire locks in different orders, a recipe for disaster.

All right. I see the point. I will have a look into the book you
mentioned. Maybe I get some hints there :)

Thanks, Oliver
 

Tom Anderson

I'm working on a tree where the node classes store their child nodes in
a TreeMap. Now I don't want to lock the entire tree when elements are
added or removed. So I'm thinking of giving each node its own
ReentrantReadWriteLock. When I traverse down the tree I close each
node's own read write lock when looking for the appropriate child node
to pick to continue the traversal and open the lock once I have picked
the right child node and continue the traversal to the next deeper level
in the same way. Because there is a context switch possible while
jumping to the next lower node (previous lock was opened, new lock of
lower level not yet acquired), the class that holds the root node has an
AtomicInteger that counts the visitors of the tree. If visitorCount >= 1
no thread must add or remove any nodes in the tree.

Lockpocalypse!

Firstly, i'm assuming that you don't do any rebalancing on this tree - i
think what you have is more like a hierarchical filesystem than the
traditional data-structures-and-algorithms idea of a tree. If i'm wrong,
then all bets are off!

Now, what do you mean by "no thread must add or remove any nodes in the
tree"? Do you mean that a thread that wants to do that cannot enter the tree,
or that it can walk the tree, but isn't allowed to modify it?

If the former, then your AtomicInteger mechanism is exactly equivalent to
a ReadWriteLock, with reader threads acquiring it for read, and writer
threads for write. Except that your version is less clear in intent, and,
i respectfully suggest, more likely to contain bugs. And you haven't
achieved the goal of not needing "to lock the entire tree when elements
are added or removed". If that is what you mean, then i'd suggest ripping
out your hand-crafted mechanism, and just using a normal
ReentrantReadWriteLock. Also, since the root lock prevents access by more
than one thread when a writer is active, you can dispense with the
per-node locking mechanism entirely. Thus, you have one lock for the whole
tree; you have excellent concurrency in a read-rich environment, and
terrible concurrency in a write-rich environment.
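A minimal sketch of that single-lock variant, with illustrative names:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// One ReadWriteLock for the whole tree: readers acquire it for read,
// writers for write. All names here are illustrative.
public class WholeTreeLockDemo {
    private static final ReentrantReadWriteLock treeLock = new ReentrantReadWriteLock();
    private static int size = 0; // stands in for the whole tree structure

    static int read() {
        treeLock.readLock().lock();  // many readers may hold this at once
        try { return size; }
        finally { treeLock.readLock().unlock(); }
    }

    static void write() {
        treeLock.writeLock().lock(); // writers get exclusive access
        try { size++; }
        finally { treeLock.writeLock().unlock(); }
    }

    public static void main(String[] args) {
        write();
        write();
        System.out.println(read());
    }
}
```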

If you meant that writers can enter the tree, but not make changes, then
the situation is less clear. You have achieved your goal of not locking
the entire tree, but it's not really clear to me how you can make this
work. If a writer has found the position where it wants to make a change,
how does it wait for the visitorCount to go to zero? What happens if there
are two writers that are in that position? What happens if they both want
to change the same node? What kind of lock does a writer use as it
descends the tree before modifying it - read or write? Even if you have
solutions to all those problems, you still have poor concurrency for
writes, and unnecessarily so - two writes to unrelated nodes should be
able to happen simultaneously, since they can't interfere with one
another. However, i think you can dispense with the root lock completely,
and rely on the per-node read-write lock: just acquire it for writing if
you want to modify the node. I *think* you don't need to worry about the
window of vulnerability that exists between unlocks and locks when
descending the tree - or rather, i can't see how closing it would protect
you. As long as there's read-write locking of a node which is being
examined, you won't have threading problems with the TreeMaps. Are you
worried about a writer overtaking a reader while descending the tree? If
it did, why would that be a problem?

If you did want to close the window of vulnerability, that's easily done:
as you move from one node to the next, rather than unlocking the old node
and then locking the new one, lock the new one first, and then unlock the
old one. However, if writers descend the tree using read locks, then this
won't give you any protection.
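A sketch of that lock-coupling descent (all names illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Lock coupling ("hand-over-hand"): lock the child before unlocking the
// parent, so no window is ever uncovered during descent.
class CoupledNode {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final Map<String, CoupledNode> children = new HashMap<>();
}

public class LockCouplingDemo {
    // Descend along a path of keys, always holding at least one read lock.
    static CoupledNode find(CoupledNode root, String... keys) {
        CoupledNode current = root;
        current.lock.readLock().lock();
        for (String key : keys) {
            CoupledNode next = current.children.get(key);
            if (next == null) {
                current.lock.readLock().unlock();
                return null;
            }
            next.lock.readLock().lock();      // lock child first ...
            current.lock.readLock().unlock(); // ... then release parent
            current = next;
        }
        current.lock.readLock().unlock();
        return current;
    }

    public static void main(String[] args) {
        CoupledNode root = new CoupledNode();
        CoupledNode a = new CoupledNode();
        CoupledNode b = new CoupledNode();
        root.children.put("a", a);
        a.children.put("b", b);
        System.out.println(find(root, "a", "b") == b);
        System.out.println(find(root, "a", "x") == null);
    }
}
```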

My overall suggestion would be for you to post a complete, consistent
formal description of your tree's structure, and the algorithms used to
walk it for insertion, retrieval and deletion. The ideal format for this
would be java code.

Could you use a completely different approach? If each entry in the tree
is identified by a sequence of criteria, then could you form these into
lists, and use those as keys in a HashMap? Then you just protect the whole
thing with a read-write lock. You have no concurrency for writes, but at
least writes should be very fast.
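A rough sketch of that flattened variant, with made-up names:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Instead of a tree, use the whole criteria sequence as a single map
// key. List.equals/hashCode are element-wise, so equal criteria
// sequences hit the same entry. Illustrative sketch only.
public class FlatMapDemo {
    public static void main(String[] args) {
        Map<List<Object>, String> space = new HashMap<>();

        // Arrays.asList allows null elements, matching the "null is a
        // wildcard criteria" convention from this thread.
        space.put(Arrays.asList("abc", null, 123), "entry-1");

        System.out.println(space.get(Arrays.asList("abc", null, 123)));
        System.out.println(space.get(Arrays.asList("abc", null, 999)));
    }
}
```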

tom
 

Tom Anderson

You also have to be very, very careful about deadlock. Without seeing
an SSCCE it's hard to be sure, but it looks from your description as
though you might acquire locks in different orders, a recipe for
disaster.

AIUI, under his algorithm, a thread won't ever hold more than one lock at
once, so deadlocks won't happen.

But clearly, the description is not sufficiently clear to be sure.

tom
 

Tom Anderson

AIUI, under his algorithm, a thread won't ever hold more than one lock at
once, so deadlocks won't happen.

Aargh, except the AtomicInteger thing is effectively a lock, so you have
to think about deadlocks around that. And in one possible interpretation
of the OP's description, deadlocks would occur pretty much every time two
writers came along at the same time.
But clearly, the description is not sufficiently clear to be sure.

But i stand by that!

tom
 

oliver789

Hi tom,

thanks for your answer. I appreciate your comments that I consider
useful. First my comments to your suggestions. Then some better ideas
I got in the meanwhile.

Cheers, Oliver
Thus, you have one lock for the whole
tree; you have excellent concurrency in a read-rich environment, and
terrible concurrency in a write-rich environment.

Right. That is why I got the idea to close the child's own lock when
descending one level down and then releasing the parent's lock to
achieve better concurrency for writes.
Are you
worried about a writer overtaking a reader while descending the tree? If
it did, why would that be a problem?

Oh shit. I overlooked this problem. Good point.
as you move from one node to the next, rather than unlocking the old node
and then locking the new one, lock the new one first, and then unlock the
old one.

Yes, I had that Idea as well yesterday evening ;-).
If each entry in the tree
is identified by a sequence of criteria, then could you form these into
lists, and use those as keys in a HashMap?

This is exactly what I'm doing. The values in the map are the child
nodes of the next lower level. The node objects hold the lock for this
level and other objects needed for house-keeping. This way no
re-balancing can be done. But I guess that this solution is sufficient
for my purposes.

In the meanwhile I discovered the classes ConcurrentHashMap and
ConcurrentLinkedQueue. I always knew about them but not that they are
non-blocking and that ConcurrentHashMap has a method putIfAbsent. My
idea is now to solve everything with ConcurrentHashMap.putIfAbsent: I
iterate over the list of criterias, for every criteria I do a
putIfAbsent. If the Node already existed in the Map I get the existing
back, otherwise the new Node object. ConcurrentHashMap.putIfAbsent is
atomic and non-blocking. Ideal for my problem I would say :). Then I
use the new Node to traverse down the next level with the next
criteria in the criteria list. When the leaf node for the iteration
over the criterias has been reached the element is added to a
ConcurrentLinkedQueue, which is also non-blocking. If another element
is added with the same criteria list, it is simply pushed onto the
ConcurrentLinkedQueue.
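Roughly like this (names made up; note that ConcurrentHashMap does not
allow null keys, so a null criteria would need a sentinel object):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the putIfAbsent descent: walk the criteria list, creating
// missing child nodes atomically, then push the element onto the
// leaf's queue. Class and field names are illustrative.
class SpaceNode {
    final ConcurrentHashMap<Object, SpaceNode> children = new ConcurrentHashMap<>();
    final ConcurrentLinkedQueue<String> entries = new ConcurrentLinkedQueue<>();

    void add(String element, Object... criteria) {
        SpaceNode node = this;
        for (Object c : criteria) {
            SpaceNode fresh = new SpaceNode();
            // putIfAbsent returns the previous mapping, or null if the
            // fresh node was installed - exactly one winner per key.
            SpaceNode previous = node.children.putIfAbsent(c, fresh);
            node = (previous != null) ? previous : fresh;
        }
        node.entries.add(element);
    }
}

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        SpaceNode root = new SpaceNode();
        root.add("first", "abc", 123);
        root.add("second", "abc", 123); // same criteria: same leaf queue
        SpaceNode leaf = root.children.get("abc").children.get(123);
        System.out.println(leaf.entries.size());
        System.out.println(leaf.entries.peek());
    }
}
```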

Problem is now the case where the last element in a
ConcurrentLinkedQueue is removed leaving the ConcurrentLinkedQueue
empty. The node holding it should therefore be removed from the tree
to reclaim memory. I'm thinking of something like subclassing
ConcurrentHashMap and adding a method "removeAndReclaimIfEmpty" also
non-blocking analogously to putIfAbsent. Problem is that if I use a
lock to solve this problem, I have to close the lock not only when
removing the empty node, but of course also when adding an element to
the node. And then my nice efficient solution with putIfAbsent is
overthrown.

Another idea might be to have a lock in my root node that I don't
close for read or write (or a latch or binary semaphore). It is only
closed when I want to remove an empty node, which is only done when
there is no reader or writer. I could use my AtomicInteger counter for
this to see whether there is no thread active in the tree. Removing
empty nodes could be done when the tree is "idle". It does not need to
be done here and now. This seems to be fair enough.
My overall suggestion would be for you to post a complete, consistent
formal description of your tree's structure, and the algorithms used to
walk it for insertion, retrieval and deletion. The ideal format for this
would be java code.

Okay, I will do that once I get home. Don't have the code handy
here :).
 
 

Lew

Please attribute.


oliver:
Oh shit. I overlooked this problem. Good point.
Tom:
as you move from one node to the next, rather than unlocking the old node
and then locking the new one, lock the new one first, and then unlock the
old one.
oliver:
Yes, I had that Idea as well yesterday evening ;-).

Deadlock warning: Make sure that one locker isn't waiting to lock the
new node while another locker that holds the new node is trying to
unlock the first one's old node.

My head spins trying to analyze this locking pattern. It might be too
hairy to be readily maintainable. Anyway, if you do this, be very,
very careful about deadlock.

o:
Problem is now the case where the last element in a
ConcurrentLinkedQueue is removed leaving the ConcurrentLinkedQueue
empty. The node holding it should therefore be removed from the tree
to reclaim memory. I'm thinking of something like subclassing
ConcurrentHashMap and adding a method "removeAndReclaimIfEmpty" also

That could be tricky to implement. ConcurrentHashMap gets its
concurrency efficiency by lock striping, and I don't see from the
documentation that any of its locking mechanism is accessible to
subclasses.
non-blocking analogously to putIfAbsent. Problem is that if I use a
lock to solve this problem, I have to close the lock not only when
removing the empty node, but of course also when adding an element to
the node. And then my nice efficient solution with putIfAbsent is
overthrown.

The problem is worse than that - you have to use the exact same locks
that ConcurrentHashMap already uses to manage concurrency, or you will
have big trouble.
 

oliver789

Hi Lew,

thanks for your comments. After all, you don't see a problem with the
solution with ConcurrentHashMap.putIfAbsent, do you? Because this is
the solution I want to go with. Then the problem with removing empty
nodes remains. I would go with a special solution here. Something like
this:

1) After remove of element, check whether node has no child nodes and
element list is empty. No locking for this for efficiency, which means
that in a critical section before finally removing the node a test has
to be done to verify that there still aren't any children and the
element list is still empty.

2) If after remove of element node turns out to be empty (no child
nodes, no elements), call a method in the holder of the root node to
remove the empty node with the empty node being passed on as a
parameter

3) RootNodeHolder.removeNode(aNode) is roughly implemented like this:

a) Close a binary semaphore in the parent of the node to be removed.
This semaphore is not closed for reads or writes to minimize lock
detention.
b) "Wait" till there is no other thread in that node, because closing
the semaphore only makes sure no thread can pass by the semaphore
after the one to remove the node but does not make sure that there was
no other thread in that section before the semaphore was closed
c) remove the node and open the binary semaphore

I don't know yet how to do the "wait" in 3b. I would need an atomic
counter again to see whether there is some other thread in the
critical section. This is not really elegant. Furthermore, the atomic
counter needs to increment and decrement for *every* operation on the
node, which renders my solution with ConcurrentHashMap.putIfAbsent
somewhat less efficient ...

Regards, Oliver
 

Lew

Please do not top-post.

Then the problem with removing empty
nodes remains. I would go with a special solution here. Something like
this:

Why not just use the 'remove()' method of ConcurrentHashMap? It
"[r]emoves the entry for a key only if currently mapped to a given
value."

The Javadocs know all, the Javadocs tell all.
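For example (illustrative names) - the two-argument remove deletes the
mapping only if the key is still bound to the expected value, as one
atomic step, with no extra semaphore:

```java
import java.util.concurrent.ConcurrentHashMap;

// Demonstrates ConcurrentHashMap.remove(key, value): the entry is
// removed only when the key is currently mapped to the given value.
public class ConditionalRemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> children = new ConcurrentHashMap<>();
        children.put("abc", "node-1");

        // Fails: "abc" is mapped to "node-1", not "node-2".
        System.out.println(children.remove("abc", "node-2"));
        // Succeeds: key and value both match.
        System.out.println(children.remove("abc", "node-1"));
        System.out.println(children.containsKey("abc"));
    }
}
```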
1) After remove of element, check whether node has no child nodes and
element list is empty. No locking for this for efficiency, which means
that in a critical section before finally removing the node a test has
to be done to verify that there still aren't any children and the
element list is still empty.

You have to use the same locks for puts as you do for removes.
2) If after remove of element node turns out to be empty (no child
nodes, no elements), call a method in the holder of the root node to
remove the empty node with the empty node being passed on as a
parameter

3) RootNodeHolder.removeNode(aNode) is roughly implemented like this:

a) Close a binary semaphore in the parent of the node to be removed.

If you have ConcurrentHashMap, why do you have another semaphore?
This semaphore is not closed for reads or writes to minimize lock
detention [sic].

I don't understand how this would work.
b) "Wait" till there is no other thread in that node, because closing
the semaphore only makes sure no thread can pass by the semaphore
after the one to remove the node but does not make sure that there was
no other thread in that section before the semaphore was closed

You lost me here.
c) remove the node and open the binary semaphore

and here.
I don't know yet how to do the "wait" in 3b. I would need an atomic
counter again to see whether there is some other thread in the
critical section. This is not really elegant. Furthermore, the atomic
counter needs to increment and decrement for *every* operation on the
node, which renders my solution with ConcurrentHashMap.putIfAbsent
somewhat less efficient ...

You have so many locks and semaphores running around that I don't see
how it can ever work.

Provide an SSCCE. All this description makes me confused, and I still
don't know what you're aiming for.
 

oliver789

Hello,

below the code of my TupleSpaceNode class. The class won't compile
since referenced classes are not included. But the basic ideas can be
seen from the code. The problem with removing a node that after a take
becomes empty is not yet dealt with in the code snippet below. I
appreciate any input about alternative approaches or possible logical
errors.

Regards, Oliver


public class TupleSpaceNode
{
    protected static int LastId = 0;

    protected int internalId = -1;

    // Use CopyOnWriteArrayList for takeListeners and writeListeners since rarely
    // objects are added or removed from the listeners lists, but they are often
    // being iterated over during take or write. This way we get around defining
    // locks for iterating over these lists.
    protected List<TakeListener> takeListeners = new CopyOnWriteArrayList<TakeListener>();
    protected List<WriteListener> writeListeners = new CopyOnWriteArrayList<WriteListener>();

    protected ConcurrentLinkedQueue<Entry> entries = new ConcurrentLinkedQueue<Entry>();

    protected ConcurrentSkipListMap<Comparable<?>, TupleSpaceNode> children =
        new ConcurrentSkipListMap<Comparable<?>, TupleSpaceNode>();

    public TupleSpaceNode()
    {
        internalId = ++LastId;
    }

    public Entry read(Entry template, MatchKeyIterator matchKeyIterator)
    {
        TupleSpaceNode node = this;

        while (matchKeyIterator.hasNext() && node != null)
            node = node.children.get(matchKeyIterator.next());

        if (node == null)
            return null;

        return node.entries.peek();
    }

    public Entry take(Entry template, MatchKeyIterator matchKeyIterator)
    {
        TupleSpaceNode node = this;

        while (matchKeyIterator.hasNext() && node != null)
            node = node.children.get(matchKeyIterator.next());

        if (node == null)
            return null;

        Entry entry = node.entries.poll();

        if (entry != null)
        {
            invokeTakeListeners(entry);
            return entry;
        }

        return null;
    }

    public Lease writeSingleton(Entry entry, MatchKeyIterator matchKeyIterator)
        throws EntryAlreadyExistsException
    {
        if (!matchKeyIterator.hasNext())
        {
            // since writeSingleton can be expected to be used rarely (inserting
            // flags to deal with concurrency, etc.) synchronization using
            // synchronized is considered a good performance / concurrency trade-off
            synchronized (entries)
            {
                if (entries.isEmpty())
                {
                    entries.add(entry);
                    invokeWriteListeners(entry);
                }
                else
                    throw new EntryAlreadyExistsException(entry);
            }
            return new DummyLease();
        }

        TupleSpaceNode child = new TupleSpaceNode();
        TupleSpaceNode previousChild = children.putIfAbsent(matchKeyIterator.next(), child);
        if (previousChild != null)
            child = previousChild;

        return child.writeSingleton(entry, matchKeyIterator);
    }

    protected void invokeTakeListeners(Entry entry)
    {
        Iterator<TakeListener> cowIterator = takeListeners.iterator();
        while (cowIterator.hasNext())
            cowIterator.next().takeFromSpace(entry);
    }

    protected void invokeWriteListeners(Entry entry)
    {
        Iterator<WriteListener> cowIterator = writeListeners.iterator();
        while (cowIterator.hasNext())
            cowIterator.next().writtenToSpace(entry);
    }

    public int getInternalId()
    {
        return internalId;
    }

    public Lease write(Entry entry, MatchKeyIterator matchKeyIterator)
    {
        if (!matchKeyIterator.hasNext())
        {
            entries.add(entry);
            invokeWriteListeners(entry);
            return new DummyLease();
        }

        TupleSpaceNode child = new TupleSpaceNode();
        TupleSpaceNode previousChild = children.putIfAbsent(matchKeyIterator.next(), child);
        if (previousChild != null)
            child = previousChild;

        return child.write(entry, matchKeyIterator);
    }
}
 

oliver789

I have some idea now how to remove empty nodes that is thread-safe,
simple and offers a good degree of concurrency. Let's start from the
beginning ... ;-). Objects are stored in the tree according to the
contents of variables (flagged with an annotation as match key
variables):

MyEntry entry = new MyEntry();
entry.criteria1 = "abc";
entry.criteria2 = null; // null means that any value for criteria2 will match
entry.criteria3 = 123;

When storing the above defined object entry in the tree an iterator is
created with the criteria values "abc", null, and 123. This iterator
is created when any of the methods of the TreeNode class such as read,
write, take is invoked. These methods store the newly created iterator
in a list of active iterations through the tree. When the read, write,
take method has finished execution the iterator is signaled which
results in its removal from the iterator list. This way, the number of
threads currently traversing the tree can be seen from the iterator
list without having to keep any atomic counters, semaphores or
whatever anywhere within the tree.

A specific iterator class is created for the iteration through the
tree that has a next() method which can be blocked (binary semaphore,
latch or whatever). In the code I posted previously I already
reference a special iterator class MatchKeyIterator that would be
extended to implement this behavior. This way the iteration through
the tree by some other thread than the one to remove a node can be
temporarily suspended when the parent node is reached, of which a
child node is about to be removed. When the node removal is finished
all iterators in the iterator list that iterate down to the same
parent node of which the child is about to be removed are signaled and
then resume iteration.
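A very rough, single-threaded sketch of the registration part (all names
made up; the actual blocking coordination between remover and iterators
is the hard part and is not shown):

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Traversals register themselves in a shared list, and each iterator's
// next() passes through a gate that a node-removing thread could close.
class MatchKeyIterator implements Iterator<Object> {
    static final ConcurrentLinkedQueue<MatchKeyIterator> active =
        new ConcurrentLinkedQueue<>();
    private final Iterator<Object> keys;
    private final Semaphore gate = new Semaphore(1); // closed during removal

    MatchKeyIterator(List<Object> criteria) {
        keys = criteria.iterator();
        active.add(this); // visible to the remover instead of a counter
    }

    public boolean hasNext() { return keys.hasNext(); }

    public Object next() {
        gate.acquireUninterruptibly(); // would block while a removal runs
        try { return keys.next(); }
        finally { gate.release(); }
    }

    void finished() { active.remove(this); }
}

public class IteratorRegistryDemo {
    public static void main(String[] args) {
        MatchKeyIterator it = new MatchKeyIterator(List.of("abc", 123));
        System.out.println(MatchKeyIterator.active.size());
        System.out.println(it.next());
        it.finished();
        System.out.println(MatchKeyIterator.active.size());
    }
}
```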

I think this is a doable solution. It also only locks the branches in
the tree affected by the node removal by blocking only the iterators
that iterate along these branches.

Hope this was a good idea. What do you think? If any body is
interested I can put the thing on my homepage for download once it is
implemented. Shouldn't be all that much.

Regards, Oliver
 

oliver789

Please do not top-post.

Then the problem with removing empty
nodes remains. I would go with a special solution here. Something like
this:

Why not just use the 'remove()' method of ConcurrentHashMap?  It
"[r]emoves the entry for a key only if currently mapped to a given
value."

Every node holds a list of elements that fulfill all the search
criterias. When an element of that list is removed >and< the node has
no children (the children are nodes as well not elements to be stored
in the tree), then the node can be removed. Because 2 conditions need
to be checked a race condition is possible between those 2 checks. So
additionally synchronization is needed here. Then removing the node
requires moving up to the parent node from where the empty child node
can be removed. During moving up to the parent node a new element may
be added to the node that before was empty. Without use of additional
synchronization objects this would remain unnoticed and the node that
meanwhile isn't empty any more would be removed.
If you have ConcurrentHashMap, why do you have another semaphore?

To make sure nobody can add an element to the node that due to a
previous element removal became empty.
 

Lew

criterias.

Sorry for the off-topic response, but the word is "criteria". The singular is
"criterion".

As to your locking strategy, it still seems hopelessly complex.
 

Lew

// Use CopyOnWriteArrayList for takeListeners and writeListeners since rarely
// objects are added or removed from the listeners lists, but they are often
// being iterated over during take or write. This way we get around defining
// locks for iterating over these lists.

Actually, you just push the locks off to the library class. That is often a
good idea, but not sure it's totally necessary here.

protected List<TakeListener> takeListeners = new CopyOnWriteArrayList<TakeListener>();
protected List<WriteListener> writeListeners = new CopyOnWriteArrayList<WriteListener>();

protected ConcurrentLinkedQueue<Entry> entries = new ConcurrentLinkedQueue<Entry>();

protected ConcurrentSkipListMap<Comparable<?>, TupleSpaceNode> children =
    new ConcurrentSkipListMap<Comparable<?>, TupleSpaceNode>();

Comparable<?> doesn't guarantee comparability between different keys.

public TupleSpaceNode()
{
internalId = ++LastId;

This isn't synchronized, which will cause trouble.

Variable names by convention should begin with a lower-case letter.
}

public Entry read(Entry template, MatchKeyIterator
matchKeyIterator)
{
TupleSpaceNode node = this;

while (matchKeyIterator.hasNext() && node != null)
node = node.children.get(matchKeyIterator.next());

if (node == null)
return null;

return node.entries.peek();
}

public Entry take(Entry template, MatchKeyIterator
matchKeyIterator)
{
TupleSpaceNode node = this;

while (matchKeyIterator.hasNext() && node != null)
node = node.children.get(matchKeyIterator.next());

if (node == null)
return null;

Entry entry = node.entries.poll();

if (entry != null)
{
invokeTakeListeners(entry);
return entry;
}

return null;
}

public Lease writeSingleton(Entry entry, MatchKeyIterator matchKeyIterator)
    throws EntryAlreadyExistsException
{
    if (!matchKeyIterator.hasNext())
    {
        // since writeSingleton can be expected to be used rarely (inserting
        // flags to deal with concurrency, etc.) synchronization using
        // synchronized is considered a good performance / concurrency trade-off
        synchronized (entries)

'entries' is already a concurrent structure.
{
if (entries.isEmpty())
{
entries.add(entry);
invokeWriteListeners(entry);
}
else
throw new EntryAlreadyExistsException(entry);

How do you know that 'entry' already exists simply by the queue not being
empty? Couldn't there be a different entry in there?
}
return new DummyLease();
}

TupleSpaceNode child = new TupleSpaceNode();
TupleSpaceNode previousChild =
children.putIfAbsent(matchKeyIterator.next(), child);
if (previousChild != null)
child = previousChild;

return child.writeSingleton(entry, matchKeyIterator);
}

This little section has a bunch of unsynchronized stuff happening. The check
for 'previousChild' being null looks like a data race. I can't be sure. The
recursive call to 'writeSingleton()' disturbs me.
protected void invokeTakeListeners(Entry entry)
{
Iterator<TakeListener> cowIterator = takeListeners.iterator();
while (cowIterator.hasNext())
cowIterator.next().takeFromSpace(entry);
}

I also fear this business of storing and retrieving iterators. It looks like
a 'ConcurrentModificationException' waiting to happen.
protected void invokeWriteListeners(Entry entry)
{
Iterator<WriteListener> cowIterator =
writeListeners.iterator();
while (cowIterator.hasNext())
cowIterator.next().writtenToSpace(entry);
}

Too boggled to continue.
 

oliver789

Comparable<?> doesn't guarantee comparability between different keys.

You probably mean between keys of different type. Yes, I will change
the type parameter back to Serializable (because of RMI) as I had it
to begin with... ;-). Thanks.
'entries' is already a concurrent structure.

There can be a context switch between entries.isEmpty() and
entries.add(...). That's why. Problem is that if I use a mutex here I
must also use it in all other places where elements are added or
removed from <entries> what I haven't done. Shame on me. Need to look
for a better solution for writeSingleton that works without a mutex.
Think I get an idea why many tuple space implementations have no
writeSingleton method ... ;-).
How do you now that 'entry' already exists simply by the queue not being
empty? Couldn't there be a different entry in there?

Entries of a different type or entries of the same type but with
different criteria values would be stored in a different node.
This little section has a bunch of unsynchronized stuff happening. The check
for 'previousChild' being null looks like a data race. I can't be sure.

If a context switch happened, previousChild would still contain the
value that was first associated with the key. The purpose of
putIfAbsent is to perform the check and the insert as a single atomic
operation, so a context switch cannot result in storing the wrong
object in previousChild.
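As a minimal illustration of that idiom (Node stands in for TupleSpaceNode, and String keys are an assumption, not code from the tree):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// The map decides atomically which thread's freshly created child
// "wins", so every caller ends up holding the same node.
class Node {
    final ConcurrentMap<String, Node> children = new ConcurrentHashMap<>();

    Node childFor(String key) {
        Node child = new Node();
        Node previous = children.putIfAbsent(key, child);
        // If another thread got there first, previous is its node; use it.
        return previous != null ? previous : child;
    }
}
```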
I also fear this business of storing and retrieving iterators. It looks like
a 'ConcurrentModificationException' waiting to happen.

That's why a CopyOnWriteArrayList is used: its iterator() returns an
iterator that works on a snapshot of the underlying array, hence the
name CopyOnWriteArrayList. This way CopyOnWriteArrayList is guaranteed
not to throw ConcurrentModificationException. Have a look at the
javadocs for CopyOnWriteArrayList for a deeper explanation.
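A tiny demonstration of those snapshot semantics (CowDemo and snapshotSize are illustrative names, not code from the tuple space):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

// An iterator obtained before a concurrent add still sees the old
// contents and never throws ConcurrentModificationException.
class CowDemo {
    static int snapshotSize() {
        CopyOnWriteArrayList<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("first");
        Iterator<String> it = listeners.iterator(); // snapshot of ["first"]
        listeners.add("second"); // safe: does not invalidate the iterator
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        return seen; // the snapshot does not include "second"
    }
}
```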

Regards, Oliver
 
L

Lew

You forgot to attribute the quote. Lew said:
oliver:
You probably mean between keys of different type. Yes, I will change

I meant "different keys", as evidenced by the fact that I said
"different keys". It was the fact that you do not guarantee that the
keys are of comparable types that makes it not a guarantee of
comparability between different keys.

It would have been trivial and meaningless to say, "You do not
guarantee comparability between different key types"; it was
"comparability between different keys" that was the important
statement.
the type parameter back to Serializable (because of RMI) as I had it
to begin with... ;-). Thanks.

Is it really the "Serializable"ness of the keys that is key (sorry) to
their presence in the Map?

Somehow I don't think so.
There can be a context switch between entries.isEmpty() and
entries.add(...). That's why. The problem is that if I use a mutex here
I must also use it in all other places where elements are added to or
removed from 'entries', which I haven't done. Shame on me. Need to look
for a better solution for writeSingleton that works without a mutex.
Think I get an idea why many tuple space implementations have no
writeSingleton method ... ;-).

The problem is you are using different locks for different purposes,
and throwing locks willy-nilly at the problem without coordination or
control between them.

Lew:
oliver:
Entries of a different type or entries of the same type but with
different criteria values would be stored in a different node.

So each 'entries' can only contain exactly one entry or no entry. Why
does it need to be a List then?
If a context switch happened, previousChild would still contain the
value that was first associated with the key. The purpose of
putIfAbsent is to perform the check and the insert as a single atomic
operation, so a context switch cannot result in storing the wrong
object in previousChild.

What worries me is that different locks control the same structure for
different purposes. Your use was too complex for me to reason fully
about it, at least in the time I'm willing to give it, and that by
itself is a major flaw in the design.

When you access structures via different locks, you are failing to
synchronize access. To synchronize properly, everything must
coordinate via the same locking mechanism. Everything that touches
the common data must use common locks.
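To sketch what I mean (class and field names here are hypothetical, not your code): every method that touches the shared structure goes through the one lock, so the check-then-act becomes atomic with respect to removal.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of "everything through one lock": both the
// singleton write and the removal synchronize on the same monitor,
// so the isEmpty()/add() sequence is atomic with respect to take().
class GuardedEntries<E> {
    private final Object lock = new Object(); // the single lock for entries
    private final Deque<E> entries = new ArrayDeque<>();

    boolean writeSingleton(E entry) {
        synchronized (lock) {
            if (!entries.isEmpty())
                return false; // an entry already exists
            entries.add(entry);
            return true;
        }
    }

    E take() {
        synchronized (lock) { // same lock, same data
            return entries.poll();
        }
    }
}
```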
That's why a CopyOnWriteArrayList is used: its iterator() returns an
iterator that works on a snapshot of the underlying array, hence the
name CopyOnWriteArrayList. This way CopyOnWriteArrayList is guaranteed
not to throw ConcurrentModificationException. Have a look at the
javadocs for CopyOnWriteArrayList for a deeper explanation.

Ok.

Your concurrency stuff still doesn't look right, but it's far too
complex for me to analyze.
 
O

oliver789

If anyone is interested in the implementation drop me a mail at
oliver78931.12.2008web.de (replace date stamp with @). I will reply
once I'm done with it.

Regards, Oliver
 
