ReadWriteLock on each "field" of a map, rather than the whole map?

easy

I would like to apply
java.util.concurrent.locks.ReentrantReadWriteLock
to a HashMap (or ConcurrentHashMap).

Most examples on the internet apply a ReadWriteLock to the whole map;
that is, every setter/write (no matter which "field") on the map blocks
every other getter (read) and every setter, even on a different field.

The reason is that I am building a cache map in front of a DB.
The map's setter runs on a read miss: inside the setter the object is
loaded from the DB, which may be very time-consuming.

Can I give each "field" of the map its own ReadWriteLock, rather than
locking the whole map? (Every new field would get a new ReadWriteLock,
and a write/read on one field would not affect the other fields.)

I do not know whether this gives better performance (while the DB load
for one field is in progress, reads of other fields would not be
blocked), or whether it is bad to create that many lock objects (there
may be very many fields in the map).

Does someone have an idea?
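In code, what I have in mind is roughly this; just a sketch, and the class and method names are made up:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: one ReadWriteLock per key instead of one lock for the whole map.
class PerKeyLockCache<K, V> {
    private final ConcurrentMap<K, V> values = new ConcurrentHashMap<K, V>();
    private final ConcurrentMap<K, ReadWriteLock> locks =
            new ConcurrentHashMap<K, ReadWriteLock>();

    // Lazily create the lock for a key; putIfAbsent keeps this atomic.
    private ReadWriteLock lockFor(K key) {
        ReadWriteLock lock = locks.get(key);
        if (lock == null) {
            ReadWriteLock fresh = new ReentrantReadWriteLock();
            lock = locks.putIfAbsent(key, fresh);
            if (lock == null) {
                lock = fresh;   // we won the race; use our lock
            }
        }
        return lock;
    }

    V get(K key) {
        ReadWriteLock lock = lockFor(key);
        lock.readLock().lock();
        try {
            return values.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    void put(K key, V value) {
        ReadWriteLock lock = lockFor(key);
        lock.writeLock().lock();
        try {
            values.put(key, value);   // the slow DB load would happen here
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```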
 
Roedy Green

Can I give each "field" of the map its own ReadWriteLock, rather than
locking the whole map?

There is no need to lock an object unless you are changing its fields.
Most likely your objects in the map are immutable, so never need
locking.

The fields you are worried about keeping consistent are internal to
the Map, e.g. the lookup table of hashes, and chain links in a
HashMap. To do that you must lock the entire HashMap.
Any time you add an element you potentially adjust a number of
different internal structures, but you won't change any of the
contained objects.
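As an illustration (my own made-up example, not part of the original reply), a value object like this is immutable, so once it is safely published it can be read from any number of threads without locking:

```java
// Sketch: an immutable value object. All fields are final and set in the
// constructor, so after safe publication it can be shared across threads
// and read without any locking.
final class CachedRow {
    private final int id;
    private final String name;

    CachedRow(int id, String name) {
        this.id = id;
        this.name = name;
    }

    int getId() { return id; }
    String getName() { return name; }
}
```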
 
easy

What if I applied the lock at a "semantic" level?

class MyClass { // no locks applied yet

    HashMap<Key, Obj> storage;

    Obj do_1(Key k) {
        return storage.get(k);
    }

    void do_2(Key k) {
        o = storage.get(k);
        if (o.notYetUpdated) {
            Obj o = update(); // time-consuming; I want it done only "once" per key
            o.setIsAlreadyUpdate();
        }
        storage.set(k, o);
    }
}

do_1 and do_2 may be executed concurrently, in any order, with any
possible keys, but I want calls that share the same "key" to be
serialized in a hand-over-hand read/write-lock manner.

Thanks for your comments. :p
 
Lew

easy said:
What if I applied the lock at a "semantic" level?

class MyClass { // no locks applied yet

    HashMap<Key, Obj> storage;

    Obj do_1(Key k) {
Don't use underscores in identifiers, except for compile-time constants.
        return storage.get(k);
    }

    void do_2(Key k) {
        o = storage.get(k);
        if (o.notYetUpdated) {
            Obj o = update(); // time-consuming; I want it done only "once" per key
            o.setIsAlreadyUpdate();
        }
        storage.set(k, o);
    }
}

do_1 and do_2 may be executed concurrently, in any order, with any
possible keys, but I want calls that share the same "key" to be
serialized in a hand-over-hand read/write-lock manner.

This will not work.

Since "o" is created newly from the update() method, why is it not created
"alreadyUpdated"? It's a different Obj from the one that has "notYetUpdated"
(which must be a public instance variable, a bad idiom). Or did you intend to
update a single instance of Obj? (Not a good name for a custom class, BTW.
Traditionally one uses "Foo" for examples.)

If do1 and do2 run concurrently, they will not synchronize. Updates from do2
might not be seen by callers of do1. You must establish /happens-before/ for
that to work, such as by synchronization on "storage" or the MyClass instance.
One way to do that is to build storage completely before starting the
threads that get() from it.

You will not get away with concurrent operations unless you synchronize one
way or another.

As to your goal of more fine-grained locking, your reach toward updating the
"Foo" object is in the right direction. However, you will need to synchronize
the entire test-and-set of the "updated" flag, not just the one or the other.
You do that by synchronizing on the Foo instance.

Also, since "storage" is shared data, the whole object must be
synchronized one way or another when changing its structure, i.e., when
building it and when reading it.
You can get away with unsynchronized get()s only if the Map construction
/happens-before/ all reads, i.e., before the Thread's start().

class Key
{
}

class Foo
{
    private boolean updated;
    public void update() { ...; updated = true; }
    public boolean isUpdated() { return updated; }
}

public class Eg // instances shared among threads
{
    private final Map<Key, Foo> storage =
            StorageFactory.makeStorage();

    @ThreadUnsafe
    public Foo getSnapshot( Key key )
    {
        return storage.get( key );
        // only sees storage.put()s from /happens-before/
    }

    public Foo getUpdated( Key key )
    {
        Foo foo = getSnapshot( key );
        synchronized ( foo )
        {
            if ( ! foo.isUpdated() )
            {
                foo.update();
            }
        }
        return foo;
    }
}
 
Hunter Gratzner

I would like to apply
java.util.concurrent.locks.ReentrantReadWriteLock
to a HashMap (or ConcurrentHashMap).

Using ConcurrentHashMap and an additional lock doesn't make much
sense. ConcurrentHashMap allows for concurrent updates and reads,
with minimized locking.

If you need to protect multiple map operations in a row in order to
keep the map consistent, then ConcurrentHashMap is not what you need;
you need an additional lock and a normal HashMap. With HashMap you are
in luck: the API documentation guarantees that concurrent reads are
permitted, so a "multiple readers, single writer" lock such as
ReentrantReadWriteLock is appropriate for this.

But I think you can get away with a ConcurrentHashMap.
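For example, if the only thing you need is an atomic check-then-insert, ConcurrentHashMap's putIfAbsent() already does it without any external lock. A sketch (the class and method names are mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PutIfAbsentDemo {
    static final ConcurrentMap<String, String> cache =
            new ConcurrentHashMap<String, String>();

    // Returns whichever value ended up in the map: ours, or the one some
    // other thread inserted first. putIfAbsent() is atomic, so no two
    // threads can both believe they inserted the value.
    static String cacheValue(String key, String value) {
        String previous = cache.putIfAbsent(key, value);
        return previous != null ? previous : value;
    }
}
```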

Most examples on the internet apply a ReadWriteLock to the whole map;
that is, every setter/write (no matter which "field") on the map blocks
every other getter (read) and every setter, even on a different field.

Yes, and that's how you do it, since HashMap cannot tolerate being
updated in parallel from different threads. If you need something
different, write your own implementation (good luck with that).

The reason is that I am building a cache map in front of a DB.
The map's setter runs on a read miss: inside the setter the object is
loaded from the DB, which may be very time-consuming.

That statement does not make sense. Why would you write-lock the map
during the time you fetch the data from the DB?

Consider the following pseudo code:

ConcurrentHashMap cache = ...        // thread safe
HashSet ongoingRequests = ...        // not thread safe, must lock explicitly

read(key) {
    if (value = cache.get(key)) {
        return value
    }

    //
    // To avoid loading the same key in parallel,
    // keep a set of keys for which DB requests
    // are currently in flight.
    //
    synchronize(ongoingRequests) {
        if (ongoingRequests.contains(key)) {
            while (!(value = cache.get(key))) {
                ongoingRequests.wait()
            }
            return value
        } else {
            // need to double-check
            if (value = cache.get(key)) {
                return value
            }
            ongoingRequests.add(key)
        }
    }

    value = requestValueFromDb(key)   // method needs to be thread safe
    cache.add(key, value)
    synchronize(ongoingRequests) {
        ongoingRequests.remove(key)
        ongoingRequests.notifyAll()
    }
    return value
}

Please note that you need to check carefully whether the above code is
indeed thread safe. I wrote it on the fly (never a good idea when
dealing with threads) and didn't think much about it (a big sin when
doing threads).
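A tidier variant of the same wait-for-the-loader idea is a per-key FutureTask: the first caller installs a task for the key and runs the DB request; every later caller for that key just blocks on the same task. Again only a sketch (the Loader interface is made up), with the same caveat as above:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

class Memoizer<K, V> {

    // Made-up callback that stands in for the slow DB request.
    interface Loader<A, B> {
        B load(A key);
    }

    private final ConcurrentMap<K, Future<V>> cache =
            new ConcurrentHashMap<K, Future<V>>();
    private final Loader<K, V> loader;

    Memoizer(Loader<K, V> loader) {
        this.loader = loader;
    }

    V get(final K key) {
        Future<V> f = cache.get(key);
        if (f == null) {
            FutureTask<V> task = new FutureTask<V>(new Callable<V>() {
                public V call() {
                    return loader.load(key);   // the slow DB request
                }
            });
            f = cache.putIfAbsent(key, task);   // atomic: one task per key
            if (f == null) {
                f = task;
                task.run();   // the winning thread performs the load
            }
        }
        try {
            return f.get();   // everyone else blocks here until it is done
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        }
    }
}
```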

Does someone have an idea?

Consider getting a good book about threading. Alternatively, and this
is a serious suggestion, don't use multi-threading in your
application.
 
easy

Really, thanks for your suggestions. :)


BTW, I just had a look at the Java source code.


In ConcurrentHashMap.put(...) a lock is used,
so no truly "concurrent" put() is possible.
Right?
 
Hunter Gratzner

In ConcurrentHashMap.put(...) a lock is used,
so no truly "concurrent" put() is possible.
Right?

No. You haven't said what kind of lock. You haven't said what the lock
is used for, or when. The lock might protect a few critical sections
under special conditions. And frankly said, I trust the API
documentation more than I trust you, and the API documentation states
that the class supports concurrent updates.

Further, you have stated that the DB access is the time-consuming
task. Why are you concerned about a likely very short, if any, write
lock in the map, when the DB access before the write takes "ages"? It
might make a difference while the cache is initially empty, but only
maybe; once the cache is loaded, a write to the map should be a rare
occurrence. And if writes are not rare, the whole cache maybe doesn't
make sense.

Again, get a good book on Java multi-threading. Alternatively, refrain
from using multi-threading.
 
