Gerald Thaler
Hello
The double-checked locking idiom (the thread-safe singleton pattern) now works
correctly under the current memory model:
public class MySingleton {
    // volatile is what makes DCL safe under the new memory model: the
    // writes in the constructor happen-before any read that sees the
    // published reference.
    private static volatile MySingleton instance;

    private MySingleton() {}

    public static MySingleton getInstance() {
        if (instance == null) {                  // first check: volatile read only
            synchronized (MySingleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new MySingleton();
                }
            }
        }
        return instance;
    }
}
Nevertheless, the Java experts still discourage its use, claiming that there
is no significant performance advantage over simply synchronizing the whole
getInstance() method, since a volatile variable access amounts to (half of) a
synchronization.
For example, see:
http://www-106.ibm.com/developerworks/library/j-jtp03304/?ca=dnt-513 [Brian Goetz]
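
For comparison, "simply synchronizing the whole getInstance() method" would
look something like this (just a sketch for reference; the class name
SynchronizedSingleton is mine):

public class SynchronizedSingleton {
    private static SynchronizedSingleton instance;

    private SynchronizedSingleton() {}

    // The monitor of SynchronizedSingleton.class is acquired on every
    // call, even long after the instance has been created.
    public static synchronized SynchronizedSingleton getInstance() {
        if (instance == null) {
            instance = new SynchronizedSingleton();
        }
        return instance;
    }
}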
But is this really true? IMHO the volatile read in the common code path
should be much cheaper than a monitor enter can be in any reasonable JVM
implementation. All it usually has to do is cross a read barrier before the
read of the variable 'instance'. That affects only one processor and costs
very few cycles, if any. A monitor enter, in contrast, would require a bus
lock during a read-modify-write operation, which would stall every processor
in the system. So my feeling is that a monitor enter should be much more
expensive than a volatile read.
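
To make the comparison concrete, one could time the two variants with a crude
single-threaded loop like the one below (only a sketch of my own, not from the
article; a serious measurement would need warm-up, contention from several
threads and a proper harness, and the identityHashCode calls are just there to
keep the JIT from throwing the loops away):

public class GetInstanceTiming {
    private static final int CALLS = 100000000;

    public static void main(String[] args) {
        int sink = 0; // consumed below so the calls are not dead code

        long t0 = System.nanoTime();
        for (int i = 0; i < CALLS; i++) {
            sink += System.identityHashCode(MySingleton.getInstance());
        }
        long t1 = System.nanoTime();

        for (int i = 0; i < CALLS; i++) {
            sink += System.identityHashCode(SynchronizedSingleton.getInstance());
        }
        long t2 = System.nanoTime();

        System.out.println("DCL/volatile : " + (t1 - t0) / 1000000 + " ms");
        System.out.println("synchronized : " + (t2 - t1) / 1000000 + " ms");
        System.out.println(sink);
    }
}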
The article linked above gives the following advice:
"Instead of double-checked locking, use the Initialize-on-demand Holder
Class idiom, which provides lazy initialization, is thread-safe, and is
faster and less confusing than double-checked locking."
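
For readers who haven't seen it, that idiom looks roughly like this (again
just a sketch; the class names are mine):

public class HolderSingleton {
    private HolderSingleton() {}

    // The nested class is not initialized until getInstance() first
    // touches it; the JVM's class-initialization locking then ensures
    // that INSTANCE is constructed exactly once.
    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}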
I don't agree. First, the Initialize-on-demand Holder Class idiom doesn't
guarantee that the Singleton is not constructed until its first use; JVMs
have great freedom here. Second, if it does perform lazy initialization, it
can't be any faster than DCL, because it too has to cross at least one read
barrier internally.
Am I overlooking something here?