A question about synchronized threads


byhesed

public class A {
    synchronized void m1() { ... }
    synchronized void m2() { ... }
    void m3() { ... }
}

The book explains the above code:

Given an instance a of class A, when one thread is executing a.m1(),
another thread will be prohibited from executing a.m1() or a.m2().

I have a question.

The explanation means that when one thread is executing the m1() method,
no other thread can execute m1() or m2().

Is it correct? If it is correct, how can I handle it better?
I think it is too ineffectual. Does anybody know?
 

Lew

public class A {
    synchronized void m1() { ... }
    synchronized void m2() { ... }
    void m3() { ... }
}

The book explains the above code:

Given an instance a of class A, when one thread is executing a.m1(),
another thread will be prohibited from executing a.m1() or a.m2().

I have a question.

The explanation means that when one thread is executing the m1() method,
no other thread can execute m1() or m2().

For that particular instance, yes.
Is it correct? If it is correct, how can I handle it better?
I think it is too ineffectual. Does anybody know?

What do you mean by "better"? What precisely is not the way you want it?
What is your standard of effectuality?
 

Daniele Futtorovic

public class A {
    synchronized void m1() { ... }
    synchronized void m2() { ... }
    void m3() { ... }
}

The book explains the above code:

Given an instance a of class A, when one thread is executing a.m1(),
another thread will be prohibited from executing a.m1() or a.m2().

I have a question.

The explanation means that when one thread is executing the m1() method,
no other thread can execute m1() or m2().

Is it correct?

Yes, it is correct.
If it is correct, how can I handle it better?
I think it is too ineffectual. Does anybody know?

By defining your own monitor to synchronise on.

When you declare a method synchronized, the code will use the instance
of which that method is a member as the monitor. In other words, the
code above is equivalent to this:

public class A {
    void m1() {
        synchronized( this ) { ... }
    }
    void m2() {
        synchronized( this ) { ... }
    }
    void m3() { ... }
}

Both methods lock on the same monitor, so while one is executing, no
other thread can execute any of them.

If you don't want that interdependency, you can define monitors yourself:

public class A {
    private final Object
        m1Monitor = new Object(),
        m2Monitor = new Object();

    void m1() {
        synchronized( m1Monitor ) { ... }
    }
    void m2() {
        synchronized( m2Monitor ) { ... }
    }
    void m3() { ... }
}

That way, a thread executing m1 will not prevent another thread from
executing m2, only from executing m1 itself.
 

byhesed

Yes, it is correct.


By defining your own monitor to synchronise on.

When you declare a method synchronized, the code will use the instance
of which that method is a member as the monitor. In other words, the
code above is equivalent to this:

public class A {
    void m1() {
        synchronized( this ) { ... }
    }
    void m2() {
        synchronized( this ) { ... }
    }
    void m3() { ... }
}

Both methods lock on the same monitor, so while one is executing, no
other thread can execute any of them.

If you don't want that interdependency, you can define monitors yourself:

public class A {
    private final Object
        m1Monitor = new Object(),
        m2Monitor = new Object();

    void m1() {
        synchronized( m1Monitor ) { ... }
    }
    void m2() {
        synchronized( m2Monitor ) { ... }
    }
    void m3() { ... }
}

That way, a thread executing m1 will not prevent another thread from
executing m2, only from executing m1 itself.

Thank you. I understand what you elaborated on.
 

byhesed

For that particular instance, yes.


What do you mean by "better"?  What precisely is not the way you want it?
What is your standard of effectuality?

If too much code is marked as critical regions,
then the program will not perform well.
It wastes too much time waiting to obtain the right to access
the critical regions.

So, in my question, "better" means optimizing performance when using threads.
 

markspace

Thank you. I understand what you elaborated on.


I hope so. "Optimizing" threads is tricky, and when you reason about
how to optimize it's easy to make a mistake. Prominent Java engineers
(e.g., Doug Lea) have been known to make mistakes.

Remember, if m1 and m2 use separate locks, they can't share any mutable
state. If they do, you're going to have problems with any variables that
are shared.
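
To illustrate with a hypothetical sketch (not the OP's actual class): if
both methods update the same counter but guard it with different
monitors, increments can be lost.

// Hypothetical sketch: splitting locks over *shared* state is broken.
public class BrokenCounter {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int count;                    // shared mutable state

    void incrementViaA() {
        synchronized (lockA) { count++; } // guarded by lockA ...
    }

    void incrementViaB() {
        synchronized (lockB) { count++; } // ... but this path uses lockB,
                                          // so two threads can interleave
                                          // the read-modify-write of count
                                          // and lose updates
    }
}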

I'd be interested in seeing what it is you want to optimize. What does
this class really look like? I think it might be instructive for you to
post your full class, and let others comment on how best to "optimize" it.
 

Daniele Futtorovic

I hope so. "Optimizing" threads is tricky, and when you reason about
how to optimize it's easy to make a mistake. Prominent Java engineers
(e.g., Doug Lea) have been known to make mistakes.

Remember, if m1 and m2 use separate locks, they can't share any mutable
state. If they do, you're going to have problems with any variables that
are shared.
I'd be interested in seeing what it is you want to optimize. What does
this class really look like? I think it might be instructive for you to
post your full class, and let others comment on how best to "optimize" it.

Ten bucks say it's a class with too many methods. ;)
 

Dagon

byhesed said:
public class A {
    synchronized void m1() { ... }
    synchronized void m2() { ... }
    void m3() { ... }
}
The book explains the above code:
Given an instance a of class A, when one thread is executing a.m1(),
another thread will be prohibited from executing a.m1() or a.m2().

Throw this book away and get a better one (Goetz, "Java Concurrency in
Practice" is good).
I have a question.
The explanation means that when one thread is executing the m1() method,
no other thread can execute m1() or m2().
Is it correct?

This is kind of correct for this example, but it's stated in such a way that
it will confuse you until you learn more completely what synchronized does.

What it really does is: "before any thread enters m1 or m2, it will acquire an
exclusive lock on the instance of A. No other thread may acquire that lock
for that instance of A until the lock-holding thread releases it by exiting
the method."
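
A minimal runnable sketch of that behaviour (the LockDemo class is my
own, not from any book): while one thread holds the instance lock inside
m1(), a second thread blocks on m2() but can call the unsynchronized
m3() immediately.

// Hypothetical sketch: m2() waits for the instance lock, m3() does not.
public class LockDemo {
    synchronized void m1() throws InterruptedException {
        System.out.println("m1: lock acquired");
        Thread.sleep(2000);               // hold the lock for a while
        System.out.println("m1: lock released");
    }
    synchronized void m2() {
        System.out.println("m2: lock acquired");
    }
    void m3() {
        System.out.println("m3: runs without the lock");
    }

    public static void main(String[] args) throws Exception {
        final LockDemo a = new LockDemo();
        new Thread(new Runnable() {
            public void run() {
                try { a.m1(); } catch (InterruptedException e) { }
            }
        }).start();
        Thread.sleep(100);   // give the first thread time to enter m1
        a.m3();              // prints immediately; m3 is unsynchronized
        a.m2();              // blocks until m1 exits and releases the lock
    }
}
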
If it is correct, how can I handle it better?
I think it is too ineffectual. Does anybody know?

Depends on what you want to happen - why do you think it's ineffectual? It's
pretty effective if you want to make sure that only one thread at a time
can run those methods on the same instance.
 

Eric Sosman

[...]
If too much code is marked as critical regions,
then the program will not perform well.
It wastes too much time waiting to obtain the right to access
the critical regions.

So, in my question, "better" means optimizing performance when using threads.

A bit of advice I've found *very* useful over the years: Don't
think about protecting "critical regions of code," think instead
about protecting "access to shared data."

Stop. Go back and read the paragraph again. It's the crux.

Threads T1,T2,...,Tn do not interfere by executing the same
code simultaneously, but by trying to access the same data. (More
generally, by trying to access the same "state.") If the shared
state is S, then T1,T2,...,Tn must not try to alter it at the same
time, nor try to read it while another Tx is altering it. If, on
the other hand, S decomposes into disjoint sub-states S1,S2,...,Sm
that are *completely* independent, then it's all right for Ti to
alter Sa while Tj reads Sb; you must guard against simultaneous
alteration or alteration-and-read of each single sub-state Sx.

Think about the state; that's what you're trying to keep
coherent and consistent. Don't worry about the code; it's just
the tool that manipulates the state. You'll be astonished at how
much simpler things become with this view. Trust me.
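
To make that concrete, here is a minimal sketch of my own (the Stats
class and its two counters are hypothetical): name each lock after the
data it guards, not after the code that uses it.

// Hypothetical sketch: each independent sub-state gets its own guard.
public class Stats {
    private final Object hitLock = new Object();
    private long hits;                    // guarded by hitLock

    private final Object errLock = new Object();
    private long errors;                  // guarded by errLock

    void recordHit()   { synchronized (hitLock) { hits++; } }
    void recordError() { synchronized (errLock) { errors++; } }

    long getHits()   { synchronized (hitLock) { return hits; } }
    long getErrors() { synchronized (errLock) { return errors; } }
}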
 

byhesed

[...]
If too much code is marked as critical regions,
then the program will not perform well.
It wastes too much time waiting to obtain the right to access
the critical regions.
So, in my question, "better" means optimizing performance when using threads.

     A bit of advice I've found *very* useful over the years: Don't
think about protecting "critical regions of code," think instead
about protecting "access to shared data."

     Stop.  Go back and read the paragraph again.  It's the crux.

     Threads T1,T2,...,Tn do not interfere by executing the same
code simultaneously, but by trying to access the same data.  (More
generally, by trying to access the same "state.")  If the shared
state is S, then T1,T2,...,Tn must not try to alter it at the same
time, nor try to read it while another Tx is altering it.  If, on
the other hand, S decomposes into disjoint sub-states S1,S2,...,Sm
that are *completely* independent, then it's all right for Ti to
alter Sa while Tj reads Sb; you must guard against simultaneous
alteration or alteration-and-read of each single sub-state Sx.

     Think about the state; that's what you're trying to keep
coherent and consistent.  Don't worry about the code; it's just
the tool that manipulates the state.  You'll be astonished at how
much simpler things become with this view.  Trust me.

Thank you for your advice.

I'll remember what you recommend:
"Think instead about protecting "access to shared data."
 

byhesed

Throw this book away and get a better one (Goetz, "Java Concurrency in
Practice" is good).


This is kind of correct for this example, but it's stated in such a way that
it will confuse you until you learn more completely what synchronized does.

What it really does is: "before any thread enters m1 or m2, it will acquire an
exclusive lock on the instance of A. No other thread may acquire that lock
for that instance of A until the lock-holding thread releases it by exiting
the method."


Depends on what you want to happen - why do you think it's ineffectual? It's
pretty effective if you want to make sure that only one thread at a time
can run those methods on the same instance.

I thought that synchronizing entire methods would be wasteful.
Also, although two methods need to be synchronized,
if the two methods are totally unrelated to each other,
then it would be too bad, wouldn't it?
The CPU utilization will be too low.
That is why I thought it was ineffectual.
 

markspace

Also, although two methods need to be synchronized,
if the two methods are totally unrelated to each other,
then it would be too bad, wouldn't it?
The CPU utilization will be too low.


Be careful with that word "unrelated"; things can seem unrelated when
they aren't. Correctness is more important than efficiency.

However, if it is obvious that the methods could be improved, then sure,
it's OK to improve them, and you do seem to be on the right track. I
think we're all just saying that concurrency can be tricky.

I would still like to see a complete example class ;-)
 

Jukka Lahtinen

Why? The statement he quoted is perfectly accurate.

...and the OP didn't even specify which book was quoted. (At least I
didn't spot the name of the book or the author.)
So nothing we have seen so far indicates anything wrong in the book.
 

Robert Klemme

If too much code is marked as critical regions,
then the program will not perform well.
It wastes too much time waiting to obtain the right to access
the critical regions.

So, in my question, "better" means optimizing performance when using threads.

There is no one size fits all answer to that question. It completely
depends on the nature of your application. For example, if read
accesses vastly outnumber write accesses you will get significant
improvements by using read-write locks. If it is the other way round
you won't notice a big difference between using "synchronized" and a
read-write lock (because most of the time the exclusive write lock will
be used).
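
For example, a minimal sketch using java.util.concurrent.locks (the
Cache class is my own illustration, not from the post above): any number
of readers may hold the read lock at once, while a writer takes the
write lock exclusively.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: many concurrent readers, one exclusive writer.
public class Cache {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String value;                 // guarded by lock

    public String read() {
        lock.readLock().lock();           // shared: readers don't block
        try {                             // each other
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(String v) {
        lock.writeLock().lock();          // exclusive: blocks all readers
        try {                             // and other writers
            value = v;
        } finally {
            lock.writeLock().unlock();
        }
    }
}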

In other situations not sharing mutable state (i.e. copying mutable
state or using immutable state) might be the best solution. Or you use
a thread-safe data structure such as a copy-on-write list.
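
A short sketch of the copy-on-write idea with
java.util.concurrent.CopyOnWriteArrayList (the CowDemo class is
hypothetical):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        // Each write copies the backing array; reads iterate a snapshot
        // without locking, so this suits read-mostly workloads.
        List<String> listeners = new CopyOnWriteArrayList<String>();
        listeners.add("a");
        listeners.add("b");
        for (String s : listeners) {      // safe even if another thread
            System.out.println(s);        // adds or removes concurrently
        }
    }
}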

There is a whole, big toolbox for writing scalable thread-safe
applications. Eric has it exactly right with his suggestion because the
nature of the state (shared, not shared, mutable, immutable) is the
important aspect to reason about. I recommend reading Doug Lea's
excellent book on the matter.

Kind regards

robert
 

byhesed

There is no one size fits all answer to that question.  It completely
depends on the nature of your application.  For example, if read
accesses vastly outnumber write accesses you will get significant
improvements by using read-write locks.  If it is the other way round
you won't notice a big difference between using "synchronized" and a
read-write lock (because most of the time the exclusive write lock will
be used).

In other situations not sharing mutable state (i.e. copying mutable
state or using immutable state) might be the best solution.  Or you use
a thread-safe data structure such as a copy-on-write list.

There is a whole, big toolbox for writing scalable thread-safe
applications.  Eric has it exactly right with his suggestion because the
nature of the state (shared, not shared, mutable, immutable) is the
important aspect to reason about.  I recommend reading Doug Lea's
excellent book on the matter.

Kind regards

        robert

Thank you, I'll read it.
 

Deeyana

There is a whole, big toolbox for writing scalable thread-safe
applications.

Even bigger if you use Clojure. Then you have access to agents, software
transactional memory, and several other goodies *as well as* monitors,
wait/notify, and all of the goodies in java.util.concurrent.
 

Lew

Even bigger if you use Clojure. Then you have access to agents, software
transactional memory, and several other goodies *as well as* monitors,
wait/notify, and all of the goodies in java.util.concurrent.

He asked about Java. Please don't attempt to start another stupid language
war, especially in such an unhelpful way.
 

Lew

byhesed said:
I thought that synchronizing entire methods would be wasteful.
Why?

Also, although two methods need to be synchronized,
if the two methods are totally unrelated to each other,
then it would be too bad, wouldn't it?

We cannot tell without an SSCCE.
http://sscce.org/

Please provide one.

Also, if two methods are "totally unrelated to each other" then you don't need
any synchronization at all. They shouldn't even be in the same class, perhaps
not even in the same application.
The CPU utilization will be too low.
That is why I thought it was ineffectual.

How do you know? Please explain your measurement methodology for performance.
What was the metric of performance, what value did it have, and how much of
overall application performance was it, percentagewise?

Make sure that you measure under conditions similar to anticipated real-life
loads, and compare proposed "optimizations" to your baseline under the same
conditions.

Especially with concurrent programming, it is a terrible, terrible mistake to
imagine that you will "optimize" something when you don't have any objective
data. Much more likely, you will "optimize" by getting wrong answers in half
the time of correct ones.

The single best, most effective way to optimize concurrent code is not to
share data. The second-best way is to make shared data immutable (read-only).
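
A minimal sketch of the second approach (the Point class is
hypothetical): an immutable object needs no locking at all, because its
state can never change after construction.

// Hypothetical sketch: immutable, therefore thread-safe without locks.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutation" returns a new instance instead of changing this one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}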

You should do ABSOLUTELY NOTHING about or with concurrent programming until
you've read at least one of the two books people have recommended to you, and
understand it.
 
