Peterson's Algorithm in Java, sequential instruction execution?


xmarlawx

Chris Thomasson wrote:
[SNIP-quote]
Making everything volatile will generate a boatload of memory barriers.
Remember, the Java memory model is basically like this: [SNIP-code]
Those barriers will be executed every time you load from or store to a volatile
variable. Keep that in mind. Go with Patricia's advice and just use a normal
mutex (e.g., a sync block). You will use far fewer barriers that way. IMHO,
volatile in Java is way too strict; the granularity is very, very poor.
Java doesn't support loads with data dependencies, naked atomic
loads/stores, or many other kinds of fine-grained memory barriers.

Sorry, I don't understand; I don't know what a memory barrier is.
I will look it up now.
By making everything volatile I only mean the three variables used
(flag[0], flag[1], turn).
Another thing is that I'm not using simple variable types such as int
and boolean; I'm using those wrapped in a class I've written, with
accessor methods (not synchronized methods).

When you say to put it in a synch block, do you mean nesting the two
variable checks (the spinlock), or simply the operations on the
variables, such as writing? Both?

Right now I'm running some experiments (I can upload the code) and it
works with volatile.
Without volatile it seemed to work too, but I guess that is not guaranteed.
Probably (surely?) it will work just as well with synchronized blocks or
methods.
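As an aside for readers of the thread: here is a minimal sketch of the all-volatile approach being discussed (class and member names are my own, not from any poster's code). Post-JSR-133, accesses to volatile fields are sequentially consistent with respect to each other, which is the property Peterson's correctness argument assumes:

```java
// Two-thread Peterson lock built only from volatile fields.
// Two separate booleans are used instead of flag[2], because
// "volatile boolean[]" would only make the array reference
// volatile, not its elements.
public class Peterson {
    private volatile boolean flag0, flag1; // "thread i wants in"
    private volatile int turn;             // which thread yields

    public void lock(int id) {             // id must be 0 or 1
        if (id == 0) {
            flag0 = true;
            turn = 1;
            while (flag1 && turn == 1) { } // busy-wait
        } else {
            flag1 = true;
            turn = 0;
            while (flag0 && turn == 0) { } // busy-wait
        }
    }

    public void unlock(int id) {
        if (id == 0) flag0 = false;
        else         flag1 = false;
    }

    public static void main(String[] args) throws InterruptedException {
        Peterson p = new Peterson();
        int[] count = {0};                 // plain shared data, guarded by the lock
        Thread a = new Thread(() -> {
            for (int i = 0; i < 5000; i++) { p.lock(0); count[0]++; p.unlock(0); }
        });
        Thread b = new Thread(() -> {
            for (int i = 0; i < 5000; i++) { p.lock(1); count[0]++; p.unlock(1); }
        });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count[0]);      // 10000 if mutual exclusion held
    }
}
```

Note that the plain `count` field needs no volatile of its own: the volatile reads in `lock()` establish happens-before edges with the other thread's writes.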
 

Chris Thomasson

That is what I'm going to do:
use any shared memory (variable) as volatile inside the threads.
And yes, it is an exercise and it won't be used in any real
application.
I don't have a choice about the language; it has to be done in Java.

I will try my best not to use synchronized blocks (nested or
not), since they can be confusing. Volatile seems exactly what I need.

I found this as well:
http://www.javaperformancetuning.com/news/qotm030.shtml

which explains the differences between synchronized and volatile only.

Again, I must say, thank you all.

Making everything volatile will generate a boatload of memory barriers.
Remember, the Java memory model is basically like this:


// load acquire
void* load(void **_pthis) {
    void *state = ATOMIC_LOAD(_pthis);
    membar #LoadStore | #LoadLoad;
    return state;
}


// store release
void store(void **_pthis, void *state) {
    membar #LoadStore | #StoreStore;
    ATOMIC_STORE(_pthis, state);
}


Those barriers will be executed every time you load from or store to a volatile
variable. Keep that in mind. Go with Patricia's advice and just use a normal
mutex (e.g., a sync block). You will use far fewer barriers that way. IMHO,
volatile in Java is way too strict; the granularity is very, very poor.
Java doesn't support loads with data dependencies, naked atomic
loads/stores, or many other kinds of fine-grained memory barriers.
 

Mike Schilling

I'm sorry, Mike, but if you are referring to the standard Java
synchronization facility, I believe you are wrong.

I did overstate; it's also possible to make your shared memory volatile.
But without one or the other, there is no guarantee in the Java memory model
that a change made by one thread will be seen by another.
 

Mark Thornton

Mike said:
I did overstate; it's also possible to make your shared-memory volatile.
But without one or the other, there is no guarantee in the Java memory model
that a change made by one thread will be seen by another.

A third mechanism is provided by the classes in
java.util.concurrent.atomic (from Java 5).

Mark Thornton
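For readers following along, that third mechanism looks like this in use (a minimal sketch of the Java 5+ API; class name and values are my own):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) {
        // Atomic classes combine volatile-style visibility with
        // indivisible read-modify-write operations.
        AtomicInteger counter = new AtomicInteger(0);
        counter.incrementAndGet();          // atomic ++, value is now 1
        counter.addAndGet(5);               // atomic += 5, value is now 6

        AtomicBoolean flag = new AtomicBoolean(false);
        // compareAndSet succeeds only if the current value matches
        // the expected one -- the basic building block of lock-free code.
        boolean acquired = flag.compareAndSet(false, true);

        System.out.println(counter.get() + " " + acquired); // 6 true
    }
}
```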
 

Chris Thomasson

Mark Thornton said:
A third mechanism is provided by the classes in
java.util.concurrent.atomic (from Java 5).

Except that you can do Peterson's algorithm without atomic operations (e.g.,
Interlocked). So this would add even more overhead...
 

Daniel Pitts

Knute said:
Or declare them volatile.

I think the reordering of instructions you are talking about is at the
machine-code level. I don't know whether Java's JVM will do this, but here
is what I would suspect.

Any two interacting instructions on the same thread (i.e., a write to a
memory location and a later read) will appear in the same order.
However, the following sequences may be considered to have the same
observable pattern:

Given locations A, B, and C;

Original:
Load A from constant
Load C From B
Test stuff in C
Load C From A
Test stuff in C
Load B from C

Variation 1:
Load C From B
Test stuff in C
Load A from constant
Load C From A
Load B from C
Test stuff in C

Variation 2:
Load C From B
Load A from constant
Test stuff in C
Load C From A
Test stuff in C
Load B from C

---
Although, I don't know if Java will reorder instructions in that way.
It could well depend on both the compiler implementation and the
JVM implementation.

Also, having a volatile variable could quite possibly prevent such
reordering. It would probably be worth reading the relevant sections of
the JLS. If there isn't an explicit requirement, then JVM implementors
are free to do what they want. If there IS an explicit requirement,
you will know your answer beforehand.
---

Having said all of that: if this isn't a class project but a real
application, make sure you're not optimizing prematurely.
Synchronization CAN be a bottleneck, but often the bottleneck is
somewhere else. Also, the new java.util.concurrent packages give you a
very nice toolset for dealing with concurrent applications.

But if you are implementing Peterson's algorithm just for the practice
of implementing it, go right ahead. Try to implement it and see if it
works.

Hope this helps.

Daniel.
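To make the reordering point concrete, here is the canonical publish-via-volatile pattern (my own sketch, not from the thread). It is exactly a case where the JLS does give an explicit requirement: because `ready` is volatile, the write `data = 42` happens-before any read that sees `ready == true`. Drop the volatile and the reader would be allowed to observe `ready == true` while `data` is still 0.

```java
public class Handoff {
    static int data;                 // plain field, published via `ready`
    static volatile boolean ready;   // volatile write/read creates happens-before

    // Returns what the reader thread observed in `data`.
    static int run() throws InterruptedException {
        int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }       // spin until the volatile write is visible
            // The volatile read of `ready` guarantees we also see data == 42.
            seen[0] = data;
        });
        reader.start();
        data = 42;                   // 1. write the payload
        ready = true;                // 2. volatile write publishes it
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(Handoff.run()); // prints 42
    }
}
```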
 

Patricia Shanahan

Daniel Pitts wrote:
....
I think the reordering of instructions you are talking about is at the
machine code level. I don't know if Java's JVM will do this, but here
is what I would suspect.
....

There can be reordering at a level below machine code, as part of the
processor implementation. The JVM does not have direct control, but
processors have instructions for forcing extra ordering, and the JVM is
required to use them to achieve what the JLS specifies e.g. for volatile
data.

The mental model of a processor doing one instruction at a time in
program order, and doing all work for each instruction before going on
to the next one, used to be reasonably accurate, except for some
supercomputers. Now, it is a gross simplification for most processors.

Instead, think of the processor as a sort of factory, with a main
production line several instructions wide, but many specialized stations
where different processing gets done and instructions can wait if
something they need is not available. Some operations are shunted off
e.g. to a specialized production line for doing floating point. The work
of making a store globally visible, not just visible to this processor,
may go into another production line.

A processor can have many dozens of instructions at some stage in the
processing.

It's all designed to make it look, to code executing on the processor,
as though the processor is doing one instruction at a time in program
order. However, a thread on another processor monitoring the behavior of
shared data can see the order in which some things really happened.

Patricia
 

Mark Thornton

Patricia said:
It's all designed to make it look, to code executing on the processor,
as though the processor is doing one instruction at a time in program
order. However, a thread on another processor monitoring the behavior of
shared data can see the order in which some things really happened.

Worse, it can see an order which never happened!
E.g., on processor A:

start with x=1, y=1, z=2

x = 2;
z = x + y; // 3

Then processor B might see:

start: x=1, y=1, z=2

then x=1, y=1, z=3

then x=2, y=1, z=3

I.e., it sees the change in z before the change in x.

Mark Thornton
 

Mike Schilling

Mark Thornton said:
A third mechanism is provided by the classes in
java.util.concurrent.atomic (from Java 5).

A mechanism not based on one of the first two? What might that be?
 

Chris

Just for the record, all of this has been discussed in considerable
detail in the past. Google the term "double-checked locking".

The conclusion of all these articles is: don't try to fool the
compiler. Attempts to add quasi-memory barriers and whatnot will fail.
Use regular old synchronization.
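(One footnote to that conclusion: JSR 133 did legalize one repair. Declaring the field volatile fixes the classic double-checked locking idiom; the pre-Java-5 version without volatile remains broken. A sketch, with my own class name:)

```java
public class Singleton {
    // volatile is what makes post-JSR-133 double-checked locking safe:
    // it forbids publishing the reference before the constructor's
    // writes become visible to other threads.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        Singleton local = instance;          // one volatile read on the fast path
        if (local == null) {
            synchronized (Singleton.class) { // slow path: at most once per race
                local = instance;
                if (local == null) {
                    instance = local = new Singleton();
                }
            }
        }
        return local;
    }
}
```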
 

Mark Thornton

Mike said:
A mechanism not based on one of the first two? What might that be?

The implementation of those classes is up to the VM and need not make
use of the regular synchronisation or volatile mechanisms. Many
processors have specific instructions that can be used to implement the
methods in those classes with lower overhead than using regular
synchronisation.

Mark Thornton
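Concretely, those processor instructions are compare-and-swap style primitives, and most of the atomic methods reduce to a CAS retry loop. A sketch of that shape using `AtomicInteger.compareAndSet` (class name is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: re-read and retry until our
    // compare-and-set wins the race. No monitor is ever taken;
    // compareAndSet maps to a hardware CAS-style instruction.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // Another thread got in between get() and compareAndSet();
            // loop and try again with the fresh value.
        }
    }

    public int get() { return value.get(); }
}
```

(In practice one would just call `incrementAndGet()`, which does exactly this internally; the loop is written out here to show where the "specific instruction" fits.)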
 

Chris Uppal

Mike said:
A mechanism not based on one of the first two? What might that be?

Hackery in the JVM implementation, using private internal "magic" (/not/ JNI)
to replace the implementation of those features with custom machine-code
sequences.

-- chris
 

Mike Schilling

Chris Uppal said:
Hackery in the JVM implementation, using private internal "magic" (/not/ JNI)
to replace the implementation of those features with custom machine-code
sequences.

Got that, but which features are they? (My customers, and thus me, aren't
using 1.5 yet.)
 

Mike Schilling

Mark Thornton said:

Which includes the sentence:

The memory effects for accesses and updates of atomics generally follow
the rules for volatiles, as stated in
The Java Language Specification, Third Edition (17.4 Memory Model).

Which is a good thing; having finally updated the memory model into
something sensible, it would be unfortunate to add a new set of semantics
to it. Though given that, under JSR 133:

Writing to a volatile field has the same memory effect as a monitor
release, and reading from a volatile field has the same memory effect
as a monitor acquire.

(from the JSR 133 FAQ at
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile)

it's not obvious to me that the use of atomics is substantially more
"efficient" than using synchronization. (Yes, I see lazySet() and
weakCompareAndSet(), but how often would one use them?)
 

Mark Thornton

Mike said:
It's not obvious to me that the use of atomics is substantially more
"efficient" than using synchronization. (Yes, I see lazySet() and
weakCompareAndSet(), but how often would one use them?)

No idea. I suspect that Doug Lea is one of the few people who could
answer that question.

Mark Thornton
 

Mike Schilling

Mark Thornton said:
No idea. I suspect that Doug Lea is one of the few people who could answer
that question.

If you see him, ask if he knows what PhantomReferences are for too. :)
 

Mark Thornton

Mike said:
If you see him, ask if he knows what PhantomReferences are for too. :)

Objects that are only phantom reachable can't be resurrected by
finalize, so they are definitely dead.

Mark Thornton
 

Mike Schilling

Mark Thornton said:
Objects that are only phantom reachable can't be resurrected by finalize,
so they are definitely dead.

OK. Can you give me an example of how you'd use a PhantomReference
programmatically? I've asked this before on the n.g., and never gotten
an answer.
 
