Subtle difference between C++0x MM and other MMs


Dmitriy V'jukov

Consider the following Peterson's algorithm implementation from Wikipedia:
http://en.wikipedia.org/wiki/Peterson's_algorithm

flag[0] = 0
flag[1] = 0
turn = 0

P0: flag[0] = 1
turn = 1
memory_barrier()
while( flag[1] && turn == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0] = 0

P1: flag[1] = 1
turn = 0
memory_barrier()
while( flag[0] && turn == 0 );
// do nothing
// critical section
...
// end of critical section
flag[1] = 0

We can implement this in Java using volatile variables; the needed
memory_barrier() will be emitted automatically by the compiler.
We can implement this in C# using volatile variables, with
Thread.MemoryBarrier() as memory_barrier().
We can implement this under the x86 MM using plain loads and stores, with
the mfence instruction as memory_barrier().
We can implement this in C++0x using std::atomic<>, issuing loads with
memory_order_acquire and stores with memory_order_release, and using
atomic_thread_fence(memory_order_seq_cst) as memory_barrier(). This is the
most straightforward translation of the Java/C#/x86 implementations.
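For concreteness, a minimal sketch of what P0 might look like under that
translation (the flag0/flag1/turn names and helper functions are purely
illustrative; P1 is symmetric):

    #include <atomic>

    std::atomic<int> flag0(0), flag1(0), turn(0);

    void p0_enter()
    {
        flag0.store(1, std::memory_order_release);
        turn.store(1, std::memory_order_release);
        std::atomic_thread_fence(std::memory_order_seq_cst); // memory_barrier()
        while (flag1.load(std::memory_order_acquire)
               && turn.load(std::memory_order_acquire) == 1)
            ; // do nothing
        // critical section starts here
    }

    void p0_exit()
    {
        flag0.store(0, std::memory_order_release);
    }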

The only problem is that this C++0x implementation will not work.
Personally, I find this quite counter-intuitive. And the following question
arises: what is the simplest way to translate an existing Java/C#/x86
algorithm implementation to C++0x? It seems that it's not so easy...

Dmitriy V'jukov
 

Peter Dimov

Consider following Peterson's algorithm implementation from Wikipedia:
http://en.wikipedia.org/wiki/Peterson's_algorithm

 flag[0]   = 0
 flag[1]   = 0
 turn      = 0

 P0: flag[0] = 1
    turn = 1
    memory_barrier()
    while( flag[1] && turn == 1 );
            // do nothing
    // critical section
    ...
    // end of critical section
    flag[0] = 0

P1: flag[1] = 1
    turn = 0
    memory_barrier()
    while( flag[0] && turn == 0 );
            // do nothing
    // critical section
    ...
    // end of critical section
    flag[1] = 0

We can implement this in Java using volatile variables, and needed
memory_barrier() will be emitted automatically by compiler.

And the C++ equivalent is to use seq_cst loads and stores, which are
equivalent to Java volatiles.
We can implement this in C# using volatile variables, and
Thread.MemoryBarrier() as memory_barrier().
We can implement this in x86 MM using plain loads and stores, and
mfence instruction as memory_barrier().
We can implement this in C++0x using std::atomic<> and issuing loads
with memory_order_acquire, stores with memory_order_release, and
atomic_thread_fence(memory_order_seq_cst) as memory_barrier(). This is
the most straightforward translation of Java/C#/x86 implementations.

The only problem is that C++0x implementation will not work.

Why will it not work?
 

Dmitriy V'jukov

Consider following Peterson's algorithm implementation from Wikipedia:
http://en.wikipedia.org/wiki/Peterson's_algorithm

flag[0] = 0
flag[1] = 0
turn = 0

P0: flag[0] = 1
turn = 1
memory_barrier()
while( flag[1] && turn == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0] = 0

P1: flag[1] = 1
turn = 0
memory_barrier()
while( flag[0] && turn == 0 );
// do nothing
// critical section
...
// end of critical section
flag[1] = 0
We can implement this in Java using volatile variables, and needed
memory_barrier() will be emitted automatically by compiler.

And the C++ equivalent is to use seq_cst load and stores, which are
equivalent to Java volatiles.


Yes, it's possible to implement any algorithm that relies on a sequentially
consistent memory model in C++0x using seq_cst atomic operations. But!
Seq_cst atomic operations, especially stores, can be quite expensive. So one
has a general desire to use weaker operations, like store-release and
load-acquire. And in Java/C#/x86 it's possible to implement Peterson's
algorithm using weak operations + 1 strong fence. In C++0x it is NOT.

Why will it not work?


I mean not every C++0x implementation of Peterson's algorithm, but the
particular implementation which uses store-release, load-acquire + 1
seq_cst fence.


Dmitriy V'jukov
 

Peter Dimov

And in Java/C#/x86 it's possible
to implement Peterson's algorithm using weak operations + 1 strong
fence. In C++0x - NOT.

How would you implement Peterson's algorithm in Java using weak
operations and a fence? Java doesn't have weak operations or fences.
Its volatile loads and stores are equivalent to C++MM's seq_cst loads
and stores. Both promise sequential consistency (no more and no less).
I mean not every C++0x implementation of Peterson's algorithm, but
particular implementation which uses store-release, load-acquire + 1
seq_cst fence.

Why do you think that this implementation doesn't work?
 

Peter Dimov

Why do you think that this implementation doesn't work?

I think I see your point. Getting back to

P0: flag[0] = 1
turn = 1
memory_barrier()
while( flag[1] && turn == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0] = 0

P1: flag[1] = 1
turn = 0
memory_barrier()
while( flag[0] && turn == 0 );
// do nothing
// critical section
...
// end of critical section
flag[1] = 0

It's easy to show that P0 and P1 can't block each other forever;
eventually they will agree on the value of 'turn' and one of them will
proceed.

The case where P0 sees flag[1] == 0 and P1 sees flag[0] == 0 is a
classic SC violation example and every reasonable definition of
memory_barrier rules it out.

The interesting case you must have had in mind is the sequence

P1:flag[1] = 1
P1:turn = 0
P0:flag[0] = 1
P0:turn = 1
P0:memory_barrier

Can P0 now see flag[1] == 0? (P1 will later see turn == 1 and enter
the critical section.)

I wonder whether the formal CLR memory model (or even the current
formal x86 memory model) disallows this. (XCHG for turn instead of a
fence should work.)

I think that the C++MM does, if the condition is while( turn == 1 &&
flag[1] ). P0 seeing its own turn=1 doesn't break the release sequence
started by P1:turn=0 because turn=1 is executed by the same thread
(first bullet in 1.10/6). So P1:turn=0 synchronizes-with the read from
'turn' in P0 and ensures that P1:flag[1]=1 is seen.
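For concreteness, P0's entry under that reordered condition might look like
this (reusing the illustrative flag0/flag1/turn names from the first sketch
in this thread):

    void p0_enter_reordered()
    {
        flag0.store(1, std::memory_order_release);
        turn.store(1, std::memory_order_release);
        std::atomic_thread_fence(std::memory_order_seq_cst);
        // 'turn' is tested (and acquired) first, then flag[1].
        while (turn.load(std::memory_order_acquire) == 1
               && flag1.load(std::memory_order_acquire))
            ; // do nothing
    }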
 

Dmitriy V'jukov

Why do you think that this implementation doesn't work?

I think I see your point. Getting back to

P0: flag[0] = 1
turn = 1
memory_barrier()
while( flag[1] && turn == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0] = 0

P1: flag[1] = 1
turn = 0
memory_barrier()
while( flag[0] && turn == 0 );
// do nothing
// critical section
...
// end of critical section
flag[1] = 0

It's easy to show that P0 and P1 can't block each other forever;
eventually they will agree on the value of 'turn' and one of them will
proceed.

The case where P0 sees flag[1] == 0 and P1 sees flag[0] == 0 is a
classic SC violation example and every reasonable definition of
memory_barrier rules it out.

The interesting case you must have had in mind is the sequence

P1:flag[1] = 1
P1:turn = 0
P0:flag[0] = 1
P0:turn = 1
P0:memory_barrier

Can P0 now see flag[1] == 0? (P1 will later see turn == 1 and enter
the critical section.)

Exactly! And this behavior is very counter-intuitive to me!
I wonder whether the formal CLR memory model (or even the current
formal x86 memory model) disallows this. (XCHG for turn instead of a
fence should work.)

As for the x86 MM, I think that yes, such behavior is disallowed. The x86 MM
defines the possible reorderings within one thread. The intended reading is
then that you must try to construct an interleaving of the threads (taking
the per-thread reorderings into account). If it's possible to construct such
an interleaving, then the behavior is allowed; if it's impossible, it is
disallowed.

For Peterson's algorithm it's impossible to construct an interleaving that
breaks the algorithm.

As for the CLR, its model is very informal, but I think the intended reading
is the same as for x86... just because Microsoft targets mainly x86 :)

But for C++0x the fact that it's impossible to construct such an interleaving
doesn't matter...

I think that the C++MM does, if the condition is while( turn == 1 &&
flag[1] ). P0 seeing its own turn=1 doesn't break the release sequence
started by P1:turn=0 because turn=1 is executed by the same thread
(first bullet in 1.10/6). So P1:turn=0 synchronizes-with the read from
'turn' in P0 and ensures that P1:flag[1]=1 is seen.


"by the same thread" which execute release, not "by the same thread"
which execute acquire.
So this won't work too.


Dmitriy V'jukov
 

Dmitriy V'jukov

How would you implement Peterson's algorithm in Java using weak
operations and a fence? Java doesn't have weak operations or fences.
Its volatile loads and stores are equivalent to C++MM's seq_cst loads
and stores. Both promise sequential consistency (no more and no less).


Java volatiles promise more: a volatile store is a release, and a volatile
load is an acquire. On x86 this means that plain stores and loads will be
used (well, yes, sometimes a heavy membar will be emitted).
A C++0x seq_cst atomic store on x86 will be a locked instruction.
So, in my opinion, translating Java volatiles to C++0x's seq_cst atomics is
not fair.
IMVHO the fairer way is to translate a volatile store to a store-release, a
volatile load to a load-acquire, and manually emit a seq_cst fence. At least
this is what initially comes to mind.

Dmitriy V'jukov
 

Peter Dimov

Java volatiles promise more. volatile store is release, and volatile
load is acquire.

Exactly the same as seq_cst.
for x86 this means that plain stores and loads will
be used. Well, yes, sometimes heavy membar will be emitted.
C++0x's seq_cst atomic store on x86 will be locked instruction.

Java VMs will also (be changed to) emit XCHG for volatile stores,
because plain stores do not guarantee SC, no matter how many MFENCEs
one uses. x86 is now officially not TSO.

T0: x = 1
T1: y = 1
T2: r1 = x; r2 = y; // 1 0 allowed
T3: r3 = y; r4 = x; // 1 0 allowed
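For comparison, the same test written with C++0x seq_cst atomics, where that
outcome is ruled out (an illustrative sketch; the names are not from the
original post):

    #include <atomic>

    std::atomic<int> x(0), y(0);
    int r1, r2, r3, r4;

    void t0() { x.store(1); }                   // seq_cst store
    void t1() { y.store(1); }                   // seq_cst store
    void t2() { r1 = x.load(); r2 = y.load(); } // seq_cst loads
    void t3() { r3 = y.load(); r4 = x.load(); } // seq_cst loads

    // With t0..t3 run concurrently, the combined outcome r1==1, r2==0,
    // r3==1, r4==0 is forbidden: t2 and t3 would have to observe the two
    // stores in contradictory orders within the single total order over
    // all seq_cst operations.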
 

Dmitriy V'jukov

Exactly the same as seq_cst.


Java VMs will also (be changed to) emit XCHG for volatile stores,
because plain stores do not guarantee SC, no matter how many MFENCEs
one uses. x86 is now officially not TSO.

T0: x = 1
T1: y = 1
T2: r1 = x; r2 = y; // 1 0 allowed
T3: r3 = y; r4 = x; // 1 0 allowed


My example is not about total order, it's about ordering just between
2 threads.

Dmitriy V'jukov
 

Peter Dimov

My example is not about total order, it's about ordering just between
2 threads.

How does this change the fact that Java volatiles and C++ seq_cst
operations are effectively defined in the same way?
 

Dmitriy V'jukov

How does this change the fact that Java volatiles and C++ seq_cst
operations are effectively defined in the same way?


http://groups.google.com/group/comp.programming.threads/msg/ec91885b65521880


But note that you didn't get it instantly either:
http://groups.google.com/group/comp.programming.threads/msg/34ed88e972296af9?hl=en
And you are one of the developers of the current MM; just think about
others :)


If and when I find more convincing arguments, I will post them...


Dmitriy V'jukov
 

Peter Dimov

http://groups.google.com/group/comp.programming.threads/msg/ec91885b65521880

But note that you didn't get it instantly either:
http://groups.google.com/group/comp.programming.threads/msg/34ed88e972296af9?hl=en
And you are one of the developers of the current MM; just think about
others :)

When and if I will find more convincing arguments, I will post them...

I agree with you that the translation from x86 to the C++MM doesn't
work, and that this is a potential problem. I don't see at the moment
how the definition of a seq_cst fence can be modified to match the x86
behavior though.

But I still don't see why you think that there is any difference
between Java volatiles and C++ seq_cst operations. They are specified
in (more or less) the exact same way. The Peterson example's
formulation in Java and in C++ seq_cst is equivalent.
 

Boehm, Hans

I agree with you that the translation from x86 to the C++MM doesn't
work, and that this is a potential problem. I don't see at the moment
how the definition of a seq_cst fence can be modified to match the x86
behavior though.

But I still don't see why you think that there is any difference
between Java volatiles and C++ seq_cst operations. They are specified
in (more or less) the exact same way. The Peterson example's
formulation in Java and in C++ seq_cst is equivalent.

Thanks for pointing out an interesting example.

I'm with Peter on this one. There are some differences between Java
volatiles and C++ seq_cst atomics (e.g. C++ seq_cst atomics can be non-
scalars and increments on integral types are atomic). But none of
those matter here.

I also believe that on X86, if you write the original example with
release stores following the critical section, omit the fence, and
rely on seq_cst atomics everywhere else, the compiler should fairly
easily be able to generate code at least as good as the fence-based
code. And I find it much easier to reason about that version of the
code. Unfortunately, I think this does not currently apply across all
architectures.
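A sketch of that variant, reusing the illustrative flag0/flag1/turn names
from earlier in the thread (seq_cst, i.e. default, ordering everywhere
except the release store at the end of the critical section):

    void p0_enter_sc()
    {
        flag0.store(1);                          // seq_cst store
        turn.store(1);                           // seq_cst store
        while (flag1.load() && turn.load() == 1) // seq_cst loads
            ; // do nothing
    }

    void p0_exit_sc()
    {
        flag0.store(0, std::memory_order_release); // release store on exit
    }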

We do realize that this version of a "fence" is weaker than some
hardware fences. To some extent, it has to be, since its goal is to
be efficiently implementable across architectures. I'm not sure
whether there is a possible definition that would both satisfy that
constraint, and make this particular version of Peterson's algorithm
work.

Clarification, of sorts, on X86 implementation of sequentially
consistent stores:
I believe the currently published specifications promise correctness
only if the stores are implemented with an atomic RMW operation like
xchg. AFAIK, it is currently still unclear whether an ordinary store
with a trailing fence is also sufficient. And there seems to be
little reason not to compile it as an xchg.

Hans
 

Peter Dimov

The interesting case you must have had in mind is the sequence

P1:flag[1] = 1
P1:turn = 0
P0:flag[0] = 1
P0:turn = 1
P0:memory_barrier

Can P0 now see flag[1] == 0? (P1 will later see turn == 1 and enter
the critical section.)

You may be interested to know that this is (likely to be) disallowed
on PowerPC as well. It'd definitely be nice to somehow disallow it in
the C++MM as well, if we figure out how to do that.

Otherwise, we'll have to live with something like

P0: flag[0].store( 1, relaxed );
turn.exchange( 1, acq_rel );
while( flag[1].load( acquire ) && turn.load( relaxed ) == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0].store( 0, release );

(I hope I got this right.) Of course, using an RMW somewhat defeats the
point of the algorithm, but we're using it as an illustration.
 

Dmitriy V'jukov

We do realize that this version of a "fence" is weaker than some
hardware fences.  To some extent, it has to be, since it's goal is to
be efficiently implementable across architectures.  I'm not sure
whether there is a possible definition that would both satisfy that
constraint, and make this particular version of Peterson's algorithm
work.


I can describe how I currently implement the seq_cst fence in Relacy Race
Detector's Java/CLR mode:
http://groups.google.com/group/relacy

I will try to formalize the rules.
Preconditions:
1. There is a store A with memory_order_release (or stronger) on atomic
object M.
2. There is a store B with memory_order_relaxed (or stronger) on atomic
object M.
3. A precedes B in the modification order of M, and A and B are adjacent in
the modification order of M.
4. There is a seq_cst fence C.
5. B is sequenced-before C.

Postcondition:
A synchronizes-with C.

My reasoning is the following. Preconditions 2-5 are effectively equivalent
to:
2*. There is a load B with memory_order_acquire on atomic object M.
3*. B loads the value stored by A.

Informally this can be rephrased as: a seq_cst fence turns all
sequenced-before stores into loads-acquire.
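To illustrate the proposed rule in isolation (a sketch of the semantics as
implemented in Relacy's Java/CLR mode, not a guarantee given by C++0x as
specified; the names m/data are illustrative):

    #include <atomic>

    std::atomic<int> m(0);
    std::atomic<int> data(0);

    void thread1()
    {
        data.store(1, std::memory_order_relaxed);
        m.store(1, std::memory_order_release);                // store A
    }

    void thread2()
    {
        m.store(2, std::memory_order_relaxed);                // store B
        std::atomic_thread_fence(std::memory_order_seq_cst);  // fence C
        // In executions where A immediately precedes B in the modification
        // order of m, the proposed rule makes A synchronize-with C, so this
        // load is then guaranteed to return 1.
        int r = data.load(std::memory_order_relaxed);
        (void)r;
    }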


Consider how this applies to Peterson's algorithm:

[01] flag[0] = 0
[02] flag[1] = 0
[03] turn = 0

P0:
[04] flag[0] = 1
[05] turn = 1
[06] memory_barrier()
[07] while( flag[1]
[08] && turn == 1 );
[09] // do nothing
[10] // critical section
[11] ...
[12] // end of critical section
[13] flag[0] = 0

P1:
[14] flag[1] = 1
[15] turn = 0
[16] memory_barrier()
[17] while( flag[0]
[18] && turn == 0 );
[19] // do nothing
[20] // critical section
[21] ...
[22] // end of critical section
[23] flag[1] = 0


The store to variable 'turn' on line 15 is store A.
The store to variable 'turn' on line 05 is store B.
A precedes B in the modification order of 'turn', and A and B are adjacent in
the modification order of 'turn'.
Store B is sequenced-before the seq_cst fence on line 06, which is fence C.
So store A synchronizes-with fence C.
So the load of 'flag[1]' on line 07 cannot return 0 any more.


Dmitriy V'jukov
 

Dmitriy V'jukov

The interesting case you must have had in mind is the sequence
P1:flag[1] = 1
P1:turn = 0
P0:flag[0] = 1
P0:turn = 1
P0:memory_barrier
Can P0 now see flag[1] == 0? (P1 will later see turn == 1 and enter
the critical section.)

You may be interested to know that this is (likely to be) disallowed
on PowerPC as well. It'd definitely be nice to somehow disallow it in
the C++MM as well, if we figure out how to do that.


I think that this is disallowed on Sparc too.


Dmitriy V'jukov
 

Peter Dimov

I think that this is disallowed on Sparc too.

Yes, I believe it is. FWIW, the following:

P0: flag[0].store( 1, relaxed );
fence( seq_cst ); // #1
turn.store( 1, relaxed );
fence( seq_cst );
while( flag[1].load( acquire ) && turn.load( relaxed ) == 1 );
// do nothing
// critical section
...
// end of critical section
flag[0].store( 0, release );

seems to work under the C++MM. The model in its current form doesn't
let us get away with just a fence(release) at #1.
 

Dmitriy V'jukov

I think that this is disallowed on Sparc too.

Yes, I believe it is. FWIW, the following:

P0: flag[0].store( 1, relaxed );
    fence( seq_cst ); // #1
    turn.store( 1, relaxed );
    fence( seq_cst );
    while( flag[1].load( acquire ) && turn.load( relaxed ) == 1 );
            // do nothing
    // critical section
    ...
    // end of critical section
    flag[0].store( 0, release );

seems to work under the C++MM. The model in its current form doesn't
let us get away with just a fence(release) at #1.


It looks like it works.
But it's easy to construct a heavyweight correct algorithm.
The question that bothers me is: how lightweight an algorithm (that is at the
same time formally correct) can I construct in C++0x?


Dmitriy V'jukov
 
