What does volatile guarantee?


Eric Sosman

On second thought, even Eric's argument seems not so strong anymore:
First, z is read,
then y is atomically increased by the previously read value,
then x ...
...
While e.g. "n" is being increased, some other thread could modify "a" and "z".
Obviously, that is irrelevant to the current thread, which no longer
cares about "z"'s value and does not yet care about "a"'s old value.
...
Finally the resulting value of "b" would be added to the new value of "a".

No ambiguities involved.

It breaks Java's "left to right evaluation" rule, though.
Whether that's important is for you as a language designer to
decide.
So it's all just about whether "++","--","+=" and "-=" would be made
shorthands for what is already possible by the Atomic* classes.
The real reason boils down to the fact that those atomic operations are
still slightly slower than the non-atomic ones, while the cases that
don't care about atomicity by far outnumber the others. That's why I
added those "(not me!)"s.
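As a point of reference, here is a hedged sketch of what such shorthands would have to mean, spelled out with the existing java.util.concurrent.atomic classes (the class and field names below are my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Shorthand {
    // A volatile read or write is atomic on its own, but "v++" is a
    // read-modify-write sequence and is NOT atomic as a whole.
    static volatile int v = 0;

    // What an atomic "++" would have to mean, spelled out today:
    static final AtomicInteger a = new AtomicInteger(0);

    public static void main(String[] args) {
        v++;                    // three steps: read v, add 1, write v
        a.incrementAndGet();    // one indivisible step, like "++a" would be
        a.getAndIncrement();    // one indivisible step, like "a++" would be
        System.out.println(v + " " + a.get());   // prints "1 2"
    }
}
```

So the proposal in this thread is essentially syntactic sugar for incrementAndGet()/getAndIncrement() on volatile fields.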

My impression (I'm by no means a hardware expert) is that the
time penalty is considerably worse than "slight." It'll depend a
lot on the nature of the hardware, though.
 

Lew

Andreas said:
I have, w.r.t the spectrum.
You have, w.r.t certain points of the spectrum lending themselves
to a boundary line more than others, and not less than the current.

No. You just restated my point - that there is a spectrum: at one end
atomicity is already implemented, at the other end implementing atomicity
would be ridiculous, and in between some points lend themselves to it
more than others. That's exactly what I was saying. Thank you for
clarifying it.
The shorthand combining operators already differ from their explicit
a = a <op> b variants (especially for array elements and fields
of non-trivially obtained objects).
Adding their atomicity for volatiles wouldn't exactly wreck the whole
paradigm.

I never said they would, only that they didn't need to be changed.

There is a cost to changing a language, and it's cumulative. It's like
radiation or acetaminophen - small doses are OK, but repeatedly over time they
add up to a toxic level. Changes should be conservative, not eager.
For consistency, however, *all* of them would have to be atomic then,
including e.g. /=, >>>= ... which do not even have counterparts in
the Atomic* classes so far.

Thank goodness for the other synchronization techniques, then.
I'm not really proposing them, just arguing that they wouldn't be *bad*.
I make no claim that they'd be anything more than just a convenience.

Sometimes, conveniences happen.

And because of the aforementioned cumulative toxicity, they should be resisted
unless they can be shown to have minimal or negligible impact and important
benefit. Eric has shown that the changes for the combined operators will not
be minimal, and no one has argued that the benefit is important. Ergo, don't
do it.
 

Lew

Patricia said:
How frequently does ++ appear in typical Java programs?

Rather a lot, I should think.

for ( int ix = 0; ix < limit; ++ix ) ...

int count = 0;
for ( Foo foo : foos )
{
blahBlah();
++count;
}

etc.

Or did you mean specifically how often an atomic ++ for volatiles would be needed?

I request clarification on your point about the branch not taken:
In addition to the obvious costs, consider the effect on branch
prediction. The interesting case is the ++ that does not need to be
atomic, the current use of ++. In those cases, the jneq will never be
taken.

I should also think that the non-atomic ++ will not branch and loop but use
the increment instruction of the host system.

If there were an atomic ++ as well as the non-atomic, would the compiler have
to generate different code depending on which one was in use?

Was your point that that difference would complicate life for any attempt to
create an atomic version of ++ that "knew" it was working with a volatile
variable?

If so, then the real question is how often an atomic ++ would be needed, and
whether its frequency justifies a new semantic and more difficult compilation
of the operator, vs. using the existing mechanisms for atomic incrementation.

Given the issues you and Eric have elucidated for the implementation of an
atomic ++ for volatile variables, its questionable benefit, and the existence
of suitable ways to accomplish the same goal, it is clear that it is not a
good idea. For those who disagree, you will very likely have to live with the
disappointment of it never happening.
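For completeness, a minimal sketch of the "existing mechanisms for atomic incrementation" referred to above (the class and variable names are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CountDemo {
    // incrementAndGet() is an indivisible read-modify-write; a plain
    // volatile "hits++" could lose updates under contention.
    static final AtomicInteger hits = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                hits.incrementAndGet();
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(hits.get());   // always 200000
    }
}
```

With a volatile int and "hits++" instead, the final count can come out below 200000, which is exactly the case an atomic ++ would be meant to fix.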
 

Arne Vajhøj

Rather a lot, I should think.

for ( int ix = 0; ix < limit; ++ix ) ...

int count = 0;
for ( Foo foo : foos )
{
blahBlah();
++count;
}

etc.

Or did you mean specifically how often an atomic ++ for volatiles would
be needed?

It seems likely that she is talking about the required atomic ++.

BTW, isn't ++something a C++'ism?

Arne
 

Arne Vajhøj

It's been around longer than that, since C.

I know, but the idea that ++something is better than something++
because it is faster is rooted in C++ classes, I believe.

Arne
 

Eric Sosman

I know, but the idea that ++something is better than something++
because it is faster is rooted in C++ classes, I believe.

When I actually want the value of the expression, I write
whichever I need (usually a[x++] or a[--x]). When all I want
is the side-effect, I write ++x because "increment x" seems to
read more smoothly than "x increment."

In neither case do I waste even one deci-neuron's worth of
brain power on the question of which is faster -- or which was
once said to have been found to be faster by someone whom the
sayer didn't actually know but had heard about from someone
else who might possibly have known the experimenter's second
cousin's first wife's roommate.
 

Mike Schilling

Arne said:
I know, but the idea that ++something is better than something++
because it is faster is rooted in C++ classes, I believe.

Right, in classes which overload "++".

y = ++x + z;

simply calls the ++ method and adds z to the result, while

y = x++ + z;

needs to make a copy of x, add z to it, assign the result to y, and then
call the ++ method on the "real" x. Depending on how complicated "x" is,
the copy may be a significant expense.
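In Java, where ++ on an int cannot be overloaded, no such copy cost arises; the two forms differ only in the value the expression yields. A small sketch (variable names are mine):

```java
public class IncDemo {
    public static void main(String[] args) {
        int x = 5, z = 10;
        int y1 = ++x + z;   // x becomes 6 first, then 6 + 10
        System.out.println(x + " " + y1);   // prints "6 16"

        x = 5;
        int y2 = x++ + z;   // the old value 5 is used, then x becomes 6
        System.out.println(x + " " + y2);   // prints "6 15"
    }
}
```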
 

Lew

I wouldn't know. This is a Java newsgroup. I'm not aware of any speed
differences between the two expressions in Java, and would be dubious of any
claims that there are.

Eric said:
When I actually want the value of the expression, I write
whichever I need (usually a[x++] or a[--x]). When all I want
is the side-effect, I write ++x because "increment x" seems to
read more smoothly than "x increment."

Which effect is the "side" effect? Isn't incrementation a primary effect of
the "increment" operator?

I use ++x where I want the value of the expression to be the incremented
value, even if I'm discarding that value. I use x++ where I want the value of
the expression to be the not-yet-incremented value, which would be pointless
to throw away.

Plus what Eric said.
 

Mike Schilling

Patricia said:
This effect happens, to some degree, even for simple integers in C. I
remember having to allocate an extra register in some situations for
x++. The code for ++x could use the same register to represent x and
to carry the intermediate result.

True, though it's worse for overloaded operations for at least two reasons:

1. The expense of generating an extra integer (allocating an extra register
or performing an extra store and fetch) is fixed, while the expense of
creating a temporary object is unbounded.

2. Since the code generator understands integer arithmetic, it can often
avoid any extra expense at all. The above, for example, can simply be
generated as

inc x
mov x, y
add z, y

vs.

mov x, y
add z, y
inc x

This kind of optimization isn't possible when all that's known about + and
++ is that they're method calls.
 

Arne Vajhøj

I wouldn't know. This is a Java newsgroup. I'm not aware of any speed
differences between the two expressions in Java, and would be dubious of
any claims that there are.

Neither am I.

But I think the something++ notation is more readable, and
the main reason ++something is used is the influence
of C++ programmers' micro-optimization.

Arne
 

Arne Vajhøj

I know, but the ++something is better than something++
because it is faster is rooted in C++ classes I believe.

When I actually want the value of the expression, I write
whichever I need (usually a[x++] or a[--x]). When all I want
is the side-effect, I write ++x because "increment x" seems to
read more smoothly than "x increment."

If there is a functional difference, then the choice
is a given.

I agree with your readability comment, but it does
not convince me, because "incremented x" is less
readable than "x incremented".

Arne
 

Lew

Neither am I.

But I think the something++ notation is more readable and

I'm sure you agree that's entirely a matter of taste and style.

Two of us in this thread have expressed the opposite opinion, and both are
good programmers.
the main reason ++something is used is the influence
of C++ programmers' micro-optimization.

Neither of us who expressed a preference for the pre-increment version had
"C++ micro[-]optimization" as the reason. Both of us had readability as the
reason. Do you have any evidence for your assertion? It would need to be
statistical to justify the claim of "main reason".
 

Arne Vajhøj

Neither am I.

But I think the something++ notation is more readable and

I'm sure you agree that's entirely a matter of taste and style.
Yes.
the main reason ++something is used is the influence
of C++ programmers' micro-optimization.

Neither of us who expressed a preference for the pre-increment version
had "C++ micro[-]optimization" as the reason. Both of us had readability
as the reason.

Readability is not independent of experience.

And even though you may not consider yourselves C++ programmers,
I am pretty sure that you read C++ programs before learning
Java and had Java teachers who used to do C++ programming.
Do you have any evidence for your assertion? It would
need to be statistical to justify the claim of "main reason".

The discussion is as classic in C++ as avoiding public fields in
Java.

I can (obviously) not produce statistics that show how much of the
C++ way of thinking has been inherited by Java programmers.

It is my belief that most Java programmers either used
C++ before Java, or learned Java from others who used C++
before Java, or have read plenty of Java books by people who
used C++ before Java.

Arne
 

Arved Sandstrom

Arne Vajhøj wrote:
[ SNIP ]
It is my belief that most Java programmers either used
C++ before Java, or learned Java from others who used C++
before Java, or have read plenty of Java books by people who
used C++ before Java.

Arne

I'll bet lots and lots of money that in 2010 point #1 is no longer
the case. I'm not convinced that it was _ever_ the case. In 2010 point
#2 is also likely not the case, and although point #3 may be true, I'm
glad you used the word "use" and not "knew well".

AHS
 

RedGrittyBrick

Arne Vajhøj wrote:
[ SNIP ]
It is my belief that most Java programmers either used
C++ before Java, or learned Java from others who used C++
before Java, or have read plenty of Java books by people who
used C++ before Java.

I'll bet lots and lots of money that in 2010 point #1 is no longer
the case. I'm not convinced that it was _ever_ the case. In 2010 point
#2 is also likely not the case, and although point #3 may be true, I'm
glad you used the word "use" and not "knew well".

Regardless, my programming language history went something like

A = A + 1
a := a + 1
a += 1
a++

So, to me, a++ is more natural than ++a.
 

Andreas Leitgeb

Eric Sosman said:
It breaks Java's "left to right evaluation" rule, though.

I surrender to this point.

// in class context
int a;
int foo() { a = 42; return 21; }
{ a = 21; a += foo(); } // -> a == 42
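Made runnable (the class name is mine), the snippet demonstrates the rule: the old value of "a" is read before foo() runs, so the result is 21 + 21 = 42 rather than 42 + 21 = 63:

```java
public class EvalOrder {
    static int a;

    static int foo() {
        a = 42;        // this write is overwritten by the pending += store
        return 21;
    }

    public static void main(String[] args) {
        a = 21;
        a += foo();    // reads a (21) first, then calls foo(), then stores 21 + 21
        System.out.println(a);   // prints 42, not 63
    }
}
```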

That renders my previous posts moot w.r.t. "op=".
My impression (I'm by no means a hardware expert) is that the
time penalty is considerably worse than "slight." It'll depend a
lot on the nature of the hardware, though.

If just "++" and "--" were changed, and only for volatiles, then the result
might fix more broken programs than it slows down. (And volatiles already
"suffer" from uncached memory access, performance-wise, so the slowdown
wouldn't be all that bad, relatively speaking.)
Also, that change wouldn't break a program unless it explicitly relied on
non-atomicity - is that something a "correct" program is allowed to do?

I want to learn about arguments that thwart even that part, just as
I'm happy to have learnt an argument against atomic op=.
 

Eric Sosman

Eric said:
When I actually want the value of the expression, I write
whichever I need (usually a[x++] or a[--x]). When all I want
is the side-effect, I write ++x because "increment x" seems to
read more smoothly than "x increment."

Which effect is the "side" effect? Isn't incrementation a primary effect
of the "increment" operator?

The "side effect" is the storing of a new value in x.
JLS Chapter 15, first sentence:

"Much of the work in a program is done by evaluating
expressions, either for their side effects, such as
assignments to variables, [...]"

Section 15.1, second paragraph:

"Evaluation of an expression can also produce side
effects, because expressions may contain embedded
assignments, increment operators, decrement operators,
and method invocations."
 

Andreas Leitgeb

Eric Sosman said:
Eric said:
When I actually want the value of the expression, I write
whichever I need (usually a[x++] or a[--x]). When all I want
is the side-effect, I write ++x because "increment x" seems to
read more smoothly than "x increment."
Which effect is the "side" effect? Isn't incrementation a primary effect
of the "increment" operator?
The "side effect" is the storing of a new value in x.
JLS Chapter 15, first sentence:

The philosophy of calling even the primary purpose of an idiom a
"side effect" likely comes from functional languages, where the
direct effect is by definition only the returned value.

Outputting something to an OS channel or byte array is also just
the (typical) side effect of the write/print* methods in OutputStreams
and Writers.
 

Mike Schilling

Thomas said:
(In C++ with their overloading of operators with arbitrary code, the
situation could be different. In a way, in the presence of overloading
and complex classes, the "result" of x++ is always used, at least
implicitly.)

That is, it's always computed, whether it's used or not.
 
