Simultaneous Writes on Volatile


Shao Miller

> arr[0] = arr[arr[0]] + --(arr[arr[0] - 1]) + arr[arr[0]];
>
> Throw everything else aside ... DON'T WRITE CODE LIKE THIS.
> Even if it were valid C I'd still reject it in a code review effort.

It's valid C. You're dismissing the questions. You're right about
avoiding typing code like this. Great; a good reminder.

> It's valid in that it compiles,

The translation thus has defined behaviour.

> but not so in that it has defined behaviour.

The run-time behaviour for this particular example is out-of-scope of
the Standard, right?

> arr[arr[0] - 1] could be arr[0], meaning the other arr[arr[0]]
> references don't have a compile-time defined behaviour [hence it's UB].

I don't follow you here. What is it that's missing in order for
translation behaviour to be well-defined? If you are suggesting that
the run-time behaviour is dependent on factors outside the scope of
the Standard, then that's part of what I was asking about, for sure.
I think that's what you're suggesting, here.

> Furthermore, even if it were valid [which I contend it's not] it's
> horrible and should not in any form be encouraged.

I guess we would have to come to an agreement about "valid" first,
then proceed with agreements from there.

As far as horrible goes, yes, fine.

One possible use is to attempt to determine characteristics about an
implementation which are not required to be documented. A battery of
tests that exercise potentially awkward situations can possibly reveal
some statistics that could be used towards Bayesian inferences about
behaviour to rely upon. There're no guarantees, so such tests might
give the next best thing, when an implementation's vendor is not
interested or no longer available. Who will do this? Maybe nobody.
Other uses are possible, but I don't think it matters. We can invent
uses on-the-fly, so "horrible" is subject to change, or is a predicate
in certain contexts such as "in most cases". That's fine by me.
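For a concrete flavour of such a probe, here is a minimal sketch; it
exercises unspecified argument evaluation order, which, unlike the example
above, has well-defined behaviour, so a conforming test can record which
order a given implementation picked:

#include <stdio.h>

static int counter = 0;

static int next(void) { return counter++; }

/* The order in which the two arguments below are evaluated is
 * unspecified, but the calls cannot interleave, so the behaviour is
 * defined; only the printed order varies between implementations. */
static void probe(int first, int second)
{
    printf("first=%d second=%d\n", first, second);
}

int main(void)
{
    probe(next(), next()); /* "first=0 second=1" or "first=1 second=0" */
    return 0;
}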

Thanks.
 

Shao Miller

> In our specific case most of our customers are doing embedded
> applications and we essentially treat all writes as volatile.
> The logic behind this is that if someone wrote code that executed
> multiple writes without intervening reads, they intended to do
> so, and we don't eliminate the earlier writes.
>
> We support several processors where it is possible to have a
> write collision. On these we consistently resolve the write
> conflict with rules during code generation; using your terminology,
> we use software to avoid a collision. Write/read
> conflicts are a little more complex to resolve. We detect them during
> code generation and then use a handful of rules to resolve how
> they should be handled.
>
> I have been part of several processor instruction set designs
> and generally side with utilizing machine-generated code to
> resolve code generation ambiguities rather than have a processor
> try to resolve these issues. Almost without exception, as experience
> with the instruction set matures, ways are found to profitably
> use a simpler, less protective processor more effectively.
Excellent, excellent, excellent! I sincerely appreciate this valuable
response, Mr. W. Banks.
 

FredK

Snip
> If I have written code that offends you, then I apologize. Please
> understand its purpose. Its purpose is not as code that one might see
> in a practical program. Its purpose is to provide a reference during
> discussion. Implementations will do something with that program.
> What can we expect due to conformance? What can we expect is
> implementation-specific? What exactly does a C Standard suggest about
> 'volatile'? These are questions. I appreciate you sharing your
> thoughts on the matter.

But perhaps they are not meaningful questions. It's like asking what the
color Red smells like. Your code - with or without the volatile qualifier -
doesn't illustrate "simultaneous writes", or even "simultaneous"
read/writes. The only net effect here will be to inhibit optimization.
Realistically, because the array is a procedure-local automatic variable,
the compiler "could" ignore the volatile qualifier and get the same result.
But I doubt any compiler writer would bother to even check or care about it
enough to not simply apply its volatile rules. And unlike "register",
"volatile" is not a "hint" - so compilers aren't likely to be looking for
situations to ignore it.

Because of the nature of the example (volatile being meaningless to the
result) - there would be no way for the programmer to know, short of
analyzing the machine code output, what the compiler did.
 

Shao Miller

Snip


> But perhaps not meaningful questions. It's like asking what the color
> Red smells like.

Then feel free to ask for clarification and I am happy to oblige, and
thank you for your interest in sharing your expertise on the subject.

> Your code - with or without the volatile qualifier - doesn't
> illustrate "simultaneous writes",

It's not supposed to illustrate simultaneous writes. It's possible
that an implementation might schedule object writes to happen
simultaneously. The first question asks if the 'volatile' type
qualifier could inhibit an implementation's "right" to schedule writes
simultaneously. The code goes with the paragraph beginning "assuming
so, ..." which, you might note, was an immediate correction to the
first post, in the second post.

> or even "simultaneous" read/writes.

It's not supposed to illustrate simultaneous writes. The assumption
("assuming so, ...") is that there are no simultaneous writes. It's
difficult for that code to demonstrate simultaneous writes when the
assumption is that they are inhibited. If your answer is that
'volatile' does not inhibit simultaneous writes, that precedes the
code and the code is then out-of-scope.

Some posters have kindly already shared their interpretations of
'volatile' and suggested that it does not prevent simultaneous writes,
much less simultaneous reads. Those posters needn't even comment on
the code. What is your experience with 'volatile'? Has it prevented
implementations that you have used from performing simultaneous writes/
reads where the implementation would without the qualifier?

> The only net effect here will be to inhibit optimization.

Optimization of what, exactly? If the implementation defines the
operand evaluation order and defines what constitutes an access to a
'volatile' object, what decisions might it make differently than
without the qualifier?

> Realistically, because the array is a procedure-local automatic
> variable, the compiler "could" ignore the volatile qualifier and get
> the same result. But I doubt any compiler writer would bother to even
> check or care about it enough to not simply apply its volatile rules.

Sure. That'd be extra effort for likely little gain, perhaps.

> And unlike "register", "volatile" is not a "hint" - so compilers
> aren't likely to be looking for situations to ignore it.

It isn't a hint? If not, what does it demand of any conforming
implementation?

> Because of the nature of the example (volatile being meaningless to
> the result) - there would be no way for the programmer to know, short
> of analyzing the machine code output, what the compiler did.

Well, that's part of the questions. Let's pretend (just for a moment)
that the Standard implies that 'volatile' accesses cannot be performed
simultaneously. As in, reads for 'volatile's are performed for each
reference which implies a read. They aren't cached or assumed to be
consistent since the last sequence point. As in, writes for
'volatile's are not scheduled to occur alongside any other writes for
'volatile's. They occur before or after other 'volatile' writes, but
not during. If you're not interested in pretending this situation is
the case, then you are answering the very first question with "No."
That's an entirely acceptable response.

Thank you for the feedback.
 

FredK

> It's not supposed to illustrate simultaneous writes.

Hence the title "Simultaneous Writes on Volatile".

> It's possible that an implementation might schedule object writes to
> happen simultaneously.

It is possible that Hogsfather existed.

My answer is that there is no connection between volatile and this "magic
scheduling", and furthermore, even if there is - it is compiler
implementation and HW specific.

Of course it doesn't. Neither does the C compiler generate code that
"schedules simultaneous reads/writes". Even assuming that you have a
mythical parallelizing C compiler on a mythical HW implementation that would
allow the compiler to generate instructions destined for multiple-thread
execution - it isn't clear how the compiler could come close to guaranteeing
that the access would be simultaneous -- on purpose - perhaps accidentally
on purpose. Then the next question is why "simultaneous" access would be an
issue? It happens all the time today on almost any multi-CPU system. The
HW is designed to deal with access to the same memory location. If it
*isn't*, then it will take a lot more than anything a C compiler can do to
deal with it - because the simultaneous accesses typically come from
independent threads of execution - not multiple threads in a parallelizing C
compiler.

Since the case you are building doesn't seem to exist in reality, is it
worth killing the brain cells to see if volatile might serve a purpose?

> The only net effect here will be to inhibit optimization.

1) The already-in-a-register value cannot be re-used; it must be re-fetched.
That is:

extern volatile int *foo;

*foo = 1;
*foo = 2;

Cannot optimize away the first store. It *must* do both (all) writes:

a = *foo;
...
b = *foo;

Cannot use the value already fetched. It must refetch it.

2) In the implementation-dependent code there can be many rules specific to
the architecture, for example:

extern volatile unsigned char *pSomething;
int i;

for (i = 0; i < 100000; i += 1) *pSomething++ = i;

Without the volatile qualifier, a clever compiler might turn the middle of
this loop into a series of aligned int or quadword writes instead of the
individual byte writes. The volatile qualifier on many (most) compilers
would prevent this and require each write to be a byte write.
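A self-contained rendering of that second example (the device address
below is a made-up placeholder, assumed purely for illustration):

#include <stddef.h>

/* Made-up MMIO address, purely for illustration. */
#define FRAME_BASE ((volatile unsigned char *)0x40001000u)

void send_block(const unsigned char *src, size_t n)
{
    volatile unsigned char *dst = FRAME_BASE;
    size_t i;

    /* Because dst points to volatile, each iteration must perform its
     * own byte-sized store; the compiler may not coalesce these into
     * wider aligned writes as it could for a plain unsigned char *. */
    for (i = 0; i < n; i++)
        *dst++ = src[i];
}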
> It isn't a hint? If not, what does it demand of any conforming
> implementation?

AFAIK - and not being a "standards" guy and not being willing to look it
up... the only thing it *requires* is that all access to the variable
actually be made.
 

Shao Miller

FredK said:
> Hence the title "Simultaneous Writes on Volatile".

Yes, [almost] exactly. That's what the first question is certainly
about. Perhaps it's best to proceed one step at a time. I could have
asked the first question, met with varying responses, then repeatedly
pasted the code for those people who replied in the affirmative. Sorry
about the confusion.

> It is possible that Hogsfather existed.

I'll stick that possibility in my Luggage and pull it back out for
review when Times get Interesting.

> My answer is that there is no connection between volatile and this
> "magic scheduling", and furthermore, even if there is - it is compiler
> implementation and HW specific.

Ok.

> Of course it doesn't.

Ok.

> Neither does the C compiler generate code that "schedules simultaneous
> reads/writes". Even assuming that you have a mythical parallelizing C
> compiler on a mythical HW implementation that would allow the compiler
> to generate instructions destined for multiple-thread execution - it
> isn't clear how the compiler could come close to guaranteeing that the
> access would be simultaneous -- on purpose - perhaps accidentally on
> purpose.

Ok.

> Then the next question is why "simultaneous" access would be an issue?
> It happens all the time today on almost any multi-CPU system. The HW
> is designed to deal with access to the same memory location. If it
> *isn't*, then it will take a lot more than anything a C compiler can do
> to deal with it - because the simultaneous accesses typically come from
> independent threads of execution - not multiple threads in a
> parallelizing C compiler.

Ok.

> Since the case you are building doesn't seem to exist in reality, is it
> worth killing the brain cells to see if volatile might serve a purpose?

I have two jukebox arms that pick up discs and plop them back into
place. I schedule the operation for these arms so that they operate
concurrently. Each arm moves at a constant speed. Each arm is the same
distance 'd' away from slot 's'. I tell them both to plop their disc in
slot 's'. Brain cells healthy, the jukebox isn't in such great shape...
Nor are the worlds upon those discs.

> 1) The already-in-a-register value cannot be re-used; it must be
> re-fetched. That is:
>
> extern volatile int *foo;
>
> *foo = 1;
> *foo = 2;
>
> Cannot optimize away the first store. It *must* do both (all) writes:

Ok.

> a = *foo;
> ...
> b = *foo;
>
> Cannot use the value already fetched. It must refetch it.

Ok.

> 2) In the implementation-dependent code there can be many rules
> specific to the architecture, for example:
>
> extern volatile unsigned char *pSomething;
> int i;
>
> for (i = 0; i < 100000; i += 1) *pSomething++ = i;
>
> Without the volatile qualifier, a clever compiler might turn the middle
> of this loop into a series of aligned int or quadword writes instead of
> the individual byte writes. The volatile qualifier on many (most)
> compilers would prevent this and require each write to be a byte write.

Right.

> AFAIK - and not being a "standards" guy and not being willing to look
> it up... the only thing it *requires* is that all access to the
> variable actually be made.

Ok.

Thanks for sharing this!
 

FredK

Shao Miller said:
> FredK wrote:
>> Since the case you are building doesn't seem to exist in reality, is
>> it worth killing the brain cells to see if volatile might serve a
>> purpose?
>
> I have two jukebox arms that pick up discs and plop them back into
> place. I schedule the operation for these arms so that they operate
> concurrently. Each arm moves at a constant speed. Each arm is the same
> distance 'd' away from slot 's'. I tell them both to plop their disc in
> slot 's'. Brain cells healthy, the jukebox isn't in such great shape...
> Nor are the worlds upon those discs.

Bad analogy. Nor is it likely that you will ever be able to get both to
start at exactly the same instant in time. Just as it is nearly impossible
to "schedule" two threads to absolutely hit the same memory at the same time
except by random accident. You could instead have two threads spin on the
location, and then it becomes *likely* that it eventually will happen. But
the compiler itself isn't going to generate the spin absent
outside-the-language constructs (for example an atomic builtin).

Nor does it matter. Please name me the parallelizing C compiler and the HW
with which to do this. But I'll go a step further and even posit that you
can find both. "Simultaneous" access to the same memory address is
routinely handled in multi-processor shared memory systems. If this
mythical HW cannot handle it - why would the compiler even attempt to emit a
sequence like it?

"Could" the implementation inhibit doing it if the variable is declared
volatile? More likely it might be used to generate a sequence that "works
correctly" in an implementation-specific manner. But certainly it would be
outside of the language standard... though I would concede that it is within
the general usage of volatile to implement a HW-specific access method.
However unlikely it is.

Volatile, as I think has been repeated by more than just me - is designed
to allow access to memory-mapped hardware and shared memory. Aside from the
general nature of the required fetch/write - everything else is
implementation defined.

My general concern here is that the discussion itself confuses people into
reading that volatile is something that does something other than the above,
or something they should even care about. If you need to write a driver, or
a shared memory application - you need to care about it and a 1000 other
things. Otherwise you don't - and the initial example is an example of a
meaningless usage of volatile. The question of so-called simultaneous
access to memory in the context given is outside the scope of the C language
definition. Were someone to invent HW and a parallelizing C compiler to go
with it - the issues of correct operation would be up to the implementation
to guarantee - not the programmer writing the C application.
 

Shao Miller

FredK wrote:
[...]

> My general concern here is that the discussion itself confuses people
> into reading that volatile is something that does something other than
> the above, or something they should even care about. If you need to
> write a driver, or a shared memory application - you need to care about
> it and a 1000 other things. Otherwise you don't - and the initial
> example is an example of a meaningless usage of volatile. The question
> of so-called simultaneous access to memory in the context given is
> outside the scope of the C language definition. Were someone to invent
> HW and a parallelizing C compiler to go with it - the issues of correct
> operation would be up to the implementation to guarantee - not the
> programmer writing the C application.
What is your expectation for the number of reads of 'x' in:

volatile int x = 1;
int y = x + x;

? Can one have a general expectation regardless of implementation and
hardware? What does your experience of >= 32 years suggest?

If someone expects 2 reads of 'x' above, is that expectation consistent
with a general expectation that every reference (of any object) that
implies a read "should" result in an independent read, but a compiler
can typically optimize out such multiple reads where there is no
'volatile' qualifier?

These questions are a shift away from the original(s), but what you
share about your familiarity with 'volatile' can serve as a reference
for others, if you'd care to. Thanks, FredK.
 

Tom St Denis

> The translation thus has defined behaviour.

Uh, no.

Consider

i = 0;
i = ++i - i--;

What is the current value of 'i' after the 2nd statement? Every C
compiler must accept this program, but they can do anything they want
with it. It's syntactically valid.

> The run-time behaviour for this particular example is out-of-scope of
> the Standard, right?

No. Actually the standard prescribes fairly exactly the side effects
of expressions (not down to the implementation details, mind you).

>> arr[arr[0] - 1] could be arr[0], meaning the other arr[arr[0]]
>> references don't have a compile-time defined behaviour [hence it's UB].
>
> I don't follow you here. What is it that's missing in order for
> translation behaviour to be well-defined? If you are suggesting that
> the run-time behaviour is dependent on factors outside the scope of
> the Standard, then that's part of what I was asking about, for sure.
> I think that's what you're suggesting, here.

What is missing here is the standard does not prescribe the order of
evaluation of the individual terms. Even in something like

j = ++i * i - i++;

You know that it's equivalent to (++i * i) - i++, but how the
individual terms are computed is up to the compiler, it could be
computed as

A = i++
B = i
C = ++i
j = B*C + A

or equally valid as

A = ++i
B = i++
C = i
j = C*A + B

Both of which produce a different value.
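For concreteness, here are those two orders written out with explicit
sequence points so the behaviour is actually defined (reading '-' for the
'+' in the final lines, as noted later in the thread):

#include <stdio.h>

int main(void)
{
    int i, j, A, B, C;

    /* First hypothetical order, one step per statement: */
    i = 5;
    A = i; i++;    /* A = i++ */
    B = i;
    ++i; C = i;    /* C = ++i */
    j = B * C - A; /* 6 * 7 - 5 = 37 */
    printf("%d\n", j);

    /* Second hypothetical order: */
    i = 5;
    ++i; A = i;    /* A = ++i */
    B = i; i++;    /* B = i++ */
    C = i;
    j = C * A - B; /* 7 * 6 - 6 = 36 */
    printf("%d\n", j);

    return 0;
}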
>> Furthermore, even if it were valid [which I contend it's not] it's
>> horrible and should not in any form be encouraged.
>
> I guess we would have to come to an agreement about "valid" first,
> then proceed with agreements from there.

Well, it's syntactically valid code, it will compile, but the standard
doesn't prescribe the BEHAVIOUR of the code.

> One possible use is to attempt to determine characteristics about an
> implementation which are not required to be documented. A battery of
> tests that exercise potentially awkward situations can possibly reveal
> some statistics that could be used towards Bayesian inferences about
> behaviour to rely upon. There're no guarantees, so such tests might
> give the next best thing, when an implementation's vendor is not
> interested or no longer available. Who will do this? Maybe nobody.
> Other uses are possible, but I don't think it matters. We can invent
> uses on-the-fly, so "horrible" is subject to change, or is a predicate
> in certain contexts such as "in most cases". That's fine by me.

Well you wouldn't have to guess what is UB or not UB if you read the
damn spec. It specifically denotes things like that [lacking a
sequence point] as causing UB.

Basically the rule is simple, if a single statement [more or less]
uses a variable [or through an array potentially the same variable]
multiple times with modification it's likely to be UB.

so just like i = ++i * i--; is likely to be a bad idea so is i =
arr[0]++ * --arr[0]; or i = arr[arr[0]]++ * --arr[arr[0]]; ...

Tom
 

Bart van Ingen Schenau

> What is your expectation for the number of reads of 'x' in:
>
> volatile int x = 1;
> int y = x + x;
>
> ? Can one have a general expectation regardless of implementation and
> hardware? What does your experience of >= 32 years suggest?

Assuming the compiler does not use a very strange definition of
'access to a volatile variable', this is required to result in two
reads of x.
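One way to see what a particular compiler actually does with this is to
read its assembly output (a sketch; "gcc -O2 -S" is one common invocation,
and any compiler's listing option serves the same purpose):

/* reads.c - compile with "gcc -O2 -S reads.c" and inspect reads.s.
 * With the volatile qualifier, two loads of x must appear; without
 * it, the compiler may emit a single load, or fold y away entirely. */
volatile int x = 1;

int f(void)
{
    int y = x + x; /* two accesses to a volatile object */
    return y;
}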
> If someone expects 2 reads of 'x' above, is that expectation consistent
> with a general expectation that every reference (of any object) that
> implies a read "should" result in an independent read, but a compiler
> can typically optimize out such multiple reads where there is no
> 'volatile' qualifier?

Yes. Without the volatile qualification, any number of accesses to x
is allowed. Most commonly it will be 0, 1 or 2, but other numbers are
allowed if the compiler wants to produce strange code.

Bart v Ingen Schenau
 

FredK

Shao Miller said:
FredK wrote:
[...]
> What is your expectation for the number of reads of 'x' in:
>
> volatile int x = 1;
> int y = x + x;

Two, but my expectation for this expression is minimal unless x is external
in scope. Generally (but not always) when I see volatile applied to
something other than a pointer, I suspect that the programmer didn't
understand what they were trying to do. See below.
> ? Can one have a general expectation regardless of implementation and
> hardware? What does your experience of >= 32 years suggest?
>
> If someone expects 2 reads of 'x' above, is that expectation consistent
> with a general expectation that every reference (of any object) that
> implies a read "should" result in an independent read, but a compiler
> can typically optimize out such multiple reads where there is no
> 'volatile' qualifier?

Yes. In general, once "x" is read into a register it can be used for every
reference to "x" in the routine (ignoring register pressure, etc.) - there is
no need to re-fetch the value. When you make it volatile, you are telling
the compiler that every reference to "x" must re-fetch the value again. The
same is true for a write - the compiler can't see multiple writes to "x",
accumulate the changes in the register that contains "x", and defer it to a
single write (which normally it might do).

x = 1;
x = 2;

Optimized this would be a single write of 2 to "x". With the volatile
qualifier the compiler must update the variable both times.

a = x;
b = x;

Optimized, the fetch of "x" into a register would use the same fetch to
satisfy both "a" and "b".

But unless the variable ("x") is an actual memory location and can be
changed by other threads of execution - the only effect will be to defeat
optimization.
> These questions are a shift away from the original(s), but what you
> share about your familiarity with 'volatile' can serve as a reference
> for others, if you'd care to. Thanks, FredK.

Imagine that you have a pointer to a hardware register - here is something
way simple:

extern volatile uint32_t *my_register;

The programmer needs to be able to ensure that each reference to the
register is actually issued. Volatile ensures this. Now, depending on the
architecture there may be other requirements - for example that the
instruction used to access the register is in this case a 32-bit memory
read/write (even if the compiler may think that the memory address is
unaligned - consider that some HW/compilers have instruction sequences that
they use to access unaligned memory to avoid an alignment fault).

But volatile alone isn't all that is needed, they also need to understand
the underlying architecture because they may also need to issue HW-specific
instructions such as a memory barrier to force specific things having to do
with how HW can merge and collapse writes.

The same applies to shared memory access. You declare it volatile because
there may be another thread of execution that may access the variable and
you cannot trust that the value of an earlier read is still valid, and you
also don't want the compiler to defer the write or collapse a series of
writes into a single write. Again, volatile alone isn't all that is needed
when doing things with shared memory.

When you apply volatile to an automatic variable, it is effectively
meaningless - because there is no external mechanism defined to change the
variable external to the routine. The automatic variable may in fact have
no storage and only exist as a register reference. What the C compiler does
in this case is beyond what I can guess. It probably ignores the volatile
qualifier.

When you apply it to a program scope static variable the same is true unless
you have multiple threads of execution within the same program unit... which
*is* possible (for example an asynchronous routine that interrupts execution
and modifies or reads the variable).

You might apply it to an external-scope variable that can be accessed from a
different process/thread - i.e. a type of shared memory implementation.
 

Keith Thompson

Tom St Denis said:
> Uh, no.
>
> Consider
>
> i = 0;
> i = ++i - i--;
>
> What is the current value of 'i' after the 2nd statement? Every C
> compiler must accept this program, but they can do anything they want
> with it. It's syntactically valid.

I think what Shao means is that the compile-time behavior is defined
(i.e., it compiles successfully). But see below.

[...]
> What is missing here is the standard does not prescribe the order of
> evaluation of the individual terms. Even in something like
>
> j = ++i * i - i++;

What's missing is that it's not just about the order of evaluation.
The behavior is completely undefined; it's not just a choice among
the possible orders of evaluation.

[...]
> Well it's syntactically valid code, it will compile, but the standard
> doesn't prescribe the BEHAVIOUR of the code.

It won't necessarily compile. See the note under the standard's
definition of "undefined behavior":

NOTE Possible undefined behavior ranges from ignoring the
situation completely with unpredictable results, to behaving
during translation or program execution in a documented manner
characteristic of the environment (with or without the issuance
of a diagnostic message), to terminating a translation or
execution (with the issuance of a diagnostic message).

[...]
 

Tom St Denis

> It won't necessarily compile. See the note under the standard's
> definition of "undefined behavior":
>
>     NOTE Possible undefined behavior ranges from ignoring the
>     situation completely with unpredictable results, to behaving
>     during translation or program execution in a documented manner
>     characteristic of the environment (with or without the issuance
>     of a diagnostic message), to terminating a translation or
>     execution (with the issuance of a diagnostic message).

Fair enough. But that's mostly a way of saying "the compiler ain't
broke so stop trying to use that code."

However, I think most compilers will compile said UB code just fine.
The output code will be unpredictable garbage but it'll translate just
fine.

I was mostly trying to correct the point that just because it compiles
doesn't mean it's not UB.

Tom
 

lawrence.jones

FredK said:
> Two, but my expectation for this expression is minimal unless x is
> external in scope.

I think it's a bit broader than that. In order for volatile to be
meaningful, there must be some way for an external agent to locate it,
so x must have linkage (either internal or external) or have its address
taken and passed out of the local scope (e.g., passed as an argument or
assigned to an object that has [or might have] linkage).
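To illustrate that last case, a minimal sketch (watch() is a hypothetical
external routine, assumed purely for illustration):

extern void watch(volatile int *p); /* hypothetical external routine */

void wait_for_event(void)
{
    volatile int flag = 0; /* automatic, but its address escapes */

    watch(&flag);          /* some outside agent may now write to flag */
    while (!flag)
        ;                  /* volatile forces a fresh read on each test */
}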
 

FredK

FredK said:
>> Two, but my expectation for this expression is minimal unless x is
>> external in scope.

lawrence.jones said:
> I think it's a bit broader than that. In order for volatile to be
> meaningful, there must be some way for an external agent to locate it,
> so x must have linkage (either internal or external) or have its address
> taken and passed out of the local scope (e.g., passed as an argument or
> assigned to an object that has [or might have] linkage).

Yes. The two lines were ambiguous since there was no context. I think I
explained the exact same thing later in the reply, less succinctly and more
wordily :). The tricky thing that I avoided, so as not to confuse things
further, is that even a local (automatic) variable could be passed by
address to an external routine - which would force the compiler to associate
storage with the variable. Highly unlikely, but possible. It is also
possible that, by virtue of the volatile qualifier itself, the
compiler might automatically associate a memory/stack address for it, because
the implication is that the variable will be referenced by some outside
agent (i.e. passed by address).
 

Shao Miller

> Uh, no.

By translation, I mean compiling. What right would any implementation
have to refuse to compile the code? I don't mean run-time.

> Consider
>
> i = 0;
> i = ++i - i--;
>
> What is the current value of 'i' after the 2nd statement?

The implementation could reboot the computer during evaluation of the
second statement, noting multiple writes to the same object.

> Every C compiler must accept this program, but they can do anything
> they want with it. It's syntactically valid.

Do they? It might be syntactically valid, but could a compiler
determine that there are multiple attempts to write to 'i' within the
same expression and the same sequence point bounds? Could it make that
determination at compile-time (translation-time)? It seems like that
would be an easy determination to make. Then a compiler could stop
compilation and output an error, such as "Violation of C99 6.5p2." Then
the code needn't compile.
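(For what it's worth, some real compilers do make exactly that
determination, though as a warning rather than a refusal to translate; a
sketch assuming GCC, whose -Wall enables -Wsequence-point:)

/* seq.c - GCC with -Wall warns "operation on 'i' may be undefined"
 * at translation time, though it still compiles the program; the
 * Standard permits a diagnostic, or even a refusal, but requires
 * neither. */
int main(void)
{
    int i = 0;
    i = ++i - i--; /* two unsequenced modifications of i */
    return i;
}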
> No. Actually the standard prescribes fairly exactly the side effects
> of expressions (not down to the implementation details, mind you).

If a 'volatile' access is implementation-defined and operand evaluation
order and side effect order are unspecified, how can the run-time
behaviour be within the scope of the Standard? We know which side
effects are warranted, but what else?

>>> arr[arr[0] - 1] could be arr[0], meaning the other arr[arr[0]]
>>> references don't have a compile-time defined behaviour [hence it's UB].
>>
>> I don't follow you here. What is it that's missing in order for
>> translation behaviour to be well-defined? If you are suggesting that
>> the run-time behaviour is dependent on factors outside the scope of
>> the Standard, then that's part of what I was asking about, for sure.
>> I think that's what you're suggesting, here.
>
> What is missing here is the standard does not prescribe the order of
> evaluation of the individual terms.

But can that impact compilation (translation)?

> Even in something like
>
> j = ++i * i - i++;
>
> [...]
>
> A = i++
> B = i
> C = ++i
> j = B*C + A

You meant '-' rather than '+', but I follow you.

What is the difference between these two examples? It seems you are
suggesting a temporal order as the difference. What about:

A1 = ++i and A2 = i++ (same time)
B = i
j = B * A1 - A2

That might be worse than the other two examples, since the two writes
happen at the same time, which might lead to very odd consequences.
>>> Furthermore, even if it were valid [which I contend it's not] it's
>>> horrible and should not in any form be encouraged.
>>
>> I guess we would have to come to an agreement about "valid" first,
>> then proceed with agreements from there.
>
> Well, it's syntactically valid code, it will compile, but the standard
> doesn't prescribe the BEHAVIOUR of the code.

There are different bits we can call "behaviour", including behaviour
during compilation and behaviour during execution. I believe that you
are referring to the execution behaviour, here.

>> One possible use is to attempt to determine characteristics about an
>> implementation which are not required to be documented. A battery of
>> tests that exercise potentially awkward situations can possibly reveal
>> some statistics that could be used towards Bayesian inferences about
>> behaviour to rely upon. There're no guarantees, so such tests might
>> give the next best thing, when an implementation's vendor is not
>> interested or no longer available. Who will do this? Maybe nobody.
>> Other uses are possible, but I don't think it matters. We can invent
>> uses on-the-fly, so "horrible" is subject to change, or is a predicate
>> in certain contexts such as "in most cases". That's fine by me.
>
> Well you wouldn't have to guess what is UB or not UB if you read the
> damn spec. It specifically denotes things like that [lacking a
> sequence point] as causing UB.

We have a disconnect. The "guessing" was about the implementation's
design decisions and operational characteristics, not about a C
Standard. Sorry for the confusion.

> Basically the rule is simple: if a single statement [more or less]
> uses a variable [or through an array potentially the same variable]
> multiple times with modification, it's likely to be UB.

Right.

> so just like i = ++i * i--; is likely to be a bad idea so is i =
> arr[0]++ * --arr[0]; or i = arr[arr[0]]++ * --arr[arr[0]]; ...

A bad idea indeed and in general, sure.

Keith Thompson wrote:
> I think what Shao means is that the compile-time behavior is defined
> (i.e., it compiles successfully).

That is indeed what I meant. Thanks for helping to clarify where my
terminology was failing.
[...]

>> What is missing here is the standard does not prescribe the order of
>> evaluation of the individual terms. Even in something like
>>
>> j = ++i * i - i++;
>
> What's missing is that it's not just about the order of evaluation.
> The behavior is completely undefined; it's not just a choice among
> the possible orders of evaluation.
>
> [...]
>
>> Well it's syntactically valid code, it will compile, but the standard
>> doesn't prescribe the BEHAVIOUR of the code.
>
> It won't necessarily compile. See the note under the standard's
> definition of "undefined behavior":
>
>     NOTE Possible undefined behavior ranges from ignoring the
>     situation completely with unpredictable results, to behaving
>     during translation or program execution in a documented manner
>     characteristic of the environment (with or without the issuance
>     of a diagnostic message), to terminating a translation or
>     execution (with the issuance of a diagnostic message).
>
> [...]
Well I don't quite see how the compilation behaviour could be undefined.
How can the compiler make the determination that the original post's
code is a violation of 6.5p2? By completely discarding 'volatile' and
assuming that the array values are what they were at the last sequence
point?

If we used a user-input value as an index, would that leave a chance for
undefined behaviour during compilation? That was roughly one of the
ideas for the code example; a determination that cannot be made at
compile-time.

If 'volatile' can be discarded, then perhaps a compiler could refuse to
compile because it determines a violation of 6.5p2. Is that really
"allowed"? It seems like it is, but seems like a fair question to ask.
> Fair enough. But that's mostly a way of saying "the compiler ain't
> broke so stop trying to use that code."
>
> However, I think most compilers will compile said UB code just fine.
> The output code will be unpredictable garbage but it'll translate just
> fine.

What makes it undefined behaviour, exactly? In the original post, we
have the assumption that no factor modifies the 'arr' array. With that
assumption, it's a clear violation of 6.5p2 and thus undefined
behaviour. Is that what you are referring to? Or are you suggesting
that it's undefined behaviour by the Standard, regardless of any
assumptions or factors?

> I was mostly trying to correct the point that just because it compiles
> doesn't mean it's not UB.

Where was that point that you intended to correct in the thread? I
think that perhaps it was just a miscommunication due to terminology.
Translation-time/compile-time versus execution-time/run-time.

Thanks, Mr. T. St. Denis.
 

Shao Miller

Bart said:
> Assuming the compiler does not use a very strange definition of
> 'access to a volatile variable', this is required to result in two
> reads of x.

Ok. I'd like to understand why you suggest that. Is it because you
perceive one or more of:
- Side effects should not occur at the same instant in time?
- Side effects involving the same object should not occur at the same
instant in time?
- The abstract semantics imply that every read of a value is independent
of anything else and should occur separately from other evaluation
processes?
- Another reason?

> Yes. Without the volatile qualification, any number of accesses to x
> is allowed. Most commonly it will be 0, 1 or 2, but other numbers are
> allowed if the compiler wants to produce strange code.
Thank you, Mr. B. van Ingen Schenau.
 

Shao Miller

FredK said:
> Two, but my expectation for this expression is minimal unless x is
> external in scope. Generally (but not always) when I see volatile
> applied to something other than a pointer, I suspect that the programmer
> didn't understand what they were trying to do. See below.
>
> Yes. In general, once "x" is read into a register it can be used for
> every reference to "x" in the routine (ignoring register pressure, etc.)
> - there is no need to re-fetch the value. When you make it volatile, you
> are telling the compiler that every reference to "x" must re-fetch the
> value again. The same is true for a write - the compiler can't see
> multiple writes to "x", accumulate the changes in the register that
> contains "x", and defer it to a single write (which normally it might
> do).
>
> x = 1;
> x = 2;
>
> Optimized, this would be a single write of 2 to "x". With the volatile
> qualifier the compiler must update the variable both times.
>
> a = x;
> b = x;
>
> Optimized, the fetch of "x" into a register would use the same fetch to
> satisfy both "a" and "b".
Ok. I'd like to understand why you suggest that. Is it because you
perceive one or more of:
- Side effects should not occur at the same instant in time?
- Side effects involving the same object should not occur at the same
instant in time?
- The abstract semantics imply that every read of a value is independent
of anything else and should occur separately from other evaluation
processes?
- Another reason?
> But unless the variable ("x") is an actual memory location and can be
> changed by other threads of execution - the only effect will be to
> defeat optimization.
So intentional suppression of optimization is a goal for the 'volatile'
qualifier, but it's pretty useless if there're no means for 'x' to change.
> Imagine that you have a pointer to a hardware register - here is
> something way simple:
>
> extern volatile uint32_t *my_register;
>
> [...]
The rest of your post should certainly help those who might be confused
by the original post into thinking 'volatile' suppresses simultaneous
writes. Thank you again, FredK.
 

Keith Thompson

Shao Miller said:
Tom St Denis wrote: [...]
>> Consider
>>
>> i = 0;
>> i = ++i - i--;
>>
>> What is the current value of 'i' after the 2nd statement?
>
> The implementation could reboot the computer during evaluation of the
> second statement, noting multiple writes to the same object.
>
> Do they? It might be syntactically valid, but could a compiler
> determine that there are multiple attempts to write to 'i' within the
> same expression and the same sequence point bounds?
[...]

Yes; see below.

[SNIP]
Keith said:
>> What is missing here is the standard does not prescribe the order of
>> evaluation of the individual terms. Even in something like
>>
>> j = ++i * i - i++;
>
> What's missing is that it's not just about the order of evaluation.
> The behavior is completely undefined; it's not just a choice among
> the possible orders of evaluation.
>
> [...]
>
>> Well it's syntactically valid code, it will compile, but the standard
>> doesn't prescribe the BEHAVIOUR of the code.
>
> It won't necessarily compile. See the note under the standard's
> definition of "undefined behavior":
>
>     NOTE Possible undefined behavior ranges from ignoring the
>     situation completely with unpredictable results, to behaving
>     during translation or program execution in a documented manner
>     characteristic of the environment (with or without the issuance
>     of a diagnostic message), to terminating a translation or
>     execution (with the issuance of a diagnostic message).
>
> [...]

Shao Miller wrote:
> Well I don't quite see how the compilation behaviour could be undefined.

I'm not sure whether compile-time behavior is undefined or is restricted
to what's mentioned in the note. In particular, I'm not sure whether
"ignoring the situation completely with unpredictable results" can apply
to compile-time behavior. I'd certainly be unhappy with a compiler that
erased my file system *at compile time* because I fed it "i = i++;".

But that's beside the point.
> How can the compiler make the determination that the original post's
> code is a violation of 6.5p2? By completely discarding 'volatile' and
> assuming that the array values are what they were at the last sequence
> point?

I don't remember what code was in the original post.

If a compiler can prove that a program's behavior will be undefined
in every possible execution, it can treat it as undefined and, for
example, reject the translation unit. For something like "j = ++i *
i - i++;", this isn't difficult to prove, with or without volatile.

On the other hand, a compiler can't reject something like:

if (0) i = i++;

[...]
 

FredK

Shao Miller said:
> Ok. I'd like to understand why you suggest that. Is it because you
> perceive one or more of:
> - Side effects should not occur at the same instant in time?
> - Side effects involving the same object should not occur at the same
> instant in time?
> - The abstract semantics imply that every read of a value is independent
> of anything else and should occur separately from other evaluation
> processes?
> - Another reason?

The above use was just to illustrate the requirement. The compiler is free
to organize most variables as it sees fit. Most hardware these days requires
operations to actually happen in registers, for example, so once a "variable"
is in a register - unless the register is re-used for something else - the
compiler never has to refetch it. It can also assume that, unless it is told
otherwise, a variable can't simply change without its knowing (which is
what volatile tells it). So once it has a value in a register for x, it
never has to refetch it. It never even has to create "a" or "b" if they are
never used except for the fetch - and it never has to copy the value of x
into either.

Consider some actual uses for volatile:

int foo(void)
{
    volatile int trigger = 0;

    setup_trigger(&trigger);

    while (1) {
        // ... some stuff
        if (trigger) {
            // Do something
            trigger = 0;
        }
    }
}

Imagine that "setup_trigger" passes the address to a routine that
asynchronously sets it to "1" when a particular event occurs. The reason to
make trigger volatile is to ensure that the read of trigger actually fetches
the value from the storage of trigger. Otherwise the compiler might use
whatever value of trigger that happened to be in a register from the last
time it needed the variable.
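To make the sketch self-contained, here is one hypothetical counterpart
for setup_trigger (the handler and the use of signal() are assumptions for
illustration; the post above does not define them):

#include <signal.h>

static volatile int *trigger_p; /* where the event is reported */

static void on_event(int sig)
{
    (void)sig;
    *trigger_p = 1; /* the "outside agent" write the loop polls for */
}

/* Strictly, a signal handler should only store to a volatile
 * sig_atomic_t; assume int qualifies on this implementation. */
void setup_trigger(volatile int *p)
{
    trigger_p = p;
    signal(SIGINT, on_event);
}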

Again, consider the following:

unsigned char tmp;
unsigned char *frame = FRAME_ADDRESS;
volatile unsigned char *reg = REG_ADDRESS;

*frame = 1;
*frame = 2;

*reg = 1;
tmp = *reg;

*reg = 2;
__MEMORY_BARRIER;
*reg = 0;

In the first case, the compiler is likely to optimize away the *frame = 1;
and instead just write *frame = 2. This is the classic desire of frame
buffer writing - and normal programs - we only care about the LAST result
unless we say otherwise.

In the second, without the volatile - a compiler might just make tmp == 1.
But what I *want* to happen is for the write to *reg to happen *and* then
the read to happen (for example because the read itself has side effects on
many registers). Also, writes have to complete before the read can be
completed... so by doing the read we can be sure that the write has
completed (assuming uncached memory-mapped hardware space).

In the third, I want to write 2 and then 0 to the register. I don't want
the compiler to skip the first write. I also put in a memory barrier because,
for this hypothetical example, the hardware itself can collapse writes (just
like the compiler might optimize it - only do the latest write). The
individual bits in the register may cause specific things to
trigger/happen - as opposed to only caring about the "last" value written to
it. It may be a transmit register, for example, that sends the value written
to a secondary uProc - it wants to send 2 and then 0, and not just 0.
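For reference, the __MEMORY_BARRIER above is a stand-in for whatever the
platform provides; a sketch of one possible definition (the GCC builtin is
real, everything else is an assumption about the platform):

/* One possible definition of the barrier used above; entirely
 * implementation-specific. */
#if defined(__GNUC__)
#define __MEMORY_BARRIER __sync_synchronize() /* full memory barrier */
#else
#define __MEMORY_BARRIER /* the platform's barrier primitive goes here */
#endif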
 
