return x = 0; // under C99 for volatile x


Francois Grieu

Hi, consider:

volatile unsigned char x;
unsigned char foo(void) { return x = 0; }

Assuming conformant C99, when foo is executed, is the value returned
a) always 0
b) what was read from x after x has been written with 0
c) unspecified.

TIA,
François Grieu
 

Stefan Ram

Francois Grieu said:
volatile unsigned char x;
unsigned char foo(void) { return x = 0; }
Assuming conformant C99, when foo is executed, is the value returned
a) always 0
b) what was read from x after x has been written with 0
c) unspecified.

An object is a region of data storage. If an lvalue does not
designate an object when it is evaluated, the behavior is
undefined.

So, the lvalue »x« has to designate a region of data /storage/.

Now, C does not define »storage«. One possibility may be to
assume that a storage is something that /retains/ the last
value written to it. In this case, a volatile object might
not be »storage«, which would render the behavior
undefined. But your implementation might provide a definition
of the behavior in this case even where ISO/IEC 9899:1999 (E) does not.
 

Stefan Ram

One possibility may be to assume that a storage is something
that /retains/ the last value written to it. In this case, a
volatile object might not be »storage«

On the other hand, a real storage device that fulfils this
condition might seem to violate it when another process
writes to it. From the point of view of a single process, a
storage that is being modified by another process cannot be
distinguished from a storage that is faulty and changes its
value at arbitrary moments. So the question remains whether
such a thing is »storage«.

A more general definition of »(volatile) storage« would be
»anything that supports two operations: "read" and "write"«.
 

Kaz Kylheku

Hi, consider:

volatile unsigned char x;
unsigned char foo(void) { return x = 0; }

Assuming conformant C99, when foo is executed, is the value returned
a) always 0
b) what was read from x after x has been written with 0
c) unspecified.

An assignment expression has the value of the left operand, after
the assignment (and not: the value that is stored in the operand
/by/ the assignment).

Access to a volatile object is a side effect.

So, after zero is stored in it, the object x must be accessed again to
retrieve the stored value.
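
To make the two readings concrete, here is what a compiler would
effectively emit under each of them (a sketch, not quoted from the
Standard; the names foo_a and foo_b are made up for the comparison):

volatile unsigned char x;

/* reading (a): the value of the assignment is the value that was stored */
unsigned char foo_a(void)
{
    x = 0;        /* one volatile write */
    return 0;     /* x is not touched again */
}

/* reading (b): the value is whatever the volatile lvalue holds after
   the assignment, so x must be accessed again */
unsigned char foo_b(void)
{
    x = 0;        /* one volatile write */
    return x;     /* plus one volatile read-back */
}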
 

crisgoogle

An assignment expression has the value of the left operand, after
the assignment (and not: the value that is stored in the operand
/by/ the assignment).

Access to a volatile object is a side effect.

So, after zero is stored in it, the object x must be accessed again to
retrieve the stored value.

But since what exactly volatile means, and what "access" means, is
pretty much implementation-defined, I don't think that the literal
readback is required or even wanted.

After all, what exactly does "after the assignment" mean? One
femtosecond? One clock cycle? One second?

If we're talking about the type of hardware register where volatile
might be used and where the OP's original issue may actually be
important (e.g., where the act of writing to a register causes it to
be updated via some other mechanism), then the time at which the
readback occurs makes a difference. Since your program can't tell,
within the confines of the Standard, when the proposed readback took
place, simply giving back the zero, almost regardless of what the
hardware is doing, seems valid to me.

The implementation, of course, could specify something different, but
that would be up to the implementation, not the Standard.
 

Francois Grieu

Kaz Kylheku wrote :
An assignment expression has the value of the left operand, after
the assignment (and not: the value that is stored in the operand
/by/ the assignment).

Access to a volatile object is a side effect.

So, after zero is stored in it, the object x must be accessed again to
retrieve the stored value.

That seems to answer the question, as option b). Thanks.

Just to give a little more background: in my case, "x" is a
peripheral computing a CRC. Bytes enter the CRC when written
to x, and reading x returns some of the CRC (not the last byte
fed). I'm actually doing something to the tune of

vOr |= x = dst[j] = src[j];

trying to copy src to dst, compute a CRC of what's being
copied, and accumulate the bitwise OR of all the copied bytes
into vOr. x is declared volatile, and that induces the compiler
to read x back, which is not the intent, and initially was
a surprise. I'm trying to determine whether the readback is
authorized / mandated / unspecified given C99.

François Grieu
 

bert

Hi, consider:

volatile unsigned char x;
unsigned char foo(void) { return x = 0; }

Assuming conformant C99, when foo is executed, is the value returned
a) always 0
b) what was read from x after x has been written with 0
c) unspecified.

I vote for (b). The volatile location could
well be one which begins to "do something"
on being written to, probably varying according
to the value written; and when next read
from, does not return a value until it has
finished "doing" whatever it was doing.

Other posters' musings about the time interval
between writing and re-reading are irrelevant.
In really low-level programming, I have been
familiar with pseudo-storage locations which
behaved in just the way I have suggested.
They usually had a ready/not_ready status
flag which allowed the code to do other things
while waiting for the datum, but assembly code
generated from C could scarcely take advantage
of such a feature; it would just have to spin
until the datum was available.
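
A sketch of the sort of location I mean, with made-up register
addresses and a made-up READY bit (nothing here comes from the OP's
hardware):

#define STATUS (*(volatile unsigned char *)0x40u)   /* hypothetical status register */
#define DATA   (*(volatile unsigned char *)0x41u)   /* hypothetical data register */
#define READY  0x01u                                /* hypothetical ready bit */

unsigned char read_datum(void)
{
    while (!(STATUS & READY))
        ;              /* spin until the device has finished "doing" */
    return DATA;       /* only now does a read return the datum */
}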
--
 

Flash Gordon

Francois Grieu wrote:

Just to give a little more background: in my case, "x" is a
peripheral computing CRC. Bytes enter the CRC when written
to x, and reading x returns some of the CRC (not the last byte
fed). I'm actually doing something in the tune of

vOr |= x = dst[j] = src[j];

I would split the above into two lines, not because of problems in the
code, but because I think it would make it easier for a person to read
it, *especially* if x is volatile and you won't get back what you wrote in!
trying to copy src to dst, compute a CRC of what's being
copied, and the bitwise OR of all the copied bytes into vOr.
x is declared volatile, and that induces the compiler
to read x back, which is not the intent, and initially was
a surprise. I'm trying to determine whether the readback is
authorized / mandated / unspecified given C99.

Sometimes it's not worth working out whether something is guaranteed to
work, because if you can't work it out someone looking at the code next
year will *also* have trouble working it out!
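
For instance, a split along those lines might look like this (a sketch,
reusing the names from your snippet; the temporary just makes the intent
explicit):

unsigned char b = src[j];   /* read the source byte once */
dst[j] = b;                 /* copy it */
x = b;                      /* feed the CRC peripheral: a write only */
vOr |= b;                   /* OR in the byte itself, no volatile read-back */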
 

Francois Grieu

Flash said:
Francois said:
Just to give a little more background: in my case, "x" is a
peripheral computing CRC (declared volatile). Bytes enter the CRC
when written to x, and reading x returns some of the CRC (not the
last byte fed). I'm actually doing something in the tune of

vOr |= x = dst[j] = src[j];

I would split the above in to two lines not because of problems in the
code, but because I think it would make it easier for a person to read
it, *especially* if x is volatile and you won't get back what you wrote in!

What's actually written is closer to

// HAL.h
(..)
#ifdef ON_THAT_PLATFORM
#include "HAL_for_that_platform.h"
#endif
(..)
// enter byte into CRC, return byte
#ifndef EnterCrc
unsigned char EnterCrc(unsigned char byte);
#endif

// HAL_for_that_platform.h
(..)
// CRC hardware
volatile unsigned char CRCREG @0x12;
(..)
// enter byte into CRC, return byte, which is evaluated once
#define EnterCrc(byte) \
    ((unsigned char)(CRCREG = (unsigned char)(byte)))

// memutils.c
(..)
// copy one byte from src to dst, enter it into CRC, and into vOr
vOr |= EnterCrc( dst[j] = src[j] );
(..)

so the code is readable enough to my standards. It is
Sometimes it's not worth working out whether something is guaranteed to
work, because if you can't work it out someone looking at the code next
year will *also* have trouble working it out!

Of course I can use a temp, but I guess the performance will
suffer measurably, and I need either a global temp in EnterCrc
or to change the interface of EnterCrc and change several pieces of
perfectly fine code. "inline" is not an option (my compiler
at best ignores it), and I find no way to declare a temp in a
macro returning a value with that compiler. So it will probably
end up using an idiom of that (Cosmic) compiler:

// enter byte into CRC, return byte, which is evaluated once
#define EnterCrc(byte) \
((unsigned char)_asm(" LD _CRCREG,A")(unsigned char)(byte))
// we use an assembly language macro to avoid a temp for byte


Francois Grieu
 

Flash Gordon

Francois said:
Flash said:
Francois said:
Just to give a little more background: in my case, "x" is a
peripheral computing CRC (declared volatile). Bytes enter the CRC
when written to x, and reading x returns some of the CRC (not the
last byte fed). I'm actually doing something in the tune of

vOr |= x = dst[j] = src[j];

I would split the above in to two lines not because of problems in the
code, but because I think it would make it easier for a person to read
it, *especially* if x is volatile and you won't get back what you
wrote in!

What's actually written is closer to

// HAL_for_that_platform.h
(..)
// CRC hardware
volatile unsigned char CRCREG @0x12;
(..)
// enter byte into CRC, return byte, which is evaluated once
#define EnterCrc(byte) \
    ((unsigned char)(CRCREG = (unsigned char)(byte)))

You don't need the casts.
// memutils.c
(..)
// copy one byte from src to dst, enter it into CRC, and into vOr
vOr |= EnterCrc( dst[j] = src[j] );
(..)

EnterCrc( dst[j] = src[j] );
vOr |= GetCrc();
so the code is readable enough to my standards. It is

It's not only whether you can read that line without looking at the
#define; it's also whether the entire thing, when you look at the defines
(which you might need to), is readable and can be understood to be correct.
Of course I can use a temp

You don't need a temp to split it.
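
For example (a sketch; it re-reads the byte from dst, which is not
volatile, so no read-back of CRCREG is forced):

dst[j] = src[j];        /* copy the byte */
EnterCrc(dst[j]);       /* feed the CRC; the macro's value is unused here */
vOr |= dst[j];          /* OR in the byte taken from dst, not from the register */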
but I guess the performance will
suffer measurably,

Compilers have been able to eliminate the odd temp for many years, so
the odds are that if you did use a temp (which you don't need to) it
would have no effect at all on performance.
and I need either a global temp in EnterCrc,
or change the interface of EnterCrc and change several pieces of
perfectly fine code. "inline" is not an option (my compiler
at best ignores it), and I find no way to declare a temp in
macro returning a value with that compiler.

Well, you don't need a temp, so you don't need a function either.
So it will probably
end up using an idiom of that (Cosmic) compiler:

// enter byte into CRC, return byte, which is evaluated once
#define EnterCrc(byte) \
((unsigned char)_asm(" LD _CRCREG,A")(unsigned char)(byte))
// we use an assembly language macro to avoid a temp for byte

You are already using something very compiler specific. I've never yet
used a compiler that allowed "@ address" to specify the address a
variable would be stored at. I've always done it more like (without
thought or checking)
#define SOMETHING (*(volatile unsigned char *)0x1234)
Which if I have not made any mistakes is standard C syntax, although the
mapping of integer to address is implementation defined. Either that or
I have used linker magic to map a variable to the correct address.
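
For the register in question that would be something like this (a
sketch; the 0x12 comes from your platform header, the macro names are
made up):

#define CRCREG (*(volatile unsigned char *)0x12u)

/* write-only feed: the assignment's value is never used, so nothing
   forces a read-back of the volatile register */
#define FeedCrc(b) ((void)(CRCREG = (unsigned char)(b)))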
 

jacob navia

Richard wrote:
Fact: anyone who uses "we" regularly in a tech newsgroup is generally a
conceited arse who thinks more about cosying up to similar
dysfunctionals than he does to actually providing help to the unwitting
new boys.

Nobody "laughed" at anything. There was only one objection (from Mr John
Devereux ) that objected to example 2.1 in that text. The example is correct
and Mr Devereux apparently misunderstood the problem. We had:

volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}

"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."

To this, Mr Devereux says:
<quote>
The problem is that the compiler is *not* free to do this (as far as I
can see). Surely clearing the buffer *is* a side effect?
<end quote>

He misunderstood that example. The compiler can't eliminate the loop
but it can move the whole loop elsewhere, and that is what the authors
of that paper said. That is possible since the buffer is not
declared volatile.

Nobody laughed at anything, and many other people supported that
paper. Obviously Carmody MUST phrase and envelop everything with his own
private fantasies.

Whole discussion at
http://www.motherboardpoint.com/semantics-volatile-t191744p4.html
 

Seebs

Nobody "laughed" at anything. There was only one objection (from Mr John
Devereux ) that objected to example 2.1 in that text. The example is correct
and Mr Devereux apparently misunderstood the problem. We had:
volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}

"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."

Hmmm. Interesting. Well, first off, I'd point out that unless there's more
to it, buffer is obviously all zeroes already, making the question moot.
He misunderstood that example. The compiler can't eliminate the loop
but it can move the whole loop elsewhere, and that is what the authors
of that paper said. That is possible since the buffer is not
declared volatile.

I'm not convinced of that. So far as I can tell, even if we assume the
buffer was initialized with other stuff before buffer_init is called, it's
a violation for there to be any point during the execution of that function
at which you can observe that buffer_ready is 1 and buffer contains non-zero
values.

Of course... That applies only to strictly conforming code, and the only
way this could come up would be if we had some kind of threading going on,
which isn't (currently) conforming.

-s
 

Phil Carmody

Seebs said:
Nobody "laughed" at anything. There was only one objection (from Mr John
Devereux ) that objected to example 2.1 in that text. The example is correct
and Mr Devereux apparently misunderstood the problem. We had:
volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}

"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."

Hmmm. Interesting. Well, first off, I'd point out that unless there's more
to it, buffer is obviously all zeroes already, making the question moot.


That's not obvious at all. We don't have any idea what else happens
before the call to buffer_init().

What is obvious is that the first quoted sentence is just plain
false, and therefore the second sentence's 'therefore' does not
necessarily follow, and indeed cannot follow.

Phil
 

Seebs

volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}
"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."

Hmmm. Interesting. Well, first off, I'd point out that unless there's more
to it, buffer is obviously all zeroes already, making the question moot.

That's not obvious at all. We don't have any idea what else happens
before the call to buffer_init().

As I said, "unless there's more to it". If there's other code affecting
these, it's hard to guess. Without other code, though, it's moot.
What is obvious is that the first quoted sentence is just plain
false, and therefore the second sentence's 'therefore' does not
necessarily follow, and indeed cannot follow.

Agreed. It is pretty obvious that the loop performs assignment, which is
an operation with side effects.

That said:

In general, a compiler is permitted to reorder side effects on non-volatiles
as long as a strictly conforming program can't tell the difference. So,
if you ran this on a threaded system, it's not obvious to me that the compiler
couldn't reorder the side effects on the buffer to later in the function,
because a strictly conforming program has no way to peek at the buffer
and the volatile flag until the function returns... If you use threads, you're
not strictly conforming, so we don't care what you see.

I wouldn't buy that, though, and if a compiler actually did break the
assumption of ordering, I'd regard it as a bug.
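
If you actually wanted to pin that ordering down, the usual
(non-portable) trick is a compiler barrier; a sketch, assuming a
GCC-style compiler and that only compiler reordering, not hardware
write buffering, is at issue:

#define BUF_SIZE 64   /* placeholder; the original leaves this unspecified */

volatile int buffer_ready;
char buffer[BUF_SIZE];

#define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

void buffer_init(void)
{
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    COMPILER_BARRIER();   /* keep the stores to buffer ahead of the flag store */
    buffer_ready = 1;
}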

-s
 

Phil Carmody

Seebs said:
volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}
"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."
Hmmm. Interesting. Well, first off, I'd point out that unless there's more
to it, buffer is obviously all zeroes already, making the question moot.

That's not obvious at all. We don't have any idea what else happens
before the call to buffer_init().

As I said, "unless there's more to it". If there's other code affecting
these, it's hard to guess. Without other code, though, it's moot.
What is obvious is that the first quoted sentence is just plain
false, and therefore the second sentence's 'therefore' does not
necessarily follow, and indeed cannot follow.

Agreed. It is pretty obvious that the loop performs assignment, which is
an operation with side effects.

That said:

In general, a compiler is permitted to reorder side effects on non-volatiles
as long as a strictly conforming program can't tell the difference. So,
if you ran this on a threaded system, it's not obvious to me that the compiler
couldn't reorder the side effects on the buffer to later in the function,
because a strictly conforming program has no way to peek at the buffer
and the volatile flag until the function returns... If you use threads, you're
not strictly conforming, so we don't care what you see.

I wouldn't buy that, though, and if a compiler actually did break the
assumption of ordering, I'd regard it as a bug.


Absolutely. If the compiler can see that the code has been required
to be used in a multi-threaded context, then it has additional
information to help it avoid making the "optimisation", and if
the compiler doesn't have that additional information it can't
make the optimisation, on the presumption that the rest of the code
is conforming. The paper just seemed to be "we found a bug in GCC,
aren't we clever?". The answer to which is "no, not especially, just
look at their bugzilla".

Couldn't a signal handler change such a value and still be conforming?
We don't actually need never-heard-of-them-they're-not-std-c-threads
/per se/.

Phil
 

Seebs

Couldn't a signal handler change such a value and still be conforming?
We don't actually need never-heard-of-them-they're-not-std-c-threads
/per se/.

So far as I can tell, if a signal handler is ever called except by raise(),
then you're still outside the environment of strictly conforming code,
etcetera.

-s
 

Flash Gordon

Phil said:
Seebs said:
volatile int buffer_ready;
char buffer[BUF_SIZE];

void buffer_init() {
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;
    buffer_ready = 1;
}
"The for-loop does not access any volatile locations, nor does it
perform any side-effecting operations. Therefore, the compiler is free
to move the loop below the store to buffer_ready, defeating the
developer's intent."
Hmmm. Interesting. Well, first off, I'd point out that unless there's more
to it, buffer is obviously all zeroes already, making the question moot.
That's not obvious at all. We don't have any idea what else happens
before the call to buffer_init().

As I said, "unless there's more to it". If there's other code affecting
these, it's hard to guess. Without other code, though, it's moot.
What is obvious is that the first quoted sentence is just plain
false, and therefore the second sentence's 'therefore' does not
necessarily follow, and indeed cannot follow.
Agreed. It is pretty obvious that the loop performs assignment, which is
an operation with side effects.

That said:

In general, a compiler is permitted to reorder side effects on non-volatiles
as long as a strictly conforming program can't tell the difference. So,
if you ran this on a threaded system, it's not obvious to me that the compiler
couldn't reorder the side effects on the buffer to later in the function,
because a strictly conforming program has no way to peek at the buffer
and the volatile flag until the function returns... If you use threads, you're
not strictly conforming, so we don't care what you see.

I wouldn't buy that, though, and if a compiler actually did break the
assumption of ordering, I'd regard it as a bug.


Absolutely. If the compiler can see that the code has been required
to be used in a multi-threaded context, then it has additional
information to help it avoid making the "optimisation", and if
the compiler doesn't have that additional information it can't
make the optimisation, on the presumption that the rest of the code
is conforming. The paper just seemed to be "we found a bug in GCC,
aren't we clever?". The answer to which is "no, not especially, just
look at their bugzilla".


It's not always as simple as that...

Suppose you have two processors sharing some dual-port RAM (each also
with local RAM), and the processor this code is running on has a cache.
Is there any guarantee that, having written the data to the buffer, the
cached copy of it is flushed to the dual-port RAM before the write to
buffer_ready? It might not even be a processor on the other side; it
could be that the buffer_ready flag causes the video hardware to switch
which block of RAM it is displaying.
Couldn't a signal handler change such a value and still be conforming?
We don't actually need never-heard-of-them-they're-not-std-c-threads
/per se/.

If a signal handler is called other than by a call to raise or abort,
then if it accesses any static object (which buffer and buffer_ready
presumably are) that is not of type "volatile sig_atomic_t", the
behaviour is undefined. So a signal handler cannot change a value in
buffer (or even read it), and cannot even access buffer_ready! So no,
signals are not a way you can access buffer whilst that function is
executing and still be conforming.
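
The one thing the Standard does let an asynchronously invoked handler
do with static data is assign to a volatile sig_atomic_t flag, e.g.
(a sketch):

#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;   /* assigning to a volatile sig_atomic_t is the only
                         static-object access C99 permits here */
}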

In my opinion, it's a case of: when you need to use volatile (or believe
you do), you need to read your implementation's documentation to find out
exactly what you need to do and how you need to do it.
 