Dave Rahardja
I'm trying to understand when exactly the compiler is allowed to elide access
to a variable.
Here is a common mistake made by a program waiting for a value that is
modified by an external entity:
bool flag;    // set asynchronously by an external entity

while (!flag)
{
    // Idle
}
In this case, the compiler is free to elide the access to flag, and replace
the while loop with an infinite loop.
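As I understand it, the generated code could effectively behave like the
following sketch (my illustration of the transformation, not actual compiler
output):

if (!flag)         // flag is read once, before entering the loop
{
    for (;;)
    {
        // Idle; the condition is never re-evaluated
    }
}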
Turning flag into a volatile variable, however, turns all accesses to flag
into observable behavior (§1.9-6), and prevents the compiler from eliding
access to the variable.
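For example, I would expect this version to force a fresh read of flag on
every iteration:

volatile bool flag;   // every access to flag is now observable behavior

while (!flag)
{
    // Idle
}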
However, I'm confused about the relationship between observable behavior and
sequence points (§1.9-7). What does it mean that all side effects before a
sequence point are completed before the next evaluation is performed? Does it
mean that inserting a function call such as:
void foo();   // Defined in another compilation unit
bool flag;

while (!flag)
{
    foo();
}
forces the compiler to re-read flag at the head of each loop iteration?
My thought is that the compiler is /technically/ still allowed to elide the
access to flag, because its behavior is still not observable. However, for
practical purposes, a compiler will not know what side effect foo() may have
(including modifying flag), so unless it performs whole-program optimization,
it cannot elide access to flag.
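To illustrate the whole-program-optimization point (a hypothetical sketch; the
names are mine): if foo() were defined in the same translation unit and
clearly never touched flag, I would expect the compiler to be able to hoist
the read again:

static void foo()
{
    // Does not touch flag at all.
}

bool flag;

void wait_for_flag()   // hypothetical wrapper, for illustration only
{
    while (!flag)      // the compiler can see that foo() never writes flag,
    {                  // so it may again read flag just once
        foo();
    }
}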
This line of questioning comes from my experiments with multithreading, which
is not defined by the standard. I'm trying to determine exactly when I need to
use the volatile keyword, say, when accessing the data members of one object from
multiple threads. Even after synchronizing access to member variables (another
off-topic operation), I'm still not sure if my reads and writes to member
variables are liable to get optimized away. However, §1.9-7 seems to imply
that modifying an object is a side effect that must be complete by the next
sequence point, and in all likelihood a compiler will never optimize such
operations away.
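Concretely, the kind of code I have in mind looks something like this (lock()
and unlock() are placeholders for whatever platform synchronization primitive
is actually used; all names are made up for illustration):

void lock();     // hypothetical platform synchronization primitives
void unlock();

class Worker
{
public:
    void stop()                    // called from another thread
    {
        lock();
        stopped_ = true;           // is this write guaranteed to reach memory?
        unlock();
    }

    void run()                     // runs in this thread
    {
        for (;;)
        {
            lock();
            bool done = stopped_;  // or may this read be cached in a register?
            unlock();
            if (done)
                break;
            // ... do work ...
        }
    }

private:
    bool stopped_;                 // does this member need to be volatile?
};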
Can someone point me to a reliable guide on how to correctly manage
optimizations for multithreaded programs? Or do I simply have to declare all
variables accessed by multiple threads as volatile?
-dr