Is this code MT-Safe?


Denis Remezov

Ken said:
[...]
What are the general guidelines for using volatile?
[...]

I assume you mean with regard to multithreading. Take a look at the
following MT FAQ if you haven't done so already:

http://www.thecomputerportal.com/fr...6%5CThreads%7C6373&pos=3&type=full&s=Computer

It does mention the C standard (which C++ largely inherits) in connection
with the use of volatile and POSIX multithreading.

Just search the whole text for "volatile" (although it's a great read
as a whole).

Denis
 

Ken Durden

I already posted this to comp.programming.threads, but the group over
there seems to be arguing from a purely theoretical standpoint rather
than addressing the reality of compiler optimizations and processor
architecture. I was hoping for a more straightforward answer from
this group.

Thanks,
-ken

The explanation follows the code example; MFC synchronization objects are used
because that's what I know how to use. Some pseudo-code may be present
where coding in all the parameters wasn't meaningful.

class A
{
public:
    A() :
        _bRunning(false),
        _stopEvent(FALSE,FALSE),
        _stoppedEvent(FALSE,FALSE)
    {}

    void start()
    {
        CSingleLock( &crit, TRUE );

        AfxBeginThread(ThreadFunc, (LPVOID) this, ... );
    }

    void stop()
    {
        CSingleLock( &crit, TRUE );

        _stoppedEvent.Reset();
        _stopEvent.Set();

        ::WaitForSingleObject( _stoppedEvent, INFINITE );
    }

    void test()
    {
        CSingleLock( &crit, TRUE );

        if( _bRunning )
            stop();

        assert( !_bRunning );
    }

private:
    UINT ThreadFunc( LPVOID pParam )
    {
        A * a = (A*) pParam;

        a->_bRunning = true;
        ::WaitForSingleObject( pParam->_stopEvent, INFINITE );
        a->_bRunning = false;

        _stoppedEvent.Set();
    }

    CCriticalSection crit;
    CEvent _stopEvent;
    CEvent _stoppedEvent;
    bool _bRunning;
};

int main()
{
    A a;
    a.start();
    a.test();
}

In this simplistic example, assume the only two threads you see are
the only two that are relevant. Am I guaranteed that the assert in the
test() function will not fire?

I'm worried that the main thread will read the value of _bRunning in
the test() function, and then when it comes time to do the assert,
simply re-use the existing value it has already read, rather than
going back to main memory to get the new value which was set by the
other thread. Is this a valid concern?
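
In other words (this is hand-written just to illustrate the transformation
I'm afraid of, not anything I know a compiler actually does), I'm worried
the optimizer could treat test() as if I had written:

void test()
{
    CSingleLock( &crit, TRUE );

    bool cached = _bRunning;   // one load, value kept in a register

    if( cached )
        stop();

    assert( !cached );         // re-uses the possibly stale register value
}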

Does _bRunning need to be tagged as volatile?

My understanding from school and reading is that it does, but that
supposition blows so many holes in the way most MT programs I've seen
are written that it's scary. For example, here's another example, even
simpler:

class B
{
public:
    B() : m_pList(NULL) {}

    void add( int n )
    {
        CSingleLock( &crit, TRUE );

        if( m_pList == NULL )
            m_pList = new std::list<int>();

        m_pList->push_back( n );
    }

    int size()
    {
        CSingleLock( &crit, TRUE );

        if( m_pList == NULL )
            return 0;

        return m_pList->size();
    }

private:
    CCriticalSection crit;
    std::list<int> * m_pList;
};

In this example, does m_pList need to be declared as
std::list<int> * volatile m_pList?

Suppose I have a dozen threads all competing to add items to this
list; without volatile, is there a possibility that one thread will
have a "cached" value of m_pList which is out of date? If the threads
only called add(), it seems there is no way for the NULL value of
m_pList to be cached in a register; but what if the threads also called
size() prior to calling add()? Would that introduce a possibility for
the pointer value to be stored in a register?

Given the relatively small set of registers on Intel processors, how
big an issue is volatile on that platform? What effect does locality
of reference have? If I reference the same variable 30 lines of C apart,
does that make a difference compared to accessing it 2 lines apart? Do
non-inlined function calls change this behavior?

What are the general guidelines for using volatile? Even if I've got
my locks and events all lined up perfectly, it seems I can still get
burnt without it. I've got a beefed-up scenario which models class A
(the first one); I'm going to add volatile to see if I can change the
behavior, but it's a (dreaded) low-frequency intermittent problem, so I
probably won't know for a month whether it helped.

Thanks a lot,
-Ken
 

Julie

Ken said:
I already posted this to comp.programming.threads...

Wrong here as well. C++ has no notion of threads.

To get the answers that you need, you will have to post on a newsgroup dealing
specifically with your operating system and/or threads package. In looking at
your code and comments, it is apparent that your target is MFC on Windows --
there are numerous ms-specific newsgroups that could better answer your
questions, as well as the resources in the MSDN library.
 

Xenos

Ken Durden said:
[...]

I think the main reason you are seeing problems is that AfxBeginThread() is
expecting a function with a particular signature, which you are not giving
it (a non-static member function has an extra hidden parameter for 'this').
Try making ThreadFunc static. You are passing it 'this', but not using it
consistently: you use the passed-in value for some members and the "hidden"
this for others. That's probably why you are experiencing weird behavior.
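
Roughly what I mean is something like the following rework of your
ThreadFunc (a quick sketch from memory, so treat the exact MFC signature
as an assumption and check the docs):

// static member: matches AFX_THREADPROC, i.e. UINT (*)(LPVOID), no hidden 'this'
static UINT ThreadFunc( LPVOID pParam )
{
    A * a = static_cast<A*>( pParam );

    a->_bRunning = true;                                // always go through 'a'
    ::WaitForSingleObject( a->_stopEvent, INFINITE );   // not pParam->, not a bare member
    a->_bRunning = false;

    a->_stoppedEvent.SetEvent();                        // CEvent's real method is SetEvent()
    return 0;                                           // the posted version was missing a return
}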

I don't think volatile is your problem. I usually only use volatile on
variables that may change unexpectedly and are accessed more than once
within a particular function (and I need to see the change). An example is
looping (polling) on a variable until it changes (by another thread or via a
hardware register).
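
For instance, something along these lines (the flag name is made up):

volatile bool g_stopRequested = false;   // may be set by another thread or an ISR

void pollUntilStopped()
{
    while( !g_stopRequested )    // without volatile the compiler may hoist the load out
        ::Sleep( 1 );            // of the loop and spin forever on a stale register copy
}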

Anyway, if it may change through any means other than the current thread of
execution that is accessing it, it is always safest to make it volatile.

I would use mutexes instead of critical sections. Critical sections are
faster (in execution), but they lock the processor to the thread, instead of
the thread to the resource you are protecting. Instead of the thread
waiting for the resource to become available, all other threads are blocked
by the critical section when they care nothing about that resource. This
unnecessarily increases the latency of context switching. This can be bad
if you have hard deadlines to meet.

DrX
 

Peter van Merkerk

Xenos said:
I would use mutexes instead of critical sections. Critical sections are
faster (in execution), but they lock the processor to the thread, instead of
the thread to the resource you are protecting. Instead of the thread
waiting for the resource to become available, all other threads are blocked
by the critical section when they care nothing about that resource. This
unnecessarily increases the latency of context switching. This can be bad
if you have hard deadlines to meet.

<offtopic>
Are you sure about this? It seems that you believe a critical section
disables interrupts or the scheduler. However, this is not what MSDN
says, and it is not what the actual code in the OS does. Under
MS-Windows, critical sections and mutexes are quite similar. Critical
sections do protect resources, don't lock the processor to one thread
(context switches are still possible while a critical section is held),
and do not increase the latency of context switching. The
difference between a critical section and a mutex is that mutexes work
across process boundaries, while critical sections can only be used inside
a single process. A critical section does not block all other threads
when it is entered. Only if another thread tries to enter the same
critical section will that thread be blocked until the critical section
is released, just like a mutex.

A critical section uses an atomic test and set instruction (in user
mode) to determine whether the thread may continue. If the test
indicates that no other thread has entered the critical section, no
kernel call will be made and the thread can continue. If there is
contention (another thread has already entered the same critical
section) a kernel call will be made to block the thread until the other
thread leaves the critical section. Switching to kernel mode and back
is rather slow, so by avoiding kernel calls whenever possible the time
needed to enter a critical section can be significantly reduced.
Critical sections are fast on single processor systems if there is no
contention.
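
Very roughly, the idea looks like this (a sketch only; the names are made
up and the real CRITICAL_SECTION also handles recursion, ownership and
spin counts):

struct SketchCriticalSection
{
    LONG lockCount;     // -1 when free
    HANDLE waitEvent;   // auto-reset kernel event, only touched under contention
};

void Enter( SketchCriticalSection* cs )
{
    // user-mode atomic operation: no kernel call on the uncontended fast path
    if( ::InterlockedIncrement( &cs->lockCount ) == 0 )
        return;
    // contention: block in the kernel until the owner leaves
    ::WaitForSingleObject( cs->waitEvent, INFINITE );
}

void Leave( SketchCriticalSection* cs )
{
    // if someone is waiting, wake one waiter via the kernel object
    if( ::InterlockedDecrement( &cs->lockCount ) >= 0 )
        ::SetEvent( cs->waitEvent );
}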
</offtopic>
 

tom_usenet

Ken Durden said:
I already posted this to comp.programming.threads, but the group over
there seems to be arguing from a purely theoretical standpoint rather
than addressing the reality of compiler optimizations and processor
architecture. I was hoping for a more straightforward answer from
this group.

The theoretical standpoint is the best one though, since it is the
only way to write portable code.

You should be asking in a Windows programming group, since the
question is about Windows synchronization.

Ken Durden said:
I'm worried that the main thread will read the value of _bRunning in
the test() function, and then when it comes time to do the assert,
simply re-use the existing value it has already read, rather than
going back to main memory to get the new value which was set by the
other thread. Is this a valid concern?

Yes, I think so. I would have thought that SetEvent would use release
memory semantics, and WaitForSingleObject acquire ones, in which case
it is fine as is. However, Microsoft don't document this...

Ken Durden said:
Does _bRunning need to be tagged as volatile?

I can't find enough information to say...

Ken Durden said:
My understanding from school and reading is that it does, but that
supposition blows so many holes in the way most MT programs I've seen
are written that it's scary.

volatile is not needed as long as critical sections or mutexes are
used. However, you aren't using them in your example.
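
Incidentally, note that "CSingleLock( &crit, TRUE );" creates an unnamed
temporary, so the lock is released again at the end of that very statement.
To hold the critical section for the whole scope you need a named object,
something like:

void add( int n )
{
    CSingleLock lock( &crit, TRUE );    // named: held until 'lock' goes out of scope

    if( m_pList == NULL )
        m_pList = new std::list<int>();

    m_pList->push_back( n );
}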

Ken Durden said:
For example, here's another example, even simpler:
[...]
In this example, does m_pList need to be declared as
std::list<int> * volatile m_pList?

No, since the critical section ensures that each thread/processor
reloads the relevant members from main memory, and writes them out
when releasing the section.

Ken Durden said:
Suppose I have a dozen threads all competing to add items to this
list; without volatile, is there a possibility that one thread will
have a "cached" value of m_pList which is out of date?

Nope, the semantics of critical sections prevent that.

Ken Durden said:
If the threads only called add(), it seems there is no way for the NULL
value of m_pList to be cached in a register; but what if the threads also
called size() prior to calling add()? Would that introduce a possibility
for the pointer value to be stored in a register?

Given the relatively small set of registers on Intel processors, how
big an issue is volatile on that platform? What effect does locality
of reference have? If I reference the same variable 30 lines of C apart,
does that make a difference compared to accessing it 2 lines apart? Do
non-inlined function calls change this behavior?

volatile isn't sufficient on a multiprocessor machine, since it only
forces a reload for that CPU, not a complete cache synchronization
with main memory. Memory barriers do the necessary cache
synchronization, and they are used in the various synchronization
functions.

Ken Durden said:
What are the general guidelines for using volatile?

Use it with the Interlocked* functions when you just want single
atomic values. Also, I suspect it is useful when writing kernel device
driver type code.
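
A minimal sketch of what I mean (the names are made up):

volatile LONG g_running = 0;

void setRunning()   { ::InterlockedExchange( &g_running, 1 ); }   // atomic write
void clearRunning() { ::InterlockedExchange( &g_running, 0 ); }
bool isRunning()    { return ::InterlockedCompareExchange( &g_running, 0, 0 ) != 0; } // atomic read

The Interlocked* functions take a volatile LONG* and, on Windows, also act
as memory barriers, so a single flag like this can be shared between threads
without a separate lock.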

I'm no expert though - the lot over in comp.programming.threads are,
or you might try a Windows group.

Tom
 

Xenos

Peter van Merkerk said:
[...]

Interesting. Thanks. I was misinformed many, many years ago when I
actually did Windows programming. Thanks for correcting me.
 

Ken Durden

Julie said:
Wrong here as well. C++ has no notion of threads.

To get the answers that you need, you will have to post on a newsgroup dealing
specifically with your operating system and/or threads package. In looking at
your code and comments, it is apparent that your target is MFC on Windows --
there are numerous ms-specific newsgroups that could better answer your
questions, as well as the resources in the MSDN library.

The people who prowl on MFC newsgroups are largely idiots. There are
no environment-specific issues here, just MT programming, which most
of the people in this ng have probably used at one point or another.

-ken
 

Julie

Ken said:
The people who prowl on MFC newsgroups are largely idiots. There are
no environment-specific issues here, just MT programming, which most
of the people in this ng have probably used at one point or another.

-ken

Yes, all of what you said may be true. Regardless, threads and thread-safety
are always off-topic in this newsgroup, as they have absolutely nothing to do
with C++.
 

Ken Durden

Xenos said:
I think the main reason you are seeing problems is that AfxBeginThread() is
expecting a function with a particular signature, which you are not giving
it (a non-static member function has an extra hidden parameter for 'this').
Try making ThreadFunc static. You are passing it 'this', but not using it
consistently: you use the passed-in value for some members and the "hidden"
this for others. That's probably why you are experiencing weird behavior.
[...]

The code in the post had a couple of typographical errors, since I coded
it up on the fly. The function is static, or else it wouldn't compile.
 
