std::deque Thread Safety Situation

NvrBst

I've read a bit online saying that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe? Basically my situation is as follows:

--Thread 1--
1. Reads TCPIP Buffer
2. Adds Buffer to Queue (q.size() to check queue isn't full, and
q.push_back(...))
3. Signals Reading Thread Event & Goes Back to Wait for More Messages
on TCPIP

--Thread 2--
1. Waits for Signal
2. while(q.size() > 0) myStruct = q.front();
2a. Processes myStruct (doesn't modify it but does copy some
information for another queue).
2b. delete[] myStruct.Buffer
2c. q.pop_front();
3. Goes back to waiting for signal

I've run the app for a few hours (basically saturating the TCPIP
traffic) and there were no apparent problems. The "processes
myStruct" takes a while and I can't have the push_back(...) thread
locked while processing is working.

I can add Critical Section locks around the ".size()", ".front()" and/
or ".pop_front()/.push_back()", but my first inclination is that this
is already thread-safe?

My worry is this: say a .push_back() starts on a deque with 1 node
in it. It sees the 1st node and modifies something in it to point to
the new node about to be added, and then the ".pop_front()" occurs
while that's happening. I have no idea how the queue is implemented
internally, so I'm unsure if this is a valid worry. Performance is very
important and I would rather have absolutely no blocking unless it's
needed, which is why I ask here :)

If Critical Sections are needed, would it just be for the
".pop_front()/.push_back()"? Or all member functions I'm using
(.size()/.front())?

Thanks. Any information would be greatly appreciated. These are the
only two threads/functions accessing the queue. I currently
implemented it with locking on all just to be safe, but would like to
remove the locking if it is not needed, or fine tune it.
 
peter koch

I've read a bit online seeing that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe?  
No

Basically my situation is as follows:

--Thread 1--
1. Reads TCPIP Buffer
2. Adds Buffer to Queue (q.size() to check queue isn't full, and
q.push_back(...))
3. Signals Reading Thread Event & Goes Back to Wait for More Messages
on TCPIP

--Thread 2--
1. Waits for Signal
2. while(q.size() > 0) myStruct = q.front();
2a. processes myStruct (doesn't modify it but does copy some
information for another queue).
2b. delete[] myStruct.Buffer
2c. q.pop_front();
3. Goes back to waiting for signal

I've run the app for a few hours (basically saturating the TCPIP
traffic) and there were no apparent problems.  The "processes
myStruct" takes a while and I can't have the push_back(...) thread
locked while processing is working.

If it did not fail, you were simply lucky. You will surely run out of
luck some day.
I can add Critical Section locks around the ".size()", ".front()" and/
or ".pop_front()/.push_back()", but my first inclination is that this
is thread safe?

It is thread-safe only when you protect everything with a mutex (which
could very well be a windows CRITICAL_SECTION).

[snip]
If Critical Sections are needed, would it just be for the
".pop_front()/.push_back()"? Or all member functions I'm using
(.size()/.front())?

On all of them.
Thanks.  Any information would be greatly appreciated.  These are the
only two threads/functions accessing the queue.  I currently
implemented it with locking on all just to be safe, but would like to
remove the locking if it is not needed, or fine tune it.

Did you measure the performance without a lock? Presumably, it would
not really make a difference.

/Peter
 
NvrBst

Ahh, thanks for the clarification. I was sorta expecting it not to be
thread-safe -_-.
Did you measure the performance without a lock? Presumably, it would
not really make a difference.

The clean locking way, i.e.

lock();
while (q.size() > 0) {
    Process(q.front().Buffer);
    delete[] q.front().Buffer; // etc.
    q.pop_front();
}
unlock();

was way too slow (because of the "Process(..)"). Moving the lock
inside the while loop, adding some tmp vars, and also managing the
"q.size()" in the head of the while loop made the code a little more
ugly, but I believe the performance isn't too much worse; it just
seemed to me that if I wanted to increase the performance, tackling the
locking/unlocking inside the while loop might be a good thing
to look into (the while loop can run hundreds, maybe thousands of
times depending on how fast the other thread is pumping the queue up).

Thanks
 
peter koch

Ahh, thanks for the clarification. I was sorta expecting it not to be
thread-safe -_-.
Did you measure the performance without a lock? Presumably, it would
not really make a difference.

The clean locking way, i.e.

lock();
while (q.size() > 0) {
    Process(q.front().Buffer);
    delete[] q.front().Buffer; // etc.
    q.pop_front();
}
unlock();

This is not at all clean. You lock while processing many elements,
when you should only lock for the short duration where you pop/push
an element.

Search for "condition variables" for an idea on how to do this kind of
stuff. A producer-consumer structure is really quite basic and most
useful - but it must be implemented correctly.
was way too slow (because of the "Process(..)"). Moving the lock
inside the while loop, adding some tmp vars, and also managing the
"q.size()" in the head of the while loop made the code a little more
ugly, but I believe the performance isn't too much worse; it just
seemed to me that if I wanted to increase the performance, tackling the
locking/unlocking inside the while loop might be a good thing
to look into (the while loop can run hundreds, maybe thousands of
times depending on how fast the other thread is pumping the queue up).

A low-level mutex is a very fast beast unless you have heavy
contention. If your data is expensive to copy, that might be a
bottleneck, but wait to deal with that until you have gotten your
basic structure to work.

/Peter
 
NvrBst

A low-level mutex is a very fast beast unless you have heavy
contention. If your data is expensive to copy, that might be a
bottleneck, but wait to deal with that until you have gotten your
basic structure to work.

/Peter

The information is too much to copy. When I said clean, I meant more-
so clean to look at (understand). What I have now looks like this:

Enter();
while (q.size() > 0) {
    Leave();
    Process();
    Enter();
}
Leave();

It works fine, but as you might guess it's a little harder to
understand. I left out some temp variables, but basically it's not as
clean to look at as:

Enter();
while (q.size() > 0) Process();
Leave();


Is the top way bad? Is there a consumer-producer model I should be
using instead? (Basically the TCPIP reading can't be blocked, and
processing the reading queue takes a while; it sometimes hits
0, but the "while(.size()) {..}" is usually running 75% of the time.)

Thanks
 
peter koch

The information is too much to copy. When I said clean, I meant more-
so clean to look at (understand). What I have now looks like this:

Enter();
while (q.size() > 0) {
    Leave();
    Process();
    Enter();
}
Leave();

This is not so easy to understand - even with correct indentation. And
it is not exception-safe.
It works fine, but as you might guess it's a little harder to
understand. I left out some temp variables, but basically it's not as
clean to look at as:

Enter();
while (q.size() > 0) Process();
Leave();

Is the top way bad? Is there a consumer-producer model I should be
using instead? (Basically the TCPIP reading can't be blocked, and
processing the reading queue takes a while; it sometimes hits
0, but the "while(.size()) {..}" is usually running 75% of the time.)
As I said, you should look for condition variables. Perhaps boost has
something for you. I am not sure that they have a producer-consumer
class, but they have the building blocks you need, and quite likely
also an example that you can use as a skeleton.
I am confident that there are also other solutions out there.

/Peter
 
NvrBst

This is not so easy to understand - even with correct indentation. And
it is not exception-safe.

Ack, if it is not exception-safe I definitely can't use it... Would
you be able to give a situation where it would fail, if you get some
free time? It can be just a very basic order of flow; I should be able
to grasp it. Everything I've written is pseudo-code, not my actual
code. I can't get the indentations right here, and it would be too hard
to follow if I just paste, but I can outline the basics:

--Pumping Thread--
WaitForReadSignal();
ReadTCPIP(Buffer, size, ...);
// Copies Buffer, and makes myStruct
m_Q[TCPIP].qLock.Enter();
q.push_back(myStruct);
m_Q[TCPIP].qLock.Leave();
SetEvent(receiveEvent);

--Processing Thread--
WaitForObjects(receiveEvent);
m_Q[TCPIP].qLock.Enter();
while (q.size() > 0) {
    myStruct = q.front();
    q.pop_front();
    m_Q[TCPIP].qLock.Leave();
    ProcessData(myStruct);
    m_Q[TCPIP].qLock.Enter();
}
m_Q[TCPIP].qLock.Leave();


Both threads are in loops, and there is stuff I left out for clarity,
but this, more or less, is the structure. I do have try{} finally{}
blocks set up (to ensure that for every "Enter()" there is a
"Leave()"); I left them out for clarity. When you said exception-
safe, did you mean this? Or did you mean that the "q" above isn't
thread-safe? Sorry for the confusion, I did leave things out thinking
I'd make it easier for people to read (my confusion was more so about
the internal implementation, so I left out what I thought wasn't
needed).

If you see something, though, I'd be happy to hear about it. I'm very
good at understanding concepts by example (quick, rough, pseudo-code
things, if you don't mind writing 5 or 10 lines). But, unless you see
a problem, I think the implementation I have is working.


As I said, you should look for condition variables. Perhaps boost has
something for you. I am not sure that they have a producer-consumer
class, but they have the building blocks you need, and quite likely
also an example that you can use as a skeleton.
I am confident that there are also other solutions out there.

I'm unsure what boost is. I do know the basic producer-consumer
models, which I thought I was following (just using critical sections
to protect the "q", in a more basic way; the models usually have the
locking inside the class, I just wanted to use the std:: class, since
I only use it in two places).

Thanks for your help :)
 
zaimoni

Everything I've written is pseudo-code, not my actual
code. I can't get the indentations right here, and it would be too hard
to follow if I just paste, but I can outline the basics:

--Pumping Thread--
WaitForReadSignal();
ReadTCPIP(Buffer, size, ...);
// Copies Buffer, and makes myStruct
m_Q[TCPIP].qLock.Enter();
q.push_back(myStruct);
m_Q[TCPIP].qLock.Leave();
SetEvent(receiveEvent);

--Processing Thread--
WaitForObjects(receiveEvent);
m_Q[TCPIP].qLock.Enter();
while (q.size() > 0) {
    myStruct = q.front();
    q.pop_front();
    m_Q[TCPIP].qLock.Leave();
    ProcessData(myStruct);
    m_Q[TCPIP].qLock.Enter();
}
m_Q[TCPIP].qLock.Leave();

Both threads are in loops, and there is stuff I left out for clarity,
but this, more or less, is the structure. I do have try{} finally{}
blocks set up (to ensure that for every "Enter()" there is a
"Leave()"). I left them out for clarity. ....

Enough try-catch blocks will work, although I'd find proofreading them
a bit much for my taste. A thin wrapper class would get the
proofreading time per instance down to practically instant.

Please pardon my use of agonizing detail in the following. A really
thin wrapper class WrapMutex should:
* take the actual mutex as a non-const reference parameter in its
constructor
* store a reference to the actual mutex as private data
** I think this implies it is not default-constructible, as a parameter
is needed to initialize the reference. Usually not a good trait, but
acceptable for this class.
* call Enter on the mutex in its inline constructor
* call Leave on the mutex in its inline non-virtual destructor
* not be copyable
** We should lose the default assignment operator because of the
reference data member, but the copy constructor needs to be rendered
unusable. Besides the textbook method, deriving from
boost::noncopyable will work.

Then your code

m_Q[TCPIP].qLock.Enter();
while (q.size() > 0) {
    myStruct = q.front();
    q.pop_front();
    m_Q[TCPIP].qLock.Leave();
    ProcessData(myStruct);
    m_Q[TCPIP].qLock.Enter();
}
m_Q[TCPIP].qLock.Leave();

can be replaced by

some_struct_type myStruct;
while (q.size() > 0) {
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        myStruct = q.front();
        q.pop_front();
    }
    ProcessData(myStruct);
}
I'm unsure what boost is,

The main website is http://www.boost.org/ ; it's a very extensive C++
library with a noticeable learning curve. I personally am nonfluent
in it. (I currently use mainly the program-correctness, conditional
compilation of templates, and Boost.Preprocessor sub-libraries.)
 
zaimoni

I've read a bit online seeing that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe?

As both push() and pop() are non-const operations, no. (In general,
non-const operations are "writing" to the data structure.)

Assuming the data type has no mutable data members: Even one non-const
operation in parallel with any number of const operations on the same
data structure is technically unsafe. Also, any operation that
invalidates iterators is unsafe when another thread has iterators in
use (for a std::deque, inserting elements at all comes to mind).

There are plausibly some more subtle issues that an expert in the
field would notice.
 
NvrBst

Ahh, thank you. I find examples a lot easier to follow. One question:
some_struct_type myStruct;
while (q.size() > 0) {
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        myStruct = q.front();
        q.pop_front();
    }
    ProcessData(myStruct);
}

This would imply the ".size()" doesn't have to be wrapped in the
critical sections? The reason I put the locks/unlocks outside the
while loop was solely for that reason. I actually already have a
wrapper like you listed, "CriticalSection::Owner
sLock(m_Q[TCPIP].qLock);", which does what you outlined, but I failed
to realize I could put empty curly brackets around the section I want,
to force it out of scope :)

Most of my confusion comes from my C# background (Java, and then C#,
is what I grew up with). In C#'s Queue, the size property is stored
as an Int32 which, when read, is an atomic operation on almost all
processors. Meaning it either reads it before another thread changes
it, or after; either case is fine (thread-safe) if there are only two
threads, and each thread either pops or pushes, so it can't cause an
exception. This is because in C# the size doesn't get incremented
until after the insertion is completed, so exceptions can't occur in
my situation.

I was thinking ".front()" should also be thread-safe for the same (C#-
based) reason, since it's a 32-bit application, and .front() should
be returning a pointer to the first element. Since the thread
calling .front() is the only thread that's removing elements, and since
this thread knows that ".size()" shows one element, then I would have
assumed ".front()" would be thread-safe as well.

Since .size() was a method call instead of a property, I was thinking
there might be more to it in C++... Which I'm not confused on again:
".size()" and ".front()" would both be thread-safe operations for a
thread that removes elements (being the only thread that removes
items)? In C#, Enqueue/Dequeue aren't thread-safe, but again, it
depends on implementations, which is why I asked here; I would be
surprised if they were, but thought maybe.

So in essence, I can change it to the following if I wanted to fine-
tune the thread-safety locking to its bare minimum in my example (and
remove my try statement to boot, yay)?

some_struct_type myStruct;
while (q.size() > 0) {
    myStruct = q.front();
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        q.pop_front();
    }
    ProcessData(myStruct);
}


Thanks, NB
 
NvrBst

"Which I'm not confused on again" -> "Which I'm now confused on
again". Sorry, typos. Among others, this one changed my meaning a bit
more than I wanted.
 
zaimoni

Ahh, thank you. I find examples a lot easier to follow. One question:
some_struct_type myStruct;
while (q.size() > 0) {
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        myStruct = q.front();
        q.pop_front();
    }
    ProcessData(myStruct);
}

This would imply the ".size()" doesn't have to be wrapped in the
critical sections?

Yes, but I got that wrong. In general, it should be in the critical
section.

To be truly safe without locking up the deque for the entire loop, the
loop would have to be reorganized so that the q.size() call was within
the critical section.
The reason I put the locks/unlocks outside the
while loop was solely for that reason.

Ok. That is not exception-safe by itself, although you did mention
using try-catch blocks to counter that. The wrapper class approach
shouldn't need the try-catch blocks to be exception-safe.
...

I was thinking ".front()" should also be thread-safe for the same (C#-
based) reason, since it's a 32-bit application, and .front() should
be returning a pointer to the first element. Since the thread

I'm not familiar with C#, so I don't know when C# pointers analogize
to either C++ raw pointers, or C++ references. It makes a huge
difference here.

front() and back() return C++ references. begin() and end() return
iterators (which usually have the same behavior as pointers).
Iterators are designed so that a pointer is an iterator for a C-style
array.
calling .front() is the only thread that's removing elements, and since
this thread knows that ".size()" shows one element, then I would have
assumed ".front()" would be thread-safe as well.

It's important that this is the only thread removing elements. As
long as std::deque is implemented correctly, insertions by other
threads to the front or back will only invalidate all iterators;
references will be fine. That is: *q.begin() is not safe, q.front()
would mostly be safe. [Insertions to the middle will invalidate
references.] However, the loop does assume that no other threads are
inserting to the front.
Since .size() was a method call instead of a property, I was thinking
there might be more to it in C++... Which I'm not confused on again:

True. There's more to size() than to empty(), as well.
".size()" and ".front()" would both be thread-safe operations for a
thread that removes elements (being the only thread that removes items)?

q.empty() would come pretty close to being thread-safe (and might well
be for a good implementation on a sufficiently friendly CPU). The
other member functions are likely to be thrown off by temporarily
inconsistent internal state.

As long as all insertions are to the end, it is possible that a good
implementation would have a coincidentally thread-safe q.front().
Insertions to the middle will invalidate this.

The reference from q.front() should be thread-safe regardless, given
that this thread is the only thread removing items. But q.pop_front()
will invalidate the reference, so the assignment is needed.
So in essence, I can change it to the following if I wanted to fine-
tune the thread-safety locking to its bare minimum in my example (and
remove my try statement to boot, yay)?

The try-catch statements can be outright gone, yes.
some_struct_type myStruct;
while (q.size() > 0) {
    myStruct = q.front();
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        q.pop_front();
    }
    ProcessData(myStruct);
}

A fortunate implementation would let us get away with:

some_struct_type myStruct;
while (!q.empty()) {
    myStruct = q.front();
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        q.pop_front();
    }
    ProcessData(myStruct);
}

The loop that would be maximally portable would be like:

some_struct_type myStruct;
bool in_loop = false; /* redundant */
do {
    in_loop = false;
    {
        WrapMutex my_lock(m_Q[TCPIP].qLock);
        if (!q.empty()) {
            myStruct = q.front();
            q.pop_front();
            in_loop = true;
        }
    }
    if (in_loop) ProcessData(myStruct);
} while (in_loop);
 
peter koch

Ahh, thank you. I find examples a lot easier to follow. One question:
some_struct_type myStruct;
while (q.size() > 0) {
    {
        WrapMutex lock(m_Q[TCPIP].qLock);
        myStruct = q.front();
        q.pop_front();
    }
    ProcessData(myStruct);
}

This would imply the ".size()" doesn't have to be wrapped in the
critical sections?

Yes - that is an error with the code: it should be wrapped.

[snip]
Most of my confusion comes from my C# background (Java, and then C#,
is what I grew up with). In C#'s Queue, the size property is stored
as an Int32 which, when read, is an atomic operation on almost all
processors. Meaning it either reads it before another thread changes
it, or after; either case is fine (thread-safe) if there are only two
threads, and each thread either pops or pushes, so it can't cause an
exception. This is because in C# the size doesn't get incremented
until after the insertion is completed, so exceptions can't occur in
my situation.

This is not correct, and your C# code will not work unless there is
more action involved when reading the Int32 size. The reason is that
even though the size gets written after the data, it might not be seen
this way by another thread. Memory writes undergo a lot of steps
involving caches of different kinds, and the cache line with the size
might be written to memory before the cache line containing the data.
I was thinking ".front()" should also be thread-safe for the same (C#-
based) reason, since it's a 32-bit application, and .front() should
be returning a pointer to the first element. Since the thread
calling .front() is the only thread that's removing elements, and since
this thread knows that ".size()" shows one element, then I would have
assumed ".front()" would be thread-safe as well.

You have the same problem here.

[snip]

/Peter
 
Jerry Coffin

I've read a bit online saying that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe? Basically my situation is as follows:

Generally speaking, no, it's not safe.

My advice would be to avoid std::deque in such a situation -- in a
multi-threaded situation, it places an undue burden on the client code.
This is a case where it's quite reasonable to incorporate the locking
into the data structure itself to simplify the client code (a lot).

For one example, I've used code like this under Windows for quite a
while:

template <class T, unsigned max = 256>
class queue {
    HANDLE space_avail;       // signaled => at least one slot empty
    HANDLE data_avail;        // signaled => at least one slot full
    CRITICAL_SECTION mutex;   // protects buffer, in_pos, out_pos

    T buffer[max];
    long in_pos, out_pos;
public:
    queue() : in_pos(0), out_pos(0) {
        space_avail = CreateSemaphore(NULL, max, max, NULL);
        data_avail = CreateSemaphore(NULL, 0, max, NULL);
        InitializeCriticalSection(&mutex);
    }

    void push(T data) {
        WaitForSingleObject(space_avail, INFINITE);
        EnterCriticalSection(&mutex);
        buffer[in_pos] = data;
        in_pos = (in_pos + 1) % max;
        LeaveCriticalSection(&mutex);
        ReleaseSemaphore(data_avail, 1, NULL);
    }

    T pop() {
        WaitForSingleObject(data_avail, INFINITE);
        EnterCriticalSection(&mutex);
        T retval = buffer[out_pos];
        out_pos = (out_pos + 1) % max;
        LeaveCriticalSection(&mutex);
        ReleaseSemaphore(space_avail, 1, NULL);
        return retval;
    }

    ~queue() {
        CloseHandle(space_avail);
        CloseHandle(data_avail);
        DeleteCriticalSection(&mutex);
    }
};

Exception safety depends on assignment of T being nothrow, but (IIRC)
not much else is needed. This uses value semantics, so if you're dealing
with something where copying is expensive, you're expected to use some
sort of smart pointer to avoid copying. Of course, a reference counted
smart pointer will usually have some locking of its own on incrementing
and decrementing the reference count, but that's a separate issue.

Contrary to the statements elsethread, I've seen little use for being
able to parameterize how the waits happen and such. While there might be
situations where it could be useful, I haven't encountered a need in
real life.

Likewise, there's optimization that could be done -- for an obvious
example, the current design switches to kernel mode twice for every push
or pop, which isn't exactly fast. In real use, however, I haven't seen
this take enough processor time to bother optimizing it.
 
NvrBst

Thank you all for your information :) I think I have what I need now.
This is not correct, and your C# code will not work unless there is
more action involved when reading the Int32 size. The reason is that
even though the size gets written after the data, it might not be seen
this way by another thread. Memory writes undergo a lot of steps
involving caches of different kinds, and the cacheline with the size
might be written to memory before the cache-line containing the data.

Only if there were, say, two threads popping. In my case I only have 1
thread popping and 1 thread pushing. Even if the popping thread reads
0 elements (because, say, the size cache wasn't updated in time, but
there is really 1 element), that is safe; it'll ignore it until size
says 1 element. If it reads 1+ elements then there is definitely 1
element to remove (since no other thread is removing items). The same
logic works for reading when there is a max size you can't go over.

Thanks again for all the information, and enjoy the weekend ;)
 
Thomas J. Gritzan

Paavo Helde wrote:
[...]
Item local;
{
    boost::mutex::scoped_lock lock(queue_mutex);
    while (queue.empty()) {
        // wait until an item arrives
        // other behaviors can be imagined here as well
        queue_condition.wait(lock);
    }
    queue.front().swap(local);
    queue.pop_front();
}
// ... process local to your heart's content, with no mutex locked.
// ... process local to your heart's content, with no mutex locked.

Doing the wait on the condition variable while holding the lock is a bad
idea, but you'll recognize that error very early when your application
dead locks.
 
Thomas J. Gritzan

I said:
Paavo Helde wrote:
[...]
Item local;
{
    boost::mutex::scoped_lock lock(queue_mutex);
    while (queue.empty()) {
        // wait until an item arrives
        // other behaviors can be imagined here as well
        queue_condition.wait(lock);
    }
    queue.front().swap(local);
    queue.pop_front();
}
// ... process local to your heart's content, with no mutex locked.

Doing the wait on the condition variable while holding the lock is a bad
idea, but you'll recognize that error very early when your application
dead locks.

Sorry, the code is correct. condition::wait will unlock the mutex while
waiting on the condition variable.

I should have checked the boost.thread documentation earlier.
 
peter koch

Thank you all for your information :)  I think I have what I need now.


Only if there were, say, two threads popping. In my case I only have 1
thread popping and 1 thread pushing. Even if the popping thread reads
0 elements (because, say, the size cache wasn't updated in time, but
there is really 1 element), that is safe; it'll ignore it until size
says 1 element. If it reads 1+ elements then there is definitely 1
element to remove (since no other thread is removing items). The same
logic works for reading when there is a max size you can't go over.

Thanks again for all the information, and enjoy the weekend ;)

No - you are not correct. Imagine a situation where the pushing thread
pushes the object to the deque and then increases the counter.
However, the size gets written through the cache whereas the object
stays (partly) in the cache. Now the reader will see that there is an
object in the deque, but in reality it was never written to a
location that allows the other thread to see it (it is on another
core), so it reads rubbish where the object should be.

/Peter
 
James Kanze

Yes, that's one thing I like about boost::thread library that
it makes some kind of errors impossible (like forgetting to
unlock the mutex before wait, or forgetting to relock it after
the wait).

Except that that's not a property of the boost::thread library,
but the way conditions work in general. All Boost provides here
is a very low level (but portable) wrapper for the Posix
interface, plus RAII for the locks held at the application
level (which is, in itself, already a good thing).
 
James Kanze

Generally speaking, no, it's not safe.
My advice would be to avoid std::deque in such a situation --
in a multi-threaded situation, it places an undue burden on
the client code.

Avoid it, or wrap it? I use it regularly for communicating
between threads; my Queue class is based on it.
This is a case where it's quite reasonable to incorporate the
locking into the data structure itself to simplify the client
code (a lot).
For one example, I've used code like this under Windows for
quite a while:
template <class T, unsigned max = 256>
class queue {
    HANDLE space_avail;       // signaled => at least one slot empty
    HANDLE data_avail;        // signaled => at least one slot full
    CRITICAL_SECTION mutex;   // protects buffer, in_pos, out_pos

    T buffer[max];
    long in_pos, out_pos;

And if you replace buffer, in_pos and out_pos with a std::deque,
where's the problem?
Exception safety depends on assignment of T being nothrow, but
(IIRC) not much else is needed. This uses value semantics, so
if you're dealing with something where copying is expensive,
you're expected to use some sort of smart pointer to avoid
copying. Of course, a reference counted smart pointer will
usually have some locking of its own on incrementing and
decrementing the reference count, but that's a separate issue.

My own queues use std::auto_ptr at the interface. This imposes
the requirement that all of the contained objects be dynamically
allocated, and adds some clean-up code in the queue itself
(since you can't put the auto_ptr in the deque), but ensures
that once the message has been posted, the originating thread
won't continue to access it.
 
