[...]
I'm using an SCTP library that receives video frames from a sending
server. Each frame has a timestamp that determines the time I have to
write it to disk (by which I mean show it on the screen).
In the receiving loop I want to store all the incoming data in a vector
of structs or a map via push_back. This loop will run forever.
I guess I need another thread to write the first element of the vector
to disk, which should happen when that element's timestamp comes due.
Is it clear enough? I need access to the vector in both threads: in the
first one to store all incoming frames in the vector, in the second one
to write each element to disk and then delete it. Is it possible to use
a global vector and a mutex to protect it?
C++ contains no concept of concurrency. As the question is off-topic
here, post on comp.programming.threads if you have further questions
about threading.
My personal recommendation is to use threads for the reader/writer and
a queue (wrapped around a list or deque) rather than a vector to
store the frames.
Why threads? Ease of design, mostly, but use of a shared vector and
processes may affect stability (in extreme cases). Usually, processes
have separate address spaces and so different processes will not share
global/local/heap variables. "Usually" doesn't mean it's impossible,
it just means your program would have to use shared memory (e.g.
shmget / shmat, CreateFileMapping / OpenFileMapping / MapViewOfFile)
and placement new. You'll need to create an allocator which draws on
shared memory and use it when instantiating the vector. That way,
whatever dynamically allocated storage the vector needs will be
allocated within shared memory.
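As a rough sketch of the allocator idea (all names below are invented
for illustration; for brevity the arena is an anonymous MAP_SHARED
mapping, shared only with forked children, where real cross-process
code would use shmget/shmat or shm_open as mentioned above, and the
allocator is a bump allocator that never frees):

```cpp
// Sketch: a toy shared-memory allocator for std::vector.
#include <sys/mman.h>
#include <cstddef>
#include <new>
#include <vector>

// One fixed-size arena.  allocate() just bumps an offset and
// deallocate() is a no-op, so freed space is never reused -- which is
// exactly the kind of space problem described above.
struct ShmArena {
    char*       base;
    std::size_t size;
    std::size_t used;
};

static ShmArena g_arena = { 0, 0, 0 };

bool arena_init(std::size_t bytes) {
    void* p = mmap(0, bytes, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return false;
    g_arena.base = static_cast<char*>(p);
    g_arena.size = bytes;
    g_arena.used = 0;
    return true;
}

template <class T>
struct ShmAllocator {
    typedef T value_type;
    ShmAllocator() {}
    template <class U> ShmAllocator(const ShmAllocator<U>&) {}

    T* allocate(std::size_t n) {
        std::size_t bytes = n * sizeof(T);  // alignment ignored for brevity
        if (g_arena.used + bytes > g_arena.size) throw std::bad_alloc();
        T* p = reinterpret_cast<T*>(g_arena.base + g_arena.used);
        g_arena.used += bytes;
        return p;
    }
    void deallocate(T*, std::size_t) {}     // bump allocator: no reuse
};

template <class T, class U>
bool operator==(const ShmAllocator<T>&, const ShmAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const ShmAllocator<T>&, const ShmAllocator<U>&) { return false; }
```

Note that when the vector grows from N to 2N elements, the old
N-element buffer is still live during the copy, so the arena briefly
holds roughly 3N elements' worth of storage -- hence the factor of
three below.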
One problem with this approach is the extra work on your part; a
threaded approach (with mutexes &c) will probably be more efficient to
design and create than using processes. Another problem is that
shared memory may have a time/space tradeoff, depending on how it's
implemented. A consequence of this along with the fact that shared
memory often isn't resizable is that you have to worry about running
out of space. This will become particularly important when the vector
needs to grow, which requires a contiguous region of the shared memory
large enough to contain the new buffer while the old one still exists
during the copy. Since growth typically doubles capacity, shared
memory should be roughly at least 3 times what you expect the maximum
size of the vector will be. Even then, there may be times when the
reader reigns supreme and you run out of memory when the vector grows.
This is how the process/shared vector approach affects stability.
From your description, your reader/writer mostly operate at the ends
of the vector. A list or deque is much more efficient for operations
at the ends. Furthermore, for the shared memory approach, a list
doesn't need the arena to be three times as big as its expected
maximum size.
To reiterate: the "best" approach is threaded with a frame queue
(wrapped around a list or deque) protected by a mutex or semaphore.
Just make sure you have a mechanism to prevent either the reader or
the writer from dominating access to the queue.
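Something along these lines (the Frame struct and the frame counts are
invented for illustration; I'm using std::thread, std::mutex, and
std::condition_variable for brevity, though pthreads would do just as
well):

```cpp
// Sketch of the recommended design: a frame queue (std::queue, backed
// by std::deque by default) shared by a receiver thread and a writer
// thread, protected by a single mutex.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame {
    long timestamp;           // when the frame should be written
    std::vector<char> data;   // raw frame bytes
};

std::queue<Frame> frames;     // push at the back, pop from the front
std::mutex frames_mtx;
std::condition_variable frames_cv;
bool done = false;            // set when the receiver finishes

// Receiver: pushes incoming frames at the back of the queue.
void receive(int count) {
    for (int i = 0; i < count; ++i) {
        Frame f;
        f.timestamp = i;
        std::lock_guard<std::mutex> lock(frames_mtx);
        frames.push(f);
        frames_cv.notify_one();
    }
    std::lock_guard<std::mutex> lock(frames_mtx);
    done = true;
    frames_cv.notify_one();
}

// Writer: pops the oldest frame from the front and "writes" it.
// Returns the number of frames written.
long write_frames() {
    long written = 0;
    std::unique_lock<std::mutex> lock(frames_mtx);
    for (;;) {
        // Blocking on the condition variable (instead of spinning)
        // keeps one thread from dominating access to the queue.
        frames_cv.wait(lock, []{ return !frames.empty() || done; });
        while (!frames.empty()) {
            Frame f = frames.front();
            frames.pop();
            ++written;        // real code would wait for f.timestamp here
        }
        if (done) return written;
    }
}
```

All queue operations happen at the ends (push at the back, pop at the
front), which is exactly where a deque or list is cheap and a vector
is not.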
Kanenas