Proper Usage of Shared Memory


myren, lord

When I first discovered shared memory (between multiple processes) I
immediately started thinking about how to build my own VM subsystem +
locking mechanisms for a large single block of memory. That seems to be
one option; the other appears to be having each "object" you want to
share live in a shared memory space of its own, allocating objects into
individually defined shared memory segments. But then you have many,
many objects being shared. Having a VM subsystem would allow you to
allocate one large contiguous chunk of memory and then rely on your own
program for allocation, deallocation, spatial locality/coherence and all
other matters.
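
(For concreteness, the single-big-block approach I have in mind would
look roughly like the sketch below on a POSIX system; the segment name
and size are just placeholders I made up.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_NAME "/myapp_pool"                /* placeholder name        */
#define POOL_SIZE (16u * 1024u * 1024u)        /* 16 MiB, arbitrary       */

/* Map one large named segment; every cooperating process calls this and
 * then carves its own "objects" out of the returned block. */
static void *map_pool(void)
{
    int fd = shm_open(POOL_NAME, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); exit(EXIT_FAILURE); }

    if (ftruncate(fd, POOL_SIZE) == -1) {      /* size it once            */
        perror("ftruncate"); exit(EXIT_FAILURE);
    }

    void *base = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    close(fd);                                 /* mapping remains valid   */
    return base;
}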

Is this at all a wise idea? Is there any advantage over having a large
number of small shared memory objects? What overhead does having shared
memory objects incur?

Sometimes I think the answer comes down merely to locking: with a
complex VM subsystem mapped onto a single flat space you could tune far
finer-grained locking for your system. How much more is at stake than
simply locking? Obviously difficulty of implementation is a key factor,
but what about the technical advantages and disadvantages? Is there a
penalty for having thousands of shared memory objects shared between a
collection of programs?

Myren
 

Thomas Matthews

> When I first discovered shared memory (between multiple processes) I
> immediately started thinking about how to build my own VM subsystem +
> locking mechanisms for a large single block of memory. That seems to be
> one option; the other appears to be having each "object" you want to
> share live in a shared memory space of its own, allocating objects into
> individually defined shared memory segments. But then you have many,
> many objects being shared. Having a VM subsystem would allow you to
> allocate one large contiguous chunk of memory and then rely on your own
> program for allocation, deallocation, spatial locality/coherence and all
> other matters.

First off, this has nothing to do with the C++ language; a different
newsgroup would probably be a better fit for this question.

> Is this at all a wise idea?

Maybe, maybe not. You'll have to check your operating system to see
how it handles the memory request. Some OSes already have virtual
memory, so when you allocate a contiguous chunk, it may not be in
physical memory; or another task may be using that memory.

> Is there any advantage over having a large
> number of small shared memory objects?

Research the topic of "Memory Fragmentation".
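
To make that concrete: with one big block you end up writing something
like the allocator sketched below (purely illustrative; no freeing, no
locking), and fragmentation is exactly the problem such an allocator
has to deal with as objects come and go.

#include <stddef.h>

/* Header stored at the start of the shared block itself. */
struct pool_header {
    size_t used;       /* bytes handed out so far         */
    size_t capacity;   /* usable bytes after this header  */
};

/* Trivial bump allocator: hands out 8-byte-aligned chunks, never frees. */
static void *pool_alloc(struct pool_header *hdr, size_t n)
{
    size_t need = (n + 7u) & ~(size_t)7u;      /* round up to 8 bytes     */
    if (hdr->capacity - hdr->used < need)
        return NULL;                           /* shared block exhausted  */
    void *p = (char *)(hdr + 1) + hdr->used;
    hdr->used += need;
    return p;
}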

> What overhead does having shared memory objects incur?

The minimal overhead is synchronization, using semaphores, signals or
mutexes. One must be sure that two tasks are not writing to the same
memory at the same time.
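
On a POSIX system, for example, the mutex itself can live inside the
shared segment and be marked process-shared. A rough sketch (the struct
is just an illustration):

#include <pthread.h>

struct shared_data {
    pthread_mutex_t lock;    /* lives in the shared mapping */
    int             value;   /* example payload             */
};

/* Run by exactly one process, right after the segment is created. */
static void init_shared_lock(struct shared_data *d)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&d->lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

/* Any attached process: take the lock before touching the data. */
static void bump(struct shared_data *d)
{
    pthread_mutex_lock(&d->lock);
    d->value++;
    pthread_mutex_unlock(&d->lock);
}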

> Sometimes I think the answer comes down merely to locking: with a
> complex VM subsystem mapped onto a single flat space you could tune far
> finer-grained locking for your system. How much more is at stake than
> simply locking? Obviously difficulty of implementation is a key factor,
> but what about the technical advantages and disadvantages? Is there a
> penalty for having thousands of shared memory objects shared between a
> collection of programs?
>
> Myren


--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book
 
