> Hi,
> It's said that one should avoid dynamic allocation of memory,
Said by whom? Offhand, this sounds like a generalization so broad
as to be almost insupportable. There may be circumstances under which
avoiding (at least some types of) dynamic allocation is helpful, but
almost certainly others under which it is not.
> as it defragment the memory into small chunks
That would be "fragment" rather than "defragment". Defragmenting is the
process of taking small chunks and putting them together into larger
chunks again. Again, this sounds to me like a broad and insupportable
generalization.
> and allocation of memory is a costly process in terms of efficiency.
More of the same -- the price of dynamic allocation varies widely
depending on quite a few factors, including the usage pattern(s) and the
policies used to manage the memory. Different allocation strategies
give different trade-offs between (for example) allocation speed and
keeping fragmentation low. Some do things like lazy coalescing, which
can improve average speed, but make a few allocations _much_ slower than
others. You can also trade off speed between allocation and freeing, as
well as between performance on different sizes and numbers of blocks
(i.e. some work better for allocating lots of small blocks while others
work better for a few large blocks -- some libraries switch between two
or three different strategies depending on block size).
> So what is the best solution to
> handle the situation in which we need to allocate memory at run time?
There is no one best solution, and so far it doesn't sound like there's
any real problem to fix anyway. My advice would be to use the default
implementation of new and delete to start with. Only when/if you find
that code is too slow, AND a profiler shows that allocating and freeing
memory is a bottleneck in the code should you consider using something
else.
When/if that happens, you'll probably need to look at the patterns of
your memory usage, such as patterns in allocating and freeing memory and
the relative uses of different block sizes to get a good idea of what's
likely to improve efficiency. My guess is that you'll never get to that
point though -- the amount of otherwise well-written code I've seen that
gained a lot by changing allocation strategy was pretty small. That's
not to say it couldn't happen, only to say that it's not really a good
bet that it'll necessarily be a problem.