Memory allocation principle


Bo Yang

Hi,
I am wondering whether there is any good principle for allocating and
deallocating memory in C++. Take the STL's vector for example. When
there is not enough memory for the elements, the class allocates a
new area of memory and copies the old elements into it. In SGI's
implementation, the new memory size is twice the old one, provided the
number of newly inserted elements is less than the old size. But is
there any reason why we should always double the memory when the space
is not enough? Why does the STL choose to double the size instead of
getting even more memory? Just by instinct?

Thanks!


Regards!
Bo
 

robertwessel2

Hi,
I am wondering whether there is any good principle for allocating and
deallocating memory in C++. Take the STL's vector for example. When
there is not enough memory for the elements, the class allocates a
new area of memory and copies the old elements into it. In SGI's
implementation, the new memory size is twice the old one, provided the
number of newly inserted elements is less than the old size. But is
there any reason why we should always double the memory when the space
is not enough? Why does the STL choose to double the size instead of
getting even more memory? Just by instinct?


Ahhhh... Homework...

Think about it. Let's say that you increased the size of your
container by a fixed amount (say 10 units) each time it overflowed (so
the effective sizes would be 10, 20, 30, 40, 50...). Or alternatively
you doubled its size each time it overflowed (10, 20, 40, 80, 160).
How much work would you do in each case if the container grew to 1000
elements? 10,000? 1,000,000? (Where we can simplify and define
"work" as the copy of an element).
 
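If you'd rather see the numbers than work them out on paper, here's a
rough sketch (C++, invented names; it assumes every reallocation copies
all of the elements currently stored, and it ignores allocator
overhead):

--------8<--------
// Count element copies ("work") for two growth policies.
#include <cstddef>
#include <iostream>

// Total copies made while appending n elements, growing capacity with
// the supplied policy whenever the container is full.
template <typename GrowPolicy>
std::size_t total_copies(std::size_t n, GrowPolicy grow)
{
    std::size_t size = 0, capacity = 0, copies = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (size == capacity) {
            copies += size;          // existing elements moved to the new block
            capacity = grow(capacity);
        }
        ++size;                      // the append itself is not counted
    }
    return copies;
}

int main()
{
    for (std::size_t n : {1000, 10000, 1000000}) {
        std::size_t fixed = total_copies(n, [](std::size_t c) { return c + 10; });
        std::size_t twice = total_copies(n, [](std::size_t c) { return c ? 2 * c : 10; });
        std::cout << n << " elements: +10 each time -> " << fixed
                  << " copies, doubling -> " << twice << " copies\n";
    }
}
--------8<--------

With the fixed increment the total work grows roughly with the square of
the final size; with doubling it stays proportional to the size itself.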

Bo Yang

Thanks for your reply!
Ahhhh...   Homework...
^_^


Think about it.  Let's say that you increased the size of your
container by a fixed amount (say 10 units) each time it overflowed (so
the effective sizes would be 10, 20, 30, 40, 50...).  Or alternatively
you doubled its size each time it overflowed (10, 20, 40, 80, 160).
How much work would you do in each case if the container grew to 1000
elements?  10,000?  1,000,000?  (Where we can simplify and define
"work" as the copy of an element).

So, doubling the size will cause the number of calls to malloc (or new)
to be smaller than increasing by a fixed size. I also thought about the
effect of allocating three times the current size, and I think the
memory waste is greater than when we allocate twice. Am I right? In
other words, is there any other reason why I shouldn't allocate 3, 4,
or more times the memory?

Thanks!
Regards!
Bo
 

robertwessel2

Thanks for your reply!




So, doubling the size will cause the number of calls to malloc (or new)
to be smaller than increasing by a fixed size. I also thought about the
effect of allocating three times the current size, and I think the
memory waste is greater than when we allocate twice. Am I right? In
other words, is there any other reason why I shouldn't allocate 3, 4,
or more times the memory?


More exactly, keeping the increase exponential keeps the work
basically constant in the number of inputs. For example, no matter
how many items are added to a container that expands by a factor of
two, on average, each element only has to get moved twice during the
"grow" processes.

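To spell that out, for a growth factor g > 1 and ignoring allocator
overhead:

--------8<--------
Growing through capacities $c, cg, \dots, cg^{k}$ copies
\[
  c + cg + \cdots + cg^{k-1} \;=\; c\,\frac{g^{k}-1}{g-1} \;<\; \frac{c\,g^{k}}{g-1}
\]
elements in total.  The final capacity $cg^{k}$ is less than $g\,n$ for
$n$ stored elements, so the copies per element are bounded by
\[
  \frac{c\,g^{k}}{(g-1)\,n} \;\le\; \frac{g}{g-1},
\]
i.e. at most $2$ for $g=2$ and at most $3$ for $g=1.5$.
--------8<--------
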
You can pick any factor that works for you. The smaller the factor
the more often you'll have to grow the list. It's a tradeoff. 100%
is a common increment (it's certainly easy to implement), as are
various “nice” fractions (50%, 41%, 33%, 25%, 10%), although what's
"best" is going to depend a great deal on the usage - the bigger the
factor, the more memory "wasted," but the less work done growing the
container. The work required for the expansion is also a factor -
growing a hash table is likely a more complex operation than growing a
list or vector (so you'd want to do it less often).

Often the programmer just picks something reasonable. For example,
let's say the objects in the container are large, but the container is
physically no more than an array of pointers; then having twice as
many pointers allocated as necessary is basically insignificant.
OTOH, if the container is physically storing the large objects, a big
over-allocation will consume considerable excess memory.

On occasion you see more complex schemes. For example, sometimes you
see the time of the last grow operation tracked, and if another one
happens in less than some interval, a bigger expansion factor is
used. Or sometimes there are predictive schemes that estimate an
approaching workload (for example, you're about to process an input
file, and you estimate that on average you'll put an object into your
container for every 100 bytes in the file - if you can easily
determine the file's size, you can force the allocation early). As a
general rule, the more complex schemes tend to be pretty specifically
targeted.
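
A sketch of that last kind of pre-sizing with std::vector::reserve (the
function, the Record type, and the one-record-per-100-bytes figure are
just the made-up example from above):

--------8<--------
#include <cstddef>
#include <fstream>
#include <ios>
#include <string>
#include <vector>

struct Record { /* ... */ };

std::vector<Record> load_records(const std::string& path)
{
    std::vector<Record> records;
    std::ifstream in(path.c_str(), std::ios::binary | std::ios::ate);
    if (in) {
        std::streamoff bytes = in.tellg();   // file size: opened at the end
        if (bytes > 0)
            records.reserve(static_cast<std::size_t>(bytes) / 100 + 1);  // ~1 record / 100 bytes (guess)
        in.seekg(0);
        // ... parse the stream and push_back the records ...
    }
    return records;
}
--------8<--------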

A related, and even more complex, question is when you should *shrink*
a container's allocation.
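
(With std::vector the usual answers are the old "swap trick", or the
shrink_to_fit request that was added in C++11 and which an
implementation may ignore; both shown here only as a sketch.)

--------8<--------
#include <vector>

void release_excess(std::vector<int>& v)
{
    // C++11 and later: non-binding request to drop unused capacity.
    v.shrink_to_fit();
}

void release_excess_oldstyle(std::vector<int>& v)
{
    // Pre-C++11 "swap trick": build a right-sized copy and swap it in.
    std::vector<int>(v.begin(), v.end()).swap(v);
}
--------8<--------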

But bottom line: use the STL, or program a simple scheme (like simple
doubling) yourself; in most cases those choices are close enough to
reasonable, and only rarely are they seriously bad. After that, if
you're seeing excessive CPU or memory utilization, *then* go back and
try to optimize this stuff.
 

Bo Yang

More exactly, keeping the increase exponential keeps the work
basically constant in the number of inputs. For example, no matter
how many items are added to a container that expands by a factor of
two, on average, each element only has to get moved twice during the
"grow" processes.

You can pick any factor that works for you. The smaller the factor
the more often you'll have to grow the list. It's a tradeoff. 100%
is a common increment (it's certainly easy to implement), as are
various "nice" fractions (50%, 41%, 33%, 25%, 10%), although what's
"best" is going to depend a great deal on the usage - the bigger the
factor, the more memory "wasted," but the less work done growing the
container. The work required for the expansion is also a factor -
growing a hash table is likely a more complex operation than growing a
list or vector (so you'd want to do it less often).

Often the programmer just picks something reasonable. For example,
let's say the objects in the container are large, but the container is
physically no more than an array of pointers; then having twice as
many pointers allocated as necessary is basically insignificant.
OTOH, if the container is physically storing the large objects, a big
over-allocation will consume considerable excess memory.

On occasion you see more complex schemes. For example, sometimes you
see the time of the last grow operation tracked, and if another one
happens in less than some interval, a bigger expansion factor is
used. Or sometimes there are predictive schemes that estimate an
approaching workload (for example, you're about to process an input
file, and you estimate that on average you'll put an object into your
container for every 100 bytes in the file - if you can easily
determine the file's size, you can force the allocation early). As a
general rule, the more complex schemes tend to be pretty specifically
targeted.

A related, and even more complex, question is when you should *shrink*
a container's allocation.

But bottom line: use the STL, or program a simple scheme (like simple
doubling) yourself; in most cases those choices are close enough to
reasonable, and only rarely are they seriously bad. After that, if
you're seeing excessive CPU or memory utilization, *then* go back and
try to optimize this stuff.

Really an excellent lesson for me. Thanks a lot!

Regards!
Bo
 

James Kanze

I am wondering whether there is any good principle for allocating and
deallocating memory in C++. Take the STL's vector for example. When
there is not enough memory for the elements, the class allocates a
new area of memory and copies the old elements into it. In SGI's
implementation, the new memory size is twice the old one, provided the
number of newly inserted elements is less than the old size. But is
there any reason why we should always double the memory when the space
is not enough? Why does the STL choose to double the size instead of
getting even more memory? Just by instinct?

More or less. And simplicity. It's since been proven that
doubling has some serious drawbacks, and I think most modern
implementations use 1.5 as a multiplier, rather than 2.
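
An easy way to check what any particular library does is to watch
capacity() change while appending; a throwaway test along these lines
prints each jump:

--------8<--------
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    std::vector<int>::size_type last = v.capacity();
    for (int i = 0; i < 2000; ++i) {
        v.push_back(i);
        if (v.capacity() != last) {          // capacity changed: print the jump
            std::cout << last << " -> " << v.capacity() << '\n';
            last = v.capacity();
        }
    }
}
--------8<--------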
 

James Kanze

More exactly, keeping the increase exponential keeps the work
basically constant in the number of inputs. For example, no
matter how many items are added to a container that expands by
a factor of two, on average, each element only has to get
moved twice during the "grow" processes.
You can pick any factor that works for you. The smaller the
factor the more often you'll have to grow the list. It's a
tradeoff. 100% is a common increment (it's certainly easy to
implement),

Doubling has the disadvantage that the total memory you've freed
is always smaller than the size of the new memory, so you'll
never be able to use it (even supposing no fragmentation, but if
you're just growing a vector in a loop, you'll soon reach a
point where there is no fragmentation). The critical cutoff
point is (1 + sqrt(5))/2, roughly 1.6; 1.5 is easy to calculate,
and is pretty close to 1.6, while remaining below it, so I expect
that it's the most common multiplier in modern implementations.
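
Spelled out, ignoring per-block overhead and assuming the blocks are
laid out one after another in a single arena, the arithmetic behind that
cutoff is:

--------8<--------
With factor $r$, when the live block of size $c\,r^{k-1}$ has to grow,
the blocks of size $c, cr, \dots, c\,r^{k-2}$ have already been freed,
the live block cannot be released until the copy is done, and a new
block of size $c\,r^{k}$ is requested.  Reusing the freed space in front
requires
\[
  c\,(1 + r + \cdots + r^{k-2}) \;=\; c\,\frac{r^{k-1}-1}{r-1} \;\ge\; c\,r^{k},
\]
which for large $k$ reduces to
\[
  1 \;\ge\; r^{2} - r
  \quad\Longleftrightarrow\quad
  r \;\le\; \frac{1+\sqrt{5}}{2} \approx 1.618.
\]
--------8<--------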
 

SG

Doubling has the disadvantage that the total memory you've freed
is always smaller than the size of the new memory, so you'll
never be able to use it (even supposing no fragmentation, but if
you're just growing a vector in a loop, you'll soon reach a
point where there is no fragmentation).  The critical cutoff
point is (1 + sqrt(5))/2, roughly 1.6; 1.5 is easy to calculate,
and is pretty close to 1.6, while remaining below it, so I expect
that it's the most common multiplier in modern implementations.

Out of curiosity I actually wrote a little simulation. The question
was: how much memory do you need, at a minimum, to let a vector grow to
a certain size, assuming a first-fit allocator with a per-block
allocation overhead of 1? You get different staircase graphs for
different growth factors, obviously.

http://img96.imageshack.us/img96/7449/growth.png

This comparison is not very convincing. Sometimes the growth factor 2
strategy requires less memory and sometimes the growth factor 1.5
strategy requires less memory. On *average* the use of 1.5 as the
factor is slightly better (the memory requirement is about 12% lower).
But does that justify raising the average number of copy constructions
per element from 2 to 3 (not counting push_back) for full vectors? I'm
not sure. Take your pick!

Cheers,
SG
 

red floyd

Doubling has the disadvantage that the total memory you've freed
is always smaller than the size of the new memory, so you'll
never be able to use it (even supposing no fragmentation, but if
you're just growing a vector in a loop, you'll soon reach a
point where there is no fragmentation).  The critical cutoff
point is (1 + sqrt(5))/2, roughly 1.6; 1.5 is easy to calculate,
and is pretty close to 1.6, while remaining below it, so I expect
that it's the most common multiplier in modern implementations.

I could see a Fibonacci-style vector, which keeps a private member
recording the last size allocated, so that the growth ratio
asymptotically approaches the golden mean.

e.g.: reallocation_size = last_capacity + current_capacity
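
As a sketch (the names here are invented), such a policy only needs to
remember one extra number; the ratios run 2, 1.5, 1.67, 1.6, ... and
tend to the golden mean:

--------8<--------
#include <cstddef>

// Fibonacci-style growth policy: the new capacity is the sum of the
// last two capacities, so successive ratios tend to the golden mean.
class fib_growth
{
    std::size_t last_    = 1;   // capacity before the current one
    std::size_t current_ = 1;   // current capacity
public:
    // Capacity to request for the next reallocation
    // (reallocation_size = last_capacity + current_capacity).
    std::size_t next()
    {
        std::size_t grown = last_ + current_;
        last_    = current_;
        current_ = grown;
        return grown;
    }
};
--------8<--------

An explicit reserve() would also have to adjust the remembered last
capacity so the ratio stays sensible afterwards.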
 

James Kanze

Out of curiosity I actually wrote a little simulation. The
question was: how much memory do you need, at a minimum, to let
a vector grow to a certain size, assuming a first-fit allocator
with a per-block allocation overhead of 1? You get different
staircase graphs for different growth factors, obviously.

This comparison is not very convincing. Sometimes the growth
factor 2 strategy requires less memory and sometimes the
growth factor 1.5 requires less memory. On *average* the use
of 1.5 as the factor is slightly better (the memory requirement
is about 12% lower). But does that justify raising the average
number of copy constructions per element from 2 to 3 (not
counting push_back) for full vectors? I'm not sure. Take your pick!

I'd have to see the code which generated the curves, but I'll
admit that I've done no actual measurements; I'm just repeating
a theoretical point initially raised by Andy Koenig. One thing
is certain, though: at some point, supposing the vector is
working in an array of its own, with no other fragmenting, etc.,
using 1.5 will allow using memory that was freed during previous
growths; using 2 never will. I'm not sure how quickly this will
start to occur, however, given that you need to have two blocks
allocated when you're growing the vector.
 

SG

I'd have to see the code which generated the curves,

If you can read Matlab code, here it is:

--------8<--------

 function [s,m] = growth(initc,faktor,minsize,ovhd)
    if (nargin<4)
       ovhd = 1;   % default overhead = 1 (i.e. one pointer)
    end;
    free = 0;      % size of "gap" before the vector's data
    allc = initc;  % vector's current capacity
    endd = free + ovhd + allc; % required memory
    s(1) = allc;   % stores the vector sizes      (x-axis)
    m(1) = endd;   % stores the required mem size (y-axis)
    ctr = 1;
    while (allc<minsize)
       nxta = max(allc+1,floor(allc*faktor));
       if (ovhd + nxta <= free)
          free = 0;  % new block fits into the "gap"
       else
          % gap gets bigger ...
          free = free + ovhd + allc;
       end;
       ctr = ctr + 1;
       s(ctr) = allc+1;
       allc = nxta;                      % new capacity
       endd = max(endd, free+ovhd+allc); % updated mem usage
       m(ctr) = endd;
       ctr = ctr + 1;
       s(ctr) = allc;
       m(ctr) = endd;
    end;
 return;

--------8<--------
> [s,m] = growth(1, 1.5, 64000);
> loglog(s,m);
--------8<--------

but I'll
admit that I've done no actual measurements; I'm just repeating
a theoretical point initially raised by Andy Koenig.  One thing
is certain, though: at some point, supposing the vector is
working in an array of its own, with no other fragmenting, etc.,
using 1.5 will allow using memory that was freed during previous
growths; using 2 never will.

True, you can see this effect in the figure.
I'm not sure how quickly this will
start to occur, however, given that you need to have two blocks
allocated when you're growing the vector.

which I accounted for.

Cheers,
SG
 

SG

I could see a fibonacci style vector, which has a private member
indicating the last size allocated, so that it asymptotically
approaches the golden mean as the ratio.

e.g.: reallocation_size = last_capacity + current_capacity

Nice idea. I just thought for a second that it might be dangerously
close to the golden ratio if the per-block allocation overhead is
greater than 0. But I think it still works (as in the "gap" is
sometimes closed).

A call to vector<>::reserve should probably include

last_capacity = max(last_capacity, new_capacity/2);

or something like that.


Cheers,
SG
 

SG

Nice idea. I just thought for a second that it might be dangerously
close to the golden ratio if the per-block allocation overhead is
greater than 0. But I think it still works (as in the "gap" is
sometimes closed).

Oh, false alarm. Those ratios approach the golden ratio from above.
So, "filling the gap" is never going to happen -- unless you count per-
block memory allocation overhead. But I think that can only happen
once.

Cheers,
SG
 

James Kanze

On 22 Oct., 19:25, James Kanze wrote:
If you can read Matlab code, here it is:

Not too well:).
--------8<--------
 function [s,m] = growth(initc,faktor,minsize,ovhd)
    if (nargin<4)
       ovhd = 1;   % default overhead = 1 (i.e. one pointer)
    end;
    free = 0;      % size of "gap" before the vector's data
    allc = initc;  % vector's current capacity
    endd = free + ovhd + allc; % required memory
    s(1) = allc;   % stores the vector sizes      (x-axis)
    m(1) = endd;   % stores the required mem size (y-axis)
    ctr = 1;
    while (allc<minsize)
       nxta = max(allc+1,floor(allc*faktor));
       if (ovhd + nxta <= free)
          free = 0;  % new block fits into the "gap"

Shouldn't this be "free = free - (ovhd + nxta)"?
       else
          % gap gets bigger ...
          free = free + ovhd + allc;
       end;
       ctr = ctr + 1;
       s(ctr) = allc+1;
       allc = nxta;                      % new capacity
       endd = max(endd, free+ovhd+allc); % updated mem usage
       m(ctr) = endd;
       ctr = ctr + 1;
       s(ctr) = allc;
       m(ctr) = endd;
    end;
 return;
--------8<--------
[s,m] = growth(1, 1.5, 64000);
loglog(s,m);
--------8<--------
but I'll admit that I've done no actual measurements; I'm
just repeating a theoretical point initially raised by Andy
Koenig. One thing is certain, though: at some point,
supposing the vector is working in an array of its own, with
no other fragmenting, etc., using 1.5 will allow using
memory that was freed during previous growths; using 2 never
will.
True, you can see this effect in the figure.

Yes. After posting, I knocked up a quick test myself---almost
identical to yours, except in C++ (and I ignored the overhead).
With more or less similar results.
which I accounted for.

Yes, and apparently, that has a big effect. I wonder if there
is some smaller factor which really does make a difference.

Andy Koenig's original calculations ignored the overhead, and
the fact that you need to hold on to two buffers when growing
the array, which gives a value of (1+sqrt(5))/2 as the point
where you start being able to reuse previously freed memory.
Perhaps a more exact calculation, taking these two factors into
account, would result in a lower figure which would make a real
difference. On the other hand, my quick tests showed that I was
reusing freed memory at times, but it still didn't make a
difference. So I think it's still very much an open question.
(In the meantime, as a result of Andy's comments, many
implementations have switched over to 1.5.)
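
For anyone who wants to play with the numbers, here is a rough C++
transliteration of SG's simulation (not James's actual test; the
per-block overhead is a parameter):

--------8<--------
#include <cstddef>
#include <iostream>

// Grow a vector by `factor` inside a single arena: each new block goes
// either into the gap left by previously freed blocks at the front
// (first fit) or after the live block.  Returns the peak memory needed.
std::size_t peak_memory(double factor, std::size_t target, std::size_t ovhd = 1)
{
    std::size_t gap  = 0;   // free space in front of the live block
    std::size_t cap  = 1;   // current capacity
    std::size_t peak = gap + ovhd + cap;
    while (cap < target) {
        std::size_t next = cap + 1;
        if (static_cast<std::size_t>(cap * factor) > next)
            next = static_cast<std::size_t>(cap * factor);
        if (ovhd + next <= gap)
            gap = 0;               // new block fits into the gap
        else
            gap += ovhd + cap;     // gap grows by the block freed afterwards
        cap = next;
        std::size_t end = gap + ovhd + cap;
        if (end > peak)
            peak = end;
    }
    return peak;
}

int main()
{
    std::cout << "factor 1.5: peak " << peak_memory(1.5, 64000) << " units\n"
              << "factor 2.0: peak " << peak_memory(2.0, 64000) << " units\n";
}
--------8<--------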
 

SG

 function [s,m] = growth(initc,faktor,minsize,ovhd)
    if (nargin<4)
       ovhd = 1;   % default overhead = 1 (i.e. one pointer)
    end;
    free = 0;      % size of "gap" before the vector's data
    allc = initc;  % vector's current capacity
    endd = free + ovhd + allc; % required memory
    s(1) = allc;   % stores the vector sizes      (x-axis)
    m(1) = endd;   % stores the required mem size (y-axis)
    ctr = 1;
    while (allc<minsize)
       nxta = max(allc+1,floor(allc*faktor));
       if (ovhd + nxta <= free)
          free = 0;  % new block fits into the "gap"

Shouldn't this be "free = free - (ovhd + nxta)"?

No. 'free' only counts the size of the free memory *preceding* the
vector's current data block. Think of it as the offset to the block
allocated for the vector's data (including overhead). If ovhd+nxta <=
free evaluates to true, the new block will fit into this "gap"
completely, which is why free is set to zero again.

In the else branch, 'free' grows by the memory that was used by the old
vector's block (ovhd+allc), because the new block is placed right after it.
       end;
       ctr = ctr + 1;
       s(ctr) = allc+1;
       allc = nxta;                      % new capacity
       endd = max(endd, free+ovhd+allc); % updated mem usage
       m(ctr) = endd;
       ctr = ctr + 1;
       s(ctr) = allc;
       m(ctr) = endd;
    end;
 return;
--------8<--------
 [s,m] = growth(1, 1.5, 64000);
 loglog(s,m);
--------8<--------

Cheers,
SG
 

robertwessel2

If you can read Matlab code, here it is:

Not too well:).
--------8<--------
 function [s,m] = growth(initc,faktor,minsize,ovhd)
    if (nargin<4)
       ovhd = 1;   % default overhead = 1 (i.e. one pointer)
    end;
    free = 0;      % size of "gap" before the vector's data
    allc = initc;  % vector's current capacity
    endd = free + ovhd + allc; % required memory
    s(1) = allc;   % stores the vector sizes      (x-axis)
    m(1) = endd;   % stores the required mem size (y-axis)
    ctr = 1;
    while (allc<minsize)
       nxta = max(allc+1,floor(allc*faktor));
       if (ovhd + nxta <= free)
          free = 0;  % new block fits into the "gap"

Shouldn't this be "free = free - (ovhd + nxta)"?




       else
          % gap gets bigger ...
          free = free + ovhd + allc;
       end;
       ctr = ctr + 1;
       s(ctr) = allc+1;
       allc = nxta;                      % new capacity
       endd = max(endd, free+ovhd+allc); % updated mem usage
       m(ctr) = endd;
       ctr = ctr + 1;
       s(ctr) = allc;
       m(ctr) = endd;
    end;
 return;
--------8<--------
 > [s,m] = growth(1, 1.5, 64000);
 > loglog(s,m);
--------8<--------
but I'll admit that I've done no actual measurements; I'm
just repeating a theoretical point initially raised by Andy
Koenig.  One thing is certain, though: at some point,
supposing the vector is working in an array of its own, with
no other fragmenting, etc., using 1.5 will allow using
memory that was freed during previous growths; using 2 never
will.
True, you can see this effect in the figure.

Yes.  After posting, I knocked up a quick test myself---almost
identical to yours, except in C++ (and I ignored the overhead).
With more or less similar results.
which I accounted for.

Yes, and apparently, that has a big effect.  I wonder if there
is some smaller factor which really does make a difference.

Andy Koenig's original calculations ignored the overhead, and
the fact that you need to hold on to two buffers when growing
the array, which gives a value of (1+sqrt(5))/2 as the point
where you start being able to reuse previously freed memory.
Perhaps a more exact calculation, taking these two factors into
account, would result in a lower figure which would make a real
difference.  On the other hand, my quick tests showed that I was
reusing freed memory at times, but it still didn't make a
difference.  So I think it's still very much an open question.
(In the meantime, as a result of Andy's comments, many
implementations have switched over to 1.5.)


In principle the reuse of the prior areas (assuming no fragmentation)
could happen without keeping both copies in memory, by doing a memmove()
type of operation, plus whatever fixups were necessary, to move the
container back to the beginning of the free area once enough free
space had accumulated at the front.

But if you can do that, an ordinary realloc() type of operation to
lengthen the area would clearly be possible too (and almost certainly
easier).

And for that matter, who's to say that the internals of a container,
particularly if the main data structure is a list of pointers (or
other PODs), couldn't reasonably use malloc() and realloc() for
storage management?
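
A sketch of that last idea, for a buffer whose elements are trivially
copyable (pointers or other PODs); it is only legitimate because of
that restriction, and it is not how std::vector itself is specified to
behave:

--------8<--------
#include <cstddef>
#include <cstdlib>
#include <new>
#include <type_traits>

// Minimal realloc()-based growable buffer for trivially copyable T.
// realloc() may extend the block in place, avoiding the copy entirely.
template <typename T>
struct pod_buffer
{
    static_assert(std::is_trivially_copyable<T>::value,
                  "realloc()-based growth is only valid for trivially copyable T");

    T*          data     = nullptr;
    std::size_t size     = 0;
    std::size_t capacity = 0;

    void push_back(const T& value)
    {
        if (size == capacity) {
            std::size_t new_cap = capacity ? capacity + capacity / 2 : 8;  // ~1.5x
            void* p = std::realloc(data, new_cap * sizeof(T));
            if (!p)
                throw std::bad_alloc();
            data = static_cast<T*>(p);
            capacity = new_cap;
        }
        data[size++] = value;
    }

    ~pod_buffer() { std::free(data); }
};
--------8<--------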
 
