vector::resize(): Why no fixed memory usage?


Felix E. Klee

Hi,

Why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?

CPU: Pentium III Coppermine (Celeron)
OS: Slackware LINUX 9.1 with kernel 2.4.22
Compiler: GCC 3.2.3
Compile command: g++ -O0 -o foo foo.cpp
Tools used to check memory consumption: top and xosview

Felix

foo.cpp:
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    vector<double> vect;
    vect.reserve(50l * 1000000l);

    for (int i = 0; i < 50; ++i) {
        for (long j = 0; j < 1000000; ++j)
            vect.push_back(sin(j * 0.34543));
        cout << "vect.capacity()=" << vect.capacity()
             << " vect.size()=" << vect.size()
             << endl;
    }
    return 0;
}

PS: To contact me off list don't reply but send mail to "felix.klee" at
the domain "inka.de". Otherwise your email to me might get automatically
deleted!
 

Rob Williscroft

Felix E. Klee wrote:
Hi,

Why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?
[snip]
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    vector<double> vect;
    vect.reserve(50l * 1000000l);

    for (int i = 0; i < 50; ++i) {
        for (long j = 0; j < 1000000; ++j)
            vect.push_back(sin(j * 0.34543));
        cout << "vect.capacity()=" << vect.capacity()
             << " vect.size()=" << vect.size()
             << endl;
    }
    return 0;
}

This is an OS issue, not a C++ issue. What is (probably) happening
is that when you call reserve(), it asks the OS for 400MB of
memory; however, the OS does this "virtually". The "real" memory
is only given to your app when it's first accessed. This occurs
during your loop, so you see a steady increase in memory usage by
your programme.

Rob.
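
Rob's point can be checked from the C++ side: reserve() really does allocate one block up front, and push_back() never reallocates afterwards, so the growth seen in top is purely the OS committing pages on first touch. A minimal sketch (the address comparison relies on typical implementation behaviour rather than a wording of the standard):

#include <iostream>
#include <vector>

int main() {
    std::vector<double> v;
    v.reserve(1000000);            // one up-front allocation (~8 MB)
    v.push_back(0.0);
    const double* base = &v[0];    // address of the reserved buffer
    for (long j = 1; j < 1000000; ++j)
        v.push_back(j * 0.34543);
    // If no reallocation happened, the buffer address never changed.
    std::cout << (&v[0] == base ? "no reallocation" : "reallocated") << '\n';
    std::cout << "capacity=" << v.capacity() << " size=" << v.size() << '\n';
    return 0;
}

On a typical implementation this prints "no reallocation" with capacity and size both at one million, confirming that the C++ allocation happened exactly once.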
 

tom_usenet

Felix E. Klee wrote:
Hi,

Why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?

CPU: Pentium III Coppermine (Celeron)
OS: Slackware LINUX 9.1 with kernel 2.4.22
Compiler: 3.2.3
Compile command: g++ -O0 -o foo foo.cpp
Tools used to check memory consumption: top and xosview

Felix

foo.cpp:
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    vector<double> vect;
    vect.reserve(50l * 1000000l);

    for (int i = 0; i < 50; ++i) {
        for (long j = 0; j < 1000000; ++j)
            vect.push_back(sin(j * 0.34543));
        cout << "vect.capacity()=" << vect.capacity()
             << " vect.size()=" << vect.size()
             << endl;
    }
    return 0;
}

I suspect your OS isn't actually allocating the space when you make
the reserve call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed up with physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.
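
To make the conformance point concrete, here is a hedged sketch: on a strictly committing OS an oversized reserve() throws bad_alloc immediately, while on a lazily committing OS even a huge request may appear to succeed. The request below is kept modest so the example runs anywhere:

#include <iostream>
#include <new>
#include <vector>

int main() {
    std::vector<double> v;
    try {
        v.reserve(1000000);  // ~8 MB; should succeed on any modern machine
        std::cout << "reserve succeeded\n";
    } catch (const std::bad_alloc&) {
        // On an overcommitting OS this handler may never run even for
        // enormous requests: the failure surfaces later, on first touch.
        std::cout << "reserve threw bad_alloc\n";
    }
    return 0;
}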

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
 

Dylan Nicholson

tom_usenet said:
I suspect your OS isn't actually allocating the space when you make
the reserve call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed up with physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.
Are you saying that after reserving memory, if you later try to access
that memory, it's possible that the OS might not be able to provide
physical memory, and hence crash your software? Surely
OSes must make some attempt to ensure that reserved blocks of virtual
memory will be available when required, or it would be basically
impossible to write software stable enough to cope with low-memory
conditions. To me it seems to be more an issue of the OS memory usage
reporting: I can understand distinguishing between "reserved but not
yet accessed" and "reserved and accessed/allocated" memory, but I
would have thought both figures would be available.

Dylan
 

Dylan Nicholson

tom_usenet said:
I suspect your OS isn't actually allocating the space when you make
the reserve call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed up with physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.

FWIW, I did a little test under Win2000 using:

char* p = (char*)malloc(10000000);
p[0] = 1;
p[10000000 - 1] = 2;
p[5000000] = 3;
p[2000000] = 4;
p[7000000] = 5;

The first call sets the 'VM size'* for the process to 10M, but each
subsequent access causes the 'Mem usage' to go up (although never to
10M). It's the same with calloc() too, which obviously means it uses
the OS to provide default zero-initialized memory (HeapAlloc with
HEAP_ZERO_MEMORY).
Of course, in a _DEBUG build, because the CRT initializes the area with
'landfill', the 'Mem usage' immediately jumps to 10M; hence, if what
you say is true regarding bad_alloc, you'd get quite different
behaviour depending on whether you were using debug-enabled memory
allocators (you'd expect some slight difference perhaps, as most debug
allocators use extra space, but not that much).

Dylan


* Columns under Task Manager.
 

Rob Williscroft

Dylan Nicholson wrote:
tom_usenet said:
I suspect your OS isn't actually allocating the space when you make
the reserve call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed up with physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.

FWIW, I did a little test under Win2000 using:

char* p = (char*)malloc(10000000);
p[0] = 1;
p[10000000 - 1] = 2;
p[5000000] = 3;
p[2000000] = 4;
p[7000000] = 5;

The first call sets the 'VM size'* for the process to 10M, but each
subsequent access causes the 'Mem usage' to go up (although never to
10M). It's the same with calloc() too, which obviously means it uses
the OS to provide default zero-initialized memory (HeapAlloc with
HEAP_ZERO_MEMORY).

This seems fine to me: the OS has reserved 10M of memory/page file
for the process, but only gives the process real memory when it
uses it. This leaves the real memory available for other processes
until it's actually used. Presumably at this point the OS can then
swap out the real memory used by the other process(es) into the
page file (VM) that it has reserved, and give this process real memory.

Of course, in a _DEBUG build, because the CRT initializes the area with
'landfill', the 'Mem usage' immediately jumps to 10M; hence, if what
you say is true regarding bad_alloc, you'd get quite different
behaviour depending on whether you were using debug-enabled memory
allocators (you'd expect some slight difference perhaps, as most debug
allocators use extra space, but not that much).

The only difference should be that the debug build is slower, and it
also slows down the OS/other apps by committing all of the memory up front.
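
The landfill effect is easy to reproduce outside the debug CRT: writing to every byte of an allocation forces the OS to commit physical pages immediately, just as the debug heap's fill pattern does. A rough sketch:

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    const size_t n = 10000000;
    char* p = static_cast<char*>(std::malloc(n));
    if (!p) return 1;
    // Touch every byte, like the CRT's debug "landfill" fill; this
    // forces the OS to back the whole allocation with real pages.
    std::memset(p, 0xCD, n);
    std::printf("touched %zu bytes\n", n);
    std::free(p);
    return 0;
}

Watching this program in Task Manager or top, the resident memory figure jumps to the full allocation size as soon as the memset runs, rather than growing gradually.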


Rob.
 

tom_usenet

Are you saying that after reserving memory, if you later try to access
that memory, it's possible that the OS might not be able to provide
physical memory, and hence crash your software?

Yup. Normally paging will slow the system to a crawl first though.

Surely
OSes must make some attempt to ensure that reserved blocks of virtual
memory will be available when required, or it would be basically
impossible to write software stable enough to cope with low-memory
conditions.

On modern OSes, you can't rely on bad_alloc being thrown from new, you
have to keep track of memory usage some other way.

To me it seems to be more an issue of the OS memory usage
reporting: I can understand distinguishing between "reserved but not
yet accessed" and "reserved and accessed/allocated" memory, but I
would have thought both figures would be available.

With paging and virtual memory, it doesn't necessarily make sense to
back up all allocations with physical memory at the time of the
allocation, since that prevents other processes from using it.

All this exception safety/new throwing bad_alloc business is actually
less of an issue than it seems, since normally new won't throw unless
you run out of virtual address space (which is near-impossible on a
64-bit machine), but some kind of platform-specific signal or exception
may be generated when attempting to access the memory if you've run out
of physical memory and page file space.
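
A sketch of the standard-level hook this discussion keeps circling: a new-handler is the portable way to be told that operator new failed, but on an overcommitting OS it may never fire, because the process can instead be killed when it touches memory the kernel promised but cannot back. The request below is small, so the handler is installed but never invoked:

#include <cstdlib>
#include <iostream>
#include <new>

void on_out_of_memory() {
    // Invoked by operator new when a request cannot be satisfied.
    std::cerr << "operator new failed\n";
    std::abort();
}

int main() {
    std::set_new_handler(on_out_of_memory);
    double* p = new double[1000];  // modest request: succeeds
    std::cout << "allocated\n";
    delete[] p;
    return 0;
}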

On the Windows test I just did, I got a message box telling me that
virtual memory was getting low. I don't recall seeing that in the spec
for new!

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
 