Illogical std::vector size?

simon

Hi,

First some background.

I have a structure,

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){...};
    ~sFileData(){...};
    sFileData(const sFileData&){...};
    const sFileData& operator=( const sFileData &s ){...}
};

std::vector< sFileData, std::allocator<sFileData> > address_;

For the sake of simplicity I removed the bodies of the 'tors.
I have no memory leaks as far as I can tell.

Then I read a file (each line is 190 chars, mostly blank spaces).
From each line I 'read' info to fill in the structure.

Because there are so many blank spaces in the line, I make sure that my
data is 'trimmed'.

So in effect sSomeString1 and sSomeString2 are never more than 10 chars
(although in the file they could be up to 40 chars).

I chose vectors because after reading the file I need to do searches on
sSomeString1 and sSomeString2 (no other reasons really).

But my problem is that the size of address_ is not consistent with the size
of the file.

The file is around 13Mb, with around 100000 'lines' of 190 chars each.
Because I remove blank spaces and convert 2 numbers from char to int, I
guess I should not use more than half of that, about 5Mb.

But after loading I see that I used around 40Mb (3 times the original
size).

As far as I can tell you cannot really tell the size of a vector, but I am
on Windows, and with the Task Manager I can see the size of my app before
and after reading the file (I do nothing else).

So what could be the reason for those inconsistencies?
How could I optimize my code to compress those 40Mb even more?

Many thanks

Simon
 
msalters

simon wrote:
Hi,

First some background.

I have a structure,

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){...};
    ~sFileData(){...};
    sFileData(const sFileData&){...};
    const sFileData& operator=( const sFileData &s ){...}
};

std::vector< sFileData, std::allocator<sFileData> > address_;

For the sake of simplicity I removed the bodies of the 'tors.
I have no memory leaks as far as I can tell.

Then I read a file (each line is 190 chars, mostly blank spaces).
From each line I 'read' info to fill in the structure.

Because there are so many blank spaces in the line, I make sure that my
data is 'trimmed'.

So in effect sSomeString1 and sSomeString2 are never more than 10 chars
(although in the file they could be up to 40 chars).

I chose vectors because after reading the file I need to do searches on
sSomeString1 and sSomeString2 (no other reasons really).

But my problem is that the size of address_ is not consistent with the size
of the file.

The file is around 13Mb, with around 100000 'lines' of 190 chars each.
Because I remove blank spaces and convert 2 numbers from char to int, I
guess I should not use more than half of that, about 5Mb.

But after loading I see that I used around 40Mb (3 times the original
size).

As far as I can tell you cannot really tell the size of a vector, but I am
on Windows, and with the Task Manager I can see the size of my app before
and after reading the file (I do nothing else).

1) Windows Task Manager is not suited for this
2) vector only stores sFileData objects, not the strings themselves
3) Even when the vector has excess capacity (which is common; you don't
want to reallocate after every push_back), it won't include the strings
4) Many implementations of new[] allocate at least 16 bytes, plus
the overhead needed for delete[]
5) So what? 40MB is not a lot. Worry when it exceeds 1.5Gb. Memory
is cheap. Writing a custom string class is not. BTDT.
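
To put rough numbers on points 2 and 4, here is a minimal sketch (my
redeclaration of your struct without the 'tors, assuming a 32-bit build
where sizeof(sFileData) is 16):

#include <cstdio>
#include <vector>

// Sketch only: same layout as the posted struct, minus the 'tors.
struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
};

int main()
{
    // The vector's buffer holds only these small structs...
    std::vector<sFileData> v(100000);
    std::printf("sizeof(sFileData) = %u\n",
                static_cast<unsigned>(sizeof(sFileData)));
    std::printf("vector buffer = %u bytes\n",
                static_cast<unsigned>(v.capacity() * sizeof(sFileData)));
    return 0;
}

That buffer comes out around 1.6MB; everything else is the new[]'d
strings plus their per-allocation overhead.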

HTH,
Michiel Salters
 
simon

1) Windows Task Manager is not suited for this

Yes, but it was what raised suspicion in the first place.
What might be better?
2) vector only stores sFileData objects, not the strings themselves
3) Even when the vector has excess capacity (which is common; you don't
want to reallocate after every push_back), it won't include the strings
4) Many implementations of new[] allocate at least 16 bytes, plus
the overhead needed for delete[]

Are you saying that std::string might actually be better in that case?
What might be a better way?
5) So what? 40MB is not a lot. Worry when it exceeds 1.5Gb. Memory
is cheap. Writing a custom string class is not. BTDT.

40Mb or 4Gb, there is still something not quite right, and I would prefer
to know what it is rather than brushing it under the rug.
 
velthuijsen

1) Windows Task Manager is not suited for this
Yes, but it was what raised suspicion in the first place.
What might be better?

What suspicion? It only tells you that the total program is now using 40 MB,
not that this particular part of your program is using 40 MB.
And even if it is a change in the total amount of memory used, there could be
a near-infinite number of other reasons that the program is now using
40 MB instead of the expected 5 MB increase.
You'd need a profiler to check if it is indeed the vector of structs
that is the problem.
 
simon

What suspicion?

That I was doing something wrong or that I did not understand something
else.
It only tells you that the total program is now using 40 MB,
not that this particular part of your program is using 40 MB.
And even if it is a change in the total amount of memory used, there could be
a near-infinite number of other reasons that the program is now using
40 MB instead of the expected 5 MB increase.
You'd need a profiler to check if it is indeed the vector of structs
that is the problem.

I placed a breakpoint before reading the file to check the memory, and one
after reading the file.
I then compared the before and after.
The odds of it being something else are fairly small, IMO.

Simon
 
Carlos Martinez Garcia

simon said:
1) Windows Task Manager is not suited for this


Yes, but it was what raised suspicion in the first place.
What might be better?

2) vector only stores sFileData objects, not the strings themselves
3) Even when the vector has excess capacity (which is common; you don't
want to reallocate after every push_back), it won't include the strings
4) Many implementations of new[] allocate at least 16 bytes, plus
the overhead needed for delete[]


Are you saying that std::string might actually be better in that case?
What might be a better way?

No, std::string has the same problem. That isn't the cause.
40Mb or 4Gb, there is still something not quite right, and I would prefer
to know what it is rather than brushing it under the rug.

40 Mb is about 42,000,000 bytes.
42,000,000 bytes / 100,000 records = 420 bytes/record.

Even if each new char* uses at least 16 bytes, one record should use only
about 40 bytes.
Definitely, I think you have a problem elsewhere, probably in your loop.
 
velthuijsen

20 MB of file in memory is a good starting point to explain away the
discrepancy.
You need something better than two breakpoints and the Task Manager to
make statements about what uses the memory.
 
John Carson

simon said:
Hi,

First some background.

I have a structure,

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){...};
    ~sFileData(){...};
    sFileData(const sFileData&){...};
    const sFileData& operator=( const sFileData &s ){...}
};

std::vector< sFileData, std::allocator<sFileData> > address_;

For the sake of simplicity I removed the bodies of the 'tors.
I have no memory leaks as far as I can tell.

Then I read a file (each line is 190 chars, mostly blank spaces).
From each line I 'read' info to fill in the structure.

Because there are so many blank spaces in the line, I make sure that my
data is 'trimmed'.

So in effect sSomeString1 and sSomeString2 are never more than 10 chars
(although in the file they could be up to 40 chars).

I chose vectors because after reading the file I need to do searches on
sSomeString1 and sSomeString2 (no other reasons really).

But my problem is that the size of address_ is not consistent with the
size of the file.

The file is around 13Mb, with around 100000 'lines' of 190 chars each.
Because I remove blank spaces and convert 2 numbers from char to int, I
guess I should not use more than half of that, about 5Mb.

But after loading I see that I used around 40Mb (3 times the original
size).

As far as I can tell you cannot really tell the size of a vector, but I am
on Windows, and with the Task Manager I can see the size of my app before
and after reading the file (I do nothing else).

I think your problem has nothing to do with the vector. As has already been
pointed out, the vector doesn't store the characters, only the pointer. With
VC++, sizeof(sFileData) is 16. The memory used by the vector should be
16*address_.capacity() plus a small amount of overhead, which we can
approximate with sizeof(address_). Try this:

#include <vector>
#include <iostream>
using namespace std;

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){}
    ~sFileData(){}
    sFileData(const sFileData&){}
    const sFileData& operator=( const sFileData &s ){ return *this; }
};

std::vector< sFileData, std::allocator<sFileData> > address_;


int main()
{
    sFileData sfd;
    for(int i=0; i<100000; ++i)
        address_.push_back(sfd);
    cout << "storage size of vector is approx ";
    cout << sizeof(address_) + sizeof(sFileData)*address_.capacity() << endl;
    return 0;
}

When I run this, I get

storage size of vector is approx 2212100

and task manager similarly shows about a 2Mb increase in memory usage.
Accordingly, it seems that the other 38Mb is due to whatever else you are
doing to allocate memory for the characters --- unless, as someone else
suggested, you are reading the whole file into memory and not taking that
into account.
 
simon

20 MB of file in memory is a good starting point to explain away the
discrepancy.

I don't read the whole file into memory.

I use fopen(...), read each chunk of data using fread(...), and then close
the file using fclose(...).
You need something better than two breakpoints and the Task Manager to
make statements about what uses the memory.

Well, I am sorry, but I can see that before I read the file I use x amount
of memory, and just after I finish reading the file (after the fclose), I
use x+40Mb.

Simon
 
simon

No, std::string has the same problem. That isn't the cause.

That's what I thought.
40 Mb is about 42,000,000 bytes.
42,000,000 bytes / 100,000 records = 420 bytes/record.

Even if each new char* uses at least 16 bytes, one record should use only
about 40 bytes.
Definitely, I think you have a problem elsewhere, probably in your loop.

I cannot see where the problem might be.
I read each line of data, create a structure with the data, and then
push_back(...) the data.

Simon
 
simon

Hi,
I think your problem has nothing to do with the vector. As has already
been pointed out, the vector doesn't store the characters, only the
pointer. With VC++, sizeof(sFileData) is 16. The memory used by the vector
should be 16*address_.capacity() plus a small amount of overhead, which we
can approximate with sizeof(address_). Try this:

When I run this, I get

storage size of vector is approx 2212100

I will try that, but I should get the same thing myself.
and task manager similarly shows about a 2Mb increase in memory usage.
Accordingly, it seems that the other 38Mb is due to whatever else you are
doing to allocate memory for the characters --- unless, as someone else
suggested, you are reading the whole file into memory and not taking that
into account.


I don't read the whole file into memory.

I use fopen(...), read each chunk of data using fread(...), and then close
the file using fclose(...).

Simon
 
Larry I Smith

simon said:
Hi,

First some background.

I have a structure,

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){...};
    ~sFileData(){...};
    sFileData(const sFileData&){...};
    const sFileData& operator=( const sFileData &s ){...}
};

std::vector< sFileData, std::allocator<sFileData> > address_;

For the sake of simplicity I removed the bodies of the 'tors.
I have no memory leaks as far as I can tell.

Then I read a file (each line is 190 chars, mostly blank spaces).
From each line I 'read' info to fill in the structure.

Because there are so many blank spaces in the line, I make sure that my
data is 'trimmed'.

So in effect sSomeString1 and sSomeString2 are never more than 10 chars
(although in the file they could be up to 40 chars).

I chose vectors because after reading the file I need to do searches on
sSomeString1 and sSomeString2 (no other reasons really).

But my problem is that the size of address_ is not consistent with the size
of the file.

The file is around 13Mb, with around 100000 'lines' of 190 chars each.
Because I remove blank spaces and convert 2 numbers from char to int, I
guess I should not use more than half of that, about 5Mb.

But after loading I see that I used around 40Mb (3 times the original
size).

As far as I can tell you cannot really tell the size of a vector, but I am
on Windows, and with the Task Manager I can see the size of my app before
and after reading the file (I do nothing else).

So what could be the reason for those inconsistencies?
How could I optimize my code to compress those 40Mb even more?

Many thanks

Simon

Please provide a complete (compilable) code example that
demonstrates the problem. Include the complete struct def
for sFileData (ctors, dtors, operator=, etc) and a simple
main() that uses the SAME fopen/fread/push_back/fclose code
block/loop used in your real program (your problem may be there).

Regards,
Larry
 
simon

Please provide a complete (compilable) code example that
demonstrates the problem. Include the complete struct def
for sFileData (ctors, dtors, operator=, etc) and a simple
main() that uses the SAME fopen/fread/push_back/fclose code
block/loop used in your real program (your problem may be there).

Regards,
Larry

If I run the code below, I use 700k just before the for(...) loop.
At the end of the loop I have 16880k.

So I used around 16Mb to store what appears to be

(5+8)*100000 = 800005; or just under a Mb.
Plus the size of the struct, around 2Mb; I should not use more than 4Mb to
store the data.

Did I miss something here?

/////////////////////////////////

#include <vector>
#include <iostream>
using namespace std;

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){
        NullAll();
    }
    ~sFileData(){
        CleanAll();
    }
    sFileData(const sFileData &sfd)
    {
        NullAll();
        *this = sfd;
    }
    const sFileData& operator=( const sFileData &sfd ){
        if( this != &sfd )
        {
            CleanAll();
            iSomeNum1 = sfd.iSomeNum1;
            iSomeNum2 = sfd.iSomeNum2;

            if( sfd.sSomeString1 ){
                sSomeString1 = new char[strlen(sfd.sSomeString1)+1];
                strcpy( sSomeString1, sfd.sSomeString1 );
            }
            if( sfd.sSomeString2 ){
                sSomeString2 = new char[strlen(sfd.sSomeString2)+1];
                strcpy( sSomeString2, sfd.sSomeString2 );
            }
        }
        return *this;
    }

    void CleanAll(){
        if(sSomeString1) delete [] sSomeString1;
        if(sSomeString2) delete [] sSomeString2;
    }
    void NullAll(){
        sSomeString1 = 0;
        sSomeString2 = 0;
        iSomeNum1 = 0;
        iSomeNum2 = 0;
    }
};

std::vector< sFileData, std::allocator<sFileData> > address_;


int main()
{
    for(int i=0; i<100000; ++i)
    {
        sFileData sfd;

        sfd.iSomeNum1 = 1;
        sfd.iSomeNum2 = 2;
        sfd.sSomeString1 = new char[5];
        sfd.sSomeString2 = new char[8];
        strcpy( sfd.sSomeString1, "Helo" );
        strcpy( sfd.sSomeString2, "Goodbye" );

        address_.push_back(sfd);
    }
    return 0;
}


/////////////////////////////////

Thanks

Simon
 
simon

If I run the code below, I use 700k just before the for(...) loop.
At the end of the loop I have 16880k.

So I used around 16Mb to store what appears to be

(5+8)*100000 = 800005; or just under a Mb.
Plus the size of the struct, around 2Mb; I should not use more than 4Mb to
store the data.

Did I miss something here?

And it is dog slow as well.
There must be a better way.

Simon
 
Clark S. Cox III

If I run the code below, I use 700k just before the for(...) loop.
At the end of the loop I have 16880k.

So I used around 16Mb to store what appears to be

(5+8)*100000 = 800005; or just under a Mb.
Plus the size of the struct, around 2Mb; I should not use more than 4Mb
to store the data.

Did I miss something here?

I suspect that your problem is that you're using the Windows Task
Manager to determine memory usage. If you use real debugging and memory
tracking tools, you're more likely to get accurate results.

[OT]For example, when I run your code on my computer (a Mac), and use
the debugging tools provided with the OS (MallocDebug), I see that the
code has allocated approximately 1.5 MB of memory.

However, if I look at the memory usage with other tools such as top, I
see that my program is taking about 41.7M total of VM. I strongly
suspect that this is analogous to what you are seeing on your
machine.[/OT]

Another possibility is that the implementation of the STL on your
platform doesn't return memory to the OS directly, but keeps it around
in case it's needed later. If this is the case, then every time the
vector reallocates itself, the old buffer *might* look to the OS as if
it were a leak. If this is happening, it would account for as much as
(assuming I did my estimations correctly) (16 * 100000 / 2 *
sizeof(sFileData)), or roughly 12MB.
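
As a rough illustration (a sketch only; the growth factor and the numbers
printed depend entirely on your library's implementation), you can watch
those reallocations happen:

#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v;
    std::vector<int>::size_type last = v.capacity();
    for (int i = 0; i < 100000; ++i)
    {
        v.push_back(i);
        if (v.capacity() != last)
        {
            // Each capacity change means a new buffer was allocated and
            // the old one was handed back to the allocator, which may
            // keep it around rather than return it to the OS.
            last = v.capacity();
            std::printf("size=%u capacity=%u\n",
                        static_cast<unsigned>(v.size()),
                        static_cast<unsigned>(last));
        }
    }
    return 0;
}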

Try again with proper tools (ask on a newsgroup appropriate to
developing/debugging on your platform, and/or check that newsgroup's FAQ
for such tools). Such tools should show you exactly what's happening.
 
simon

I suspect that your problem is that you're using the Windows Task Manager
to determine memory usage. If you use real debugging and memory tracking
tools, you're more likely to get accurate results.

Maybe, but if Windows thinks I am using +16Mb then I must try and fix the
problem.
But I realize that this is not the right group for that.

The speed is a bit of a problem as well. I cannot believe how long it takes
to fill 4Mb.
Try again with proper tools (ask on a newsgroup appropriate to
developing/debugging on your platform, and/or check that newsgroup's FAQ
for such tools). Such tools should show you exactly what's happening.

I might do that then.

Simon
 
Jeff Flinn

simon said:
If I run the code below, I use 700k just before the for(...) loop.
At the end of the loop I have 16880k.

So I used around 16Mb to store what appears to be

(5+8)*100000 = 800005; or just under a Mb.

??? 5+8 = 13, 13*100000 = 1300000 -> 1.3MB
Plus the size of the struct, around 2Mb; I should not use more than 4Mb to
store the data.

Did I miss something here?

Try adding "address_.reserve(100000);" just before your for loop.

Although I can't imagine why you're not using std::strings here to obviate
the manual memory management.
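
Just as a sketch (keeping your member names), the whole struct collapses
to this, and the compiler-generated copy constructor, assignment operator
and destructor then do the right thing, with no new[]/delete[] anywhere:

#include <string>
#include <vector>

struct sFileData
{
    std::string sSomeString1;
    std::string sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData() : iSomeNum1(0), iSomeNum2(0) {}
};

std::vector<sFileData> address_;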
/////////////////////////////////

#include <vector>
#include <iostream>
using namespace std;

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){
        NullAll();
    }
    ~sFileData(){
        CleanAll();
    }
    sFileData(const sFileData &sfd)
    {
        NullAll();
        *this = sfd;
    }
    const sFileData& operator=( const sFileData &sfd ){
        if( this != &sfd )
        {
            CleanAll();
            iSomeNum1 = sfd.iSomeNum1;
            iSomeNum2 = sfd.iSomeNum2;

            if( sfd.sSomeString1 ){
                sSomeString1 = new char[strlen(sfd.sSomeString1)+1];
                strcpy( sSomeString1, sfd.sSomeString1 );
            }
            if( sfd.sSomeString2 ){
                sSomeString2 = new char[strlen(sfd.sSomeString2)+1];
                strcpy( sSomeString2, sfd.sSomeString2 );
            }
        }
        return *this;
    }

    void CleanAll(){
        if(sSomeString1) delete [] sSomeString1;
        if(sSomeString2) delete [] sSomeString2;
    }
    void NullAll(){
        sSomeString1 = 0;
        sSomeString2 = 0;
        iSomeNum1 = 0;
        iSomeNum2 = 0;
    }
};

std::vector< sFileData, std::allocator<sFileData> > address_;

int main()
{
    for(int i=0; i<100000; ++i)
    {
        sFileData sfd;

        sfd.iSomeNum1 = 1;
        sfd.iSomeNum2 = 2;
        sfd.sSomeString1 = new char[5];
        sfd.sSomeString2 = new char[8];
        strcpy( sfd.sSomeString1, "Helo" );
        strcpy( sfd.sSomeString2, "Goodbye" );

        address_.push_back(sfd);
    }
    return 0;
}


/////////////////////////////////

Thanks

Simon
 
simon

??? 5+8 = 13, 13*100000 = 1300000 -> 1.3MB

Oops, my bad...
Try adding "address_.reserve(100000);" just before your for loop.

Sorry, that doesn't help, but loading is faster...
Although I can't imagine why you're not using std::strings here to
obviate the manual memory management.

Maybe, but if I switch to std::string all over, I still use the same
amount of memory (16Mb).

Simon
 
Larry I Smith

simon said:
If I run the code below, I use 700k just before the for(...) loop.
At the end of the loop I have 16880k.

So I used around 16Mb to store what appears to be

(5+8)*100000 = 800005; or just under a Mb.
Plus the size of the struct, around 2Mb; I should not use more than 4Mb to
store the data.

Did I miss something here?

/////////////////////////////////

#include <vector>
#include <iostream>
// for strlen, strcpy
#include <string.h>

using namespace std;

struct sFileData
{
    char* sSomeString1;
    char* sSomeString2;
    int iSomeNum1;
    int iSomeNum2;
    sFileData(){
        NullAll();
    }
    ~sFileData(){
        CleanAll();
    }
    sFileData(const sFileData &sfd)
    {
        NullAll();
        *this = sfd;
    }
    const sFileData& operator=( const sFileData &sfd ){
        if( this != &sfd )
        {
            CleanAll();
            iSomeNum1 = sfd.iSomeNum1;
            iSomeNum2 = sfd.iSomeNum2;

            if( sfd.sSomeString1 ){
                sSomeString1 = new char[strlen(sfd.sSomeString1)+1];
                strcpy( sSomeString1, sfd.sSomeString1 );
            }
            if( sfd.sSomeString2 ){
                sSomeString2 = new char[strlen(sfd.sSomeString2)+1];
                strcpy( sSomeString2, sfd.sSomeString2 );
            }
        }
        return *this;
    }

    void CleanAll(){
        if(sSomeString1) { delete [] sSomeString1; sSomeString1 = 0; }
        if(sSomeString2) { delete [] sSomeString2; sSomeString2 = 0; }
    }

    void NullAll(){
        sSomeString1 = 0;
        sSomeString2 = 0;
        iSomeNum1 = 0;
        iSomeNum2 = 0;
    }
};

std::vector< sFileData, std::allocator<sFileData> > address_;


int main()
{
    for(int i=0; i<100000; ++i)
    {
        sFileData sfd;

        sfd.iSomeNum1 = 1;
        sfd.iSomeNum2 = 2;
        sfd.sSomeString1 = new char[5];
        sfd.sSomeString2 = new char[8];
        strcpy( sfd.sSomeString1, "Helo" );
        strcpy( sfd.sSomeString2, "Goodbye" );

        address_.push_back(sfd);

        // the push_back() (above) makes a COPY of sfd,
        // and puts that copy into the vector.
        // the copy constructor for sFileData allocates
        // space to hold copies of the strings from the
        // sFileData being copied from. So we must free
        // the strings in 'sfd' after the push_back() (new
        // copies were allocated by the copy of 'sfd' that
        // was added to the vector), otherwise they will
        // not be freed and we will have a memory leak until
        // the program ends (i.e. 100000 extra copies of
        // sSomeString1 and sSomeString2 will exist).
        sfd.CleanAll();
    }
    return 0;
}


/////////////////////////////////

Thanks

Simon

See the comments and changes embedded in the code above.

Test 1:
On my machine (Gateway PII 450MHz with 384MB of RAM),
adding some 'cout' and 'clock()' statements to the above code
(which I named simon.cpp), and compiling with GCC g++ v3.3.5,
I got this:

larry@linux:~/x> ./simon

sizeof sFileData = 16
MINIMUM memory used per sFileData = 29 (16 + 5 + 8)
MINIMUM memory used for 100000 sFileData instances = 2,900,000
press any alpha key followed by Enter to start
v
execution time = 0.76 secs
press any alpha key followed by Enter to finish
v

The working-memory set for ./simon was 5,616KB

On most operating systems, memory allocated by malloc or new, then
freed with free or delete, is not returned to the operating system until
the program terminates. This memory remains in the program's heap,
where it MAY be reused by malloc and new to fulfill additional requests
for memory. Large numbers of small allocations/deallocations tend
to fragment the heap, causing inefficient memory usage.
This is not an STL issue, as you'll see in the paragraph labeled
"Test 3" below; it is a dynamic memory allocation issue.

Test 2:
I modified the program to use std::string instead of allocating
char[] with new. The execution time dropped from 0.76 secs to
0.68 secs, but the memory usage increased by approx 400KB. The
400KB is a relatively fixed memory overhead due to additional STL
stuff brought in by std::string.

Test 3:
Next I modified the program to use fixed length char[] arrays for
the 2 strings (no usage of malloc or new at all). The execution time
dropped to 0.30 seconds and the memory usage dropped to 2624KB
(sizeof(sFileData) is now 24 because of the 2 fixed length char[]
arrays).
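
That Test 3 variant looks something like this (a sketch; the exact array
sizes are my assumption, picked to fit the test strings plus their
terminators):

#include <cstring>
#include <vector>

struct sFileData
{
    char sSomeString1[5]; // fits "Helo" plus '\0'
    char sSomeString2[8]; // fits "Goodbye" plus '\0'
    int iSomeNum1;
    int iSomeNum2;
};
// sizeof(sFileData) is now 24: 13 bytes of chars padded so the ints stay
// aligned. Copying a record is a flat memberwise copy, no heap traffic.

int main()
{
    std::vector<sFileData> address_;
    address_.reserve(100000); // as suggested earlier in the thread
    for(int i = 0; i < 100000; ++i)
    {
        sFileData sfd;
        sfd.iSomeNum1 = 1;
        sfd.iSomeNum2 = 2;
        std::strcpy( sfd.sSomeString1, "Helo" );
        std::strcpy( sfd.sSomeString2, "Goodbye" );
        address_.push_back(sfd);
    }
    return 0;
}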

The above experiments help demonstrate the overhead incurred
when large numbers of dynamic allocations and deallocations
are made using malloc, new, free, and delete.

Regards,
Larry
 
simon

// for strlen, strcpy
#include <string.h>

I did not need that; maybe it's a Windows thing.
void CleanAll(){
    if(sSomeString1) { delete [] sSomeString1; sSomeString1 = 0; }
    if(sSomeString2) { delete [] sSomeString2; sSomeString2 = 0; }
}

OK, but that's just good practice, not really related to my problem.
// the push_back() (above) makes a COPY of sfd,
// and puts that copy into the vector.
// the copy constructor for sFileData allocates
// space to hold copies of the strings from the
// sFileData being copied from. So we must free
// the strings in 'sfd' after the push_back() (new
// copies were allocated by the copy of 'sfd' that
// was added to the vector), otherwise they will
// not be freed and we will have a memory leak until
// the program ends (i.e. 100000 extra copies of
// sSomeString1 and sSomeString2 will exist).
sfd.CleanAll();

Are you sure the destructor would not handle it?
See the comments and changes embedded in the code above.

Test 1:
On my machine (Gateway PII 450MHz with 384MB of RAM),
adding some 'cout' and 'clock()' statements to the above code
(which I named simon.cpp), and compiling with GCC g++ v3.3.5,
I got this:

larry@linux:~/x> ./simon

sizeof sFileData = 16
MINIMUM memory used per sFileData = 29 (16 + 5 + 8)
MINIMUM memory used for 100000 sFileData instances = 2,900,000
press any alpha key followed by Enter to start
v
execution time = 0.76 secs
press any alpha key followed by Enter to finish
v

The working-memory set for ./simon was 5,616KB

I get 16Mb
On most operating systems, memory allocated by malloc or new, then
freed with free or delete, is not returned to the operating system until
the program terminates. This memory remains in the program's heap,
where it MAY be reused by malloc and new to fulfill additional requests
for memory. Large numbers of small allocations/deallocations tend
to fragment the heap, causing inefficient memory usage.
This is not an STL issue, as you'll see in the paragraph labeled
"Test 3" below; it is a dynamic memory allocation issue.

I see, I will try and ask the MFC group to see how I can release the memory
to the system.
Test 2:
I modified the program to use std::string instead of allocating
char[] with new. The execution time dropped from 0.76 secs to
0.68 secs, but the memory usage increased by approx 400KB. The
400KB is a relatively fixed memory overhead due to additional STL
stuff brought in by std::string.

Yes, I also noticed that.
Test 3:
Next I modified the program to use fixed length char[] arrays for
the 2 strings (no usage of malloc or new at all). The execution time
dropped to 0.30 seconds and the memory usage dropped to 2624KB
(sizeof(sFileData) is now 24 because of the 2 fixed length char[]
arrays).

Also noticed it. A bit of a shame really; the memory just does not seem to
be handled properly at all.
By doing Test 3 it appears that less memory is used.
I understand that the memory is free, but it is not freed to other
applications; on smaller systems that can impact performance.

I wish I knew how to release the memory.
The above experiments help demonstrate the overhead incurred
when large numbers of dynamic allocations and deallocations
are made using malloc, new, free, and delete.

Thanks,

Simon.
 
