Small question with respect to scope and performance


sravanreddy001

Hi,

I have a small doubt.

There are two functions:
1) one stores a lot of data, at least 100 MB;
2) the other doesn't use that data.

So, if 2 is called from 1, will the contents of 1 be moved to the stack
when 2 is called?

In that case, would keeping the data as global variables reduce that
overhead? (Security of the data can be managed and is not an issue.)

Thanks,
Sravan.
 

Noah Roberts

> Hi,
>
> I have a small doubt.
>
> There are two functions:
> 1) one stores a lot of data, at least 100 MB;
> 2) the other doesn't use that data.
>
> So, if 2 is called from 1, will the contents of 1 be moved to the stack
> when 2 is called?
>
> In that case, would keeping the data as global variables reduce that
> overhead? (Security of the data can be managed and is not an issue.)

I assume you've run a profiler and have seen that there's something
going on with these two functions that is causing your performance
issues...because NOBODY would EVER try to optimize their code before
even checking to see if it was necessary. That would be stupid.

If you're not passing the data around, there is NO overhead associated
with it. If you are passing it around, then what happens depends on HOW
you pass it and what optimizations, if any, your compiler performs. The
compiler might remove the copying if it's smart enough to see it isn't
necessary.
 

Richard Damon

> Hi,
>
> I have a small doubt.
>
> There are two functions:
> 1) one stores a lot of data, at least 100 MB;
> 2) the other doesn't use that data.
>
> So, if 2 is called from 1, will the contents of 1 be moved to the stack
> when 2 is called?
>
> In that case, would keeping the data as global variables reduce that
> overhead? (Security of the data can be managed and is not an issue.)
>
> Thanks,
> Sravan.

If 1 stores its data on the stack, it will be there from the moment the
function creates it, and is not "moved" there when 2 is called, so there
is no overhead there.

If 1 stores its data elsewhere, it won't be moved either.


To be pedantic, this isn't how the standard puts it, because the
standard never refers to "the stack". The typical implementation,
though, is to place variables of automatic storage duration (as opposed
to globals, statics, or objects created on the heap) on the stack.

100 MB is a lot of data to hold entirely "on the stack" (though
possible); my guess is that much of it lives in containers, where the
control structure of the container may be on the stack but most of the
data is on the heap.
 

sravanreddy001

[THERE ARE MANY QUESTIONS. PLEASE REPLY TO ANYTHING YOU WANT TO.]
[THANKS IN ADVANCE.]

Hi Noah, Richard:

Both your answers helped my understanding.

Noah: True, I'm working on some performance issues and dealing with a
lot of small files (150,000 in total), creating multiple vectors in
each iteration.

The amount of memory I'd be saving is small, maybe around 10 MB, 15 MB
at most.

But my application's memory use grows to over 500 MB after it starts.
I'm thinking there might be a memory leak, or something equivalent.

How could that be?

Most of my code is structured like this:


for (int i = 0; i < 150000; i++) {
    vector<string> data;

    // read from files, store into data,
    // and write 'data' to another file.
}


QUESTION: Is it mandatory to erase the contents before the next loop
iteration, or is this handled automatically by the compiler/runtime/OS?

I'm working on Unix. I'm also creating very many strings, like:

string str = /* some source */;

Are these also released after each iteration?

--> In some places I'm reading a file in one go, allocating the memory
with 'malloc'. Should this be released explicitly?

If either of you can help me with similar issues, please drop a mail to
sravanreddy001 @ gmail.com. I won't bother you.
 

Richard Damon

See replies interspersed below.

> [THERE ARE MANY QUESTIONS. PLEASE REPLY TO ANYTHING YOU WANT TO.]
> [THANKS IN ADVANCE.]
>
> Hi Noah, Richard:
>
> Both your answers helped my understanding.
>
> Noah: True, I'm working on some performance issues and dealing with a
> lot of small files (150,000 in total), creating multiple vectors in
> each iteration.
>
> The amount of memory I'd be saving is small, maybe around 10 MB, 15 MB
> at most.
>
> But my application's memory use grows to over 500 MB after it starts.
> I'm thinking there might be a memory leak, or something equivalent.
>
> How could that be?
>
> Most of my code is structured like this:


> for (int i = 0; i < 150000; i++) {
>     vector<string> data;
>
>     // read from files, store into data,
>     // and write 'data' to another file.

The vector, and the strings in it, will be destroyed here, when
execution leaves the body of the loop.

> }


> QUESTION: Is it mandatory to erase the contents before the next loop
> iteration, or is this handled automatically by the compiler/runtime/OS?
>
> I'm working on Unix. I'm also creating very many strings, like:
>
> string str = /* some source */;
>
> Are these also released after each iteration?
>
> --> In some places I'm reading a file in one go, allocating the memory
> with 'malloc'. Should this be released explicitly?
All memory you allocate from the heap needs to be deallocated.
Automatic variables are deallocated when you leave their scope.

If you allocate memory and do not put it into something smart enough to
deallocate it for you automatically, then you need to do the
deallocation yourself. "Smart pointers" are one way to make sure the
memory is deallocated, but by default they assume the memory was
allocated with new, not malloc (though some can be told how to
deallocate the object). Containers will destroy the objects put in
them, but if they hold pointers, that destroys the POINTER, not the
pointed-to object.
 
