Deleting the Array.

Sabiyur

Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
............
.............

delete [] y;
x = NULL;

When we free the array, it should free all the memory locations
corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.

Thanks
Sabiyur
 
Victor Bazarov

Sabiyur said:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory locations
corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.

Yes, it does. Both 'x' and 'y' are just values. The number of elements
in the array behind that pointer is the implementation's business. How it
figures that number out from the pointer value is entirely up to it. You can
copy the value as many times as you wish, provided that you eventually
use 'delete[]' to free the memory.

V
 
Salt_Peter

Sabiyur said:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory locations
corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?

Yes it does, but don't take my word for it - prove it with a dummy
class.
If you add a copy ctor and op=, that dummy is no dummy anymore: a
useful debugging technique.

#include <iostream>

class A
{
public:
    A() { std::cout << "A()\n"; }
    ~A() { std::cout << "~A()\n"; }
};

int main()
{
    A* p_a = new A[5];
    A* p_b = p_a;
    delete [] p_b;
}

Something else that might interest you:

#include <boost/shared_array.hpp>

// class A as defined in the previous snippet
int main()
{
    boost::shared_array< A > sp_a(new A[5]);
}   // delete [] is called on the array automatically here
 
t.lehmann

You can avoid many problems with this kind of array by using
something like the STL. Classes wrapping such arrays manage all the
"low level" memory management, so you needn't invest time
thinking about it...
 
peter koch

Sabiyur wrote:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory locations
corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.

It is okay. But I recommend that you use std::vector instead - this
beast makes your life much, much easier.

/Peter
 
Victor Bazarov

peter said:
Sabiyur wrote:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory
locations corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.
It is okay. But I recommend that you use std::vector instead - this
beast makes your life much, much easier.

...and in some cases your code much, much slower...

V
 
t.lehmann

...and in some cases your code much, much slower...

If your code really needs "soooo" much speed, then write in assembler! For
all (or most) other categories, try using object-oriented concepts!

You can also think about writing containers yourself, but the STL
implementers and other library authors doing similar work have invested
much time over the years. Decide for yourself!
 
peter koch

Victor Bazarov wrote:
peter said:
Sabiyur wrote:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory
locations corresponding to all the elements.
The compiler stores the number of elements of the array and releases the
memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.
It is okay. But I recommend that you use std::vector instead - this
beast makes your life much, much easier.

...and in some cases your code much, much slower...

I believe you will be hard pressed to find a modern compiler where
std::vector is "much much slower" than new []. I even believe you
will have problems finding a compiler where you will even notice the
difference.
But never mind that. Even if it were the case that std::vector was "much
much slower" (say a factor of ten), I'd still recommend std::vector to the
OP and then - if profiling told you to - reluctantly advise using
new []. new [] is so much more error-prone and fragile, and the OP is
obviously not very experienced.

/Peter
 
Victor Bazarov

peter said:
Victor Bazarov wrote:
peter said:
Sabiyur wrote:
Hi All,
I am writing the following code:

int *x = new int[10];
int *y = x;
...........
............

delete [] y;
x = NULL;

When we free the array, it should free all the memory
locations corresponding to all the elements.
The compiler stores the number of elements of the array and releases
the memory accordingly.

So if we use delete [] y; does it know how many locations to delete?
Because y is just a copy of x.
I don't know how to test this case. Please help me.

It is okay. But I recommend that you use std::vector instead - this
beast makes your life much, much easier.

...and in some cases your code much, much slower...

I believe you will be hard pressed to find a modern compiler where
std::vector is "much much slower" than new []. I even believe you
will have problems finding a compiler where you will even notice the
difference.

Visual Studio 2005, optimizing for size, does not inline calls to any
of 'std::vector's members, which in some cases causes too much overhead
from function calls when access to a simple array is sufficient. It is
especially noticeable when done millions of times in a loop.
But never mind that. Even if it were the case that std::vector was
"much much slower" (say a factor of ten),

How did you guess [the factor] so well?
I'd still recommend
std::vector to the OP and then - if profiling told you to - reluctantly
advise using new []. new [] is so much more error-prone and
fragile, and the OP is obviously not very experienced.

Never mind the OP's experience. I was talking in general. And trust
me, I *have* profiled those things.

V
 
Victor Bazarov

If your code really needs "soooo" much speed, then write in assembler! For
all (or most) other categories, try using object-oriented concepts!

No, thank you. I can live without assembler, since in most cases C++
is very close to it when using low-level constructs like pointers.

Do not underestimate the effects of calling functions unnecessarily.
The indexing operator is a function. It costs you.

Of course one should not downplay the cost of maintaining code. The
lower the level of constructs, the higher the cost of maintenance.
Every time a higher-level construct is replaced with a lower-level one,
the cost of maintenance needs to be incorporated into the decision
making process. But do not blindly dismiss constructs of the language
just because some people don't have a good grasp of them.
You can also think about writing containers yourself, but the STL
implementers and other library authors doing similar work have invested
much time over the years. Decide for yourself!

Yes, one always has to decide. And the decision has to be made based
on measuring the performance instead of some arbitrary investment some
arbitrary "STL people" have made.

V
 
peter koch

Victor Bazarov wrote:
peter said:
Victor Bazarov wrote: [snip]
I believe you will be hard pressed to find a modern compiler where
std::vector is "much much slower" than new []. I even believe you
will have problems finding a compiler where you will even notice the
difference.

Visual Studio 2005, optimizing for size, does not inline calls to any
of 'std::vector's members, which in some cases causes too much overhead
from function calls when access to a simple array is sufficient. It is
especially noticeable when done millions of times in a loop.
But never mind that. Even if it were the case that std::vector was
"much much slower" (say a factor of ten),

How did you guess [the factor] so well?
In that case, I'd use different optimisation settings for the code
with the millions of loops (and remove the "secure checking" "feature"
that is still enabled even at max optimisation).
I'd still recommend
std::vector to the OP and then - if profiling told you to - reluctantly
advise using new []. new [] is so much more error-prone and
fragile, and the OP is obviously not very experienced.

Never mind the OP's experience. I was talking in general. And trust
me, I *have* profiled those things.

I did answer in the context of the OP. More experienced programmers
will know to profile and find out how to optimise anyway.

/Peter
 
