delete does not work

junw2000

Below is a simple code:

#include <iostream>

class base{
public:
base(): i(11){std::cout<<"base constructor"<<'\n';}
virtual void f() = 0;
virtual ~base(){ std::cout<<"base destructor"<<'\n';}
int i;
};

class derv1 : public base{
public:
derv1(){std::cout<<"derv1 constructor"<<'\n';}
void f(){}
~derv1(){std::cout<<"derv1 destructor"<<'\n';}
};

class derv2 : public derv1{
public:
derv2(){i=22;std::cout<<"derv2 constructor"<<'\n';}
~derv2(){std::cout<<"derv2 destructor"<<'\n';}
};

int main(){
base *b1;
derv1 *d1;

derv2 *d2 = new derv2;

b1 = d2;
std::cout<<b1<<" "<<d2->i<<'\n';
delete b1; //LINE1

std::cout<<"after delete b1"<<'\n';
std::cout<<b1->i<<" "<<d2->i<<'\n'; //LINE2
}


I delete b1 at LINE1. Why does LINE2 still output the correct result, 22?

Thanks.

Jack
 
Alf P. Steinbach

* junw2000:
int main(){
base *b1;
derv1 *d1;

derv2 *d2 = new derv2;

b1 = d2;
std::cout<<b1<<" "<<d2->i<<'\n';
delete b1; //LINE1

std::cout<<"after delete b1"<<'\n';
std::cout<<b1->i<<" "<<d2->i<<'\n'; //LINE2
}


I delete b1 at LINE1. Why does LINE2 still output the correct result, 22?

At that point any behavior whatsoever is correct, because the behavior
of accessing the object after destruction is undefined. In practice the
freed memory has usually not been reused yet, which is why the old value
22 still shows up, but nothing guarantees that.
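
For what it's worth, a sketch of the same program with the after-delete
access removed, so everything it does is well defined (class bodies
condensed from the original post):

#include <cstddef>
#include <iostream>

struct base {
    base() : i(11) { std::cout << "base constructor\n"; }
    virtual ~base() { std::cout << "base destructor\n"; }
    virtual void f() = 0;
    int i;
};

struct derv1 : base {
    derv1()  { std::cout << "derv1 constructor\n"; }
    ~derv1() { std::cout << "derv1 destructor\n"; }
    void f() {}
};

struct derv2 : derv1 {
    derv2()  { i = 22; std::cout << "derv2 constructor\n"; }
    ~derv2() { std::cout << "derv2 destructor\n"; }
};

int main() {
    derv2* d2 = new derv2;
    base*  b1 = d2;                          // same object, seen through the base

    std::cout << b1 << " " << d2->i << '\n'; // address and 22 -- both pointers still valid

    delete b1;   // virtual ~base() makes this run ~derv2, ~derv1, ~base in order
                 // from here on, dereferencing b1 OR d2 is undefined behaviour
    b1 = NULL;
    d2 = NULL;   // a null pointer can at least be tested before use

    std::cout << "after delete b1" << '\n';
}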
 
Phlip

junw2000 said:
delete b1; //LINE1
std::cout<<b1->i<<" "<<d2->i<<'\n'; //LINE2

Please put spaces around operators, for readability.
I delete b1 at LINE1. Why does LINE2 still output the correct result, 22?

Because any use of b1 after the delete is undefined behavior. That means the
program could appear to work correctly, or the nearest toilet could explode,
or anything in between.

Tip: Learn all the ways to avoid undefined behavior (basically by avoiding
anything the slightest bit suspicious), and keep all your code within this
subset. For example, many folks use only smart-pointers, and if you can't,
at least do this:

delete b1;
b1 = NULL;

You can test b1 for NULL now, and if you slip up you will _probably_ get a
hard but reliable crash.
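
For the smart-pointer route, a minimal sketch (assuming a C++11 compiler
with std::unique_ptr; the widget class here is just a stand-in, not from
the original post):

#include <iostream>
#include <memory>

struct widget {
    widget()  { std::cout << "widget constructor\n"; }
    ~widget() { std::cout << "widget destructor\n"; }
    int i = 22;
};

int main() {
    std::unique_ptr<widget> p(new widget);   // p owns the object

    std::cout << p->i << '\n';               // prints 22

    p.reset();                               // destructor runs here, p becomes null

    if (p) {                                 // the "has it been deleted?" test is built in
        std::cout << p->i << '\n';           // never reached
    } else {
        std::cout << "object is gone\n";
    }
}   // even if reset() is forgotten, unique_ptr deletes in its own destructor -- no leak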
 
Lav

Below is a simple code:

#include <iostream>

class base{
public:
base(): i(11){std::cout<<"base constructor"<<'\n';}
virtual void f() = 0;
virtual ~base(){ std::cout<<"base destructor"<<'\n';}
int i;
};

class derv1 : public base{
public:
derv1(){std::cout<<"derv1 constructor"<<'\n';}
void f(){}
~derv1(){std::cout<<"derv1 destructor"<<'\n';}
};

class derv2 : public derv1{
public:
derv2(){i=22;std::cout<<"derv2 constructor"<<'\n';}
~derv2(){std::cout<<"derv2 destructor"<<'\n';}
};

int main(){
base *b1;
derv1 *d1;

derv2 *d2 = new derv2;

b1 = d2;
std::cout<<b1<<" "<<d2->i<<'\n';
delete b1; //LINE1

std::cout<<"after delete b1"<<'\n';
std::cout<<b1->i<<" "<<d2->i<<'\n'; //LINE2
}


I delete b1 at LINE1. Why does LINE2 still output the correct result, 22?

Thanks.

Jack

Hi Jack,

Better practice would be

delete b1;
b1 = NULL;

because you never know how a deleted pointer is going to respond.

Thanks
Lav
 
Pete Becker

Lav said:
Better practice would be

delete b1;
b1 = NULL;

because you never know how a deleted pointer is going to respond.

You never know how null pointers are going to respond, either. In a code
snippet as short as the original there's no point in setting the pointer
to null. You know from inspection that it shouldn't be used after the
delete. There's no need to test it.
 
Puppet_Sock

Lav wrote:
[snip]
Better practice would be

delete b1;
b1 = NULL;

because you never know how a deleted pointer is going to respond.

While this won't *cause* problems (so far as I'm aware), it also
won't catch plenty of common newbie problems.

I try to keep pointers as data members of a class.
I also try to keep it such that the pointer is assigned its
memory in an initialization step such as a ctor or a named
ctor, and deleted in the dtor. This way, my memory leaks
and wild pointers are as localized as I can make them,
and tend to be easier to find and easier to correct.

It's an extension of a principle that I illustrate as follows.
Whenever I put a { in code, I immediately put in the
corresponding } to balance it. Then I put in the stuff
that goes between. So too with new calls. Whenever
I put in a new, I immediately put in the delete that
goes with. Then I am careful to only refer to the mem
allocated while between those two. The easy way to
do that is to lean on the object ctor/dtor process.

Similarly with other resources like file streams, etc.
Socks
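
A minimal sketch of that pattern (class and member names invented here
for illustration): the raw pointer lives as a data member, gets its
memory in the ctor, and gives it back in the dtor, so its valid lifetime
is bracketed the same way a { is bracketed by its }:

#include <cstddef>
#include <iostream>

class buffer_owner {
public:
    explicit buffer_owner(std::size_t n)
        : data_(new int[n]), size_(n) {}     // acquire in the ctor...

    ~buffer_owner() { delete[] data_; }      // ...release in the dtor

    int&        at(std::size_t i)       { return data_[i]; }
    const int&  at(std::size_t i) const { return data_[i]; }
    std::size_t size() const             { return size_; }

private:
    // not copyable: two owners would both delete the same memory
    buffer_owner(const buffer_owner&);
    buffer_owner& operator=(const buffer_owner&);

    int*        data_;   // only ever dereferenced between ctor and dtor
    std::size_t size_;
};

int main() {
    buffer_owner buf(8);            // the new happens here
    buf.at(0) = 42;
    std::cout << buf.at(0) << '\n';
}                                   // the matching delete[] happens here, automatically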
 
Noah Roberts

Puppet_Sock said:
Lav wrote:
[snip]
Better practice would be

delete b1;
b1 = NULL;

because you never know how a deleted pointer is going to respond.

While this won't *cause* problems (so far as I'm aware), it also
won't catch plenty of common newbie problems.

Name one problem that setting the pointer to 0 won't catch but leaving it alone does.
I try to keep pointers as data members of a class.
I also try to keep it such that the pointer is assigned its
memory in an initialization step such as a ctor or a named
ctor, and deleted in the dtor. This way, my memory leaks
and wild pointers are as localized as I can make them,
and tend to be easier to find and easier to correct.

It is much easier to tell that there is a problem when the pointer has
an obviously erroneous value, like 0, instead of a possibly valid one.
It is hard to figure out that the problem is caused by your object
having been deleted if it looks like a valid object. By setting it to 0
you not only increase the likelihood of being able to check whether the
pointer has been deleted before using it, you also make the problem more
obvious when you don't check. Dereferencing a null pointer is also,
though still undefined, much more likely to crash than to do something
completely random, because a read/write at location 0 is always bad and
outside your memory space.

For example, say you have the following code:

delete x;

When it runs you get an explosion. Why? You look at your ptr and it
has some random value...the object appears normal. Why did it blow up?
You trace back a few miles and end up in some OS-specific event loop
and into no-man's land. There is no indication of why it is exploding,
yet it is. Somewhere, in response to some previous event,
/something/ happened to some object related to that instruction...or
maybe it suffered from the kind of heap corruption described below...

What would happen if you had set that value to 0?

Imagine the following code in that scenario:

x->doY(); // doY() writes to a buffer inside an X.

What happens if that pointer was not set to 0? Half the time it
crashes and half the time you get odd heap corruption and you crash in
totally unrelated areas of code; now you need special debugger tools
and libraries to track buffer overruns and heap corruption when it
occurs and not when its effect is felt...this can take hours, days,
weeks.... What if you had set it to 0? Well, on most systems, you
would crash immediately (except in cases like the OP's, where the class
is VERY trivial) and the fact that it happened because the ptr is
invalid would be very obvious. You could then either fix the logic
error that caused you to do this on a deleted pointer, or, if there is
no logic problem and you just need to make sure it never happens:

if (x) x->doY();

There are numerous cases where setting the pointer to 0 saves time by
making bugs more obvious, making intent and purpose clear, and making
double deletions harmless (deleting a null pointer is a no-op). There
are a few situations in which it doesn't matter. There is no situation
in which it is better not to, and there is no way to track what is
happening unless you do.
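
A small sketch of the difference being described (hypothetical names;
the exact crash behaviour is platform-dependent and not guaranteed):

#include <cstddef>
#include <iostream>

struct X {
    char buf[16];
    void doY() { buf[0] = 'y'; }   // writes into the object
};

int main() {
    X* x = new X;
    delete x;
    // Without x = NULL, x still holds the old address, so x->doY() would
    // scribble on freed memory -- it may "work", corrupt the heap, or
    // crash much later in unrelated code.
    x = NULL;

    // With x = NULL, the dangling state is visible and testable, and a
    // forgotten test typically faults immediately at address 0.
    if (x) {
        x->doY();
    } else {
        std::cout << "x already deleted\n";
    }

    delete x;   // deleting a null pointer is a no-op, so the double delete is harmless
}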
 
