Why don't compilers set "p = NULL" automatically after programs do "delete p"?

gary

Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?
 
dragoncoder

So why don't compilers set "p = NULL" automatically after programs do "delete p"?

Because they can't. You would somehow have to pass the address of p in order
to set p to NULL. Of course this is just one aspect of it.
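
To make that concrete, here is a minimal sketch of what "passing the address"
would mean; delete_by_address is a made-up helper for illustration, not
something the built-in delete can do:

#include <cstddef>   // for NULL

// Receives the address of the caller's pointer variable, so it can both
// free the object and null the variable.
void delete_by_address(int **pp)
{
    delete *pp;
    *pp = NULL;
}

Plain "delete p" receives only the value stored in p, so it has no way to
reach (and reset) the variable p itself.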
 
Nan Li

gary said:
Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?


One problem is efficiency. 'p = NULL' takes at least one instruction
to complete. If you delete a pointer in a very tight loop, the cost could
be significant.

Nan
 
Karl Heinz Buchegger

gary said:
Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?

* Because they don't have to.
* Because it creates a false sense of security:

int* p = new int;
int* q = p;

delete q; // Assume q gets set to NULL by the compiler ...
delete q; // ... so this would be fine
delete p; // But this one is not. Just because the compiler set q to NULL
          // does not mean it would do the same with p
 
Nan Li

dragoncoder said:
Because they can't. You will have to pass somehow the address of p to
set p to NULL. Of course this is just one aspect of it.

I think if p is a modifiable lvalue, the compiler would be able to assign
NULL to it (that is almost the same as adding 'p = NULL' after 'delete p'). But
if p is something like a constant, a temporary, or declared const, it
cannot be modified. So in general, the compiler cannot do that.
 
tony_in_da_uk

Hi Gary,

My thoughts:

- In "delete <expression>", the expression doesn't have to be
assignable to - it could be a calculated value, a const value etc.,

- Seems to be a case of Design-by-Contract-like philosophy, where a
facility doesn't presume to do something which may be unnecessary just
to alleviate a programmatic error. This philosophy sometimes
incorporates an argument such as "if we guarantee the second delete is
correct, then programmers lose the likelihood that their program
SIGSEGVs (or equiv), which means they're less likely to realise they
need to correct their code". DbC's basically a load of crap, but I've
heard such arguments.

- Code size and performance. In the dark old days, people genuinely
cared about a few extra bytes for an extra instruction, and a few extra
clock cycles. Though in the last 20 years my home computer's gone from
3.375MHz and 32KB RAM to 2.8GHz and 1GB RAM, not all systems are
grunty, and a few programmers still care.

- More generally, there's an attitude of "do the minimum, and let the
safety-conscious choose what else they'd like to do". Is a simple
assignment of 0 really enough to satisfy you? You don't want a checked
allocation system that throws or aborts on error? C++ lets you write
and use your own allocation routines which reflect your own concerns (see
the sketch after this list).
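
One rough sketch of such a roll-your-own checked scheme (every name here -
checked_new, checked_delete, live - is made up for illustration):

#include <cstdlib>
#include <iostream>
#include <set>

static std::set<void*> live;              // pointers handed out by checked_new

template <typename T>
T* checked_new()
{
    T* p = new T();
    live.insert(p);
    return p;
}

template <typename T>
void checked_delete(T*& p)
{
    if (p != 0 && live.erase(p) == 0) {   // unknown or already-freed pointer
        std::cerr << "checked_delete: bad or repeated delete\n";
        std::abort();
    }
    delete p;
    p = 0;                                // also null the caller's copy
}

Whether such a thing should abort, throw or merely log is a per-project
decision, which is exactly why the language leaves it up to you.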

Cheers,

Tony
 
Ron Natalie

gary said:
Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?
The argument to delete is an rvalue. Delete wouldn't necessarily be
able to change it even if it wanted to.

Further, nulling the pointer would only fix the problem in trivial cases
(i.e., when the pointer value isn't stored in multiple locations).
 
Kaz Kylheku

gary said:
Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}

That is only a textbook example illustrating a double delete. What is
dangerous in the real world are all kinds of hard-to-find bugs that
don't stand out like the above contrived example.

Also, note that /any use whatsoever/ of the value of p after the first
delete p expression is undefined. Not just double deletes. Even
comparing p to another pointer is undefined behavior.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?

Because that would not achieve any benefit for actual programs which
are not textbook examples.

The real problem is that the program may have copies of the pointer in
variables other than just p. These pointers may be hidden throughout
the network of live objects in that program's run-time space.

Assigning NULL to p won't do anything about these other copies.

One such trivial scenario is when deletion is wrapped behind a
function:

void ObjectFactory::Destroy(Object *x)
{
delete x;
}

Assigning NULL to x here won't do anything useful, because the scope is
ending. The caller can still write:

factory->Destroy(obj);
factory->Destroy(obj);

Both x and obj are copies of the same pointer; assigning to local
variable x does nothing about obj still having a copy of that now
invalid value.

Also, the argument to delete might not be a modifiable lvalue.

int *const ptr = new int[10];

delete[] ptr;

ptr = 0; // error, it's a const!

The argument to delete might contain conversions so that it's not an
lvalue:

// p is a (void *), but we know it points to an object of SomeClass

delete (SomeClass *) p;

((SomeClass *) p) = 0; // error, assignment to non-lvalue

Lastly, consider that operator delete can be user-defined. If the default
delete performed this kind of assignment under some conditions, would
user-defined replacements have to do the same?

The delete operator shouldn't be regarded as a deallocating function
but as an annotation which says "the object referenced by this pointer
won't be touched by the program any longer". If the delete operator has
no effect on the value of a variable, then we can ignore the annotation
and use an alternate means of computing the object lifetimes.

Suppose that we replace the global delete operator with one that
does nothing, and add a garbage collector underneath. We suspect that
the program contains bugs, such as uses of objects that have been
deleted, and multiple deletes. But those problems are ironed out by
the garbage collector. An assigning delete would interfere with this
solution: it would leave the suspicion that some pointer overwritten
with null by delete will later be dereferenced.
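
As a rough sketch of that setup, assuming the Boehm conservative collector
(libgc) is linked in - GC_MALLOC and <gc.h> come from that library, and this
is just one possible way to wire it up:

#include <gc.h>      // Boehm collector
#include <new>

void* operator new(std::size_t n) throw(std::bad_alloc)
{
    void* p = GC_MALLOC(n);               // collector-managed storage
    if (p == 0)
        throw std::bad_alloc();
    return p;
}

void operator delete(void*) throw()
{
    // deliberately empty: the collector reclaims unreachable objects, so a
    // repeated delete, or a use of a still-reachable "deleted" object, is no
    // longer fatal
}

(The array forms, operator new[] and operator delete[], would need the same
treatment.)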
 
savagesmc

You should check out what are called "smart pointers". For example, the
STL has std::auto_ptr<>, and the Boost library has 4 or 5 different
types of smart pointers, each useful in a specific scenario.

Those smart pointers are the standard way of wrapping naked pointers in
a safe & correct manner.

I know that some of the Boost smart pointers are going to be adopted in
the next version of the C++ standard, so that should tell you something
about how important and relevant they are.

Check them out.

Steve
 
Mike Smith

gary said:
Hi,
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

p certainly is deleted if it == NULL. However, the C++ Standard
defines deleting a NULL pointer as having no effect.
 
Andrey Tarasevich

gary said:
We all know the code below is dangerous:
{
int *p = new int;
delete p;
delete p;
}
And we also know that compilers do not delete p if p == NULL.

So why don't compilers set "p = NULL" automatically after programs do "delete p"?
...

Because it wouldn't really achieve anything. In real life, the problem you
were trying to describe with the above example usually looks as follows:

int* p1 = new int;
int* p2 = p1;

delete p1;
delete p2;

Setting one pointer to null won't help at all.
 
Calum Grant

Nan said:
One problem is efficiency. 'p = NULL' takes at least one instruction
to complete. If you delete a pointer in a very tight loop, the cost could
be significant.

Very true. But isn't this a case of over-optimization? I mean, the
number of CPU cycles spent in the memory allocator will be much greater than
the cost of setting p = 0.

I think for normal application code, pointers are best avoided. It's
much better to use iterators, containers and smart pointers, since then
all these problems are taken care of.
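
For example, a container owns its own storage, so there is simply no delete
to get wrong:

#include <vector>

void g()
{
    std::vector<int> v(10);   // the vector owns storage for ten ints
    v[0] = 42;
}                             // storage released automatically; no delete,
                              // so no double delete and no dangling pointer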

Calum
 
Julián Albo

gary said:
So why don't compilers set "p = NULL" automatically after programs do "delete p"?

Several reasons:

p can be const, can be an expression...

Many times p will go out of scope just after delete, no need to modify it.

If you want to always assign NULL to a pointer being delete'd, you can
easily write a template function that does both things, like the sketch below.
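
A minimal sketch of such a template (delete_and_null is just an illustrative
name):

template <typename T>
void delete_and_null(T*& p)
{
    delete p;
    p = 0;      // the reference parameter lets us reset the caller's pointer
}

// Usage:
//   int* p = new int;
//   delete_and_null(p);
//   delete_and_null(p);   // harmless now: deleting a null pointer has no effect

Of course this only helps with the variable actually passed in, as the other
replies point out.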
 
