delete a pointer


Kai-Uwe Bux

Balog said:
Of course in itself it doesn't.

Then, we basically agree.

However, leaving the pointer variable hanging around after deletion ensures
that you can't have it.

That triggers the questions: why / in which cases would I want that
invariant? what could I do in order to maintain it? do those methods scale
with regard to refactoring code and extending the program? ...

Only when concrete answers to those questions are proposed, in the context
of a given project, can a rational decision be made as to whether setting a
pointer variable to 0 after deletion should be part of the coding rules for
the project.

Coincidentally, my code happens to maintain the invariant (let's say at
entry and exit of functions) and nowhere does it employ the strategy of
setting a pointer to 0 after deletion. The most frequent case in my code
base is that deletion happens in a destructor and the pointer variable goes
out of scope after the destructor completes.

Yeah, programming is hard. Correct programming even more so. We need
methods, attention, and a good deal of luck too.

That observation is no grounds for doing fishy things without even an
attempt at explaining why...

Nobody proposed doing "fishy things"; and with regard to an explanation /
rationale for the chosen coding rules, more context would be required.

And the lack of them suffices even less. As does ignoring them on just a whim.

You sound angry, or let's phrase that more positively: you sound passionate
about this point. I would prefer a more sine ira et studio approach to the
problem, but that's just me. If there are concrete experiences leading to
strong feelings about this point, I would be interested in learning about
them.


Best

Kai-Uwe Bux
 

Balog Pal

James Kanze said:
It doesn't buy you that last invariant, and the strongest
argument against using it is that it creates a false sense of
security.

It does buy it -- just not alone. While leaving the invalid pointer hanging
around definitely prevents it. And I see no benefit, just added danger, in
having UB.
The proof being that someone of your experience and
knowledge seems to be misled by it.

Misled how?
(That last sentence is
*not* meant to be ironic. I've seen enough of Balog's postings
to know that he really does know C++.)

Indeed, and we even work to similar standards of correctness
requirements. That is why I'm surprised by this thread, which reads clearly
as 'forget the rule about NULL-ing and do whatever', and not as 'it can be
a start but is not enough, and there can also be ....', if that was the
original intention.

You have likely read my personal coding policy a few times; it is close to
"delete is banned from user code" (meaning delete is restricted to a few
library classes implementing RAII/RRID-like controllers; the rest of the
world must use those controllers as locals or members...)
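
A minimal sketch of such a controller (illustrative names, not the actual library code):

    // The only place `delete` appears is inside this library class; user
    // code holds an Owner<T> as a local or a member and never deletes.
    template <typename T>
    class Owner {
    public:
        explicit Owner(T* p = 0) : p_(p) {}
        ~Owner() { delete p_; }              // deletion confined to the library

        T& operator*() const { return *p_; }
        T* operator->() const { return p_; }
        T* get() const { return p_; }

    private:
        Owner(const Owner&);                 // non-copyable: exactly one owner
        Owner& operator=(const Owner&);

        T* p_;
    };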

But I don't go out against a NULL-ing policy just because it does not apply
to my more perfect infrastructure. I guess that's because I was unfortunate
enough to see plenty of rotten legacy code where just that policy,
sprinkled in blindly, would make a pretty decent improvement.

It wouldn't solve all the problems, but it would at least move some out of UB
land into a correct case (e.g., preventing a double free) or a clear
null-pointer crash (which many practical systems give reliably).
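
A minimal illustration of that effect (a sketch, assuming the nulled pointer is the only one referring to the object):

    #include <iostream>

    struct Widget { void poke() { std::cout << "poke\n"; } };

    int main() {
        Widget* w = new Widget;
        delete w;
        w = 0;      // without this line, the next delete is undefined behavior
        delete w;   // deleting a null pointer is well-defined: a no-op
        // A stray w->poke() would still be a bug, but on many practical
        // systems dereferencing null crashes reliably instead of silently
        // corrupting memory.
        return 0;
    }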

There are levels between trash and perfect.
The only way to ensure that that last invariant holds is to use
garbage collection and ban the unary & operator.

I have never used GC and never thought of banning & either, though I am pretty
sure you can't find pointer-related UB in my programs....
(In fact, if
you really mean it, you'd have to also ban destructors and all
functions with destructor-like semantics: dispose, etc. Which
is why even languages like Java, which use garbage collection
and don't allow local objects, can't really enforce this
invariant.)

I think I lost your chain of thought.

Creating a design where lifetime and ownership issues are correct is not
trivial, but not impossible either. In C++ you can go pretty far before
needing to pass around pointers. And the small population you do need CAN be
kept valid.
 

Balog Pal

James Kanze said:
Not that I'm convinced that the standard even achieves this.
Off hand, if I were reformulating this, I'd say that there are
four categories of pointers: pointers to valid objects, pointers
to one past the end of an array, null pointers and invalid
pointers.

I read the standard that very way.
And then reformulate much of the rules concerning
pointers in terms of these categories. For the issue in
question: I'm pretty sure that the intent of the standard is
that an lvalue to rvalue conversion of an invalid pointer
(according to my definition immediately above) results in
undefined behavior. But my certitude is based at least
partially on discussions I've had with some of the original
authors of the C90 standard.

I recall a bunch of threads on that on csc, clcm and other forums. The
base question was normally like 'what counts as *using* a pointer'. And the
conclusion every time, IIRC, was that any attempt to inspect the value,
including passing it as an argument, using it in a comparison, etc., is such
a use. In standardese terms I'd probably tie it to lvalue to rvalue
conversion; that would cover most situations.

3.7.3.2 p4 states that any use of an invalid pointer is undefined. That
much is clear; too bad "use" is indeed underspecified.
(And I rather think that the
motivation for this undefined behavior is irrelevant today, if
it ever was relevant.)

Francis Glassborow did show a code fragment from a WIN16 compilation where a
pointer was passed using the instructions

    LES BX
    push es
    push bx

that actually crashed with an invalid selector loaded. Stupid as it sounds,
the compiler is allowed to generate such code. We can hardly exclude,
speculatively, all situations where pointers are passed in something other
than general registers that accept any bit pattern.

So better to keep to the rule of not having invalid pointers around. I still
fail to see any real benefit in having them. Keeping to a fully valid state
feels so much cleaner.
 

Balog Pal

Kai-Uwe Bux said:
That triggers the questions: why / in which cases would I want that
invariant? what could I do in order to maintain it? do those methods scale
with regard to refactoring code and extending the program? ...

That is IMO a strange question. It differs only slightly from "why would you
want the program to be in a correct state".

Probably we can agree that the difference-set is restricted to the "unused"
pointers that are members of living objects, plus a few-line span for locals.

I keep that population to a minimum, and see no reason why having all those
few pointers set to NULL would hurt. Just like I don't keep spoiled food
around the house.
Only when concrete answers to those questions are proposed, in the context
of a given project, can a rational decision be made as to whether setting a
pointer variable to 0 after deletion should be part of the coding rules for
the project.

Coincidentally, my code happens to maintain the invariant (let's say at
entry and exit of functions) and nowhere does it employ the strategy of
setting a pointer to 0 after deletion. The most frequent case in my code
base is that deletion happens in a destructor and the pointer variable goes
out of scope after the destructor completes.

If it disappears, then there is no need for nulling. (However, optimizers
probably remove the assignment anyway, just as in the cases when you set a
different value to the pointer a few lines later... back with C we used
macros that did free-and-set-null almost exclusively, without observing
performance problems.)
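
For reference, a C++ rendering of that old C idiom might look like this (a sketch, not anyone's actual macro):

    // Deletes the pointee and leaves no dangling value behind; safe to
    // call on an already-null pointer. Used wherever a bare delete would
    // otherwise appear.
    template <typename T>
    void destroy(T*& p) {
        delete p;   // deleting a null pointer is a no-op
        p = 0;
    }
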
Nobody proposed doing "fishy things"; and with regard to an explanation /
rationale for the chosen coding rules more context would be required.

IMO there is a fair (even if not perfect) rationale for setting an invalid
pointer to NULL, and I would be interested in the rationale for not setting
it. Honestly it sounds like 'ignore the seat belts, as they won't detect a
bomb attached to your ignition'.
You sound angry, or let's phrase that more positively: you sound passionate
about this point.

Not really, I'm more like baffled.
 

Balog Pal

James Kanze said:
The only time this rule makes any sense at all is if the pointer
will be reused, but not immediately, and you have to check to
determine whether it is already in use before reusing it.

Even if you don't check the value, just assign, you can hit UB by making a
copy of the object with that pointer member.
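
A sketch of the situation described (illustrative types, not from the original post):

    struct Holder {
        int* p;
    };

    int main() {
        Holder a = { new int(42) };
        delete a.p;       // a.p now holds an invalid pointer value
        Holder b = a;     // the member-wise copy reads that invalid value
                          // (an lvalue-to-rvalue conversion): formally UB,
                          // even though nothing is dereferenced
        (void)b;
        return 0;
    }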
 

Kai-Uwe Bux

Balog said:
That is IMO a strange question. It differs only slightly from "why would you
want the program to be in a correct state".

It appears that you are ignoring two out of three questions. As for the
first, it's not equivalent to correctness of anything. It's more the
question of whether you want to declare the invariant to be part of what
counts as a "correct state".
Probably we can agree that the difference-set is restricted to the
"unused" pointers that are members of living objects, plus a few-line span
for locals.

I keep that population to a minimum, and see no reason why having all those
few pointers set to NULL would hurt. Just like I don't keep spoiled food
around the house.


If it disappears, then there is no need for nulling. (However, optimizers
probably remove the assignment anyway, just as in the cases when you set a
different value to the pointer a few lines later... back with C we used
macros that did free-and-set-null almost exclusively, without observing
performance problems.)


IMO there is a fair (even if not perfect) rationale for setting an invalid
pointer to NULL, and I would be interested in the rationale for not setting
it. Honestly it sounds like 'ignore the seat belts, as they won't detect a
bomb attached to your ignition'.
[...]

:)

The question has two aspects, at the very least: (a) should nulling pointers
upon deletion be part of an overall strategy to ensure resource correctness
of the program? and (b) could nulling pointers be added on top of any other
strategy without harm?

As for (a), my experience is that RAII and deliberate use of smart pointers
(of various flavors) renders nulling pointers superfluous and still ensures
that all non-null pointers are valid. As a by-product, a null value of a
pointer usually has a more specific meaning (e.g., indicating the leaf of a
data structure).
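
For instance, in a tree the null value carries information of its own (a sketch, not from the original post):

    struct Node {
        int   value;
        Node* left;    // null here *means* something: no left child;
        Node* right;   // it is never a stale leftover from a delete
    };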

Now for (b), I don't think it's always a harmless addition. Of course, if
the code is correct, it will stay correct. However, we have to think of bugs.
In my code, a double deletion is almost certainly indicative of a deeper
problem (saying that I lost track of which pointees exist). I prefer seeing
valgrind spot that double deletion to the double deletion turning into a
silent no-op. That allows me to fix the bug at its root and probably
improve my understanding of the code (potentially leading to the discovery
of more subtle bugs). However, that depends on the tool-chain I use to test
my programs. Given the environment and my other coding idioms, setting a
pointer to 0 after deletion would do more harm than good on average.

A different kind of bug is dereferencing an invalid pointer. I have been
plagued less by these critters. Hence, I am more sensitive to double
deletions and memory leaks. If my code base were different and my coding
style more prone to this kind of bug, then the cost-benefit ratio of nulling
pointers upon deletion would probably change.

(Being a little paranoid, I have seriously considered creating a smart
pointer just for debugging. It would detect double deletion, dereferencing
invalid pointers, and using pointers polymorphically with classes that lack
virtual destructors.)
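
A minimal sketch of the double-deletion check of such a debugging pointer (my illustration of the idea, not Kai-Uwe's actual code; the other checks would hang off the same "live" flag):

    #include <cassert>

    template <typename T>
    class dbg_ptr {
    public:
        explicit dbg_ptr(T* p = 0) : p_(p), live_(p != 0) {}

        T& operator*() const {
            assert(live_ && "dereference through dead/empty pointer");
            return *p_;
        }
        T* operator->() const {
            assert(live_ && "dereference through dead/empty pointer");
            return p_;
        }

        void destroy() {
            assert(live_ && "double deletion detected");
            delete p_;
            p_ = 0;
            live_ = false;
        }

    private:
        T* p_;
        bool live_;
    };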


Best

Kai-Uwe Bux
 

Balog Pal

Alf P. Steinbach said:
First, you can have more than one pointer to the same object, and in that
case the assignment of 0 just lies.

Lies how? That pointer will not hold an invalid value, for sure. The clones,
if they exist, are separate entities and are handled elsewhere.
Secondly, the only purpose of the 0 is to check it. Checking it means that
the code holds on to that pointer variable so that there's a need for
checking it.

A copy operation can also use the pointer without checking, and cause UB.
This in turn means fewer invariants available and higher complexity, which
means higher probability of bugs, not lower.

Huh? This sounds like the
more we study -> more we know
more we know -> more we forget
more we forget -> less we know
so why bother with study?
And so it *can* give a false sense of security.

Okay, you can make up a case with that false sense, while I can cite a ton
of accidents that caused actual bugs that would have been prevented by
consistent nulling. Could we switch to pragmatic mode, please? Invalid
pointers DO impose danger. Real danger.

While "sense of security" connected to programming, especially with
languages like C or C++ I just can't recall from practice, or even mentioned
as actual experience. In the professional scene at least.

Or when there are other pointers to the object. Or when the quality is
high enough that the costs of assigning 0 outweigh the marginal
advantage.

What cost? I'd bet that when the assignment is redundant, it is removed by
the optimizer. And for the few cases where it can't be optimized away, yet
is measurable as a difference, there exists a better design that eliminates
the particular pointer and runs even faster.
The costs include a false sense of security, more code, and being steered
towards unsafe raw pointer handling.

Objection, leading.

The frequency of raw pointer usage is a completely unrelated story. The
scope was what to do with raw pointers after deletion. Concluding that it is
good practice to set them to NULL, or to DEADBEEF, or to leave them alone,
or whatever, is *not* by any means a suggestion to use them instead of
whatever else, or at all.
 

Öö Tiib

Balog said:
"Kai-Uwe Bux" <[email protected]>
Probably we can agree that the difference-set is restricted to the
"unused" pointers that are members of living objects, plus a few-line span
for locals.
I keep that population to a minimum, and see no reason why having all those
few pointers set to NULL would hurt. Just like I don't keep spoiled food
around the house.
If it disappears, then there is no need for nulling. (However, optimizers
probably remove the assignment anyway, just as in the cases when you set a
different value to the pointer a few lines later... back with C we used
macros that did free-and-set-null almost exclusively, without observing
performance problems.)
IMO there is a fair (even if not perfect) rationale for setting an invalid
pointer to NULL, and I would be interested in the rationale for not setting
it. Honestly it sounds like 'ignore the seat belts, as they won't detect a
bomb attached to your ignition'.

[...]

:)

The question has two aspects, at the very least: (a) should nulling pointers
upon deletion be part of an overall strategy to ensure resource correctness
of the program? and (b) could nulling pointers be added on top of any other
strategy without harm?

As for (a), my experience is that RAII and deliberate use of smart pointers
(of various flavors) renders nulling pointers superfluous and still ensures
that all non-null pointers are valid. As a by-product, a null value of a
pointer usually has a more specific meaning (e.g., indicating the leaf of a
data structure).

Most raw pointers are indeed used for navigation around the data, and
so 0 is the edge of navigation. Smart pointers are often too dumb for
such cases. Another usage of raw pointers is performance optimization
where a smart pointer was too slow. The rest of the raw pointer usages
might well be considered premature optimization.

Let's take shared_ptr as the smart pointer example. Instead of
deleting its pointee, people may reset() the pointer at some spot.
After that it is NULL. If they reset() it again later, then yes,
valgrind detects nothing there either. Let's say developers have to
refactor it into a raw pointer instead of shared_ptr because of
performance. Setting it to NULL after deleting does not make the raw
pointer perfect, but at least it behaves a tiny bit like shared_ptr.
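
The parallel, spelled out (a sketch using std::shared_ptr; the same holds for boost::shared_ptr):

    #include <memory>

    int main() {
        std::shared_ptr<int> sp(new int(1));
        sp.reset();   // pointee destroyed, sp is now null
        sp.reset();   // harmless: resetting a null shared_ptr does nothing

        int* rp = new int(1);
        delete rp;
        rp = 0;       // mirrors the shared_ptr above:
        delete rp;    // deleting the null pointer is a harmless no-op
        return 0;
    }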

It is sad that the interface of some smart pointers causes too much
noise in code. Setting a raw pointer to NULL is clean and tidy code
compared with serious weak_ptr usage.
Now for (b), I don't think it's always a harmless addition. Of course, if
the code is correct, it will stay correct. However, we have to think of bugs.
In my code, a double deletion is almost certainly indicative of a deeper
problem (saying that I lost track of which pointees exist).

Is the existence of pointees meta-information? Do you keep track of
the pointees by using something else, not by using the pointer values that
point at them? If this is meta-information, then I am not sure how well
it scales.
I prefer seeing valgrind spot that double deletion to the double deletion
turning into a silent no-op.

This sounds like some sort of "single new, single delete" idiom being
used, so that multiple places of deletion are design bugs. I do not
follow any such SESE-like idioms. The most important objects keep
resources more valuable than the memory in which they are located
(like files, hardware ports, operating system handles), and destroying
them as soon as it is clear they are not needed anymore is often a
good idea. Everybody loves that "resource acquisition is
initialization" thingy. Why do some seemingly hate its "resource
releasing is uninitialization" brother? :D
(Being a little paranoid, I have seriously considered creating a smart
pointer just for debugging. It would detect double deletion, dereferencing
invalid pointers, and using pointers polymorphically with classes that lack
virtual destructors.)

Double deletion happens to me when there are different pointers to the
same object, so nulling one does not invalidate the other. Dereferencing
invalid pointers happens rarely to me too, but I cannot say the same
about my team members. For detecting a delete of a polymorphic pointer
to a class without a virtual destructor there are some static analysis
tools.

Btw ... a polymorphic shared_ptr actually calls the correct destructor
itself if the most-derived object pointer was passed to it when
constructing. Protected non-virtual destructors work better with
shared_ptr. This might also be bad and hide some design flaw from you?
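
A sketch of that behaviour (illustrative types):

    #include <memory>

    struct Base {
    protected:
        ~Base() {}        // non-virtual and protected: a bare
    };                    // `delete (Base*)p` will not even compile

    struct Derived : Base {
        ~Derived() {}     // runs correctly anyway, see below
    };

    int main() {
        // shared_ptr captures at construction that the pointee is a
        // Derived, so its stored deleter invokes ~Derived even though
        // the pointer is held as a shared_ptr<Base>.
        std::shared_ptr<Base> p(new Derived);
        return 0;
    }   // ~Derived runs here despite the non-virtual ~Base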
 

James Kanze

It does give that "this pointer does not propagate dangling
values" sense of security. How is this a false sense?

Because that's not normally a problem. (Unless you only have
one pointer in your application.)
Assigning 0 to a pointer is a valid form of reusing it. 0 is a
useful value for a pointer. A value that points at unavailable
places causes bugs when it is used.

Assigning null to a pointer is a valid form of reusing it. No
problem with that. So is assigning some other value to it, e.g.:

    delete p;
    p = new Something;

In this case, assigning null to p between the two statements
might even be useful, if the new fails.
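
Spelled out as a sketch (Something stands in for any type):

    struct Something {};

    void replace(Something*& p) {
        delete p;
        p = 0;               // if the next line throws std::bad_alloc, p is
        p = new Something;   // null rather than dangling, so cleanup code
    }                        // elsewhere can still safely `delete p`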

But how often do you reuse pointers?
It seems like there is just a fixed set of cases. In some cases
assigning 0 is pointless and in other cases it is fruitful.
The cases where the rule does not make sense are when the pointer
is immediately reused, or leaves scope, or is destroyed itself.

The problem is that most people who propose the rule aren't
thinking in terms of reuse. They're arguing safety.
 

James Kanze

[...]
Sure. Then I attempt to set the other pointers to that object
also to 0. The NULL value shows that these pointers point at
nothing now.

That may or may not be valid. It depends on where those other
pointers are. (If they're in a map, for example, it's almost
certainly better to remove them from the map than to set them to
null.)
Yes. There are no ways to check if a pointer has a valid value. Or
are there?

In general, not portably, not in reasonable time, and probably
not with 100% accuracy.

Ban the unary & operator, and use garbage collection, and it's
fairly easy. But that first one can sometimes be a very awkward
restriction.

[...]
What is the safer raw pointer handling? I see what you mean by
false sense of security. People think that setting a pointer to
NULL helps with more things than it does.

What people?

The only safe way of handling pointers (raw or otherwise) is
good design. And code review to ensure that the design was
adhered to.
 

James Kanze

I read the standard that very way.

Up until that point, yes. Although I think you have to read
quite a bit of the standard to get to that point---the standard
never actually says it in so many words.
I recall a bunch of threads on that on csc, clcm and other
forums. The base question was normally like 'what counts as
*using* a pointer'. And the conclusion every time, IIRC, was that
any attempt to inspect the value, including passing it as an
argument, using it in a comparison, etc., is such a use. In
standardese terms I'd probably tie it to lvalue to rvalue
conversion; that would cover most situations.

To inspect the value, you need an lvalue to rvalue conversion.
3.7.3.2 p4 states that any use of an invalid pointer is
undefined. That much is clear; too bad "use" is indeed
underspecified.

Yes. At one point, I think it (or something else) said that
using the value of an invalid pointer results in undefined
behavior. Which is more or less what I think is wanted
(although I find the formulation using lvalue to rvalue
conversion more precise).
Francis Glassborow did show a code fragment from a WIN16
compilation where a pointer was passed using the instructions

    LES BX
    push es
    push bx

that actually crashed with an invalid selector loaded. Stupid
as it sounds, the compiler is allowed to generate such code.

The C standard was formulated expressly with this case in
mind. Except... what happens in the above code if the pointer
is null? I somehow doubt that 0 is a valid selector (at least
in user code); it shouldn't be.
We can hardly exclude, speculatively, all situations where
pointers are passed in something other than general registers
that accept any bit pattern.
So better to keep to the rule of not having invalid pointers
around. I still fail to see any real benefit in having them.
Keeping to a fully valid state feels so much cleaner.

Nobody's arguing that you should keep invalid pointers around.
Generally, however, after a delete, you don't keep the pointer
around. And making that one pointer no longer invalid doesn't
solve any real problem, since all of the other pointers to the
object remain invalid.
 

James Kanze

It does buy it -- just not alone.

It does buy it, but it doesn't. So which is it?
While leaving the invalid pointer hanging around definitely
prevents it.

Who's talking about leaving an invalid pointer hanging around?
And I see no benefit, just added danger, in having UB.

And I see no added danger.
Misled how?

Well, you said that it bought the program-wide invariant of "all
pointers are either valid or NULL". Which is more than just
misleading---it's downright false.
Indeed, and we even work to similar standards of
correctness requirements. That is why I'm surprised by this
thread, which reads clearly as 'forget the rule about NULL-ing
and do whatever'.

There's no "do whatever" in there. The argument is: forget the
rule about NULL-ing, because it's not sufficient, and whatever
you do that is sufficient will render it obsolete.

[...]
I have never used GC and never thought of banning & either, though
I am pretty sure you can't find pointer-related UB in my
programs....

I was speaking about a more or less mechanical way of ensuring
the validity of all pointers, at all moments, despite design and
coding errors. Certainly a carefully designed and coded program
will not have UB, pointer related or otherwise.
I think I lost your chain of thought.

The fact that garbage collection prevents memory from being
recycled as long as there is a pointer to it doesn't guarantee
that the pointed-to object is still valid. In a lot of
applications, object lifetime depends on external events.
Creating a design where lifetime and ownership issues are
correct is not trivial, but not impossible either. In C++ you
can go pretty far before needing to pass around pointers.
And the small population you do need CAN be kept valid.

It depends on the application. If you need to create and
destroy objects in response to external events, and navigate
between such objects, you need pointers, and you need to pass
them around. But that doesn't mean you cannot avoid undefined
behavior. (In such cases, the observer pattern goes a long way
to solving many of the issues.)
 

Öö Tiib

    [...]
Sure. Then I attempt to set the other pointers to that object
also to 0. The NULL value shows that these pointers point at
nothing now.

That may or may not be valid.  It depends on where those other
pointers are.  (If they're in a map, for example, it's almost
certainly better to remove them from the map than to set them to
null.)

Yes, of course it is better either to get rid of all pointers to
the object or to set their value to a detectable 0. I usually use
smart pointers in standard containers (like maps) unless it has been
proven that these slow performance.
In general, not portably, not in reasonable time, and probably
not with 100% accuracy.

Ban the unary & operator, and use garbage collection, and it's
fairly easy.  But that first one can sometimes be a very awkward
restriction.

I can sometimes live without the & operator, but garbage collection does
not often suit me. It is important to control exactly when some things
are destroyed (because of the precious contents they have). By some
whim of fate, the objects with precious contents tend to also be the
ones whose reference more than one other object needs to know. Relying
on garbage collection leads to two-phase destruction or an additional
wrapper layer and all the code bloat that it causes. Reference
counting is more helpful, since it lets me hunt down the references and
reset them when precious things must be released.

There sure are domains where garbage collection suits better, but
again a terrible whim of fate ensures that Java teams are used there and
I get only the dangerous jobs.
    [...]
What is the safer raw pointer handling? I see what you mean by
false sense of security. People think that setting a pointer to
NULL helps with more things than it does.

What people?

People who set just a single pointer to NULL after delete but forget
that it helps only with that single pointer. It is often enough,
because most of the time there is only a single pointer pointing at
the object, but when there is not, it may hit hard. The same applies
to other delete-like things that invalidate iterators, references,
and whatever other pointer-like things.
The only safe way of handling pointers (raw or otherwise) is
good design.  And code review to ensure that the design was
adhered to.

Amen to that.
 

Jeremy

Hi,

When my snob co-worker is code-reviewing my code,
he uses an expression on the following code.

delete a;     <== my code

//His comments : You must set the ptr values back to NULL after delete.

I know that it is good practice, after deleting a pointer variable, to
set it to NULL.
But my question is: is it required?
I hope somebody shows me the C++ delete rules on this.

TIA

I know this has been talked to death, but as a final note: if you use
a static analysis tool and you don't set the pointer to null after
deletion, you will likely get an error telling you to do so, unless
you specifically suppress the message.
 

James Kanze

On 09.05.2010 14:38, * Öö Tiib:
[...]
It does give that "this pointer does not propagate
dangling values" sense of security. How is this a false
sense?
First, you can have more than one pointer to the same
object, and in that case the assignment of 0 just lies.
Sure. Then I attempt to set the other pointers to that
object also to 0. The NULL value shows that these pointers
point at nothing now.
That may or may not be valid. It depends on where those
other pointers are. (If they're in a map, for example, it's
almost certainly better to remove them from the map than to
set them to null.)
Yes, of course it is better either to get rid of all pointers
to the object or to set their value to a detectable 0. I usually
use smart pointers in standard containers (like maps) unless it
has been proven that these slow performance.

If you adopt the "classical" solution based on
boost::shared_ptr, and use weak pointers in the map, you'll leak
memory. Smart pointers aren't an answer to everything. In cases
where the lifetime is deterministic, you really need the
observer pattern, to remove the pointer from the map rather
than just nulling it.
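
A minimal sketch of that observer step (illustrative names; a real design would route this through a proper observer interface):

    #include <map>
    #include <string>

    class Session;
    std::map<std::string, Session*> registry;   // non-owning index

    class Session {
    public:
        explicit Session(const std::string& id) : id_(id) {
            registry[id_] = this;
        }
        ~Session() {
            registry.erase(id_);   // the observer step: the map entry is
        }                          // removed, never left null or dangling

    private:
        std::string id_;
    };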

[...]
I can sometimes live without the & operator, but garbage
collection does not often suit me.

It depends on what you're doing, and what you want garbage
collection to do.
It is important to control exactly when some things are
destroyed (because of the precious contents they have).

Which is independent of garbage collection. Except that garbage
collection can be used to ensure that any access to the object
after you've destroyed it causes an immediate program failure,
rather than random behavior.
By some whim of fate, the objects with precious contents tend
to also be the ones whose reference more than one other object
needs to know. Relying on garbage collection leads to
two-phase destruction or an additional wrapper layer and all the
code bloat that it causes.

I don't follow you there. Garbage collection has no influence
as to when the object is destroyed. The two issues are (or
should be) orthogonal.
Reference counting is more helpful, since it lets me hunt down
the references and reset them when precious things must be
released.

Again, I don't follow you. Most of the time, reference counting
is used as a slow and slightly broken form of garbage
collection.
There sure are domains where garbage collection suits better,
but again a terrible whim of fate ensures that Java teams are
used there and I get only the dangerous jobs.

:). If something has to work, you don't use Java. On the
other hand, I've used the Boehm collector once or twice in
applications which had to work. C++ with garbage collection
works well. And if you want to be sure that it either works or
crashes (rather than getting random results), you can't do it
without garbage collection.
[...]
What is the safer raw pointer handling? I see what you mean by
false sense of security. People think that setting a pointer to
NULL helps with more things than it does.
What people?
People who set just a single pointer to NULL after delete but
forget that it helps only with that single pointer. It is
often enough, because most of the time there is only a single
pointer pointing at the object

Usually, when there is just a single pointer, that pointer goes
out of scope immediately after the delete, so setting it to null
is just wasted effort.
 

James Kanze

On May 7, 7:52 am, Back9 <[email protected]> wrote:

[...]
I know this has been talked to death, but as a final note: if
you use a static analysis tool and you don't set the
pointer to null after deletion, you will likely get an error
telling you to do so, unless you specifically suppress the
message.

What static analysis tool is broken to that point?
 

Öö Tiib

On 09.05.2010 14:38, * Öö Tiib:
    [...]
It does give that "this pointer does not propagate
dangling values" sense of security. How is this a false
sense?
First, you can have more than one pointer to the same
object, and in that case the assignment of 0 just lies.
Sure. Then I attempt to set the other pointers to that
object also to 0. The NULL value shows that these pointers
point at nothing now.
That may or may not be valid.  It depends on where those
other pointers are.  (If they're in a map, for example, it's
almost certainly better to remove them from the map than to
set them to null.)
Yes, of course it is better either to get rid of all pointers
to the object or to set their value to a detectable 0. I usually
use smart pointers in standard containers (like maps) unless it
has been proven that these slow performance.
If you adopt the "classical" solution based on
boost::shared_ptr, and use weak pointers in the map, you'll leak
memory.  Smart pointers aren't an answer to everything.  In cases
where the lifetime is deterministic, you really need the
observer pattern, to remove the pointer from the map rather
than just nulling it.

Valuable debugging tools: shared_ptr::unique() and
shared_ptr::use_count().
I use shared_ptr in the map anyway, mostly for its more convenient
interface compared to weak_ptr.
Observer pattern, events and/or signals are also useful, of course,
but like smart pointers they cost something; everything does.
    [...]
I can sometimes live without the & operator, but garbage
collection does not often suit me.

It depends on what you're doing, and what you want garbage
collection to do.

Actually, I perhaps tried GC too many years ago, and something has
changed there. I found myself explicitly deallocating everything
there was. As a debugging tool to detect memory leaks it was a bit
too expensive and jumpy.
Which is independent of garbage collection.  Except that garbage
collection can be used to ensure that any access to the object
after you've destroyed it causes an immediate program failure,
rather than random behavior.

OK, now that sounds very good. I must ensure that there are no such
things. A write-access check I already have, but it is a debug/test-time
check. Does it really work with read access too? The one that I tried
was certainly not ensuring that.
I don't follow you there.  Garbage collection has no influence
as to when the object is destroyed.  The two issues are (or
should be) orthogonal.

Yes. So I have to track down and reset or kill all the pointers too.
(To achieve it I may signal to the pointers or make the pointers
observe the object.) Which is what I do. So I do not see how garbage
collection helps me, unless I use two layers or phases of destruction.
Again, I don't follow you.  Most of the time, reference counting
is used as a slow and slightly broken form of garbage
collection.

I use reference-counting-like pointers with run-time debugging
attached. One should know how to reach everybody who has a
reference to the object under the axe, so if reaching them fails
then there is a bug. I probably have to revisit garbage collection
and see what is new there and how it can help me. Thanks for
pointing out that there might be things that I have overlooked.
:).  If something has to work, you don't use Java.  On the
other hand, I've used the Boehm collector once or twice in
applications which had to work.  C++ with garbage collection
works well.  And if you want to be sure that it either works or
crashes (rather than getting random results), you can't do it
without garbage collection.

Yes, the few who need quality (no matter what) pay really well ... but
ensuring that there is quality is tricky and dangerous too. If you look
into shops, then a lot of things are sold simply to lie to the customer.
It does not have to work. It has to break, to prove to the customer that
he needs a better one. The largest amounts of money are made exactly
like that.
    [...]
What is the safer raw pointer handling? I see what you mean by
false sense of security. People think that setting a pointer to
NULL helps with more things than it does.
What people?
People who set just a single pointer to NULL after delete but
forget that it helps only with that single pointer. It is
often enough, because most of the time there is only a single
pointer pointing at the object

Usually, when there is just a single pointer, that pointer goes
out of scope immediately after the delete, so setting it to null
is just wasted effort.

You should see the code of the applications (and even operating systems)
in the gadgets that you use every day, often without noticing much.
For example, you pay with a credit card using an awful row of
applications between you and your bank. One peek immediately breaks all
immersion of how things should be. Despite that, it is *your real money*
flowing in the wires there. "Set pointer to null after delete" is really
a good suggestion, nothing to do about it. Another equally good one (but
it takes wizard rank to have the right to say it) is "by this design you
may not delete here. Leak here until you fix your design".
 

James Kanze

Actually, I perhaps tried GC too many years ago, and something
has changed there. I found myself explicitly deallocating
everything there was. As a debugging tool to detect memory
leaks it was a bit too expensive and jumpy.
OK, now that sounds very good. I must ensure that there are no
such things. A write-access check I already have, but it is a
debug/test-time check. Does it really work with read access
too? The one that I tried was certainly not ensuring that.

It's not that automatic. It works because you only access (read
or write) through member functions, and the member functions all
start with a validation check that the object is still good.
For this to work, however, you have to ensure that the memory
doesn't get reallocated, since if it does, the next user might
put something that looks valid where you'd written something
easily recognizable as invalid.
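
A sketch of that scheme as described (the magic values and names are illustrative):

    #include <cassert>

    class Connection {
    public:
        Connection() : magic_(LIVE) {}
        ~Connection() { magic_ = DEAD; }   // scribble a recognizable pattern

        void send() {
            // every member function starts with the validation check; this
            // is a debugging aid that relies on the collector not recycling
            // the memory while stale pointers exist, not portable C++
            assert(magic_ == LIVE && "access to a destroyed object");
            // ... actual work ...
        }

    private:
        enum { LIVE = 0x600DF00D, DEAD = 0xDEADBEEF };
        unsigned magic_;
    };
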
Yes. So I have to track down and reset or kill all the
pointers too. (To achieve it I may signal to the pointers or
make the pointers observe the object.)

Generally, yes. Forgetting to do so is a frequent cause of
leaked memory.
Which is what I do. So I do not see how garbage collection helps me,
unless I use two layers or phases of destruction.

I don't understand what you mean by two layers or phases of
destruction.

Garbage collection helps by ensuring that the memory cannot be
reused as long as anyone has a pointer to it.
I use reference-counting-like pointers with run-time debugging
attached. One should know how to reach everybody who has a
reference to the object under the axe, so if reaching them fails
then there is a bug.

But reference counting doesn't help in reaching everybody.
 

Öö Tiib

It's not that automatic.  It works because you only access (read
or write) through member functions, and the member functions all
start with a validation check that the object is still good.
For this to work, however, you have to ensure that the memory
doesn't get reallocated, since if it does, the next user might
put something that looks valid where you'd written something
easily recognizable as invalid.

Yes, the major issue is "unavoidable" maintenance that has to solve
someone's urgent "fatal million dollar problem" ... and it breaks
something else in about 50% of cases. Things that help one to be more
paranoid and defensive are all good.

OK. Is not the above solution technically using UB? I somehow feel
there is some analogy with asserting that (this != 0) at the start of
each member function. Does the GC library claim that it turns such UB
into defined behavior?
Generally, yes.  Forgetting to do so is a frequent cause of
leaked memory.


I don't understand what you mean by two layers or phases of
destruction.

Like a reusable container for precious materials. Analogy:
std::fstream::close() ensures that the precious things are released
while the fstream object is still around and fresh to reuse. shared_ptr
(and its reset()) is the thinnest of such layers. In other words, I do
not see how garbage collection helps to replace refcounting (which is
what smart pointers are mostly about).
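
A sketch of that thin, fstream-like layer (illustrative names):

    #include <cstdio>

    // Phase one: close() releases the precious handle on demand; the shell
    // object survives, reusable. Phase two: the shell itself dies whenever
    // its owner does.
    class File {
    public:
        File() : fp_(0) {}
        ~File() { close(); }

        bool open(const char* name) {
            close();
            fp_ = std::fopen(name, "rb");
            return fp_ != 0;
        }

        void close() {
            if (fp_ != 0) {
                std::fclose(fp_);
                fp_ = 0;   // the shell is empty, fresh for reuse
            }
        }

    private:
        File(const File&);              // non-copyable
        File& operator=(const File&);

        std::FILE* fp_;
    };
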
Garbage collection helps by ensuring that the memory cannot be
reused as long as anyone has a pointer to it.

No matter if it has been explicitly destroyed/deallocated? That
also sounds very interesting, of course.
But reference counting doesn't help in reaching everybody.

Yes, but it has the information that reaching everybody has failed.
Usually some maintenance has added users to the object but forgot to
make the users reachable when the object needs to die and be buried.
So there is a bug, but the situation is under control, and the software
can decide at run time whether to leak the object (safest), destroy it
no matter what (most dangerous), or stop with an error (most correct
but also usually most unacceptable). Such reference counting can be
cheaply replaced with raw pointers (at the cost that such run-time
debugging/resolving is replaced with possible UB). How does garbage
collection help to reach everybody? What is better there?
 

Michael Angelo Ravera

Hi,

When my snob co-worker is code-reviewing my code,
he uses an expression on the following code.

delete a;     <== my code

//His comments : You must set the ptr values back to NULL after delete.

I know that it is good practice, after deleting a pointer variable, to
set it to NULL.
But my question is: is it required?
I hope somebody shows me the C++ delete rules on this.

TIA

This is a very common house rule. It allows for simple checks of the
validity of the pointer.

To be more to the point: whenever you delete that to which a pointer
points, you should set the value to whatever the local implementation
of INVALID_POINTER is, unless house practice is to set it to NULL.

The reason for this is that some programs, for efficiency, or on some
paths through the code, may have already created an object (and some
might have created it and then deleted it). It is VERY handy to be
able to perform a quick test to see if you need to create an object,
and do so if needed (and use the old one, if it exists).
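
A sketch of that lazy-create pattern (names are illustrative, not from the post):

    struct Widget {};

    class Holder {
    public:
        Holder() : obj_(0) {}
        ~Holder() { delete obj_; }

        Widget& get() {
            if (obj_ == 0)           // null reliably means "not created yet,
                obj_ = new Widget;   // or created and deleted earlier"
            return *obj_;
        }

        void drop() {
            delete obj_;
            obj_ = 0;                // required, or get()'s test breaks
        }

    private:
        Holder(const Holder&);       // non-copyable: it owns obj_
        Holder& operator=(const Holder&);

        Widget* obj_;
    };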
 
