Richard Heathfield said:
Richard Bos said:
True enough -- you shouldn't *rely* on it, in the sense of arbitrarily
littering your code with calls to free(). Nevertheless, setting pointers to
NULL after you're done with them is a /good/ habit, not a bad one. It's
called "defence in depth".
I'm not going to say that I disagree with you, but I'm going to offer
a counterargument anyway.
Setting a free()d pointer to NULL can prevent certain kinds of errors
-- or rather, it can prevent the *symptoms* of certain kinds of
errors.
For example:
some_type *ptr = malloc(sizeof *ptr);
...
free(ptr);
...
/*
* Now I don't remember whether I called free(ptr) or not.
* I'll free it here, just in case.
*/
free(ptr);
As written, the second call to free() invokes undefined behavior. In
fact, merely evaluating ptr in preparation for the call invokes UB,
because ptr holds an indeterminate value once it's been free()d; the
call to free() doesn't really have anything to do with it.
If the first free() call were replaced with:
free(ptr);
ptr = NULL;
then the second free() call wouldn't invoke UB, since the standard
guarantees that free(NULL) is a no-op. But it would *still*
(probably) be a symptom of a logical error, namely the failure to keep
track of whether you've already called free(). Setting ptr to NULL
after the first free() would mask the symptom of the error; leaving it
alone could *potentially* cause the second free() to blow up, making
the error easier to detect. (Or it could do nothing; such is the
nature of undefined behavior.)
On the other hand, you could legitimately have reached the point of
the second free() by any of several paths, some of which have free()d
the pointer and some of which haven't. In that case, *if* all the
paths that free() it then set it to NULL, then free()ing it again is
harmless and sensible.
I'd probably be more comfortable with a program design that invokes
free() exactly once, if and only if it's needed, but if having your
program do a little unnecessary work at run time makes it easier to
develop, that's not always a bad thing.
That's certainly true, but it is also a good idea to recognise that you
might be fallible, and to take precautions against that fallibility. A null
pointer is far more useful than a pointer with an indeterminate value.
In an ideal world, the question would never arise; you wouldn't make
that mistake in the first place, or at least you'd already have
corrected it.
In software intended to be maximally robust (which, I hasten to add,
isn't *always* worth the effort), it probably makes sense to implement
this kind of defense in depth *and* to detect and log cases where
errors were caught by the second or later line of defense.