if(!p) { assert(false); return NULL; } ... ?


Martin B.

I'm curious what others think of this pattern:

+ + + +
TX* f(TY* p, ...) {
    if (!p) {
        assert(false);
        return NULL;
    }
    ...
}
+ + + +

I'm not sure whether I should accept it as defensive programming or
think it's just plain superfluous?!

So I have a function that is required to only accept a non-null pointer,
"documented" by the assert statement, only to have the function fail
"gracefully" should it be passed a NULL pointer. (Obviously, returning
NULL is *only* done in this case, otherwise a valid pointer will be
returned.)

cheers,
Martin


p.s.: While I'm at it, I'll give one of the explanations for how such
things come about: a developer had a crash dump where the immediate crash reason
was dereferencing of `p` inside f when it was NULL. Since no one was
able to reproduce it in due time, they added the check "so it won't
crash anymore". Which, yes, it doesn't, but now we have this code and
even longer debugging sessions when the program doesn't do what it's
supposed to do (but doesn't crash).
 

Victor Bazarov

I'm curious what others think of this pattern:

+ + + +
TX* f(TY* p, ...) {
    if (!p) {
        assert(false);
        return NULL;
    }
    ...
}
+ + + +

I'm not sure whether I should accept it as defensive programming or
think it's just plain superfluous?!

It's a hack. A proper way to handle an unacceptable argument value would
be to throw an exception, unless, of course, there is a "nothrow"
requirement from the client (caller)...
So I have a function that is required to only accept a non-null pointer,
"documented" by the assert statement, only to have the function fail
"gracefully" should it be passed a NULL pointer. (Obviously, returning
NULL is *only* done in this case, otherwise a valid pointer will be
returned.)

Since we're not offered the specification, we can't judge how closely this
code matches it, can we? Trying to divine the requirements from the
code hinges on the assumption that the code correctly implements them.
At the same time, judging the code's correctness assumes that the
requirements (the specification) are known independently of the code. So
you've got a chicken-and-egg problem, AFAICT.
cheers,
Martin


p.s.: While I'm at it, I'll give one of the explanations for how such
things come about: a developer had a crash dump where the immediate crash reason
was dereferencing of `p` inside f when it was NULL. Since no one was
able to reproduce it in due time, they added the check "so it won't
crash anymore". Which, yes, it doesn't, but now we have this code and
even longer debugging sessions when the program doesn't do what it's
supposed to do (but doesn't crash).

That's why I called it a hack. Welcome to code maintenance hell!

V
 

Tobias Müller

Martin B. said:
I'm curious what others think of this pattern:

+ + + +
TX* f(TY* p, ...) {
    if (!p) {
        assert(false);
        return NULL;
    }
    ...
}
+ + + +

I'm not sure whether I should accept it as defensive programming or think
it's just plain superfluous?!

So I have a function that is required to only accept a non-null pointer,
"documented" by the assert statement, only to have the function fail
"gracefully" should it be passed a NULL pointer. (Obviously, returning
NULL is *only* done in this case, otherwise a valid pointer will be returned.)

cheers,
Martin


p.s.: While I'm at it, I'll give one of the explanations for how such
things come about: a developer had a crash dump where the immediate crash reason
was dereferencing of `p` inside f when it was NULL. Since no one was able
to reproduce it in due time, they added the check "so it won't crash
anymore". Which, yes, it doesn't, but now we have this code and even
longer debugging sessions when the program doesn't do what it's supposed
to do (but doesn't crash).

If your function cannot handle NULL pointers, then it should probably take
a reference as a parameter.
If you do this consistently, you only have to check for NULL where you know
that the pointer really can be NULL. And at that place it should be clear
how to handle it.
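Roughly like this, just as a sketch (lookup_something() stands in for
wherever the possibly-NULL pointer actually comes from):

// The signature itself now documents "no NULL allowed":
TX* f(TY& p);

// The one place that still sees a possibly-NULL pointer:
TY* q = lookup_something();  // placeholder for the real source
if (q) {
    TX* result = f(*q);      // safe to dereference, we just checked
    // ... use result ...
} else {
    // handle the "no object" case here, where you know what it means
}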

Tobi
 

goran.pusic

I'm curious what others think of this pattern:

+ + + +
TX* f(TY* p, ...) {
    if (!p) {
        assert(false);
        return NULL;
    }
    ...
}
+ + + +

I'm not sure whether I should accept it as defensive programming or
think it's just plain superfluous?!

So I have a function that is required to only accept a non-null pointer,
"documented" by the assert statement, only to have the function fail
"gracefully" should it be passed a NULL pointer. (Obviously, returning
NULL is *only* done in this case, otherwise a valid pointer will be
returned.)

cheers,
Martin


p.s.: While I'm at it, I'll give one of the explanations for how such
things come about: a developer had a crash dump where the immediate crash
reason was dereferencing of `p` inside f when it was NULL. Since no one was
able to reproduce it in due time, they added the check "so it won't
crash anymore". Which, yes, it doesn't, but now we have this code and
even longer debugging sessions when the program doesn't do what it's
supposed to do (but doesn't crash).

Congratulations, you, too, have been hit by Hoare's billion-dollar mistake! So that's a $1,000,000,001 mistake now (and counting).

Unfortunately, such code abounds. I say it's a consequence of crap C programming. Well, poor design, really (it's unknown whether NULL is allowed or not). It did emerge from the desire to protect from crashes, as the post-scriptum says, but the hard cold truth is that someone had given up on finding a bug. Worse yet, the code stays (forever, sadly ;-)) because it's seen as expedient to sweep the bug under the rug.

In C, correct code for the above is:

TX* f(TY* p, ...) {
    assert(p); // If this fires, we die a horrible death
    ...
}

In C++, it's

retval f(TY& p)
{
    ....
}

What I do when confronted with such code is to change the parameter type into a reference and try to fix the mess. Unfortunately, that's prohibitive in large codebases.

Finally, I don't buy the defensive coding argument. To me, it's by far the best to crash right on the spot rather than to add piles of uncertainty like that. When you crash, you send a powerful signal that something is wrong, and are therefore obliged to fix it. When you "defend", you just make a bigger mess. It's much better to factor the crashes in early. That also forces you to think about protecting relevant code outputs on time, as well as having means to debug crashes.
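If you want the crash to be loud and immediate even in release builds,
where assert() is compiled out, a small helper along these lines does the
job (just a sketch; die_if_null is a name I made up):

#include <cstdio>
#include <cstdlib>

// Fail-fast check that survives -DNDEBUG, unlike assert().
static void die_if_null(const void* p, const char* what)
{
    if (!p) {
        std::fprintf(stderr, "fatal: %s is NULL\n", what);
        std::abort();  // crash right here, while the stack is still useful
    }
}

TX* f(TY* p, ...) {
    die_if_null(p, "p passed to f");
    // ... rest of f, free to assume p is valid ...
}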

Goran.
 

Martin B.

A constraint violation that fails gracefully in production builds
(-DNDEBUG), and dies with a stack trace in debug builds?

Given the other methods of obtaining a stack trace are a pain, and
assuming the return value gets checked, I can't see anything to get
worked up about.

Yes, but I wrote that the only way this function can return NULL is the
"impossible" case. Even if the return value is checked, all that *is*
done there is to ignore the error further, leading to a non-crashing but
non-working application, with no info to the user.
 

Martin B.

If your function cannot handle NULL pointers, then it should probably take
a reference as a parameter.
If you do this consistently, you only have to check for NULL where you know
that the pointer really can be NULL. And at that place it should be clear
how to handle it.

I have to say the reference suggestion seems highly naive.

You can't rework a whole large code base, and changing this function
from a pointer to a reference would just shift the burden of checking for
NULL to the call site, which now has to dereference a pointer to pass it
to the function.
 

Ike Naar

I have to say the reference suggestion seems highly naive.

You can't rework a whole large code base, and changing this function
from a pointer to a reference would just shift the burden of checking for
NULL to the call site, which now has to dereference a pointer to pass it
to the function.

But now you assume that, when using call-by-reference,
the call site needs to dereference a pointer, which need not be the case.

With call-by-pointer, the situation might look like

void f(Type *arg) /* precondition: arg must not be NULL */
{
    /* assure arg != NULL */
    /* use *arg */
}

Type var = /* meaningful initialization */;
f(&var);

with call-by-reference this would look like

void f(Type &arg)
{
    /* use arg */
}

Type var = /* meaningful initialization */;
f(var);

If this is the typical usage of f, using call-by-reference leads to
simpler code that does not need NULL checks, neither in f nor at
the call site.
 

Jorgen Grahn

But now you assume that, when using call-by-reference,
the call site needs to dereference a pointer, which need not be the case.

With call-by-pointer, the situation might look like

void f(Type *arg) /* precondition: arg must not be NULL */
{
    /* assure arg != NULL */
    /* use *arg */
}

Type var = /* meaningful initialization */;
f(&var);

with call-by-reference this would look like

void f(Type &arg)
{
    /* use arg */
}

Type var = /* meaningful initialization */;
f(var);

If this is the typical usage of f, using call-by-reference leads to
simpler code that does not need NULL checks, neither in f nor at
the call site.

Yeah. And what often happens when you do such transformations is that you
find that 95% of the call sites are non-problematic (f(&var)) and 5%
are the ones where you actually *do* have a potential NULL pointer.

If you can find a way to deal with that at the call site, you've
improved the code a lot -- the unsafety is confined to fewer code paths,
and you're encouraging new code to be safe.
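For one of those remaining 5% call sites, the fix might end up looking
something like this sketch (find_thing() and report_missing_thing() are
invented names, of course):

Type* q = find_thing(id);       // this lookup may legitimately return NULL
if (!q) {
    report_missing_thing(id);   // handle the absence here, where it's understood
    return;
}
f(*q);                          // from here on, "not NULL" is guaranteed by the reference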

/Jorgen
 

Jorgen Grahn

I'm curious what others think of this pattern:

+ + + +
TX* f(TY* p, ...) {
    if (!p) {
        assert(false);
        return NULL;
    }
    ...
}
+ + + +

Seems like an odd way of saying:

TX* f(TY* p, ...)
{
    assert(p);
    if (!p) {
        return NULL;
    }
    ...
}

assert() works best if it asserts something meaningful.
"My bank account is not empty" instead of "if my bank account is
empty, the Moon is made of blue cheese".

/Jorgen
 

Martin B.

On 28.09.2012 08:43, Tobias Müller wrote:

I have to say the reference suggestion seems highly naive.
You can't rework a whole large code base

Why not? (...)

Someone has to pay for it.

... break the build; you just fix the call sites and hit the build button
again; and again; and again. Yes, it may take a long time for a really
large code base, but in the end everything is supposed to be cleaner. Been
there, done that, by introducing const-correctness to a large codebase
where it was not taken into account seriously from the start.

Yes, but your boss is going to ask you why he should pay for it.

If you do not have a one-button build of the whole codebase, then this is a
real problem and must be dealt with first.

I have a one-button build. It just takes too long.

cheers,
Martin
 

Martin B.

Seems like an odd way of saying:

TX* f(TY* p, ...)
{
    assert(p);
    if (!p) {
        return NULL;
    }
    ...
}

assert() works best if it asserts something meaningful.
"My bank account is not empty" instead of "if my bank account is
empty, the Moon is made of blue cheese".

Yeah, but with the second version you do state the condition twice (and
inverted at that). I have to say I'd rather have the assert(false), or at
least put the assert inside the if:

...
if (!p) {
    assert(p);
    ...

cheers,
Martin
 

Jorgen Grahn

Yeah, but with the second version you do state the condition twice (and
inverted at that).

People think differently. The duplication doesn't bother me at all --
especially since the assert (in my mind) belongs together with the
function prototype -- it says "this function takes a non-NULL TY*, but
I can't really express that in the language without major rewriting".

And also since the condition really *does* serve two purposes here,
which is the problem that made you post in the first place.

/Jorgen
 

Tobias Müller

Martin B. said:
Why not? (...)

Someone has to pay for it.

I'm getting paid for actually fixing the bug, not for implementing some
obscure workaround that propagates the effects of the bug away from its
source, making bugfixing even harder.

Yes, but your boss is going to ask you why he should pay for it.

My boss does not supervise every little step I make, as long as I'm doing
my job well.
I have a one-button build. It just takes too long.

If you choose the workaround, you will need to fix the call site too, since
now the function can return NULL. And since you didn't change the function
signature, the compiler won't even help you. I suspect that this takes even
longer.

Anyway, the time you need for this is nothing compared to having to do
customer support because of a nasty bug.

Tobi
 

Nick Keighley

Yes, sure. As the codebase will get cleaner he will pay less in the long
term,

in the long term we're all dead. If we spend hours refactoring the
whole code base, we aren't adding functionality or fixing end-user-visible
bugs. And the system test cost has just rocketed, because we've touched
every file in the system.

And the reference/pointer problem is the least of it. You mentioned const
correctness; how about exception safety, how about reducing the number
of independent exception types (from about 20), or refactoring every
function that is more than 1000 lines long... etc. If I'm modifying
something, then I clean it up.
 
