That's an exponential increase in the amount of code. Try the
same technique with:
printf("%s: %s([%s, %s])=%s, new state=%s\n",
state?state:"<no-state>",
op?op:"<no-op>",
v1?v1:"<no-v1>",
v2?v2:"<no-v2>",
res?res:"<no-result>",
newstate?newstate:"<no-state>");
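Wrapped in a compilable sketch (the function name and the SAFE macro are my own illustration, not from the post), the guard-every-argument technique looks like this:

```c
#include <stdio.h>

/* Hypothetical helper: substitute a placeholder when the pointer is NULL. */
#define SAFE(p, alt) ((p) ? (p) : (alt))

/* Each argument is guarded exactly as in the printf above. */
static void print_transition(const char *state, const char *op,
                             const char *v1, const char *v2,
                             const char *res, const char *newstate)
{
    printf("%s: %s([%s, %s])=%s, new state=%s\n",
           SAFE(state, "<no-state>"), SAFE(op, "<no-op>"),
           SAFE(v1, "<no-v1>"), SAFE(v2, "<no-v2>"),
           SAFE(res, "<no-result>"), SAFE(newstate, "<no-state>"));
}
```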
1. We don't worry about programmer convenience. We worry about bug-free
code. If that requires more lines, then so be it.
More lines of code can often result in more bugs.
This is true if one assumes a constant number of bugs per line of code. We
have found, however, that our strict coding standards combined with
comprehensive testing lower our bugs-per-line ratio significantly
enough that adding extra lines of code for improved testing or for source
clarity can be justified.
So far as I can tell, this remains 100% pure handwaving. Prohibiting their
use just means someone is still checking them, and then you have a bunch
of asserts. This means that, in production, either you have completely
unchecked use of null pointers, or your software instantly crashes with
error messages meaningless to the user without any attempt at saving data.
As I have already mentioned, we avoid the direct use of pointers as
much as possible. I would have used a collection in this example.
A direct pointer need never have been used.
Neither of these is a good choice. Robust code should be checking *anyway*;
that would be the way to generate genuinely bug-free code.
We actually would provide checking to back up the assertions; however,
these checks would be located directly after the associated assertions
at the very top (and bottom) of each function, where we verify all of
the function's inputs and outputs and verify that no input is changed
that is not supposed to be. This is also part of our coding
standard. For this example, however, that code is merely redundant.
Should we find that the function has been given illegal inputs, we know
that something is wrong in the code itself and it can no longer be trusted,
so the only safe action we have is to log the error, signal
the monitor program that we need to be restarted, and abort. This is
still a bug and considered unacceptable; however, to us, a program crash
is no excuse for being unable to provide service. Program crashes are
also something that we must deal with.
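A minimal sketch of the entry/exit verification described above (the REQUIRE, ENSURE, and fatal_error names are hypothetical, not the poster's actual macros; the restart signal to the monitor is elided):

```c
#include <stdio.h>
#include <stdlib.h>

static void fatal_error(const char *kind, const char *expr)
{
    /* Log, signal a (hypothetical) monitor process for restart, abort. */
    fprintf(stderr, "fatal: %s violated: %s\n", kind, expr);
    abort();
}

/* Checks that back up the assertions; they stay active in production. */
#define REQUIRE(cond) \
    do { if (!(cond)) fatal_error("precondition", #cond); } while (0)
#define ENSURE(cond) \
    do { if (!(cond)) fatal_error("postcondition", #cond); } while (0)

/* Every function verifies inputs at the top and outputs at the bottom. */
static size_t checked_strlen(const char *s)
{
    REQUIRE(s != NULL);          /* input check at the very top */

    size_t n = 0;
    while (s[n] != '\0')
        n++;

    ENSURE(s[n] == '\0');        /* output check at the very bottom */
    return n;
}
```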
There is no good way for any non-trivial program to be sure it's covered all
possible code paths during testing; as a result, you can never be sure the
asserts caught everything. Leaving them in means that your program aborts
without warning rather than making any effort at all to recover. Removing
them probably means the same thing, if you're lucky.
That is true in theory, but something as simple as a NULL pointer being
used is too easily tested for. First, this would be a very clear violation of
our coding standards and the program design; not to mention, it would be rather
odd for somebody to have entered printing code where it has no earthly
business in the program. Programs are broken into discrete modules for a
reason.
However, assuming somebody did make such a weird decision, the first place
it would most likely be noticed is in the code review. For a mistake
like this to happen, it must get past both programmers on the team that
created it and be signed off by a second team doing the peer review. Four
eyes should have seen this: an idiom that is almost never used, most
likely somewhere they know by design it never should have been.
If not, while being submitted to the revision database, it enters our unit
testing. This starts verifying all of the functions of each module from
the bottom up. Since each module is tested separately, an unplanned
dependency for the printing module in one of the modules will prevent the
tests from being able to link properly and thus fail the tests. The
revision database will then refuse to add the revision to the database and
send notification with test results back to the programmer. Even assuming
the error happens in a module that was designated to have the printer
dependency, the function is tested against all valid inputs, and dummy
functions check any outputs or data passed to other functions. Again,
a NULL pointer passed anywhere is a big flashing neon light. Also note
that smaller, highly directed functions have fewer code paths to test,
making coverage testing much easier.
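The bottom-up testing with dummy functions might be sketched like this (all names are illustrative): the module under test is linked against a stub standing in for the printing module, and the stub itself checks what it receives.

```c
#include <assert.h>
#include <string.h>

static char last_output[128];

/* Dummy standing in for the printing module's real print_line().
 * At link time it replaces the real one, so an unplanned dependency
 * on any other printing function fails to link and fails the test. */
void print_line(const char *msg)
{
    assert(msg != NULL);     /* a NULL here is the big flashing neon light */
    strncpy(last_output, msg, sizeof last_output - 1);
    last_output[sizeof last_output - 1] = '\0';
}

/* Function under test, from the module being verified bottom-up. */
void report_status(int ok)
{
    print_line(ok ? "status: ok" : "status: failed");
}
```

The test driver then calls report_status() with each valid input and inspects last_output to verify what was passed across the module boundary.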
Assuming the code passes the unit tests, it will be submitted, as a whole,
to a suite of function tests. Error conditions such as no available
printers are just the kinds of errors that we look for. Since almost every
function in our codebase aborts on receiving a NULL pointer, for this error
to pass the functional testing phase the pointer would most likely have to
be local to a function and *never* passed to another function (which
violates the black-box nature of the printer_dev struct), or the offending
code would have to be in some added feature never specified by the client.
So, while it is still theoretically possible that a dereferenced NULL
pointer, one that should never have been used in the first place, makes it
through all of these tests, we don't seem to have a problem with it.
The bugs that we struggle with are those that are much harder to
test for. It is much more difficult, for instance, to verify that a
non-NULL pointer is really pointing where it should be, or that memory
which was allocated is properly cleaned up after an error that aborts
an incomplete operation.
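As a sketch of that cleanup problem (function and field names are illustrative, not from the discussion), the usual C idiom is a single exit path that releases every allocation made before the error:

```c
#include <stdlib.h>
#include <string.h>

/* On an error that aborts an incomplete operation, every allocation
 * made so far must be released; a single cleanup label makes the
 * error paths easy to audit. */
int build_record(const char *name, char **out)
{
    char *copy = NULL;
    char *buf = NULL;

    copy = malloc(strlen(name) + 1);
    if (copy == NULL)
        goto cleanup;
    strcpy(copy, name);

    buf = malloc(64);
    if (buf == NULL)
        goto cleanup;            /* copy must still be freed below */

    /* ... operation completes; hand ownership to the caller ... */
    free(buf);
    *out = copy;
    return 0;

cleanup:
    free(buf);                   /* free(NULL) is a safe no-op */
    free(copy);
    return -1;
}
```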