I think you're being dishonest (or perhaps naive). It's not zero
effort to write error checking code, and it's not zero effort to read
code with lots of extra error checking cluttering up the code.
No, you're just not reading carefully.
Go back and look for the words "once the habit has been developed".
Consider a scenario in which A() calls B(), B() calls C(), C() calls
D(), and D() calls E(). At each level, memory allocations may occur.
That means, in every function, code must be explicitly written to
check if any given allocation failed, and if it has, to pass that
information to the caller.
More code is usually harder to write.
More code is usually harder to read.
Code that has cleanly written checks for allocation failures is easier
to read and write than code that constantly makes me ask "What if that
fails?".
More code usually introduces the potential for more bugs.
Leaving out checks for errors GUARANTEES more bugs. I'll take the
additional code and the risk of getting it wrong over the certainty
that the shorter code is wrong, thanks.
In any case, let's see how this scenario might look in C:
int A(void) {
    void *a = malloc(...);
    if (!a) { goto OutOfMemoryError; }
    /* do some work */
    if (B() == ENOMEM) {
        free(a);
        goto OutOfMemoryError;
    }
    /* do some more work */
    free(a);
    return SUCCESS;
OutOfMemoryError:
    /* handle it */
    return ENOMEM;
}
int A(void)
{
    void *a = malloc(size_of_a);
    if (!a) goto FAIL_FIRST_MALLOC;
    /* do some work */
    if (B() == ENOMEM) goto FAIL_B_FAILED;
    free(a);
    return SUCCESS;
FAIL_B_FAILED:
    free(a);
FAIL_FIRST_MALLOC:
    /* Handle failure */
    return ENOMEM;
}
int B(void) {
    void *b = malloc(...);
    if (!b) { return ENOMEM; }
    /* do some work */
    if (C() == ENOMEM) {
        free(b);
        return ENOMEM;
    }
    /* do some more work */
    free(b);
    return SUCCESS;
}
int B(void)
{
    void *b = malloc(size_of_b);
    if (!b) goto FAIL_FIRST_MALLOC;
    /* do some work */
    if (C() == ENOMEM) goto FAIL_C_FAILED;
    /* do some more work */
    free(b);
    return SUCCESS;
FAIL_C_FAILED:
    free(b);
FAIL_FIRST_MALLOC:
    return ENOMEM;
}
Or maybe:
int B(void)
{
    int ret = SUCCESS;
    void *b = malloc(size_of_b);
    if (!b) { ret = ENOMEM; goto FAIL_FIRST_MALLOC; }
    /* do some work */
    if (C() == ENOMEM) { ret = ENOMEM; goto FAIL_C_FAILED; }
    /* do some more work */
FAIL_C_FAILED:
    free(b);
FAIL_FIRST_MALLOC:
    return ret;
}
[snip C(),D(),E() isomorphic to B()]
This is obviously a toy example, but it looks pretty cluttered and
ugly already.
It gets less cluttered and ugly if you use a less cluttered and ugly
error handling strategy.
For one thing, trying to back out all of your allocations so far at
every point where one of them can fail blows up pretty quickly when
more than one thing can fail.
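To make that concrete, here's a sketch (hypothetical function, sizes passed as parameters; SUCCESS and ENOMEM used as in the examples elsewhere in this thread) of the back-everything-out-at-each-site style with just three allocations:

```c
#include <errno.h>   /* ENOMEM */
#include <stdlib.h>  /* malloc, free, size_t */

#define SUCCESS 0

/* Back out at every failure point: each later check has to repeat
 * the frees of everything allocated before it, so the cleanup code
 * grows roughly quadratically with the number of allocations. */
int three_allocs(size_t na, size_t nb, size_t nc)
{
    void *a = malloc(na);
    if (!a) return ENOMEM;

    void *b = malloc(nb);
    if (!b) { free(a); return ENOMEM; }

    void *c = malloc(nc);
    if (!c) { free(b); free(a); return ENOMEM; }

    /* do some work */

    free(c);
    free(b);
    free(a);
    return SUCCESS;
}
```

With a goto ladder like the B() variants above, each failure site stays one line and the frees appear exactly once each.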
It's about 62 lines of code.
I count ten or twelve non-blank, non-comment lines in each of A() and
B() (both yours and mine).
If the error handling is done sensibly, it generates one additional
line of code in the main flow of control per check. Hardly "cluttered
and ugly". The actual handling of errors is done at the end of the
function, after the main flow of control returns, where you can decide
whether it's important to look at it or not. It's also easy to verify
that resources are only cleaned up on failure if they were allocated
before the failure occurred.
(This model obviously breaks when your error recovery strategy is more
than just "clean up and push the failure up the call stack", but that's
not the case under discussion here.)
How many lines of code do the "do some work" comments represent? I bet
it's more than ten or twelve, and quite possibly enough to make ten or
twelve look vanishingly small in comparison.
Now extrapolate this out
to a non-trivial application with hundreds, thousands, or even tens of
thousands of functions that allocate memory.
One line per allocation in the part of the function that does the
interesting bits.
Error recovery code that cleanly backs out the allocations, and can
sometimes be combined with normal cleanup in functions that allocate
resources to work with instead of allocating them for higher-level
functions.
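One way that combination can look (a sketch with a hypothetical function and sizes; SUCCESS and ENOMEM as above): when the function allocates resources only for its own use, the success path can simply fall through the same cleanup labels the failure path jumps to.

```c
#include <errno.h>   /* ENOMEM */
#include <stdlib.h>  /* malloc, free, size_t */

#define SUCCESS 0

int worker(size_t na, size_t nb)
{
    int ret = ENOMEM;
    void *a, *b;

    a = malloc(na);
    if (!a) goto done;
    b = malloc(nb);
    if (!b) goto free_a;

    /* do some work with a and b */

    ret = SUCCESS;
    free(b);        /* normal cleanup... */
free_a:
    free(a);        /* ...shared with the failure path */
done:
    return ret;
}
```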
Not exactly a great cognitive burden. If I'm reading that code, the
amount of time I spend on the error recovery will probably be lost in
the noise when you put it up against the time I'll spend wrapping my
brain around how hundreds, thousands, or even tens of thousands of
functions work together.
Now let's compare this to a language with exceptions and garbage
collection:
[...]
This is about 37 lines of code.
And when you fill out the "do some work" comments, suddenly you're
comparing something like 362 vs. 337 lines of code, and for some reason
it doesn't look like such a big difference anymore.
If you're trying to simplify it using lines of code as your metric,
leaving out the error handling is Just Not Worth Your Time.
It was easier to write (after all,
there's a lot less code), it's easier to read (less visual clutter),
and it's more reliable/dependable (less code usually means less
opportunities for bugs).
Well, yeah, if all you do is allocate and deallocate resources, using a
language that handles the deallocation for you means you only have to
write half as much code. That should be obvious.
If you actually do something with the resources you allocate, then
suddenly you're writing maybe 5% less code? 10%? 25% if you're doing
something trivial?
The error handling code is going to be some of the easiest bits of that
code to write, because once you've gotten into the habit of doing it,
it practically writes itself.
My intention is merely to show that handling memory in C is a
non-trivial task and that it's disingenuous to claim otherwise.
Checking for and failing cleanly on simple resource allocation errors
instead of aborting the entire program is not exactly what I would call
"non-trivial".
Nobody is claiming that a partial recovery from a resource failure at
the level where it makes sense to make error-handling decisions is
trivial, but that's not what we're talking about here, and that's no
more nontrivial in C than it is in a language with GC and exceptions.
(If anything, having fewer unknowns in C might make it easier.)
Malcolm is getting beat up for trying to solve part of the complexity
problem of managing memory in C.
No, he's "getting beat up" because his proposed solution introduces
more problems than it solves.
For a very wide range of
applications, Malcolm's xmalloc() is just fine.
No. In addition to the design flaw that makes it inappropriate for
most uses, his implementation also has several rather serious errors
that it appears he'd rather try to defend than fix.
Setting aside my developer hat for the moment, and putting on my user
hat: There are plenty of applications where I simply don't care if the
application terminates with an out of memory error.
I don't care if it says "I need more memory than I can get to do what
you asked me to".
Terminating the whole thing (including discarding any unsaved state)
because one subtask couldn't allocate as big a scratch buffer as it
wanted is an entirely different matter.
For this set of
applications, I'd rather the developers add features than apply
obscene levels of Obsessive Compulsive Disorder to every single
allocation in the entire system.
I would rather the developers of ANY software get the features they
already have right before they add new features.
If you really think that "more b0rken features" is preferable to
"stuff that actually works", I think we have nothing further to
discuss.
dave