Malcolm said:
Richard Heathfield said:
Malcolm McLean said:
<snip>
There's quite a strong case for a safemalloc() library function that
terminates with an error message on failure.
No, there isn't. Library routines have no business deciding to terminate
the program.
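(For concreteness, the safemalloc() under discussion would presumably be
something like this sketch -- the name is Malcolm's; nothing like it is
in the standard library:)

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "allocate or die" wrapper: on failure it prints a
 * message and terminates, which is exactly the policy being
 * objected to -- the caller never gets a chance to recover. */
void *safemalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size != 0) {
        fprintf(stderr, "safemalloc: out of memory (%zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```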
The problem is that the code is disproportionate. [...]
[code snipped; see up-thread]
Now that is about as simple as an arbitrary dataset is likely to be -
just an array of strings. I counted 14 lines purely to handle malloc()
failures.
Line counts in code that won't even compile aren't all
that persuasive.
But even so: I counted 58 lines in all. If you expended
*no* lines on error-checking you'd still have 44. The Sixth
Commandment envisions a ratio of two lines of error-handling
per line of payload ("yea, even though the checks triple the
size of thy code"), so why are you complaining about a rate
less than one-twelfth as great? The Commandment describes an
extreme circumstance, true, but instructs the Righteous to be
prepared to work at least twelve times as hard as you do.
To my mind this is a very serious disadvantage of the "caller handles
error" paradigm.
The "serious disadvantage" seems to be that it makes lazy
programmers work harder than they'd like to.
If you require every "leaf" function to do its own error-
handling, you defeat reusability. Nice little function here
that does something my program needs -- oh, too bad, can't
use it because the actions it takes on error don't mesh with
my program's strategy. Contrast with a function that informs
its caller "Sorry; couldn't do it" and allows the caller to
decide what to do next in light of its greater understanding
of overall context -- it's the latter function that can be
reused, not the former.
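(The contrast might be sketched like so -- dup_string is an invented
example, not code from up-thread. The first version can be dropped into
any program; the second has already decided every caller's error policy
for it:)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reusable: reports "Sorry; couldn't do it" and lets the caller
 * decide what to do next in light of the overall context. */
char *dup_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;                /* NULL means the caller decides */
}

/* Not reusable: a "leaf" function that imposes its own strategy,
 * so it can't be used by any program with a different one. */
char *dup_string_or_die(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL) {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);     /* the caller never gets a say */
    }
    strcpy(copy, s);
    return copy;
}
```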
Should fprintf() halt the program on an I/O error?
Should fopen() halt the program if unable to open the file?
Should strtod() halt the program on a malformed input?
Then why should malloc() halt the program on a whim?
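(All of those functions follow the same convention: report failure
through the return value and let the caller choose the response. A
sketch of why that matters -- the file name and fallback behavior are
invented for illustration:)

```c
#include <stdio.h>

/* Because fopen() reports failure instead of halting, this caller
 * can recover in a way only it knows is appropriate: fall back to
 * built-in defaults.  A halting fopen() would make that impossible. */
int load_config(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return 0;               /* couldn't open; caller uses defaults */
    /* ... read configuration from fp ... */
    fclose(fp);
    return 1;                   /* loaded from file */
}
```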