Richard said:
Andrew Poelstra said:
Well, the check is done in the obvious way:
p = malloc(n * sizeof *p);
if (p == NULL)
{
    /* handle the failure here */
}
The response to a failure depends on the situation. I've covered this in
some detail in my one-and-only contribution to "the literature",
I take it you're referring to "C Unleashed"; I haven't read it and am not
currently in a position to read it, so I hope you'll excuse me if I clumsily
raise points you discuss in depth in the book.
so I'll just bullet-point some possible responses here:
* abort the program. The "student solution" - suitable only for high school
students and, perhaps, example programs (with a big red warning flag).
This is not doing justice to the significant amount of work required to make
a program robust in the face of memory exhaustion. Quite bluntly: it's not
always worth it in terms of development time versus worst possible outcome
and likelihood of that outcome, even for programs used outside high school.
I'm not saying the tradeoffs involved are always correctly assessed (it's
probably a given that the cost of failure is usually underestimated), but I
do believe they exist.
I presume that instead of "aborting the program" we may read "exiting the
program immediately but as cleanly as possible", by the way, with the latter
being just that bit more desirable.
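For concreteness, a minimal sketch of that variant (the name xmalloc is
mine, purely for illustration, not anything from the book):

#include <stdio.h>
#include <stdlib.h>

/* Never returns NULL: prints a diagnostic and exits on failure.
 * exit() rather than abort() lets atexit() handlers run, which
 * is the "as cleanly as possible" part. */
void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
    {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)n);
        exit(EXIT_FAILURE);
    }
    return p;
}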
* break down the memory requirement into two or more sub-blocks.
Applicable only if the failure is a result of trying to allocate more memory
than you really need in one transaction, which is either a flaw or an
inappropriate optimization (or both, depending on your point of view).
This will solve the problem in the sense that "don't do that, then" will
cure any pain you may experience while moving your arm. The bit we're
interested in is what to do when you've decided that you absolutely have to
move your arm.
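To illustrate what breaking the requirement down can look like when the
data doesn't have to be contiguous (a sketch, with names of my own
choosing): a rows-by-cols table allocated row by row instead of in one
block, which can succeed on a fragmented heap where a single large
allocation would fail.

#include <stdlib.h>

/* Allocate each row separately; release everything and return
 * NULL if any sub-allocation fails. The caller must free each
 * row and then the row-pointer array. */
double **alloc_table(size_t rows, size_t cols)
{
    size_t i;
    double **t = malloc(rows * sizeof *t);
    if (t == NULL)
        return NULL;
    for (i = 0; i < rows; i++)
    {
        t[i] = malloc(cols * sizeof *t[i]);
        if (t[i] == NULL)
        {
            while (i-- > 0)
                free(t[i]);
            free(t);
            return NULL;
        }
    }
    return t;
}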
* point to a fixed-length buffer instead (and remember not to free it!)
Thread-unsafe (or, put more generally, it unavoidably breaches modularity
by aliasing a global), it increases the potential for buffer overflow, and
it requires checking for an exceptional case in the corresponding free()
wrapper (I'm assuming we'd wrap this).
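To make that objection concrete, a sketch of such a wrapper pair (the
names my_malloc/my_free and the buffer size are illustrative only):

#include <stdlib.h>

/* One static fallback buffer: not thread-safe, only one
 * outstanding fallback at a time, and a char array carries no
 * alignment guarantee for other types - more strikes against
 * this approach. */
static unsigned char emergency[4096];
static int emergency_in_use = 0;

void *my_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && n <= sizeof emergency && !emergency_in_use)
    {
        emergency_in_use = 1;
        p = emergency;
    }
    return p;
}

void my_free(void *p)
{
    if (p == (void *)emergency)  /* the exceptional case */
        emergency_in_use = 0;
    else
        free(p);
}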
I don't see how this would ever be preferable to your next solution:
* allocate an emergency reserve at the beginning of the program
How big an emergency reserve, though? To make this work, your biggest
allocation should never exceed the emergency reserve (this could be tedious
to enforce, but is doable) and it will only allow you to complete whatever
you're doing right now.
Of all the solutions suggested, though, I'd say this one has the most
potential for practical success, *if* we signal the OOM condition *in
addition to* returning emergency reserve storage. At the end of an
individual transaction, we can detect that the next one will fail if no more
memory has become available, and take whatever measures are available
(including failing gracefully) with the program in a well-known state. (We
can do the same with a fail-fast allocator like malloc(), but it requires
more effort, and may involve rewriting things we can't rewrite.)
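A sketch of the reserve-plus-signal scheme I have in mind (all names are
mine): the reserve is given back to the heap on the first failure, so the
current transaction can complete, and a flag is raised for the program to
poll between transactions.

#include <stdlib.h>

static void *reserve;
int low_memory = 0;  /* poll this between transactions */

int reserve_init(size_t n)
{
    reserve = malloc(n);
    return reserve != NULL;
}

void *r_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && reserve != NULL)
    {
        free(reserve);   /* return the reserve to the heap... */
        reserve = NULL;
        low_memory = 1;  /* ...and signal the condition */
        p = malloc(n);   /* retry; may still fail */
    }
    return p;
}

On seeing low_memory set, the program can save its state and shut down
gracefully, or free caches and try to re-arm the reserve, in either case
acting from a well-known state rather than from the middle of a
transaction.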
* use virtual memory on another machine networked to this one (this
is a lot of work, but it may be worth it on super-huge projects)
This is not really an answer to "how do I deal with memory allocation
failure in a program", but to "how do I make sure my program doesn't
encounter memory allocation failure". For such projects as you mention,
you'll probably want a custom memory allocation library to begin with
(which may or may not involve the standard malloc() at all, depending on
the portability/performance tradeoffs involved).
Well, I wasn't really trying to insult you. I was just trying to communicate
my concern at the prevalence of this really bad practice of not checking
that an external resource acquisition attempt succeeded. You wouldn't omit
a check on fopen or socket or connect, so why on malloc?
Playing devil's advocate for a moment: the failures you cite are both more
common and easier to recover from, if they are recoverable at all, so
checking for them is more worthwhile.
This doesn't actually justify omitting a *check* on failure, of course,
since even dying with an error in a well-defined way is better than invoking
undefined behavior. The cost of checking is so insignificant that, if you
really need to get your savings there, you're probably doing memory
allocation wrong to begin with.
S.