The point is that they don't. If fopen() fails, there is nothing
to recover from.
False. If fopen() fails, it could have been because the user had a
typo in a filename he wanted to open in another window to cut/paste
from. When you bail out of the app because this file couldn't be
opened, that's just plain /stupid/. There are cases where a fopen()
failure could be critical with no possible way to recover, continue, or
retry, but they are rare.
Though if all your application does is fopen(),
then you can safely abort() when fopen() fails.
Or, you could try and determine why it failed, and take corrective
action, either automatically, or at the user's direction. But hey,
that cuts into the Xbox360 time, eh?
So, you work with a list, and you append an element to it.
Now you do list = g_list_append(list, something); with malloc
error handling you'd have to test whether g_list_append()
succeeded. Not much more typing, no. A little bit more, huh?
C++ exceptions would be appropriate here, but manual error
checking in C code *is* much, much more typing.
No, it's way better if the guy happens to have a dozen apps open at the
moment, and this one is really critical work that he's been entering
data into for the last 2 hours. You could do this:
Display a message to the effect of: "Unable to add the new record due
to an out-of-memory condition. Please close some other applications if
you would like to try again, or save the work in progress to prevent
data loss."
But instead, this /wonderfully/ designed application aborts and dumps
all his work. That's so much better, right? Clearly this "simpler"
solution is much better than actually protecting the user's data in
glib land. I just can't figure out how they convinced anyone to use
it.
Maybe the real issue with open source is who it lets into the party,
not how much the cover charge is.
There are about five bazillion allocations; a debugger won't do. Random
malloc() failures will do as a nice stress test, yes. But you still
won't be able to test it properly. At least not the piece of code
that will segfault when a user runs it (here I assume the user
will actually get to see it, perhaps on Windows).
If you check, it won't segfault. That's the whole point. You'll
detect an error, and handle it, instead of merrily trudging along and
counting on the runtime to abort your entire process so you don't have
to worry about branch prediction hits in your wonderfully bloated, yet
somehow pseudo-optimized pile of crap you foist on the user community.
A better thing to do
is to check for malloc() failure in one place, possibly do what you
can there, and abort.
That is an option, out of many that are available in general. With
glib or other designs with that model, that's pretty much all you're
left with. Any chance of doing anything even remotely professional in
light of an allocation failure is out the window.
Yeah, Mozilla leaking too much. Or Evolution leaking too much (?).
It would be nice to see something more substantial than "I know
for a fact" (debugged it, looked at the core file?). And it would
be nice to hear about gedit or gnumeric crashing because of a
malloc() failure.
Turn off overcommit, fire up some quick command-line tools to chew up
RAM and fill up the swap, and watch them all start crashing. Easy.
Avoiding losing your work in an emergency condition is just
a different story,
No, it isn't. In the majority of cases where a malloc() fails other
than immediately after program launch, there is a potential for data
loss, file corruption, leaving stale files lying around, etc.
say your application lost its terminal or X connection; in
those cases you can possibly do something to save the user's
work. And that's something you can (try to) do from inside
g_malloc() when malloc() fails. It's not necessary to write a
g_list_append() which can fail for that.
What about when the g_list_append() is the thing that fails, and it's
trying to add the 4000th new element to the list, of which the other
3999 elements are already in, but the data has not been saved? This
little "I didn't feel it was necessary to handle that" comment isn't
going to make the user happy at all. But who cares, it's free software,
right? They have no complaints coming.
So, you got an event, you need to put it into the event queue.
Either you allocate memory for that (and it fails), or you
preallocate memory, it is not enough, and you try to allocate
again (and it fails). What can you do apart from some emergency
action (saving important data or something) and exit? How
do you "recover"?
That little act of "saving important data or something" isn't a minor
thing. You /might/ not be able to recover completely and continue as
if nothing had happened. You should be able to save the work in
progress, though. You might even be able to notify the user that you're
having trouble grabbing more memory, and let them try to shut down
some other apps to make it available.
But what the hell, that's just too much trouble.