So you have to look to see what workarounds there are for this. Once
you've found one, you can reuse the solution, so even if it takes a
week, amortise that over all of the applications you will write!
But later you say there are no one-size-fits-all solutions?
On at least some there are ways to get it rendered immediately. I know
I've done that in the past. Once you have solved it for one application
reuse the solution for others.
If the toolkit being used is not one of those, then it is irrelevant that
some provide a means to do so, particularly if the "some" are not
available for the platform being targeted.
So you are saying all widget toolkits are badly designed. This is
possible.
I never said "badly designed", though I would agree "sub optimal in an
ideal world". There's a difference (to me, at least).
However, it has enough memory to do it.
How can you assert this?
If you try and fail, you are no worse off; if you try and succeed, you
are better off.
I'll agree with that, and wherever I use malloc() directly (or
g_try_malloc()), I do write error handling - which may or may not
include attempting to pop up an error dialog, depending on the
situation.
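For illustration, a minimal sketch of that pattern in plain C (the
helper names here are made up, and a real GUI application would report
the failure via a dialog rather than stderr):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: try to allocate and report on failure instead
 * of aborting.  The caller decides how to recover. */
void *try_alloc(size_t size, const char *what)
{
    void *ptr = malloc(size);
    if (ptr == NULL)
        fprintf(stderr, "out of memory allocating %zu bytes for %s\n",
                size, what);
    return ptr;
}

/* Example caller: degrade gracefully rather than crashing. */
char *dup_or_null(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = try_alloc(len, "string copy");
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;
}
```

The point is only that the failure is visible to the caller; what the
caller does with it is situation-dependent, as discussed below.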
See, there *are* ways to deal with it!
Right, but as you mentioned was a problem for xmalloc(), we have the
same problem here: not enough context for most real-world applications
to recover at this point.
No, it's not too late, since as you save documents you can free up the
memory they used, giving you the memory to pop up dialogues! Or, as
previously mentioned, have stuff pre-allocated.
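A pre-allocated reserve is easy enough to sketch in plain C (the names
and size here are hypothetical; the idea is just to grab a block at
startup and hand it back to the allocator when malloc() fails, so the
error path has memory to work with):

```c
#include <stdlib.h>

/* Hypothetical emergency reserve, sized to cover the error path's
 * needs (e.g. putting up an "out of memory" dialogue). */
#define RESERVE_SIZE (64 * 1024)

static void *emergency_reserve = NULL;

/* Call once at startup; returns nonzero on success. */
int reserve_init(void)
{
    emergency_reserve = malloc(RESERVE_SIZE);
    return emergency_reserve != NULL;
}

/* Call when malloc() fails; returns nonzero if the reserve was
 * available and has now been released back to the allocator. */
int reserve_release(void)
{
    if (emergency_reserve == NULL)
        return 0;
    free(emergency_reserve);
    emergency_reserve = NULL;
    return 1;
}
```

Whether freeing a block back to the allocator actually makes that
memory reusable depends on the malloc implementation, which is part of
why this is easier said than done.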
Easier said than done, not that it /can't/ be done - but one could easily
argue that this is more effort than it is worth, and unless you are able
to test your failure cases thoroughly, not even reliable.
It is /more/ reliable to routinely auto-save the user's work (as you
mentioned elsewhere, to a file other than the original) because it is
much easier to warn users about problems (potential or no) and certainly
easier to implement recovery should the application crash due to
uncontrollable (kernel crash, power outage, etc) error conditions on the
next application startup.
Depending on the document, one could write the application such that any
button click (or whatever) would cause an auto-save in addition to some
timeout, thus reducing the likelihood of there being any unsaved changes
at any given point in time.
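Such a policy might be sketched as follows (the thresholds and names
are made up; a real implementation would hook this into the toolkit's
event loop and timers):

```c
#include <time.h>

/* Hypothetical auto-save policy: save when the document is dirty and
 * either enough changes have accumulated or enough time has passed. */
typedef struct {
    int dirty;
    int changes_since_save;
    time_t last_save;
} autosave_state;

#define AUTOSAVE_MAX_CHANGES 10
#define AUTOSAVE_INTERVAL    30 /* seconds */

/* Called on every button click, keystroke batch, etc. */
void autosave_mark_change(autosave_state *st)
{
    st->dirty = 1;
    st->changes_since_save++;
}

/* Polled from a timeout; nonzero means "save now". */
int autosave_due(const autosave_state *st, time_t now)
{
    if (!st->dirty)
        return 0;
    return st->changes_since_save >= AUTOSAVE_MAX_CHANGES
        || now - st->last_save >= AUTOSAVE_INTERVAL;
}
```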
Since you obviously need this auto-save functionality in place anyway
if you are serious about protecting the user's data at all costs, it
is no longer necessary to chain malloc() failures up your call stack
in order to use static emergency buffers.
At this point g_malloc() calling abort() becomes a moot point,
particularly if your auto-save code is robust against memory allocation
errors (keeping a small subsection of code bug free and robust against
all possible error conditions is a lot easier and less costly in
developer time than it is to do that for an application several million
lines of code long).
In that instance you would use a different solution, probably saving the
email in a draft folder or something like that. People have already said
there is not a one-size-fits-all solution!
Hey, guess what? Evolution did this using an auto-save approach and it
used g_malloc() in much of the application code.
Different approaches, same end result. Oh, sure, maybe in your ideal
case, the application exits from main() with a 'return 0;' as opposed to
an exit() call (or abort()), but that is irrelevant.
[snip]
You probably need mechanisms to signal the spell checker and print
process anyway to cope with the user choosing to abort them.
This is true; however, you still need context information in order to
do so. I never said that the application wouldn't have the ability to
cancel the spell checker or printing, but in order to do so you need
context. If you are in a function being called asynchronously from
somewhere that might not even be your code, and that may not pass up
your particular error condition, then you are pretty much screwed
unless your contexts are all globally accessible.
While this may suggest the application (or the libs it depends on) is
poorly designed (or at least not suitably designed), the argument does
little to solve the actual problem at hand.
In the real world of end-user software development (e.g. not software
written for space ships or other areas where human lives are on the line)
where the application's design is based on incomplete specifications (as
in they tend to change mid-development) in combination with insufficient
allotted time, designing the perfect solution is downright impossible,
and so it is, unfortunately, not all too uncommon for the application's
design to be insufficient for every possible error condition.
If this is new to you, then you've never written real-world software
and I would appreciate having your pity... because I, too, would love
to live in Ideal World where I have sufficient time and specifications
to use in order to come up with a proper design before I'm forced to
begin implementation.
[snip]
That it depends on the daemon shows that it is possible; otherwise it
would simply be the case that none do.
Normally they report something the system administrator is able to
understand. At least, most that I use do.
Key word: most
Yes, which is why things like xmalloc are a problem, because they do not
have that context.
Agreed, in so much as they are not an ideal solution to the failing
malloc problem.
They are, however, /a/ solution to the problem and might, in some
situations, be more than ample.
Depends. I've had a report come back to me (via at least a couple of
layers of intermediaries) that had exactly the information that the
"dialogue" provided (it was not a GUI application).
As have I, in my gui applications even.
Again, see above ;-)
Worth the effort though.
In an ideal world, perhaps. If you've already got an auto-save feature
then it is not necessarily worth the extra effort.
I would agree that it /is/ worth the effort in the case where the failing
malloc() call is in the auto-save code, however
You have to take extra care with any out-of-resource error to ensure you
can report it without the resource in question.
Lotus Notes has given me an out-of-memory dialogue. I'll leave you to
draw your own conclusions from this.
I would conclude, that, like some parts of Evolution, if it is unable to
allocate resources for some non-critical data structure(s), that it is
able to report the "out of memory" issue to the user.
I seem to recall you claiming VMWare reported "out of memory" conditions
to the user as well, but as Ben Pfaff noted, VMWare uses xmalloc-like
wrappers as well.
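For reference, the xmalloc pattern under discussion is roughly this (a
sketch only; real implementations typically log more context before
exiting):

```c
#include <stdio.h>
#include <stdlib.h>

/* Classic xmalloc wrapper: abort on failure so callers never see
 * NULL.  Convenient, but it takes the recovery decision away from
 * code that might have had the context to do better. */
void *xmalloc(size_t size)
{
    void *ptr = malloc(size ? size : 1);
    if (ptr == NULL) {
        fprintf(stderr, "virtual memory exhausted\n");
        abort();
    }
    return ptr;
}
```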
Maybe you cannot trap and deal with all of them if the underlying system
does not let you, but that does not mean you should ignore those you can
deal with!
Never said otherwise!
It may depend on exactly where you hit it. Of course, any time when your
application or library calls malloc it has the opportunity to do it!
Sure, the same goes for any application written on top of glib!
(Not the case if you use g_malloc() of course, but you are hardly forced
to use only g_malloc() just because you link with glib).
Then that is before the user has had a chance to enter any data in the
unnamed document, so they won't be as upset if it pops up a dialogue
saying "Out of memory, cannot create new document".
Not necessarily, but I will agree that this is /likely/ the case.
A single logging file to allow recovery on application restart is
possible. It requires some work on synchronisation, but if designed in
from the start is possible.
I've used this approach for some simpler applications.
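A minimal sketch of that idea in plain C (the function name is
hypothetical; a real implementation would also fsync() the file and
replay the log on the next startup to rebuild unsaved state):

```c
#include <stdio.h>

/* Hypothetical journalling helper: append one user action to a log
 * file and flush immediately, so a crash loses at most the action in
 * flight.  Returns 0 on success, -1 on failure. */
int journal_append(const char *path, const char *action)
{
    FILE *fp = fopen(path, "a");
    if (fp == NULL)
        return -1;
    if (fprintf(fp, "%s\n", action) < 0 || fflush(fp) != 0) {
        fclose(fp);
        return -1;
    }
    return fclose(fp);
}
```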
Auto-save is actually not that much different to this.
Yes, regular auto-save is another way to protect the user's data, as
long as you are not saving over the original.
Agreed.
You may only be talking about free software written by volunteers,
I am talking about software written by anyone, but especially volunteers.
I am
talking about all software whether written by volunteers or not. The
open source community (some of it at least) wants to be taken as a
serious alternative to closed source, so it should take the same effort
to produce robust applications and libraries.
Amusing to me is that none of these developers are writing GUI apps or
libs afaict ;-)
It's not hard to find command-line programs and/or general purpose libs
that /are/ robust, like Ben Pfaff's AVL tree library for example, but
none of the ones I know of for writing GUI applications are of this
quality.
If I wanted to write an application that would meet your ideal criteria,
I'd have to write my application from the ground up, including the widget
toolkit. This is not only impractical from the development standpoint,
but also from the user's perspective where the application does not look
like any of his other applications. It would also not be able to share
much with the other applications running on the user's desktop and so
would use a lot more resources than a Good Enough solution.
Yes, you do need the libraries you are building on to pass up the
errors, hence the comments about glib.
Anyone using glib stand-alone should probably reconsider, especially if
they are writing "mission critical" applications.
Most people, however, use glib via Gtk+ - and since it is the /only/
practical widget toolkit available to C software developers for Unix,
you can't easily write off glib altogether.
I honestly would not be surprised if the other major contender in the
widget toolkit space (being Qt) had similar problems wrt memory
allocation failure conditions, but even if it did, you wouldn't be able
to write the application in C afaik (you'd have to switch to C++).
Or a form of logging as the user goes that you can use to recover
(I've used non-gui applications that do this). Of course, you have to
make sure your auto-save and/or logging handle resources very
carefully so that they do not lose the last good state if they run out
of memory.
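The usual way to guarantee the last good state survives is to write
the new contents to a temporary file and rename() it over the target
only on success - a rough sketch (hypothetical names, no glib
dependency):

```c
#include <stdio.h>

/* Careful-save sketch: the original file is untouched until the new
 * copy is completely and successfully written, so running out of
 * memory or disk mid-write never destroys the last good state.
 * Returns 0 on success, nonzero on failure. */
int safe_save(const char *path, const char *tmp_path, const char *data)
{
    FILE *fp = fopen(tmp_path, "w");
    if (fp == NULL)
        return -1;
    if (fputs(data, fp) < 0) {
        fclose(fp);
        remove(tmp_path);
        return -1;
    }
    if (fclose(fp) != 0) {
        remove(tmp_path);
        return -1;
    }
    return rename(tmp_path, path); /* atomic on POSIX filesystems */
}
```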
Yes, this is what I've been saying.
It's very easy to remember, I find, since I *am* building my SW on top
of 3rd party libraries.
These days no one has time to check all the code they rely on (and often
the source code is not available for everything). So yes, you rely to a
degree on others doing the job right.
Glad we agree so far.
As part of that you point out when
it is done wrong!
Well, discussing it here isn't going to get the problem solved. If you
truly feel that strongly about it, then you should either fix the problem
(free software, afterall) or at the very least submit a bug report! ;-)
You can't deal with *everything* but we were talking about dealing with
something where the libc *does* report a failure.
You /assume/ that all code paths properly handle OOM conditions
internally and propagate them back up the call stack. But libc is still
only implemented by humans, last I checked, so there is a possibility
of bugs.
That's a pretty hefty assumption that you CANNOT rely on for mission
critical user data (since that's what your whole argument revolves around
in the g_malloc()-is-evil argument).
Because of this possibility, you MUST implement a safety net - aka auto-
save. Once you have auto-save in place and properly written to handle
every conceivable error condition that /it/ may encounter (OOM being
one), then the value gained by using malloc() over g_malloc() in the
remaining areas of the code begins to rapidly lose its practical value
(if the goal is simply to make sure the user's data is saved before
exiting).
Wouldn't you agree?
I've no problem with autosave being part of the recovery strategy. As
you say, it can help when there is nothing that can be done because the
kernel has crashed.
Right.
They do have the luxury of choosing which libraries to build on and of
reporting things which are a problem. You also have the luxury of not
using malloc wrappers that don't allow you to do suitable recovery.
Not always.
For bonus reading, you might check out Richard Gabriel's paper on Worse
Is Better.
GLib's g_malloc() must be "good enough" because more and more Gtk+
applications keep popping up like wildfire just as C overtook LISP due to
the Worse Is Better rule.
Jeff