xmalloc string functions


Jeffrey Stedfast

Kelsey Bjarnason said:
[snips]

On Mon, 28 Jan 2008 04:17:54 +0000, Yevgen Muntyan wrote:
All this apart from real
problems you have to solve. Yes, *real*. No, g_malloc() aborting an
application is not a real problem. Not for a regular desktop
application.

Except that at least one person *here*, in a comparatively small
community, has reported application crashes *precisely* due to this.

Yes, Kelsey, but so what? Read what he's saying: this is *not* a real
problem. It's only data - how could mere data possibly be important
enough to justify spending a little extra time working out how to
salvage it in the event of an allocation failure?

Apparently, "regular old desktop applications", such as Office suites,
database front-ends, image management programs like Photoshop, Gimp,

ironically, The GIMP is built on top of glib and gtk+ ;-)

"oops"
etc., Mail applications and such, all contain information too worthless
to be worth wasting the precious developer's time on. What's a few
hours of work here or there on the user's behalf, when the developer has
a LAN party to go to, or needs to practice riding his unicycle up and
down the hall?

When I worked on Evolution, we took data integrity seriously and did
everything we could to make sure that the user's email (whether being
composed or just received via POP) would not be lost, using a variety of
techniques.

Were we perfect? Probably not, but there was no lack of effort. Someone
else mentioned in this thread that they perused the source code to
Evolution and found that it did do proper error checking in at least the
MIME/charset code (other than where it used g_malloc() because that by
definition cannot fail), so you don't have to take my word for it (and
hey, he was one of the guys bashing glib's g_malloc, so he's not exactly
"on my side").

My LAN parties were typically hackfests on Evolution in my spare time, so
I guess that's not my excuse ;)

In places where it made sense, we avoided the use of g_malloc() and
either used g_try_malloc() instead, so we could handle cases where large
allocations were likely... or we would handle the data incrementally so as
to avoid ever needing the entire thing in memory at any given point in time.
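
As a rough sketch of what that g_try_malloc() pattern looks like in
practice (the function names below are made up for illustration, not
actual Evolution code):

#include <glib.h>

/* Hypothetical error reporter; not a real Evolution function. */
extern void report_decode_error (const char *msg);

gboolean
decode_part (gsize part_size)
{
    /* part_size comes from untrusted input, so it may be huge;
     * g_try_malloc() returns NULL instead of aborting. */
    guchar *buf = g_try_malloc (part_size);

    if (buf == NULL) {
        report_decode_error ("Not enough memory to decode this part");
        return FALSE;   /* skip this part; the application keeps running */
    }

    /* ... fill and use buf ... */

    g_free (buf);
    return TRUE;
}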

I worked on Evolution for about 5 years, spending far more than 40 hours
per week on it, and you'll probably hear a lot of people say it's "crap"
because it has a lot of bugs.

As much as I'd love to blame crashes on g_malloc() failing like one of
the original posters in this thread used as an example, it's simply not
true. Most of our bugs are genuine bugs that are unrelated to g_malloc()
abort()ing because memory was scarce. Even in most of the cases where the
submitted backtrace showed an abort() inside g_malloc(), it was typically
because we tried to malloc an invalid amount of memory (I say invalid
because the size was calculated wrongly via bad pointer arithmetic or
whatever).

If my partner and I had had to do NULL checks for every malloc() call
(and handling them in some idealistic way) made in the portions of
Evolution we worked on (never mind fixing glib's g_malloc() usage), we'd
still be trying to reach the functionality released in 1.0.0 back in 2001.

I'm sure many Evolution haters would cheer, but there are far more people
who like and rely on Evolution than the few dozen that ridicule it, even
with all of its bugs.

This goes for many desktop applications (GNOME or not)... my point is
that users would rather have an application that might crash if the
system runs out of memory than no application at all.


Jeff
 

Flash Gordon

Jeffrey Stedfast wrote, On 03/02/08 06:57:
what if the system doesn't even have enough memory to pop up such a
dialog? Likely it'd take far more resources to display said dialog than
it would take to make the calculation ;-)

Have it pre-created together with all the resources it will require,
then it doesn't need any more resources.
ah, but that also requires a clipboard memory buffer be allocated... but
you have no memory left ;-)

Have it pre-created, then if you need a larger buffer for the next step
and you can't enlarge it you only lose that last step.
Sorry to be a smart-ass here, but you clearly have not thought about this
problem in the context of the average desktop application.

Some of us have and have already suggested having and using a buffer
that is large enough for the emergency action.
In a small daemon, if you get a malloc() failure, you have a lot more
options open to you than you do in a graphical desktop application
because you can do a lot more with very small amounts of memory (or none
at all).

A few examples:

- start dropping idle client connections (which would likely not require
any /new/ allocations) until you have enough memory to do the critical
operation you need to do

Drop that background print-job or spell check that is consuming memory
for your desktop app.
- print an out-of-memory or "sorry, can't do that right now" error to the
client socket (or terminal) which would likely not require any new memory
allocations (error strings could be static)

Pre-create the out-of-memory dialogue and any resources it requires.
- wait until memory becomes available

With an appropriate pre-created dialogue you can do that on a GUI
application as well.
On the other hand, say, a word processor application, if the user
requests some sort of action and a malloc() fails for 12 bytes, what is
it supposed to do?

Any of the above.
If the documents the user has open have already-opened file descriptors,
the app might be able to save them before going down - but:

That is easy to arrange.
1. it certainly doesn't have the option of displaying an error dialog.

Yes it does if it pre-creates it during application start-up.
2. if any of the files are unnamed or otherwise would require any of:

Don't allow them to be unnamed. You can create a name at the same time
as creating the otherwise unnamed document.
a) filename generation (would require string building)

Which can be done using a pre-allocated emergency buffer
b) file descriptor opening (which takes memory)

Which could have been opened when the document was created
c) user-interaction (this one is right out)

Which can be avoided or use pre-created dialogue boxes that won't need
any more memory than they already have.
the application would certainly not be able to save the documents at that
point...

If it was planned for it could. I've come up with ways of dealing with
all of the problems. Not to mention the possibility of opening an
emergency file at start-up and if it is non-empty using it to recover
from the previous crash, or if it is empty using it to write the
information to allow recovery.
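
A minimal sketch of that emergency-file idea (recover_from() is a
hypothetical callback; what actually gets logged is up to the
application):

#include <stdio.h>

extern void recover_from (FILE *log);   /* hypothetical recovery routine */

static FILE *emergency_log;

void
init_recovery (const char *path)
{
    /* Opened once at start-up, while memory and descriptors are plentiful. */
    emergency_log = fopen (path, "a+");
    if (emergency_log == NULL)
        return;                          /* degrade gracefully */

    fseek (emergency_log, 0, SEEK_END);
    if (ftell (emergency_log) > 0)
        recover_from (emergency_log);    /* previous run left data behind */
}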
3. can't simply wait because that would be a "hang" which is likely to
cause the user to kill it anyway.

You can't simply wait with most network daemons either. You just make
sure you have appropriate resources already available for your recovery
strategy.
Feel free to fix it?

I don't know about Richard, but I have quite enough work already.
Oh wait, I forgot that this whole thread is actually a pissing contest
more than anything else, so that people who don't actually write desktop
applications can feel superior to those who do.

I think it is more annoyance at applications we might otherwise consider
using that would just throw away our hard work in situations some of us
do hit.
 

Jeffrey Stedfast

Jeffrey Stedfast wrote, On 03/02/08 06:57:

Have it pre-created together with all the resources it will require,
then it doesn't need any more resources.

if a dialog takes more resources than you're ever likely to need in a
calculator app to do a calculation, doesn't this feel wasteful for the
99.9999999% of cases?

Let's also not forget that the act of /showing/ the dialog may, in fact,
require memory allocations depending on the way the system works.

For example, requesting that a dialog be shown may not actually show the
dialog immediately... it might only queue the operation for the next
rendering pass.

Said rendering pass may require more allocations, but at this point it's
too late to simply unwind the stack to the point where you requested the
show(). Since no widget toolkit I know of has a way of notifying the
application of said error, what is it to do?


For Gtk+, you actually do have an option... GLib uses a vtable for malloc/
realloc/calloc/free that you can initialize with your own routines at
init.

You could potentially do your own NULL-check there so that you can be pre-
warned about memory allocation errors coming up. It'll lack context
(who tried to allocate this memory? for what purpose?), but I suppose if
you had everything pre-allocated, ready to go - you could call some
global prepare_for_abort() function that could perhaps iterate thru all
of your unsaved files and save them quickly before the abort() call in the
g_malloc() wrapper. This wouldn't allow you to pop up any dialogs,
however, because at this point it's too late.
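
To make that concrete, here is a minimal sketch of hooking that vtable
(this assumes GLib's GMemVTable/g_mem_set_vtable() API; prepare_for_abort()
is the hypothetical hook described above):

#include <glib.h>
#include <stdlib.h>

extern void prepare_for_abort (void);   /* hypothetical: flush unsaved work */

static gpointer
checked_malloc (gsize n_bytes)
{
    gpointer mem = malloc (n_bytes);
    if (mem == NULL)
        prepare_for_abort ();   /* last chance before g_malloc() abort()s */
    return mem;
}

static gpointer
checked_realloc (gpointer mem, gsize n_bytes)
{
    gpointer p = realloc (mem, n_bytes);
    if (p == NULL && n_bytes != 0)
        prepare_for_abort ();
    return p;
}

int
main (int argc, char **argv)
{
    GMemVTable vtable = { checked_malloc, checked_realloc, free,
                          NULL, NULL, NULL };

    /* must be installed before any other GLib call */
    g_mem_set_vtable (&vtable);

    /* ... gtk_init (&argc, &argv); rest of the application ... */
    return 0;
}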
Have it pre-created, then if you need a larger buffer for the next step
and you can't enlarge it you only lose that last step.

this one indeed is likely an easier and more reliable method for this
particular instance, but not all desktop applications can go around using
this type of approach.

For example, would it be a good idea for an email application to set
aside this clipboard buffer? :)

I think we'd both agree the answer is no.
Some of us have and have already suggested having and using a buffer
that is large enough for the emergency action.

the problem with this approach (and it's not a terrible one) is that it
means you have to be diligent about making sure it's a big enough buffer
to handle all your possible failure cases gracefully. In an application
that is 2 million lines of code, this is not trivial to accomplish.

Oh, and it's only 2 million instead of 2.5-3 million because it uses
g_malloc() :)
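
For what it's worth, the emergency-buffer idea under discussion might look
something like this (the names and the 64K figure are assumptions, not
anything from the thread):

#include <stddef.h>
#include <stdlib.h>

#define EMERGENCY_SIZE (64 * 1024)

static char  *emergency_pool;
static size_t emergency_used;

int
emergency_init (void)
{
    emergency_pool = malloc (EMERGENCY_SIZE);   /* grabbed at start-up */
    return emergency_pool != NULL;
}

/* Bump allocator used only on the out-of-memory path, e.g. to build a
 * recovery filename; never freed piecemeal. */
void *
emergency_alloc (size_t n)
{
    if (emergency_pool == NULL || EMERGENCY_SIZE - emergency_used < n)
        return NULL;
    emergency_used += n;
    return emergency_pool + emergency_used - n;
}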
Drop that background print-job or spell check that is consuming memory
for your desktop app.

That means you'd have to have that print job context or spell-check
context global somewhere, or have some way of getting it from a lot of
different locations... GUI apps can't always pass errors up to main() (or
where ever your main event loop lives) quite so easily as the average
daemon can.
Pre-create the out-of-memory dialogue and any resources it requires.

doable if you don't want to give any specifics... daemons are often not
very user-friendly in their error reporting... depending on the daemon,
it might be as simple as an integer error code or as forthcoming as a
string from strerror(), but rarely do they report something that the user
is able to understand. Sure, "out of memory, cannot perform that
operation" may work for simple applications where only 1 thing at a time
is ever going on, but if the application happens to be doing many things
at once the user will want to know /which/ operation could not be
completed because memory was unavailable?

Trust me, this is the case... applications I've worked on have actually
had these sorts of complaints filed against them. It's funny, because all
the user testing I've seen indicates that users never read the dialogs
anyway ;-)
With an appropriate pre-created dialogue you can do that on a GUI
application as well.

see above.
Any of the above.

Easier said than done, I'm afraid...
That is easy to arrange.


Yes it does if it pre-creates it during application start-up.

see above, although I suppose if you really wanted to, you could make an
exception for the "out of memory" dialog case as opposed to other error
dialogs your application might use.
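
A rough sketch of such a pre-created "out of memory" dialog, assuming
Gtk+ 2.x-era APIs (and note the caveat above: showing it can still
allocate internally):

#include <gtk/gtk.h>

static GtkWidget *oom_dialog;

/* Built at start-up, while memory is still plentiful. */
void
oom_dialog_init (GtkWindow *parent)
{
    oom_dialog = gtk_message_dialog_new (parent,
                                         GTK_DIALOG_MODAL,
                                         GTK_MESSAGE_ERROR,
                                         GTK_BUTTONS_CLOSE,
                                         "Out of memory; your work has been saved.");
}

void
oom_dialog_show (void)
{
    if (oom_dialog != NULL) {
        gtk_dialog_run (GTK_DIALOG (oom_dialog));   /* may still allocate internally */
        gtk_widget_hide (oom_dialog);
    }
}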

First... I wonder if there are any widget toolkits that don't already
abort() (or similar) when they run out of memory or in any other
conditions without giving my calling code a chance to handle it?

As someone else mentioned, X already has this limitation... so right
there, that means there's no Unix toolkits that you can use. Guess we'll
all just have to write applications for ... does Windows or MacOSX handle
this? I somehow doubt it.
Don't allow them to be unnamed. You can create a name at the same time
as creating the otherwise unnamed document.

What if the act of creating a name is what finds the out-of-memory
condition?
Which can be done using a pre-allocated emergency buffer

what about memory that some lower-level stuff might require that you
can't control? Perhaps you use a library for writing said file out to
disk once you have the filename... even if only libc, you still need
enough memory for an fopen() to succeed - you can't make /it/ grab from
your emergency buffer.

Well, actually I suppose you could always replace the malloc symbol, but
that still leaves you the problem of not necessarily knowing the memory
requirements of lower-level library functions which means you can't
accurately calculate how much emergency buffer you need.
Which could have been opened when the document was created

this isn't doable if it's possible to have a large number of things
opened. You may quickly run out of file descriptors.
Which can be avoided or use pre-created dialogue boxes that won't need
any more memory than they already have.

see above
If it was planned for it could. I've come up with ways of dealing with
all of the problems.

not very good ways.

I maintain that a better way is simply to auto-save the user's work... far
far simpler to implement and far more reliable a solution unless you are
able to provide 100% test coverage for your application.

Remember, we are talking about free software desktop applications written
by volunteers, here, not programmers paid 150k/yr to write branches for
each memory allocation they ever make in their application or library.

Especially provided that these programmers can at best assume that the
authors of the libraries they are building on top of have also done their
work of error checking every malloc and properly handling it and/or
chaining it up to the caller.

This is, in fact, where your whole argument falls apart. As a developer
of GNOME desktop applications (hell, scratch that - of X11 based
applications), you already KNOW that I cannot rely on glib or gtk+ (or
Xlib) to gracefully handle all memory allocation errors... so I have no
choice but to resort to my auto-save approach.

It's too easy to forget that large applications are generally built on
top of code that other people wrote that you may or may not be able to
read the source code for to verify that they properly handle all errors.
You could even make this argument for daemon authors - how many of you
have actually read through all of the source code for the libc you build
your applications on top of? None? Remember that and you might want to
rethink not using the auto-save approach (in addition to error checking).
I think it is more annoyance at applications we might otherwise consider
using that would just throw away our hard work in situations some of us
do hit.

I've hit it as well, and yes, I'm also annoyed when it happens - but it
is a problem that cannot be easily remedied by the application developers
if the software stack somewhere underneath the code they've written is
faulty (and I have personally run into Linux kernel and GNU libc bugs
that have caused problems in applications I've written).

Since you cannot rely on error checking to catch all errors, it's best to
have a fallback plan - which is auto-save. It's a lot less likely to fail
and a lot more tolerant of buggy code (either in your application or
below your stack).

In the real world, developers do not have the luxury (or desire, most of
the time) to write applications from the ground up. They build on top of
software that already exists.

Jeff
 

Ed Jensen

Jeffrey Stedfast said:
If my partner and I had had to do NULL checks for every malloc() call
(and handling them in some idealistic way) made in the portions of
Evolution we worked on (never mind fixing glib's g_malloc() usage), we'd
still be trying to reach the functionality released in 1.0.0 back in 2001.
[SNIP]

This goes for many desktop applications (GNOME or not)... my point is
that users would rather have an application that might crash if the
system runs out of memory than no application at all.

It's nice to see a little reality injected in this thread.

The folks around here that think every single malloc in every single
application should carefully propagate up the entire call chain are
obnoxiously unrealistic.
 

Flash Gordon

Jeffrey Stedfast wrote, On 03/02/08 14:51:
if a dialog takes more resources than you're ever likely to need in a
calculator app to do a calculation, doesn't this feel wasteful for the
99.9999999% of cases?

Last time I checked (which was a long time ago, I know) a single dialogue
box did not require that much, especially as we are talking about a
calculator which is already a GUI one.
Let's also not forget that the act of /showing/ the dialog may, in fact,
require memory allocations depending on the way the system works.

So you have to look to see what workarounds there are for this. Once
you've done that one, you can reuse the solution so even if it takes a
week, amortise that over all of the applications you will write!
For example, requesting that a dialog be shown may not actually show the
dialog immediately... it might only queue the operation for the next
rendering pass.

On at least some there are ways to get it rendered immediately. I know
I've done that in the past. Once you have solved it for one application
reuse the solution for others.
Said rendering pass may require more allocations, but at this point it's
too late to simply unwind the stack to the point where you requested the
show(). Since no widget toolkit I know of has a way of notifying the
application of said error, what is it to do?

So you are saying all widget toolkits are badly designed. This is
possible. However, it has enough memory to do it. If you try and fail
you are no worse off, if you try and succeed you are better off.
For Gtk+, you actually do have an option... GLib uses a vtable for malloc/
realloc/calloc/free that you can initialize with your own routines at
init.

See, there *are* ways to deal with it!
You could potentially do your own NULL-check there so that you can be pre-
warned about memory allocation errors coming up. It'll lack context
(who tried to allocate this memory? for what purpose?), but I suppose if
you had everything pre-allocated, ready to go - you could call some
global prepare_for_abort() function that could perhaps iterate thru all
of your unsaved files and save them quickly before the abort() call in the
g_malloc() wrapper. This wouldn't allow you to pop up any dialogs,
however, because at this point it's too late.

No, it's not too late, since as you save documents you can free up the
memory they used giving you the memory to pop up dialogues! Or, as
previously mentioned, have stuff pre-allocated.
this one indeed is likely an easier and more reliable method for this
particular instance, but not all desktop applications can go around using
this type of approach.

For example, would it be a good idea for an email application to set
aside this clipboard buffer? :)

I think we'd both agree the answer is no.

In that instance you would use a different solution, probably saving the
email in a draft folder or something like that. People have already said
there is not a one-size-fits-all solution!
the problem with this approach (and it's not a terrible one) is that it
means you have to be diligent about making sure it's a big enough buffer
to handle all your possible failure cases gracefully. In an application
that is 2 million lines of code, this is not trivial to accomplish.

How easy it is depends on a lot more than the line count. It also
depends on the design.
Oh, and it's only 2 million instead of 2.5-3 million because it uses
g_malloc() :)

If 25-33% of the code is handling out-of-resource conditions then you
probably have a very badly designed application. Even Malcolm does not
expect it to be that high for the application as a whole!
That means you'd have to have that print job context or spell-check
context global somewhere, or have some way of getting it from a lot of
different locations...

You probably need mechanisms to signal the spell checker and print
process anyway to cope with the user choosing to abort them.
GUI apps can't always pass errors up to main() (or
wherever your main event loop lives) quite so easily as the average
daemon can.

Maybe not, but it still has mechanisms for the different parts of the
application to communicate.
doable if you don't want to give any specifics... daemons are often not
very user-friendly in their error reporting... depending on the daemon,

That it depends on the daemon shows that it is possible; otherwise it
would simply be the case that none do.
it might be as simple as an integer error code or as forthcoming as a
string from strerror(), but rarely do they report something that the user
is able to understand.

Normally they report something the system administrator is able to
understand. At least, most that I use do.
Sure, "out of memory, cannot perform that
operation" may work for simple applications where only 1 thing at a time
is ever going on, but if the application happens to be doing many things
at once the user will want to know /which/ operation could not be
completed because memory was unavailable?

Yes, which is why things like xmalloc are a problem, because they do not
have that context.
Trust me, this is the case... applications I've worked on have actually
had these sorts of complaints filed against them. It's funny, because all
the user testing I've seen indicates that users never read the dialogs
anyway ;-)

Depends. I've had a report come back to me (via at least a couple of
layers of intermediaries) that had exactly the information that the
"dialogue" provided (it was not a GUI application).
see above.

Again, see above ;-)
Easier said than done, I'm afraid...

Worth the effort though.
see above, although I suppose if you really wanted to, you could make an
exception for the "out of memory" dialog case as opposed to other error
dialogs your application might use.

You have to take extra care with any out-of-resource error to ensure you
can report it without the resource in question.
First... I wonder if there are any widget toolkits that don't already
abort() (or similar) when they run out of memory or in any other
conditions without giving my calling code a chance to handle it?

Lotus Notes has given me an out-of-memory dialogue. I'll leave you to
draw your own conclusions from this.
As someone else mentioned, X already has this limitation... so right
there, that means there's no Unix toolkits that you can use.

Maybe you cannot trap and deal with all of them if the underlying system
does not let you, but that does not mean you should ignore those you can
deal with!
Guess we'll
all just have to write applications for ... does Windows or MacOSX handle
this? I somehow doubt it.

It may depend on exactly where you hit it. Of course, any time when your
application or library calls malloc it has the opportunity to do it!
What if the act of creating a name is what finds the out-of-memory
condition?

Then that is before the user has had a chance to enter any data in the
unnamed document, so they won't be as upset if it pops up a dialogue
saying "Out of memory, cannot create new document".
what about memory that some lower-level stuff might require that you
can't control? Perhaps you use a library for writing said file out to
disk once you have the filename... even if only libc, you still need
enough memory for an fopen() to succeed - you can't make /it/ grab from
your emergency buffer.

I suggested other alternatives for if this is not possible.
Well, actually I suppose you could always replace the malloc symbol, but
that still leaves you the problem of not necessarily knowing the memory
requirements of lower-level library functions which means you can't
accurately calculate how much emergency buffer you need.


this isn't doable if it's possible to have a large number of things
opened. You may quickly run out of file descriptors.

I suggested other alternatives as well.
see above

Again, see above :)
not very good ways.

A single logging file to allow recovery on application restart is
possible. It requires some work on synchronisation, but if designed in
from the start is possible.
I maintain that a better way is simply to auto-save the user's work... far
far simpler to implement and far more reliable a solution unless you are
able to provide 100% test coverage for your application.

Yes, regular auto-save is another way to protect users' data, as long as
you are not saving over the original.
Remember, we are talking about free software desktop applications written
by volunteers, here, not programmers paid 150k/yr to write branches for
each memory allocation they ever make in their application or library.

You may only be talking about free software written by volunteers, I am
talking about all software whether written by volunteers or not. The
open source community (some of it at least) wants to be taken as a
serious alternative to closed source, so it should take the same effort
to produce robust applications and libraries.
Especially provided that these programmers can at best assume that the
authors of the libraries they are building on top of have also done their
work of error checking every malloc and properly handling it and/or
chaining it up to the caller.

Yes, you do need the libraries you are building on to pass up the
errors, hence the comments about glib.
This is, in fact, where your whole argument falls apart. As a developer
of GNOME desktop applications (hell, scratch that - of X11 based
applications), you already KNOW that I cannot rely on glib or gtk+ (or
Xlib) to gracefully handle all memory allocation errors... so I have no
choice but to resort to my auto-save approach.

Or a form of logging as the user goes that you can use to recover (I've
used non-gui applications that do this). Of course, you have to make
sure your auto-save and/or logging handle resources very carefully so
that they do not lose the last good state if they run out of memory.
It's too easy to forget that large applications are generally built on
top of code that other people wrote that you may or may not be able to
read the source code for to verify that they properly handle all errors.

It's very easy to remember I find since I *am* building my SW on top of
3rd party libraries.
You could even make this argument for daemon authors - how many of you
have actually read through all of the source code for the libc you build
your applications on top of? None? Remember that and you might want to
rethink not using the auto-save approach (in addition to error checking).

These days no one has time to check all the code they rely on (and often
the source code is not available for everything). So yes, you rely to a
degree on others doing the job right. As part of that you point out when
it is done wrong!
I've hit it as well, and yes, I'm also annoyed when it happens - but it
is a problem that cannot be easily remedied by the application developers
if the software stack somewhere underneath the code they've written is
faulty (and I have personally run into Linux kernel and GNU libc bugs
that have caused problems in applications I've written).

You can't deal with *everything* but we were talking about dealing with
something where the libc *does* report a failure.
Since you cannot rely on error checking to catch all errors, it's best to
have a fallback plan - which is auto-save. It's a lot less likely to fail
and a lot more tolerant of buggy code (either in your application or
below your stack).

I've no problem with autosave being part of the recovery strategy. As
you say, it can help when there is nothing that can be done because the
kernel has crashed.
In the real world, developers do not have the luxury (or desire, most of
the time) to write applications from the ground up. They build on top of
software that already exists.

They do have the luxury of choosing which libraries to build on and of
reporting things which are a problem. You also have the luxury of not
using malloc wrappers that don't allow you to do suitable recovery.
 

ymuntyan

Jeffrey Stedfast wrote, On 03/02/08 14:51:





Last time I checked (which was a long time ago, I know) a single dialogue
box did not require that much, especially as we are talking about a
calculator which is already a GUI one.

How much did it require?
So you have to look to see what workarounds there are for this. Once
you've done that one, you can reuse the solution so even if it takes a
week, amortise that over all of the applications you will write!

You mean the week that it takes to write code which presents a dialog?
If the dialog is the only thing needed here, then it'd be worth it. In
other words, you are not kidding, are you?
On at least some there are ways to get it rendered immediately. I know
I've done that in the past. Once you have solved it for one application
reuse the solution for others.

Yes, you can draw a dialog immediately. But
1) it needs memory (you have no idea how much memory, because
you need more than is needed for the sole dialog object and
its children);
2) on windows (mac too, I believe), you can draw your window
once and it will stay there frozen, user at least will be able
to read what it says (though not click a button, which requires
more memory). On X, you've got to redraw your windows all the time,
otherwise user simply won't see what the dialog says. And that
happens later.
So you are saying all widget toolkits are badly designed. This is
possible. However, it has enough memory to do it. If you try and fail
you are no worse off, if you try and succeed you are better off.

What do you mean you are no worse off? You spent lots of resources
on something which doesn't work, and it's no worse than before?
How about trying to do what's actually feasible, and not trying
to fix all the libraries under and above yours?
See, there *are* ways to deal with it!


No, it's not too late, since as you save documents you can free up the
memory they used giving you the memory to pop up dialogues! Or, as
previously mentioned, have stuff pre-allocated.

If you saved documents, you might as well kill the application
right there. Or someone needs a dialog saying "I saved your
documents, they are safe. Now you can try playing with me again.
Perhaps I'll work"?
In that instance you would use a different solution, probably saving the
email in a draft folder or something like that. People have already said
there is not a one-size-fits-all solution!

Did they? I thought there is one solution:

if (!(ptr = malloc(size)))
{
// handle it, easy
}

How easy it is depends on a lot more than the line count. It also
depends on the design.

And peace on the Earth depends on good will of people on the Earth.
And butter is usually made of butter.
If 25-33% of the code is handling out-of-resource conditions then you
probably have a very badly designed application. Even Malcolm does not
expect it to be that high for the application as a whole!





You probably need mechanisms to signal the spell checker and print
process anyway to cope with the user choosing to abort them.

Yeah, a global StuffCanceller. Every application has that.
Spell checker registers itself in StuffCanceller when it
checks spelling in entries in your application.
Maybe not, but it still has mechanisms for the different parts of the
application to communicate.

You should try to write a gui application once. You'll love
how they work: one part has no idea about other part (for good!)
If different unrelated parts knew about each other, or if
some central entity knew about every guy who can allocate
memory in the application, you'd have a hard time even getting such an
application working.
When you get an OOM condition, all you can do is start killing
everybody around (and even that can trigger allocation in some
callback from someone who wants to save its state when a document
is closed or whatever).

Frankly, if everything was designed from the ground up again (and
that includes X, not "just" some funny libraries with funny names
starting with 'g'), it would be possible to handle OOM nicely.
It would require big amounts of resources [1] to design and write,
and then big amounts of resources to fix it and get it working
(the OOM handling part, that is). But, as it is, it's from the
ideal world software department.
The situation would be totally different if OOM actually
caused problems for users of course.

Yevgen

[1] If someone thinks it's easy, please try to design an OOM
handling mechanism for a gui application. Before that, write
a gui application (a calculator which can copy to clipboard
and has a menu will do fine, calculations part is not necessary),
and understand how it works. Really, "in a totally different
situation I did this" is sort of crap. You either know what
you are talking about or you don't. Does anyone need a lesson
on how to write a webserver? I've never written one
but I've got some nice ideas!
 

Jeffrey Stedfast

So you have to look to see what workarounds there are for this. Once
you've done that one, you can reuse the solution so even if it takes a
week, amortise that over all of the applications you will write!

but later you say there are no one-size-fits-all solutions? :)
On at least some there are ways to get it rendered immediately. I know
I've done that in the past. Once you have solved it for one application
reuse the solution for others.

If the toolkit being used is not one of those, then it is irrelevant that
some provide a means to do so, particularly if the "some" are not
available for the platform being targeted.
So you are saying all widget toolkits are badly designed. This is
possible.

I never said "badly designed", though I would agree "sub optimal in an
ideal world". There's a difference (to me, at least).
However, it has enough memory to do it.

How can you assert this?
If you try and fail
you are no worse off, if you try and succeed you are better off.

I'll agree with that, and wherever I use malloc() directly (or
g_try_malloc()), I do write error handling - which may or may not include
attempting to pop up an error dialog, depending on the situation.
See, there *are* ways to deal with it!

Right, but as you mentioned was a problem for xmalloc(), we have the same
problem here. Not enough context for most real-world applications to
recover at this point.
No, it's not too late, since as you save documents you can free up the
memory they used giving you the memory to pop up dialogues! Or, as
previously mentioned, have stuff pre-allocated.

Easier said than done, not that it /can't/ be done - but one could easily
argue that this is more effort than it is worth, and unless you are able
to test your failure cases thoroughly, not even reliable.

It is /more/ reliable to routinely auto-save the user's work (as you
mentioned elsewhere, to a file other than the original) because it is
much easier to warn users about problems (potential or no) and certainly
easier to implement recovery should the application crash due to
uncontrollable (kernel crash, power outage, etc) error conditions on the
next application startup.

Depending on the document, one could write the application such that any
button click (or whatever) would cause an auto-save in addition to some
timeout, thus reducing the likelihood of there being any unsaved changes
at any given point in time.
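
A minimal sketch of that periodic auto-save, assuming a GLib main loop
(save_snapshot() is hypothetical and should write to a side file, never
over the original):

#include <glib.h>

extern void save_snapshot (void);   /* hypothetical: dump unsaved work to a side file */

static gboolean
autosave_cb (gpointer user_data)
{
    (void) user_data;
    save_snapshot ();
    return TRUE;                     /* keep the timeout installed */
}

static void
install_autosave (void)
{
    /* every 60 seconds; button handlers can also call save_snapshot() directly */
    g_timeout_add_seconds (60, autosave_cb, NULL);
}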

Since you obviously need this auto-save functionality in place if you are
serious about protecting the user's data at all costs anyway, then it
becomes no longer necessary to chain malloc() failures up your call stack
in order to use static emergency buffers.

At this point g_malloc() calling abort() becomes a moot point,
particularly if your auto-save code is robust against memory allocation
errors (keeping a small subsection of code bug free and robust against
all possible error conditions is a lot easier and less costly in
developer time than it is to do that for an application several million
lines of code long).
In that instance you would use a different solution, probably saving the
email in a draft folder or something like that. People have already said
there is not a one-size-fits-all solution!

Hey, guess what? Evolution did this using an auto-save approach and it
used g_malloc() in much of the application code.

Different approaches, same end result. Oh, sure, maybe in your ideal
case, the application exits from main() with a 'return 0;' as opposed to
an exit() call (or abort()), but that is irrelevant.

[snip]
You probably need mechanisms to signal the spell checker and print
process anyway to cope with the user choosing to abort them.

This is true, however you still need context information in order to do
so. I never said that the application wouldn't have the ability to cancel
the spell checker or printing, but in order to do so you need context. If
you are in a function being called asynchronously from somewhere that
might not even be your code and which may not pass up your particular error
condition, then you are pretty much screwed unless your contexts are all
globally accessible.

While this may suggest the application (or the libs it depends on) is
poorly designed (or at least not suitably designed), the argument does
little to solve the actual problem at hand.

In the real world of end-user software development (e.g. not software
written for space ships or other areas where human lives are on the line)
where the application's design is based on incomplete specifications (as
in they tend to change mid-development) in combination with insufficient
allotted time, designing the perfect solution is downright impossible,
and so it is, unfortunately, not all too uncommon for the application's
design to be insufficient for every possible error condition.

If this is new to you, then you've never written real-world software and
I would appreciate having your pity... because I, too, would love to
live in Ideal World where I have sufficient time and specifications to
use in order to come up with a proper design before I'm forced to begin
implementation :)

[snip]
That it depends on the daemon shows that it is possible; otherwise it
would simply be the case that none do.


Normally they report something the system administrator is able to
understand. At least, most that I use do.

Key word: most :)
Yes, which is why things like xmalloc are a problem, because they do not
have that context.

Agreed in so much as they are not an ideal solution to the failing malloc
problem :)

They are, however, /a/ solution to the problem and might, in some
situations, be more than ample.
Depends. I've had a report come back to me (via at least a couple of
layers of intermediaries) that had exactly the information that the
"dialogue" provided (it was not a GUI application).

As have I, in my gui applications even.
Again, see above ;-)


Worth the effort though.

In an ideal world, perhaps. If you've already got an auto-save feature
then it is not necessarily worth the extra effort.

I would agree that it /is/ worth the effort in the case where the failing
malloc() call is in the auto-save code, however :)
You have to take extra care with any out-of-resource error to ensure you
can report it without the resource in question.


Lotus Notes has given me an out-of-memory dialogue. I'll leave you to
draw your own conclusions from this.

I would conclude, that, like some parts of Evolution, if it is unable to
allocate resources for some non-critical data structure(s), that it is
able to report the "out of memory" issue to the user.

I seem to recall you claiming VMWare reported "out of memory" conditions
to the user as well, but as Ben Pfaff noted, VMWare uses xmalloc-like
wrappers as well.
Maybe you cannot trap and deal with all of them if the underlying system
does not let you, but that does not mean you should ignore those you can
deal with!

Never said otherwise!
It may depend on exactly where you hit it. Of course, any time when your
application or library calls malloc it has the opportunity to do it!

Sure, the same goes for any application written on top of glib!

(Not the case if you use g_malloc() of course, but you are hardly forced
to use only g_malloc() just because you link with glib).
Then that is before the user has had a chance to enter any data in the
unnamed document, so they won't be as upset if it pops up a dialogue
saying "Out of memory, cannot create new document".

Not necessarily, but I will agree that this is /likely/ the case.
A single logging file to allow recovery on application restart is
possible. It requires some work on synchronisation, but if designed in
from the start is possible.

I've used this approach for some simpler applications.

Auto-save is actually not that much different to this.
Yes, regular auto-save is another way to protect users' data, as long as
you are not saving over the original.
Agreed.


You may only be talking about free software written by volunteers,

I am talking about software written by anyone, but especially volunteers.
I am
talking about all software whether written by volunteers or not. The
open source community (some of it at least) wants to be taken as a
serious alternative to closed source, so it should take the same effort
to produce robust applications and libraries.

What's amusing to me is that none of these developers are writing GUI apps or
libs afaict ;-)

It's not hard to find command-line programs and/or general purpose libs
that /are/ robust, like Ben Pfaff's AVL tree library for example, but
none of the ones I know of for writing GUI applications are of this
quality.

If I wanted to write an application that would meet your ideal criteria,
I'd have to write my application from the ground up, including the widget
toolkit. This is not only impractical from the development standpoint,
but also from the user's perspective where the application does not look
like any of his other applications. It would also not be able to share
much with the other applications running on the user's desktop and so
would use a lot more resources than a Good Enough solution.
Yes, you do need the libraries you are building on to pass up the
errors, hence the comments about glib.

Anyone using glib stand-alone should probably reconsider, especially if
they are writing "mission critical" applications.

Most people, however, use glib via Gtk+ - and given that it is the /only/
practical widget toolkit available to C software developers for Unix, you
can't easily write off glib altogether.

I honestly would not be surprised if the other major contender in the
widget toolkit space (being Qt) had similar problems wrt memory
allocation failure conditions, but even if it did, you wouldn't be able
to write the application in C afaik (you'd have to switch to c++).
Or a form of logging as the user goes that you can use to recover (I've
used non-gui applications that do this). Of course, you have to make
sure your auto-save and/or logging handle resources very carefully so
that they do not lose the last good state if they run out of memory.

Yes, this is what I've been saying.
It's very easy to remember I find since I *am* building my SW on top of
3rd party libraries.


These days no one has time to check all the code they rely on (and often
the source code is not available for everything). So yes, you rely to a
degree on others doing the job right.

Glad we agree so far.
As part of that you point out when
it is done wrong!

Well, discussing it here isn't going to get the problem solved. If you
truly feel that strongly about it, then you should either fix the problem
(free software, after all) or at the very least submit a bug report! ;-)
You can't deal with *everything* but we were talking about dealing with
something where the libc *does* report a failure.

You /assume/ that all code paths properly handle OOM conditions
internally and propagate them back up the call stack. But libc is still
only implemented by humans last I checked, so there is a possibility of
bugs.

That's a pretty hefty assumption that you CANNOT rely on for mission
critical user data (since that's what your whole argument revolves around
in the g_malloc()-is-evil argument).

Because of this possibility, you MUST implement a safety net - aka auto-
save. Once you have auto-save in place and properly written to handle
every conceivable error condition that /it/ may encounter (OOM being
one), then the value gained by using malloc() over g_malloc() in the
remaining areas of the code rapidly diminishes (if the goal is simply to
make sure the user's data is saved before exiting).

Wouldn't you agree?
I've no problem with autosave being part of the recovery strategy. As
you say, it can help when there is nothing that can be done because the
kernel has crashed.
Right.


They do have the luxury of choosing which libraries to build on and of
reporting things which are a problem. You also have the luxury of not
using malloc wrappers that don't allow you to do suitable recovery.

Not always.


For bonus reading, you might check out Richard Gabriel's paper on Worse
Is Better.

GLib's g_malloc() must be "good enough" because more and more Gtk+
applications keep popping up like wildfire just as C overtook LISP due to
the Worse Is Better rule.

Jeff
 

Flash Gordon

How much did it require?

It was a long time ago.
You mean the week that it takes to write code which presents a dialog?
If the dialog is the only thing needed here, then it'd be worth it. In
other words, you are not kidding, are you?

That was with reference to not being able to allocate enough memory to
even open a dialogue box, so the memory required is that needed to open
a dialogue box. I would not expect it to take a week to solve this
problem, so I plucked that figure out of thin air and pointed out that
as the work could be reused even that much time would not be a waste.
Yes, you can draw a dialog immediately. But
1) it needs memory (you have no idea how much memory, because
you need more than is needed for the sole dialog object and
its children);

No, you DO know how much memory it requires because you know exactly what
the dialogue is. Or at the very least, you can easily find out what the
memory requirements are by reading the documentation.
2) on windows (mac too, I believe), you can draw your window
once and it will stay there frozen, user at least will be able
to read what it says (though not click a button, which requires
more memory).

I can't comment on that because I don't know what allocates the memory
for the click event. However, Windows could be designed sensibly...
On X, you've got to redraw your windows all the time,

No, only when required.
otherwise user simply won't see what the dialog says. And that
happens later.

You can flush the event queue whenever you (the programmer) want.

Oh, and some of the X library functions at least can return that they
have failed due to lack of memory. Failure to handle that properly could
look like a problem with X to the uninitiated...
What do you mean you are no worse off? You spent lots of resources
on something which doesn't work, and it's no worse than before?

At least some of the time the X server will have enough memory for its
event queue so you will manage to get the dialogue box up.
How about trying to do what's actually feasible, and not trying
to fix all the libraries under and above yours?

I'm saying you use the facilities available in the library, which
includes the null returned by malloc on an allocation failure.
If you saved documents, you might as well kill the application
right there. Or someone needs a dialog saying "I saved your
documents, they are safe. Now you can try playing with me again.
Perhaps I'll work"?

It is always better to try and notify the user why the application has
failed.
Did they? I thought there is one solution:

if (!(ptr = malloc(size)))
{
// handle it, easy
}

That is not one solution, only the minimal framework; the "handle it"
needs to be made to fit the situation, which is why people have *not*
been saying what goes on there.

Yeah, a global StuffCanceller. Every application has that.
Spell checker registers itself in StuffCanceller when it
checks spelling in entries in your application.

Well, if MS can provide a facility to cancel background print jobs then
I'm sure other SW developers can as well.
You should try to write a gui application once.

I have.
You'll love
how they work: one part has no idea about other part (for good!)
If different unrelated parts knew about each other, or if
some central entity knew about every guy who can allocate
memory in the application, you'd have a hard time even getting such an
application working.

One I did had two threads and they managed to talk to each other quite
nicely.
When you get an OOM condition, all you can do is start killing
everybody around (and even that can trigger allocation in some
callback from someone who wants to save its state when a document
is closed or whatever).

When you design an application *you* are in control of which bits of it
might want to allocate more memory and you can design it so it does
things sensibly.
Frankly, if everything was designed from the ground up again (and
that includes X, not "just" some funny libraries with funny names
starting with 'g'), it would be possible to handle OOM nicely.
It would require big amounts of resources [1] to design and write,
and then big amounts of resources to fix it and get it working
(the OOM handling part, that is). But, as it is, it's from the
ideal world software department.

Just because the world is not ideal is no excuse to make things worse.
The situation would be totally different if OOM actually
caused problems for users of course.

It has already been pointed out that I am not the only person who runs
out of memory on high spec notebooks.
[1] If someone thinks it's easy, please try to design an OOM
handling mechanism for a gui application. Before that, write
a gui application (a calculator which can copy to clipboard
and has a menu will do fine, calculations part is not necessary),
and understand how it works.

I was just looking at the main X event loop of an application 5 minutes
ago. It is a couple of years since I looked at the code for the GUI
client application I have to maintain, but I am in the process of
helping a colleague start up a project to replace it.
Really, "in a totally different
situation I did this" is sort of crap. You either know what
you are talking about or you don't. Does anyone need a lesson
on how to write a webserver? I've never written one
but I've got some nice ideas!

I wasn't the one to bring daemons into the discussion. However, if I
ever get the time before we do the complete re-write of some server side
stuff I will be writing a minimal web server...
 

ymuntyan

(e-mail address removed) wrote, On 03/02/08 18:51:



It was a long time ago.
OK.



That was with reference to not being able to allocate enough memory to
even open a dialogue box, so the memory required is that needed to open
a dialogue box. I would not expect it to take a week to solve this
problem, so I plucked that figure out of thin air and pointed out that
as the work could be reused even that much time would not be a waste.
OK.



No, you DO know how much memory it requires because you know exactly what
the dialogue is.

Totally wrong. For starters, a dialog has text inside. Try
to figure out how much memory a given string of text will
take (hint: you can't). Bitmaps would do. Then, figure
out how much memory xlib needs for given drawing operations
(this is what I meant by "you need more").
Or at the very least, you can easily find out what the
memory requirements are by reading the documentation.

Good joke.
I can't comment on that because I don't know what allocates the memory
for the click event. However, Windows could be designed sensibly...


No, only when required.


You can flush the event queue whenever you (the programmer) want.

Yep, draw the dialog hidden by other windows. It will be drawn
nicely, except the user won't see it. And when the user brings it up,
you won't draw it because you "flushed the event queue".
You *have* to paint when X says to paint if you want the user
to see it.
Oh, and some of the X library functions at least can return that they
have failed due to lack of memory. Failure to handle that properly could
look like a problem with X to the uninitiated...

While a nice nit, it's just a BS nit. Xlib says "draw_whatever()
failed". Great. Handle it. But user won't see that silly text you
wanted to draw. Or are you talking about something else?
Perhaps about "some morons don't know to handle errors"?
At least some of the time the X server will have enough memory for its
event queue so you will manage to get the dialogue box up.

"Some of the time"? Don't you need it to show the dialog
when you need it to? Most of the time the X server has enough
memory for everything; that's not the interesting case.
I'm saying you use the facilities available in the library, which
includes the null returned by malloc on an allocation failure.
Cryptic.



It is always better to try and notify the user why the application has
failed.

"Better" is from enhancement requests department. Didn't we
talk about saving user data and stuff like that? Of course
you should try to notify user! But it's just a bonus task.
Spawning a small process which will show up a dialog has
much more chances to succeed than trying to show a dialog
in your process on OOM. Send a message over dbus to the
notification daemon (cheaper than a dialog), whatever.
It still has zero to do with how application can or can
not work after malloc() returned NULL.
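
For illustration, that "spawn a tiny helper" idea might look like this
(zenity is just an example; fork() itself can fail under memory pressure,
so this is best-effort only):

#include <sys/types.h>
#include <unistd.h>

static void
notify_oom (void)
{
    static char *const argv[] = {
        "zenity", "--error", "--text=Out of memory; your work was saved.", NULL
    };

    pid_t pid = fork ();
    if (pid == 0) {
        execv ("/usr/bin/zenity", argv);
        _exit (127);            /* exec failed */
    }
    /* the parent carries on (or exits) regardless; no need to wait */
}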
That is not one solution, only the minimal framework; the "handle it"
needs to be made to fit the situation, which is why people have *not*
been saying what goes on there.

It's not a framework. It's *nothing*.
Well, if MS can provide a facility to cancel background print jobs then
I'm sure other SW developers can as well.

Background print jobs, background syntax highlighting,
background recalculating text layout, background spell
checking, background resizing, background everything.
There are hundred different things going on in background.
You need a big fat OOM notifier, some central thing
which could signal OOM condition so that all those
guys could cancel themselves. You do not do

if (!(ptr = malloc(12)))
{
// cancel background print jobs
}
I have.


One I did had two threads and they managed to talk to each other quite
nicely.

Great! A question: what did the event dispatcher
(or whatever) do when it didn't have memory to
process an event (or whatever)? If it didn't allocate
memory, what did the application do when it didn't
have memory to process a key press?
"Threads" doesn't really sound like something exciting
or advanced; boring things like handling a key press
are the real big job (it's not what I do, thanks to
the toolkit; but your application must handle it
itself for sure).
When you design an application *you* are in control of which bits of it
might want to allocate more memory and you can design it so it does
things sensibly.

Pure total BS. You are not designing an application
starting from Xlib. If you are writing a toolkit
for your application, and implementing the X protocol, then
maybe. Even on Windows, free from many X niceties,
you are not designing the controls you use (unless
you are Borland or Microsoft, in which case you are
a hundred times right).
Frankly, if everything was designed from the ground up again (and
that includes X, not "just" some funny libraries with funny names
starting with 'g'), it would be possible to handle OOM nicely.
It would require big amounts of resources [1] to design and write,
and then big amounts of resources to fix it and get it working
(the OOM handling part, that is). But, as it is, it's from the
ideal world software department.

Just because the world is not ideal is no excuse to make things worse.

What's worse again?
It has already been pointed out that I am not the only person who runs
out of memory on high spec notebooks.

Sure. But I am talking about a user of GFrobnicator or
KThing losing his data because of OOM.

Yevgen
 

CBFalconer

Ed said:
Jeffrey Stedfast said:
If my partner and I had had to do NULL checks for every malloc()
call (and handling them in some idealistic way) made in the
portions of Evolution we worked on (never mind fixing glib's
g_malloc() usage), we'd still be trying to reach the functionality
released in 1.0.0 back in 2001.
[SNIP]

This goes for many desktop applications (GNOME or not)... my point
is that users would rather have an application that might crash if
the system runs out of memory than no application at all.

It's nice to see a little reality injected in this thread.

The folks around here that think every single malloc in every
single application should carefully propagate up the entire call
chain are obnoxiously unrealistic.

The point, I think, is that every malloc call should be handled
individually. If they abort, that may or may not be bad
programming. However if the abort call goes with the malloc call,
then revisions can be made intelligently. I.e., instead of
g_malloc(size) or xmalloc(size) calls, we want:

if (!(ptr = malloc(sizeof *ptr))) exit(EXIT_FAILURE);

and, in some cases, we can save the tests on ptr for later.
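
One possible reading of "save the tests on ptr for later" is to group several small allocations and check them together instead of branching after every single call. A minimal, self-contained sketch (the names and sizes are made up for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void)
{
    char *name = malloc (64);
    char *path = malloc (256);
    int  *hits = malloc (100 * sizeof *hits);

    /* One combined test instead of three; free(NULL) is a no-op,
     * so the cleanup below is safe whichever calls succeeded. */
    if (name == NULL || path == NULL || hits == NULL)
    {
        free (name);
        free (path);
        free (hits);
        fprintf (stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    strcpy (name, "example");
    /* ... use the buffers ... */

    free (name);
    free (path);
    free (hits);
    return EXIT_SUCCESS;
}
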
 
J

Jeffrey Stedfast

Ed said:
Jeffrey Stedfast said:
If my partner and I had had to do NULL checks for every malloc() call
(and handling them in some idealistic way) made in the portions of
Evolution we worked on (never mind fixing glib's g_malloc() usage),
we'd still be trying to reach the functionality released in 1.0.0 back
in 2001.
[SNIP]

This goes for many desktop applications (GNOME or not)... my point is
that users would rather have an application that might crash if the
system runs out of memory than no application at all.

It's nice to see a little reality injected in this thread.

The folks around here that think every single malloc in every single
application should carefully propagate up the entire call chain are
obnoxiously unrealistic.

The point, I think, is that every malloc call should be handled
individually. If they abort, that may or may not be bad programming.
However if the abort call goes with the malloc call, then revisions can
be made intelligently. I.e., instead of g_malloc(size) or xmalloc(size)
calls, we want:

if (!(ptr = malloc(sizeof *ptr))) exit(EXIT_FAILURE);

and, in some cases, we can save the tests on ptr for later.

If you are aware of the g_malloc() limitation when designing your
application, you can prepare for such a condition in a number of ways
including, but not limited to:

1. auto-saving state (or user documents) periodically (or even whenever
any change occurs)

2. plugging in your own malloc() implementation for GLib to use (it uses
a modifiable vtable for memory allocation functions) so that you can be
warned of upcoming failures before g_malloc() has a chance to abort()
(see the sketch after this list)

3. set up a Unix SIGABRT handler to handle it (which may or may not
include saving state and calling exit() on your own).
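
A possible shape for option 2 is sketched below. g_mem_set_vtable() and GMemVTable are real GLib facilities of that era (the vtable must be installed before any other GLib call), but the save_state_somewhere() hook, the guarded_* names, and the retry-once policy are purely illustrative assumptions:

#include <glib.h>
#include <stdlib.h>

/* Assumed application hook: a real program would flush user documents
 * to disk here; this is only a placeholder. */
static void save_state_somewhere (void)
{
}

/* Replacement allocators: get one chance to auto-save and shed memory
 * before g_malloc() would abort(). */
static gpointer guarded_malloc (gsize n_bytes)
{
    gpointer mem = malloc (n_bytes);

    if (mem == NULL)
    {
        save_state_somewhere ();
        mem = malloc (n_bytes); /* one retry; NULL here means g_malloc() aborts */
    }
    return mem;
}

static gpointer guarded_realloc (gpointer mem, gsize n_bytes)
{
    gpointer p = realloc (mem, n_bytes);

    if (p == NULL && n_bytes > 0)
    {
        save_state_somewhere ();
        p = realloc (mem, n_bytes);
    }
    return p;
}

int main (void)
{
    /* malloc, realloc, free; the calloc/try_malloc/try_realloc slots
     * are optional and left NULL. */
    GMemVTable vtable = { guarded_malloc, guarded_realloc, free,
                          NULL, NULL, NULL };

    g_mem_set_vtable (&vtable); /* must happen before any other GLib call */

    /* ... the rest of the application ... */
    return 0;
}
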


The people (person) screaming that the sky is falling is just trying to
push an anti-GNOME agenda; it's really not anything more than that.

I.e. he is a troll, and you have fed him dinner and offered him a place
to stay for the night.

That fact should have been obvious to you immediately upon reading his
message which was full of feigned shock and in his followup message
saying that he has never liked GNOME but couldn't place why he had such
negative feelings for it. Riiiiight.

Jeff
 
K

Kelsey Bjarnason

[snips]
The people (person) screaming that the sky is falling is just trying to
push an anti-GNOME agenda; it's really not anything more than that.

I.e. he is a troll, and you have fed him dinner and offered him a place
to stay for the night.

That fact should have been obvious to you immediately upon reading his
message which was full of feigned shock and in his followup message
saying that he has never liked GNOME but couldn't place why he had such
negative feelings for it. Riiiiight.

That would have been me, you miserable, insufferable, dishonest little
prick.

Let me explain this to you in such a manner even your defective little
excuse for a brain can grasp it.

My objections to Gnome are *purely* a matter of personal taste. I
*prefer* other options. I have *never* said Gnome was a bad idea or
didn't deserve to be continued. I *have* stated, explicitly, I think it
*should* continue.

How your screwed-up mess of mash that you use in place of a mind can
confuse this with an "anti-Gnome agenda" is not clear, but the fact you
can shows, quite clearly, you are incapable of any actual reasoning
beyond the level of "hungry, eat".

This conclusion is further bolstered by the fact you lack sufficient wit
to tell the difference between a humorous "Now I have an objective reason
to dislike Gnome" and an actual criticism of Gnome as a project.

It is bolstered still further by your inability to tell the difference
between someone who dislikes a memory allocation strategy and someone who
is against a library which uses it. Yes, I think it makes the library at
best unsafe, but that's because of the allocation strategy, not because
Gnome is inherently garbage.

I see someone with your name and email involved with several Gnome-related
projects. If *you* represent the level of intellect which the
Gnome project pulls from, it is doomed.

Good day, Sir, and good bye; I have no time to waste on the likes of you.
 
K

Kelsey Bjarnason

[snips]

Oh rubbish. If (and it's a big IF) something so fundamental as mallocing
32 bytes fails, then the chance of you being able to gracefully retire is
virtually nil.

Ah, I see - you've never heard of multi-user and multi-tasking systems.
Let me explain:

In such a system, more than one program may be executed at the same time,
consuming resources, including memory. If one of the applications
requests a bit of memory _now_ which isn't available, this says nothing
of whether the memory will be available 10 seconds, or even 10
milliseconds, from now.
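
For illustration only, a retry-with-pause wrapper along those lines might look like the sketch below; the malloc_retry name, the retry count, and the delay are arbitrary assumptions, and whether retrying is sensible at all depends entirely on the application:

#include <stdlib.h>
#include <unistd.h>

/* Retry a failing allocation a few times with a short pause, on the
 * theory that another process may release memory in the meantime. */
static void *malloc_retry (size_t size)
{
    for (int attempt = 0; attempt < 5; attempt++)
    {
        void *ptr = malloc (size);

        if (ptr != NULL)
            return ptr;
        sleep (1);      /* give the rest of the system a moment */
    }
    return NULL;        /* still failing; the caller decides what to do */
}
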

Such systems have been around since, oh, the 1960s at least; I'm
surprised you've never heard of them.

Even on a single-tasking system, the question of whether an allocation
failure is reason to abort an application is obviously very much in
doubt, as more than one person here has commented that this is not a good
way to design software.

That *you* cannot conceive of cases where allocation failure may not be
fatal isn't relevant: others can.
 
K

Kelsey Bjarnason

Unbelievable.

What, you think Mr. Heathfield - who, I'm *quite* sure knows just how
much respect I have for him, which is to say considerable, cannot take a
tongue-in-cheek joke?

You really think he is *that* thin-skinned? I wouldn't think so, but
hey, whatever floats your boat.
 
K

Kelsey Bjarnason

[snips]

You seem to miss the point. I can. But they rarely happen

Really? I'm sure you'll back this up with data from a wide-scale formal
research project analyzing that very question, right? Right?

Somehow I suspect you're blowing smoke out your backside, but I'm willing
to be wrong there - just show me the study.

That said, common, rare or occasional really doesn't make arse all
difference. The fact is allocations _can_ fail. A good - read
"competent" - programmer realizes this and deals with it. Deals with it,
ideally, in a manner which is something more than simply saying "I'm too
freakin' lazy to write error handling code, so abort."
and the fact
that YOU fail to see the fact that you made an arse of yourself by not
reading the documentation properly is not my issue.

Sorry, I did read the documentation. You know, *both* parts about
allocation, which cause it to involve a contradiction, as well as other
parts which clearly rest on the use of _one_ of the conflicting methods.

Sorry, which part didn't I read correctly?
I still have some sympathy for NOT checking all mallocs in certain
situations. It matters not about other processes when one's own cannot
malloc 32 bytes.

I have no idea where you get this 32 bytes crap from. Xmalloc and glib
fail just as handily on 32KB or 32MB as on 32 bytes. Maybe you've never
needed to allocate anything more than 32 bytes at a time; others have.
What have YOU contributed to OSS that gives you the right to rubbish
such people as the Gnome development team?

What has being an OSS developer got to do with anything? Is there some
magic which prevents anyone but OSS coders writing decent code, or being
able to recognize design flaws?
One of life's sureties is that there's generally a reason for most
things.

I'm sure there are. Yet thus far, the only actual "reason" offered for
this design decision has been sheer laziness - that it's just too hard,
too messy, too ugly, to write error handling code. Yeah, well, nobody
said programming was easy.
It is easy to criticise, not so easy to build a project, in your free
time, like Gnome and Qt etc.

I don't recall anyone saying it was easy. I do, however, see you
apparently trying to justify a _bad_ decision based, again, on nothing
more than laziness.

Maybe the Gnome developers *did* have some legitimate reason for doing
what they did. Let's stipulate that for the moment. Why, then, can
those defending the design - you, for example - come up with no defence
for it other than laziness?

Pretty sad, if that's the best reason for it.
 
M

Malcolm McLean

Kelsey Bjarnason said:
That would have been me, you miserable, insufferable, dishonest little
prick.

Let me explain this to you in such a manner even your defective little
excuse for a brain can grasp it.

My objections to Gnome are *purely* a matter of personal taste. I
*prefer* other options. I have *never* said Gnome was a bad idea or
didn't deserve to be continued. I *have* stated, explicitly, I think it
*should* continue.
Temper, temper.

Wouldn't it be easier just to say "now I see that my policy of passing the
error return from malloc() might have a few problems; maybe I was wrong
about this all along"?

Rather than try to pretend that glib is a bad or unusable library?
 
R

Richard

Flash Gordon said:
As others have pointed out, that is a good way to stop people using
your apps.

Oh rubbish. If (and it's a big IF) something so fundamental as mallocing
32 bytes fails, then the chance of you being able to gracefully retire is
virtually nil.

The trick is to do all your allocing where you can before doing anything
critical.

You seem to be positioning yourself as some sort of failsafe guru. It's
simply not practical or even necessary in the mass of mainstream
applications.

If the OS isn't giving you 32 bytes then there are much bigger fish to fry.
 
R

Richard

Flash Gordon said:
Malcolm McLean wrote, On 29/01/08 10:06:

It did not cause me significant extra work. The most likely reason for
the difference is that I designed the entire system knowing that
out-of-resource errors *do* occur so it was part of the entire design
concept rather than an extra I had to try and fit in.

I wonder if the cost v return ratio was worth it? I doubt it.
 
R

Richard

Flash Gordon said:
Well, I would not have used your BabyX library anyway, but now I have
even more reason to avoid it.

Chuckle. You really are in love with yourself, aren't you?

Maybe you should go and rewrite Qt etc. too, eh? Knowing what you know,
etc. Just make sure it's as efficient, if not more so, than it was before.
 
