xmalloc string functions

William Ahern

Yevgen Muntyan said:
I'd think evolution crashed because it crashed. You can look at
its list of bugs to see what I mean. If it was abort() inside
g_malloc(), then it probably leaked so much that it indeed has
eaten everything and asked for more. In which case you can't
really blame glib ;)

Actually, it was usually Galeon/Gecko which leaked, or rather whose leaks
would begin to trigger out-of-memory errors. But that's beside the point.
I'd much rather that Evolution fail to open a message than to exit entirely.
(Here I'd be actually grateful if such applications were killed
immediately, because they first freeze X, and I've got to wait
for ten minutes to switch to console and kill the application.
They just don't die themselves.)

If these were the only choices (crashing applications or a frozen screen),
I'd be more agreeable.

There were many bad spots in Evolution, but mostly the code was OK (I've
combed through much of it). Evolution had to deal with bad UTF encodings,
malformed MIME, networks coming up and going down. Handling memory
allocation failures would hardly add much in the way of complexity or
effort.
There is no xmalloc() wrapper in glib. Anyway, have you never
seen "mostly working" applications which do not use glib? Bugs
are everywhere, on all platforms. Do you have a real base for
saying that g_malloc() is somehow responsible for crashes you
have seen? Something other than "evolution crashed", that is,
or "similar applications" (similar glib-based applications
which open messages, huh?).

I know for a fact that Evolution and similar applications have exited
because their malloc wrapper decided to exit. They've also crashed for
numerous other reasons. Bugs are bugs, but defective by design is hardly
excusable.

Exiting on malloc failure makes sense for a utility like sort(1). It doesn't
make sense for desktop applications, unless there's a separate strategy,
like a multi-process configuration where a component exiting is part of the
design of handling errors and making a best-effort recovery.

It makes sense for sort(1) because it categorically cannot perform its task
on a failure. And it's practical because sort is usually just a sub-component
of a larger job. Sort can exit without killing the script or application
which called it.
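For reference, the xmalloc() under discussion is conventionally a thin exit-on-failure wrapper along these lines. This is a sketch of the general convention, not any particular project's implementation:

```c
#include <stdio.h>
#include <stdlib.h>

/* Classic exit-on-failure wrapper: never returns NULL. Real versions
 * (e.g. in GNU utilities) differ in the error message and in how they
 * treat a size of zero. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size ? size : 1);  /* sidestep malloc(0) ambiguity */
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

The whole thread turns on that `exit()` call: fine for sort(1), contentious for a mail client.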
You know, I have heard things like you are saying only from
people who talk about xmalloc() and related things. Never from
users. Why is that? Perhaps because buggy applications are
buggy applications, not some poor creatures crashing because
glib memory handling is broken?

Users are, sadly, inured to this issue. Nor can they discern the reason for
failure, so they aren't capable of judging the cost/benefit of any
particular behavior.

You sort of missed part of my point regarding Evolution. What I was
experiencing was a denial of service caused by processing untrusted data.
And by using glib you have no recourse. Indeed, much of glib's functionality
is simply a rehash of, by now, widely supported library interfaces, but with
the feature of exiting when memory becomes tight.

It's fine, nobody promised glib will work for every program. It certainly
won't; but nevertheless that doesn't make abort-on-failing-malloc a less
sensible strategy for a whole class of applications.

The problem is that such a strategy is usually the wrong one for the class
of applications which glib serves: desktop applications, and increasingly
network daemon services. Both of those usually involve monolithic
applications doing complex tasks for which memory allocation failure is only
one of dozens or hundreds of exceptional conditions. And yet out of all of
them people will argue memory allocation alone can be completely ignored,
simply because it's too burdensome.
Besides, a glib application can set up some sort of emergency memory
pool or something, so that failed malloc doesn't necessarily lead to
immediate abort(). Same sort of science fiction as "graceful exit with
saving data on *any* failed malloc() call in any possible application
in any possible situation" which seems to be so popular here ;)

It's not science fiction. It's just difficult. And sometimes the answer
involves not using C, rather than using C and choosing not to address the
problem.
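The "emergency memory pool" idea mentioned above can be sketched as a reserve block that is surrendered back to the allocator on the first failure, buying headroom to save data and exit cleanly. The names here are illustrative; glib does not ship this mechanism:

```c
#include <stdlib.h>

/* A reserve block set aside at startup. */
static void *emergency_pool = NULL;

static void pool_init(size_t reserve)
{
    emergency_pool = malloc(reserve);
}

/* malloc() with a one-shot fallback: on failure, release the reserve
 * and retry, giving the application a window to save state. */
static void *pool_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && emergency_pool != NULL) {
        free(emergency_pool);       /* hand the reserve back */
        emergency_pool = NULL;
        /* a real application would now save data and schedule exit */
        p = malloc(size);
    }
    return p;
}
```

This is exactly the "maybe save the data you can save" strategy discussed later in the thread, just made concrete.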
 
Eric Sosman

Kelsey said:
[...]
You persist in demonstrating you have no concept at all how to design
software.

Malcolm believes that an assembly line worker with a hangnail
is within his rights to halt the entire factory, no matter what
the rest of the team might think.

Malcolm believes that a private whose rifle jams is justified
in surrendering his whole country, no matter what the generals and
presidents and commissars say.

Malcolm believes that one fouled spark plug should announce its
difficulty by blowing the whole car to smithereens.

Malcolm -- oh, what's the use?
 
Randy Howard

Malcolm said:
Here are six functions implemented on top of xmalloc(). No C programmer
should have any trouble providing the implementations, though replace
and getquote are non-trivial.
[snip]

I think we've got something quite powerful here, purely because none
of these functions can ever return null for out of memory conditions.
It massively simplifies string handling.

Take a look at glib,
http://library.gnome.org/devel/glib/2.14/glib-Memory-Allocation.html

Oh, good God. They didn't. Tell me they didn't.

One wonders how many applications they've screwed over with that bit of
asinine idiocy.

Sure. That's why seeing an app crash on a linux box is "no big
surprise" anymore. It's also one of the reasons I don't run Linux
anymore except when I absolutely have to. It's not the kernel's fault,
but it is a problem with the normal way the platform is deployed.
Malcolm should go work for them. He'd love it. "Errors? We don't do
errors, we just crash the app. Enjoy another piece of quality software
from the Gnome team."

Do they employ anyone at all?
 
Ian Collins

Randy said:
Sure. That's why seeing an app crash on a linux box is "no big
surprise" anymore. It's also one of the reasons I don't run Linux
anymore except when I absolutely have to. It's not the kernel's fault,
but it is a problem with the normal way the platform is deployed.
<OT>Unlike some other platforms, Linux and UNIX platforms in general
offer the user a choice of desktop environments. One popular
alternative is written in TOL, which is better equipped to manage
dynamic memory</OT>
 
Yevgen Muntyan

William said:
Yevgen Muntyan said:
I'd think evolution crashed because it crashed. You can look at
its list of bugs to see what I mean. If it was abort() inside
g_malloc(), then it probably leaked so much that it indeed has
eaten everything and asked for more. In which case you can't
really blame glib ;)

Actually, it was usually Galeon/Gecko which leaked, or rather whose leaks
would begin to trigger out-of-memory errors. But that's beside the point.
I'd much rather that Evolution fail to open a message than to exit entirely.
(Here I'd be actually grateful if such applications were killed
immediately, because they first freeze X, and I've got to wait
for ten minutes to switch to console and kill the application.
They just don't die themselves.)

If these were the only choices (crashing applications or a frozen screen),
I'd be more agreeable.

There were many bad spots in Evolution, but mostly the code was OK (I've
combed through much of it). Evolution had to deal with bad UTF encodings,
malformed MIME, networks coming up and going down. Handling memory
allocation failures would hardly add much in the way of complexity or
effort.
[snip]

The problem is that such a strategy is usually the wrong one for the class
of applications which glib serves: desktop applications, and increasingly
network daemon services. Both of those usually involve monolithic
applications doing complex tasks for which memory allocation failure is only
one of dozens or hundreds of exceptional conditions.

This is very very wrong. A typical GUI application does not do a
switch like

switch (problem_to_handle)
{
....
}

to which you could add

case ALLOC_FAILED:

It's usually different, you got the main loop which
got to spin, you got those controls you got to draw,
and you got those callbacks which actually do the
job. And the callbacks do one thing at a time, they
do not handle dozens of exceptional conditions at once,
they do not handle exceptional conditions at all
in fact.
And yet out of all of
them people will argue memory allocation alone can be completely ignored,
simply because it's too burdensome.

No, because the effort would be gigantic, you would still fail to
do it properly, and at the end it would bring no benefit.

How would you test it? Imagine a toolkit which doesn't abort
on memory allocation failure: why would you have a slightest
reason to believe that given application won't just segfault
on malloc() failure? (The question which applies to all
applications which do try to handle malloc failure of course).
What you can be sure about is that there would be more chances
for an application to screw up and actually corrupt your data
or display wrong data.

All you can sensibly do on malloc() failure is to kill the
application. Maybe save the data you can save or something
(which you can do with glib). What else? Say, if malloc() failed
when the main loop code tries to process an event from Xlib, it
can start to fail silently, it can be killed by an X error because
it failed to take into account some data it got (timestamp?),
it can grab the mouse and lock the X server, it can do many
other nasty things I can't make up right now.

Or, if malloc() failed when you tried to show a dialog telling
the user he got to call his mama, he won't call his mama! (Yes,
if the application crashed, the user will restart it, see the
dialog, and call his mama, users are like that. Those users who
are lucky enough to see an application aborting because memory
allocation failed. Randy Howard, you, and perhaps a few other
people from comp.lang.c ;)

Of course I am talking about "small" allocations here, not
about stuff like allocating memory to load an image file (for
those g_malloc() is simply not used).

Perhaps in ideal world with ideal toolkits things would
be different, don't know about that. But I do know that
while dumb abort() is not the best possible solution,
talking about how it is easy to do things differently is
just a child talk. "Of course I would do it better!" Yeah.

Regards,
Yevgen
 
Yevgen Muntyan

Ian said:
<OT>Unlike some other platforms, Linux and UNIX platforms in general
offer the user a choice of desktop environments. One popular
alternative is written in TOL, which is better equipped to manage
dynamic memory</OT>

You mean those applications will abort in the unexpected exception
handler instead of inside g_malloc()? Sure, that's certainly better.
 
Randy Howard

You mean those applications will abort in the unexpected exception
handler instead of inside g_malloc()? Sure, that's certainly better.

Is there some reason you have been appointed as glib's public defender?

;-)
 
Yevgen Muntyan

Randy said:
Is there some reason you have been appointed as glib's public defender?

I don't like smart arses who know nothing except how to use
word "idiot" and its derivatives. Not that glib needs to be
defended of course, since it's something that actually works
in real world, not like those smarties' smart ideas.

My original intention really was just to point out that
Malcolm's ideas are not something unusual or broken by definition.
But then I read the replies, and replied myself, and I should
stop right here!

Yevgen
 
Ian Collins

Yevgen said:
You mean those applications will abort in the unexpected exception
handler instead of inside g_malloc()? Sure, that's certainly better.

No, I mean the application can choose where in the call chain to catch
memory allocation failures and take appropriate action. It can also use
appropriate techniques not available to a C application to manage the
lifetime of allocated memory, reducing the risk of leaks leading to
premature memory exhaustion.
 
Yevgen Muntyan

Ian said:
No, I mean the application can choose where in the call chain to catch
memory allocation failures and take appropriate action.

Sweet theory. And where is that, main()? Or where do you call the
function/method which starts the main loop? And, more importantly,
which so-much-better other-toolkit applications actually do this?
And, what of the above is impossible in C with glib (since we talk
theory here, not what applications really do)?
It can also use
appropriate techniques not available to a C application to manage the
lifetime of allocated memory, reducing the risk of leaks leading to
premature memory exhaustion.

True. The Other Language does have nice things. But then the
Other Language is better (if better) not because of some glib,
but because of the language features ;)

Yevgen
 
Randy Howard

I don't like smart arses who know nothing except how to use
word "idiot" and its derivatives.

When you refer to someone that holds a different opinion than you do as
a "smart arse who knows nothing", how is that any better than calling
someone an "idiot"? I'm curious how you arrived at this distinction,
as well as how you determined what they don't know from afar.
Not that glib needs to be
defended of course, since it's something that actually works
in real world, not like those smarties' smart ideas.

That people have written and deployed applications with glib doesn't
mean that its design is good, or bad on its own. All it means is
somebody typed 'make' and hit the enter key and out popped a binary
which people use.
My original intention really was just to point out that
Malcolm ideas are not something unusual or broken by definition.

What this amounts to is one of those so-called "religious" disputes
about which programmers love to argue, yet nobody ever gets
"converted". In a word, pointless. Well, there is an outside chance
that somebody that hasn't formed an opinion yet might learn something
from the debate that would help them come to a conclusion. However,
calling each other "idiot", or "smart arses" doesn't do much to help
that process along.
 
Randy Howard

Randy said:
Malcolm McLean wrote:
Here are six functions implemented on top of xmalloc(). No C programmer
should have any trouble providing the implementations, though replace
and getquote are non-trivial.
[snip]
I think we've got something quite powerful here, purely because none
of these functions can ever return null for out of memory conditions. It
massively simplifies string handling.
Take a look at glib,
http://library.gnome.org/devel/glib/2.14/glib-Memory-Allocation.html
glib is where bad ideas go to die. Now, if somebody just had the nerve to
tell them....

You gdon't glike ghaving gall gyour gvariables gprefixed gwith g?

Why, you don't like the following code?

#include <glib.h>

gint main (gint argc, gchar **argv)
{
    gchar *s = g_strdup ("Hello there!");
    g_print ("%s\n", s);
    g_free (s);
    return 0;
}

Not particularly. Should I?
 
Yevgen Muntyan

Randy said:
When you refer to someone that holds a different opinion than you do as
a "smart arse who knows nothing", how is that any better than calling
someone an "idiot"? I'm curious how you arrived at this distinction,
as well as how you determined what they don't know from afar.

You have it quoted. If you mean that one may talk "idiocy"
but I shouldn't refer to him as "smart arse" (since as
a glib user I conclude from his words that I am an idiot
who screws applications over), then I will disagree.

That people have written and deployed applications with glib doesn't
mean that its design is good, or bad on its own. All it means is
somebody typed 'make' and hit the enter key and out popped a binary
which people use.

Yep. A binary which does what it intends to do, "works".
What this amounts to is one of those so-called "religious" disputes
about which programmers love to argue, yet nobody ever gets
"converted". In a word, pointless. Well, there is an outside chance
that somebody that hasn't formed an opinion yet might learn something
from the debate that would help them come to a conclusion. However,
calling each other "idiot", or "smart arses" doesn't do much to help
that process along.


Sorry, you mean that someone could learn something from
this:

"""
That's why seeing an app crash on a linux box is "no big surprise"
anymore. It's also one of the reasons I don't run Linux anymore
except when I absolutely have to
"""

or

"""
One wonders how many applications they've screwed over with that
bit of asinine idiocy.
"""

but he won't be able to because of my "smart arse"? I apologize
for my rude (or whatever you don't like here) language then.
 
William Ahern

Yevgen Muntyan said:
This is very very wrong. A typical GUI application does not do a
switch like
switch (problem_to_handle)
{
...
}
to which you could add
case ALLOC_FAILED:
It's usually different, you got the main loop which
got to spin, you got those controls you got to draw,
and you got those callbacks which actually do the
job. And the callbacks do one thing at a time, they
do not handle dozens of exceptional conditions at once,
they do not handle exceptional conditions at all
in fact.

Is that why applications crash when, using a file dialog box, I attempt to
save a file into a directory I don't have write permission to?

To my mind, there's no difference in effort required to handle a NULL return
from fopen(), than a NULL return from malloc(). Maybe more typing. This is
just a resource acquisition issue, and even if you had infinite memory it's
a pattern you still have to deal with.
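The parallel William draws between the two NULL checks can be sketched directly; `load_line()` here is a hypothetical helper, not from any quoted code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Read the first line of a file into a fresh buffer. Both resource
 * acquisitions -- fopen() and malloc() -- are checked with the same
 * pattern and the same unwind-on-failure discipline. */
static char *load_line(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;            /* file acquisition failed */

    char *buf = malloc(256);
    if (buf == NULL) {          /* memory acquisition: same pattern */
        fclose(fp);
        return NULL;
    }

    if (fgets(buf, 256, fp) == NULL) {
        free(buf);
        fclose(fp);
        return NULL;
    }
    fclose(fp);
    return buf;                 /* caller frees */
}
```

The caller sees one failure mode either way; only the amount of typing differs, which is the point being made.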

As to main loops, I'm very familiar with these. I write event based async-io
network software, using an event dispatcher exactly like a GUI application
might. I create and use more callback interfaces than I probably should.
When I accept a connection, I might--though, try not to--do dozens of
allocations. I try to write my code so any allocation failure is handled
gracefully. I don't need a gigantic switch statement, or special language
constructs. One designs the code to deal with such circumstances as a
matter of course. You minimize dependencies, isolate access to shared data,
postpone committing to a particular state wrt that context until you've
acquired a minimal set of resources, etc. Any non-trivial application
usually has multiple contexts within which such intermediate failures can be
contained, with practical benefit.
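The containment pattern described here — acquire the minimal set of resources before committing to a context, and unwind cleanly if any single allocation fails — is the standard goto-cleanup idiom in C. The struct and field names below are illustrative, not from any real code base:

```c
#include <stdlib.h>

/* Per-connection context: either fully constructed or not at all. */
struct conn {
    char *inbuf;
    char *outbuf;
};

static struct conn *conn_new(size_t bufsize)
{
    struct conn *c = malloc(sizeof *c);
    if (c == NULL)
        goto fail;
    c->inbuf = malloc(bufsize);
    if (c->inbuf == NULL)
        goto fail_conn;
    c->outbuf = malloc(bufsize);
    if (c->outbuf == NULL)
        goto fail_inbuf;
    return c;               /* all resources acquired; now commit */

fail_inbuf:
    free(c->inbuf);
fail_conn:
    free(c);
fail:
    return NULL;            /* drop this one connection, not the daemon */
}

static void conn_free(struct conn *c)
{
    if (c) {
        free(c->inbuf);
        free(c->outbuf);
        free(c);
    }
}
```

A failed `conn_new()` is contained: the caller refuses one connection and keeps serving the rest, which is the "practical benefit" claimed above.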

Granted, I've not done much work with X11 applications, or GUI applications
in general. But, I fail to understand how a caveat wrt X11
justifies--absent other reasons--exiting when a string cannot be allocated.
Of course I am talking about "small" allocations here, not
about stuff like allocating memory to load an image file (for
those g_malloc() is simply not used).

There's absolutely no qualitative difference between small and large
allocations without reference to other circumstances (number of allocations,
etc). If I have 4GB of memory, what does it matter that a 10MB allocation is
checked but not a 12B allocation? When the application approaches the limit
it's not likely that one will be more susceptible to failure than the other.
The choice is then arbitrary and almost absurd. Better, for consistency, to
not bother at all.
 
CBFalconer

Kelsey said:
.... snip ...

Ah, right, so if it can't allocate enough memory to process that
large select you just issued, it makes perfect sense to crash and
die, taking with it any unstored data, rather than report back
that there's insufficient memory. Yeah, well, not like *data*
matters, not in a *database* app.

Are you writing performance specifications for Microsoft?
 
Yevgen Muntyan

William said:
Is that why applications crash when, using a file dialog box, I attempt to
save a file into a directory I don't have write permission to?


No idea, ask the developers of that buggy application.
Failed fopen() is not an exceptional condition. Failed
malloc() is. Or we are using different vocabularies.

To my mind, there's no difference in effort required to handle a NULL return
from fopen(), than a NULL return from malloc(). Maybe more typing.


Then you are just really good. Because it's enormously more
typing. And more than that, it's more design questions too:
"what do I do in this situation, which I can't even possibly
test?" All this apart from real problems you have to solve.
Yes, *real*. No, g_malloc() aborting an application is not
a real problem. Not for a regular desktop application.
This is
just a resource acquisition issue, and even if you had infinite memory it's
a pattern you still have to deal with.


Except you don't open files twenty times in a row in every function
in your application. Memory is quite a different kind of resource.
Different in how you use it, you know.

As to main loops, I'm very familiar with these. I write event based async-io
network software, using an event dispatcher exactly like a GUI application
might. I create and use more callback interfaces than I probably should.
When I accept a connection, I might--though, try not to--do dozens of
allocations. I try to write my code so any allocation failure is handled
gracefully. I don't need a gigantic switch statement, or special language
constructs. One designs the code to deal with such circumstances as a
matter of course. You minimize dependencies, isolate access to shared data,
postpone committing to a particular state wrt that context until you've
acquired a minimal set of resources, etc. Any non-trivial application
usually has multiple contexts within which such intermediate failures can be
contained, with practical benefit.

Granted, I've not done much work with X11 applications, or GUI applications
in general. But, I fail to understand how a caveat wrt X11
justifies--absent other reasons--exiting when a string cannot be allocated.


So you click Save button then click Close. The application failed to
process Save click because it failed to allocate memory for the event
structure to put into the event queue, but then it successfully handled
Close because at the same time yet another document was closed and
some memory returned to the malloc pool. You may not just lose events
like that. *Everything* must be done in order, or the application is
doomed, and the best it can do is to try to exit as nicely as it can
(like save data or whatever). It can't just pretend nothing happened.

There's absolutely no qualitative difference between small and large
allocations without reference to other circumstances (number of allocations,
etc). If I have 4GB of memory, what does it matter that a 10MB allocation is
checked but not a 12B allocation? When the application approaches the limit
it's not likely that one will be more susceptible to failure than the other.
The choice is then arbitrary and almost absurd. Better, for consistency, to
not bother at all.

All allocations are checked. It's what you do when they fail that
differs. If malloc(12) failed, then you are screwed because
all your code wants memory. No memory => application isn't working.
So you just don't try to handle (that is do something and not exit
the application) possible malloc() failure when you are concatenating
two strings to make up a string to display. Absurd, fine, I'll be
delighted to see an application which handles malloc() failure
when it draws a menu label (it *is* possible, it just doesn't
make sense).
 
CBFalconer

Randy said:
.... snip ...

What this amounts to is one of those so-called "religious"
disputes about which programmers love to argue, yet nobody ever
gets "converted". In a word, pointless. Well, there is an
outside chance that somebody that hasn't formed an opinion yet
might learn something from the debate that would help them come
to a conclusion. However, calling each other "idiot", or "smart
arses" doesn't do much to help that process along.

Well, that depends on your definition of 'that process'. If it is
'an efficient method of engendering flamewars', I think it has been
admirably assisted.
 
CBFalconer

Yevgen said:
William Ahern wrote:
.... snip ...


Then you are just really good. Because it's enormously more
typing. And more than that, it's more design questions too:
"what do I do in this situation, which I can't even possibly
test?" All this apart from real problems you have to solve.
Yes, *real*. No, g_malloc() aborting an application is not
a real problem. Not for a regular desktop application.

Oh? Do you detect a major difference in typing between:

ptr = xmalloc(sizeof *ptr);
and
if (!(ptr = malloc(sizeof *ptr))) fixit(sizeof *ptr);

and you can actually select the appropriate fixit function!!
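CBFalconer's per-call-site "fixit" idea can be fleshed out as a recovery hook chosen by the caller. Both `fixit_retry` and `malloc_or_fixit` are illustrative names, a sketch of the idea rather than his actual code:

```c
#include <stdio.h>
#include <stdlib.h>

/* One possible fixit: retry once (after, say, releasing caches or an
 * emergency reserve) and give up loudly if that also fails. */
static void *fixit_retry(size_t size)
{
    void *p = malloc(size);     /* in real code: free reserves first */
    if (p == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed twice\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}

/* Allocate, falling back to a caller-selected recovery function. */
static void *malloc_or_fixit(size_t size, void *(*fixit)(size_t))
{
    void *p = malloc(size);
    return p ? p : fixit(size);
}
```

Different call sites can pass different fixit functions — retry here, save-and-exit there — which is exactly the flexibility the snippet above claims over a one-size-fits-all xmalloc().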
 
Ian Collins

Yevgen said:
Sweet theory. And where is that, main()? Or where do you call the
function/method which starts the main loop? And, more importantly,
which so-much-better other-toolkit applications actually do this?

That I can't answer, it's been too long since I've worked with that code.
And, what of the above is impossible in C with glib (since we talk
theory here, not what applications really do)?
Using exceptions to handle allocation failure at a point where something
sensible can be done. This may be at the point of allocation, or it may
be many calls away. This removes the necessity to check each call for
failure.
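Ian is describing exceptions in the Other Language, but C has a rough (and much cruder) analogue in setjmp/longjmp: the allocation failure is handled many calls away from the allocation point, with no check at each call. This is a sketch of the idea only; unlike exceptions, it runs no destructors, so intermediate resources can leak:

```c
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf oom_handler;

/* Allocator that unwinds to the handler instead of returning NULL. */
static void *emalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL)
        longjmp(oom_handler, 1);
    return p;
}

/* Deep in the call chain: no failure check needed at the call site. */
static void deeply_nested_work(void)
{
    char *buf = emalloc(64);
    snprintf(buf, 64, "working");
    free(buf);
}

/* The one place where "something sensible can be done". */
static int run_job(void)
{
    if (setjmp(oom_handler))
        return -1;              /* an allocation failed somewhere below */
    deeply_nested_work();
    return 0;
}
```

The caller of `run_job()` sees one failure result for the whole job, which is the shape of the scheme Ian describes, minus the automatic cleanup that makes it actually pleasant in the Other Language.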
 
Yevgen Muntyan

CBFalconer said:
Oh? Do you detect a major difference in typing between:

ptr = xmalloc(sizeof *ptr);
and
if (!(ptr = malloc(sizeof *ptr))) fixit(sizeof *ptr);

and you can actually select the appropriate fixit function!!

Do you suggest that this toy example is scalable? No, in this
toy example there isn't much more typing.
 
