A solution for the allocation failures problem


Ian Collins

Ed said:
I'm open to alternatives. :)

OK, I've got a tough skin...

For hosted applications that use a lot of dynamic memory I use TOL...

For embedded applications (my main use of C) I either

A) ban dynamic memory use

or

B) abort on failure as these systems are sized for the application and
any malloc failure is inevitably due to a leak.

My preference is for A. I've implemented the allocator on every
embedded product I've worked on over the past 20 years, which has enabled
me to track usage and leaks. But I still prefer A!
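(For illustration only: a minimal sketch of such a tracking wrapper, with
made-up names, might look like this.)

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical tracking allocator: counts live blocks so that
   usage and leaks can be reported at shutdown. */
static size_t live_blocks = 0;

void *track_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        live_blocks++;
    return p;
}

void track_free(void *p)
{
    if (p != NULL)
        live_blocks--;
    free(p);
}

void track_report(void)
{
    fprintf(stderr, "live blocks at exit: %lu\n",
            (unsigned long)live_blocks);
}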
 

Ed Jensen

Kenneth Brody said:
In situations like this, I typically do something like:

int retval = SUCCESS;
char *p0, *p1, *p2;

p0 = malloc(100);
p1 = malloc(200);
p2 = malloc(300);
if (p0 == NULL || p1 == NULL || p2 == NULL)
{
    retval = FAILURE;
    goto ret;
}

/* do stuff */

ret:
free(p0);
free(p1);
free(p2);
return retval;

Of course, this is a simple case where nothing happens between the
mallocs, and the failure handling doesn't depend on which one failed. But
that's not entirely rare.

I like it. That's a nice shortcut. So is this:

char p0[100];
char p1[200];
char p2[300];

/* Etc. */

OK, I'm kidding, but only half kidding. What got me thinking about
this was the minor skirmishes re: ggets(), fgetline(), etc. in some
other threads.

Sometimes what makes the most sense is just picking a reasonable
buffer size in the first place.
Yes, I do use "goto" here, even though it could be avoided by placing
"do stuff" within an "else" in this case. Sometimes there are other
possible failures within "do stuff" which wouldn't be handled so
cleanly without the "goto".

In my opinion, there's nothing wrong with using goto appropriately,
and I think you used it appropriately.

For simple cases, sometimes I use this pattern:

do
{
    if ((p0 = malloc(...)) == NULL)
    {
        retval = ENOMEM;
        break;
    }

    /* Etc */
}
while (0);

/* Clean-up code goes here */

It avoids the goto, though I'm not convinced it makes the code any
more clear, especially when the contents of the do-while loop may go
beyond the height of your editor window, whereas "goto ExitFunction;"
or something similar is very clear.
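(A complete, purely illustrative version of the pattern, with the clean-up
filled in and arbitrary buffer sizes, might be:)

#include <errno.h>
#include <stdlib.h>

int do_stuff(void)
{
    int retval = 0;
    char *p0 = NULL, *p1 = NULL;

    do
    {
        if ((p0 = malloc(100)) == NULL)
        {
            retval = ENOMEM;
            break;
        }
        if ((p1 = malloc(200)) == NULL)
        {
            retval = ENOMEM;
            break;
        }

        /* do stuff with p0 and p1 */
    }
    while (0);

    /* Clean-up code goes here; free(NULL) is a safe no-op. */
    free(p1);
    free(p0);
    return retval;
}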
 

Ed Jensen

Willem said:
Ed wrote:
) Theoretically, the free market will decide which development practices
) are most cost effective.

Hiring the cheapest possible developers and pushing out code that
is 'just about good enough' would seem most cost effective, then.

Actually, I think the market is finding out the exact opposite, slowly
but surely. Slowly, because executives don't want to admit when
they've made a bad decision, so they try to make their decision to
offshore, or to outsource offshore, seem like it's truly saving money.
 

Ed Jensen

Ian Collins said:
OK, I've got a tough skin...

For hosted applications that use a lot of dynamic memory I use TOL...

What's TOL?
For embedded applications (my main use of C) I either

A) ban dynamic memory use

or

B) abort on failure as these systems are sized for the application and
any malloc failure is inevitably due to a leak.

My preference is for A. I've implemented the allocator on every
embedded product I've worked on over the past 20 years, which has enabled
me to track usage and leaks. But I still prefer A!

Sounds perfectly reasonable.
 

Malcolm McLean

Morris Dovey said:
Ok - I think I understand, but don't recall seeing anything in
the Standard that /requires/ this reservation. Personally, I'd
hope that free()d memory would be re-absorbed by the OS (if
present) into global pool(s) so that it could be made available
to other processes/threads - and it seems to me inappropriate for
a programming language standard to dictate host memory resource
management in such a way. I guess my mileage varies on this
issue.
That's a grey area.
There would be little or no point in calling free() at all if memory was
still hogged by the malloc() system. The question is whether the program is
allowed or required to hog it.
In practice we cannot forbid programs from hogging resources until exit,
because of the limitations of some operating systems. However, it seems more
in keeping with the ethos of multi-tasking to yield memory you no longer
require.
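(For example, glibc provides the non-standard malloc_trim() for exactly
this; nothing in standard C requires or even offers such a call.)

#include <malloc.h>   /* glibc-specific header, not standard C */

void give_back(void)
{
    /* After freeing a large working set, ask the allocator to
       return free pages to the operating system. */
    malloc_trim(0);
}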
 

Richard Tobin

Randy Howard said:
Just give them time. It won't be long before someone argues that
checking the return value of fopen() is impossible to do correctly, so don't
even try.

But in fact fopen() failure is far more likely to be due to user error
(notably, typing in the wrong file name), and thus more likely to be
recoverable.

-- Richard
 

Malcolm McLean

Richard Heathfield said:
Ed Jensen said:

In most applications, information comes in via streams and goes out via
streams, a lot (otherwise where are you getting your data and where are
you putting the results?). But nobody is bothering to propose a "solution
for the read/write failures problem" (and rightly so, because there isn't
such a problem). The fact that malloc is called a lot is not an excuse for
calling it badly.
Memory is subtly different from other resources.

Let's say there arises a requirement to find the shortest path through a
maze. We are not sure whether this is computable in reasonable time or not,
but we are confident enough to commit to writing a function that finds "a
short path" and executes in reasonable time.
Since we don't have the function, we can define an interface. Then we work
out the shortest path for a few test cases by hand, and we've got a plugin
that works on the test cases, and gives the developers of the rest of the
program something to play with.

Now does our algorithm require dynamic memory or not? Can it return an
out-of-memory fail condition?
What about the callers of the function? Can they return OOM fail conditions
or not?
Maybe we'd better say that the algorithm does use dynamic memory, even if at
the end of the day it does not, just to be on the safe side.
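(As a purely illustrative sketch, with made-up names, the interface might
reserve room for the failure up front:)

struct maze;   /* opaque types supplied elsewhere */
struct path;

/* Hypothetical interface: the status code allows for an
   out-of-memory failure even if the final algorithm turns out
   never to need dynamic memory. */
enum path_status { PATH_OK, PATH_NONE, PATH_NOMEM };

enum path_status find_short_path(const struct maze *m,
                                 struct path *out);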

Memory is different from all other resources.
 

Ian Collins

Richard said:
But in fact fopen() failure is far more likely to be due to user error
(notably, typing in the wrong file name), and thus more likely to be
recoverable.
But it could be due to the system not being able to allocate memory for
buffers. Wait a minute, wasn't there a recent thread about allocation
failures?
 

Malcolm McLean

Ian Collins said:
As I said before, it's very hard and therefore error prone to write
exception safe code with the mechanisms C provides for all of the
reasons it's very hard to use multiple points of return. Square that
when exceptions can escape out of functions.
I'd agree here.
Exceptions simplify the callee at the expense of more work in the caller. In
fact they often make the interfaces too difficult to use.
However if you have a chain of out-of-memory returns you are effectively
hand-coding the C++ exception mechanism, so you lose the syntactic
convenience as well.
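(Illustrative sketch: every level of the call chain repeats the same
check-and-propagate boilerplate.)

#include <stdlib.h>

int level3(void)
{
    char *p = malloc(100);
    if (p == NULL)
        return -1;       /* "throw" */
    /* ... work ... */
    free(p);
    return 0;
}

int level2(void)
{
    if (level3() != 0)
        return -1;       /* re-"throw" */
    return 0;
}

int level1(void)
{
    if (level2() != 0)
        return -1;       /* and so on, all the way up */
    return 0;
}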
 

Richard Tobin

But in fact fopen() failure is far more likely to be due to user error
(notably, typing in the wrong file name), and thus more likely to be
recoverable.
But it could be due to the system not being able to allocate memory for
buffers. Wait a minute, wasn't there a recent thread about allocation
failures?

Perhaps I should have been more explicit.

There is a greater return on programming effort in trying to recover
from fopen() failures than malloc() failures.

The fact that fopen() could fail because malloc() fails does not
change that.
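(For instance, this illustrative sketch, with a made-up function name,
recovers from an fopen() failure simply by asking the user again; there
is rarely an equivalent move for malloc().)

#include <stdio.h>
#include <string.h>

FILE *open_with_retry(void)
{
    char name[FILENAME_MAX];
    FILE *fp = NULL;

    while (fp == NULL)
    {
        fputs("file name? ", stdout);
        fflush(stdout);
        if (fgets(name, sizeof name, stdin) == NULL)
            return NULL;                  /* EOF: give up */
        name[strcspn(name, "\n")] = '\0'; /* strip the newline */
        fp = fopen(name, "r");
        if (fp == NULL)
            perror(name);                 /* report and re-prompt */
    }
    return fp;
}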

-- Richard
 

Malcolm McLean

Ed Jensen said:
Theoretically, the free market will decide which development practices
are most cost effective. Just look at successful software companies
and determine how much peer review they do. (Of course, some software
companies may cheat by illegally leveraging their monopoly, so you'd
have to pick and choose which companies to review very carefully.)
This is always the irony.
Programming gurus recommending techniques, typing into text editors running
under operating systems not designed according to the methods they are
advocating.
 

Ioannis Vranos

Malcolm said:
I'd agree here.
Exceptions simplify the callee at the expense of more work in the
caller. In fact they often make the interfaces too difficult to use.
However if you have a chain of out-of-memory returns you are effectively
hand-coding the C++ exception mechanism, so you lose the syntactic
convenience as well.

That's why compile-time exception checking is needed in C++ too, but
this is completely off topic here.
 

ymuntyan

(e-mail address removed) said:

Oh, I see - fair enough. Yes, the test is against 0, and doesn't handle
malloc failure separately. It ought to. That's a weakness in the program
(not the library), which I ought to remedy. In fact, I was just looking
over the code again, and spotted it's not the only way in which emgen
needs fixing. When I get some time, I'll do something about that.


I don't see why you think a reasonable complaint about my shoddy code would
land you in my killfile. As for "the incompetence of others", incompetence
can be rooted in either (or both) of two problems - ignorance (not having
learned yet) or stupidity (refusing to put into practice the things one
ought to have learned). Ignorance is correctable (or corrigible, perhaps)
and forgivable. Stupidity is harder to correct and less forgivable.
Unfortunately for me, the emgen code shows definite signs of stupidity on
my part - so yes, I ought to go fix that code.

But when I do so, I will fix it to deal with allocation failures properly -
"crash and burn" should be a last resort, not a first response.

So you proved that your talking about checking malloc() failure,
"easy" if(pm != NULL) and stuff is just that: talking. Perhaps
you got wiser and better after you wrote that code.
Or perhaps it was written by RH-human, who doesn't bother to check
return of fwrite or fclose (something that can actually happen,
unlike malloc() failure right after main() start), and who checks
the malloc() result out of habit, without seriously thinking about
what should be done in case of an actual error; and the talking here
is done by RH-Teh-Good-Programmer. I suspect it's the latter.

So please, just let people who suck at writing bug-free programs
talk about ways to have fewer bugs. You know, let those who do not
pretend they are Good talk about how to write better programs.
Without your wise comments about something easy to type.

Yevgen

P.S. Written using groups.google.com. May I please get back
into your killfile?
 

Richard Heathfield

(e-mail address removed) said:
So you proved that your talking about checking malloc() failure,
"easy" if(pm != NULL) and stuff is just that: talking.

Wrong. Such failures *are* checked. I have accepted that they are checked
in a suboptimal manner and that the handling could be improved, and I will
do so when I get some time to do that. But *some* checking is better than
*none*. It has been claimed that it is *impossible* to check all
allocation requests. The fact that you've found some checking which could
be done better does not mean you've found an absence of checking.
Or perhaps it was written by RH-human, who doesn't bother to check
return of fwrite or fclose (something that can actually happen,

Right, I should check fclose too. Well spotted.

unlike malloc() failure right after main() start),

No, that can happen too. You keep oscillating between right and wrong,
almost as if you were human or something.
May I please get back into your killfile?

What I read is up to me. What you read is up to you. If you don't like what
you read, it's your job to change your killfile, not my job to change
mine.
 

Malcolm McLean

Richard Heathfield said:
(e-mail address removed) said:

No, that can happen too. You keep oscillating between right and wrong,
almost as if you were human or something.
Isn't there some guarantee that a hosted program will have 64K of heap
space? Or is it just a figment of my imagination?
 

Malcolm McLean

Ian Collins said:
But it could be due to the system not being able to allocate memory for
buffers. Wait a minute, wasn't there a recent thread about allocation
failures?
It could be, but it won't be. I frequently have fopen() fail on me. Every
single time it has been due to the path being wrong, never to memory
exhaustion.
 

Richard Heathfield

Malcolm McLean said:
Isn't there some guarantee that a hosted program will have 64K of heap
space? Or is it just a figment of my imagination?

It's a figment, but it has a basis in reality. C90 guarantees that you can
construct at least one object at least 32767 bytes in size. (C99 increases
that to 65535 bytes.) C doesn't define a "heap", though.
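(In code terms, this is all the guarantee amounts to:)

/* C90: at least one object of 32767 bytes must be supported;
   C99 raises the minimum to 65535.  Nothing says where it lives,
   and the standard never mentions a "heap". */
static char big[32767];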
 
