A solution for the allocation failures problem


Kelsey Bjarnason

This is a lie!

No kidding. One wonders if they feel the same way about, oh, opening
files, or establishing network connections - is it impossible to check
every call to fopen, every call to connect? If not, why is memory
allocation so magically difficult to check?

Where in the standard is a garbage collector mentioned?

Nowhere, but the whole issue is really nothing more than one of simple
laziness, and whatever issues GC may have, good or bad, it tends to
reduce the effort involved in managing memory. (Until it doesn't, then,
well, have fun.)
Releasing such a buffer may not affect the ability of malloc() to
return memory at all.

In theory, at least, it should - as the definition of free explicitly
says it releases the memory for subsequent allocation, which means
(presumably) that a conforming implementation cannot hand it back to the
OS - it wouldn't be available for allocation if it's being used elsewhere.
Who says that the application is able to print to stdout? Any GUI app on
my OS has neither stdin, stdout nor stderr pointing to anything known
as a (virtual) device.

Granted, but if you're gonna dump diagnostic messages, it's hard to beat
this as a more or less standard approach.
Crappy, because simply returning NULL to the caller of the function
calling malloc() is the only manageable solution for recovering from that
failure, as only the caller, or its caller, will be able to win back
enough memory to continue successfully.

Dealing with the failure "at point of failure" isn't inherently evil.
For example, if the calling code is trying to, oh, send a packet across
the network and the connection has failed, it might be nice if it tried
to re-establish the connection before giving up. Depends on the app, but
it's not a wholly unreasonable concept.

I *think* this is the guiding logic here; try to allocate, if it doesn't
work, try to recover, if you can't, bail.

In terms of dealing with things "at point of failure", where you probably
have the best bet of actually coping with the problem, it's not so bad a
concept.
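That try/recover/bail sequence is easy to sketch in portable C. This is a minimal illustration, not anyone's actual code from the thread; `try_release_caches()` is a hypothetical application hook, not a standard function:

```c
#include <stdlib.h>

/* Hypothetical application hook: drop caches, trim undo
   buffers, etc. Returns nonzero if it freed any memory. */
static int try_release_caches(void)
{
    return 0;   /* nothing to release in this sketch */
}

/* Allocate; on failure try to recover once; on a second
   failure bail out by reporting NULL to the caller. */
void *alloc_with_retry(size_t nbytes)
{
    void *p = malloc(nbytes);
    if (p == NULL && try_release_caches())
        p = malloc(nbytes);
    return p;
}
```

The key point is that the final "bail" is a NULL return, not exit(), so the caller still gets a say.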

Where it becomes bad, for me, is that the failure strategy is to simply
throw in the towel, without giving the caller any chance to do anything
about the situation.

In an example I used elsewhere, if my word processor can't allocate
enough memory to load a fourth document, I have at least two options:
crash and burn, or simply accept I can only work with three documents
right now.

Crashing and burning is not so good if I have three documents open with
edits I'd like to keep; I'd much prefer simply getting a failure
indicator - such as NULL instead of a valid pointer - and coping with it
in the calling code. If nothing else, I can save the existing documents
before bailing.
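A hedged sketch of that word-processor case (the names, like `load_document`, are invented for illustration): the loader reports NULL upward instead of exiting, so the documents already open survive.

```c
#include <stdlib.h>

/* Hypothetical document type for illustration. */
struct document {
    char  *text;
    size_t size;
};

/* Returns NULL on allocation failure instead of aborting,
   leaving the caller free to keep its open documents. */
struct document *load_document(size_t size)
{
    struct document *doc = malloc(sizeof *doc);
    if (doc == NULL)
        return NULL;
    doc->text = malloc(size);
    if (doc->text == NULL) {
        free(doc);            /* undo the partial allocation */
        return NULL;
    }
    doc->size = size;
    return doc;
}
```

The caller can then tell the user "can't open a fourth document right now" and carry on, saving the other three if need be.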

No, it can not, because loss of human life or loss of other goods may be
the consequence of simply calling exit(). The only one who knows how to
handle the inability of malloc() is the caller of malloc(), or its parent.
....

exit() is forbidden because it will leave many devices in an unmaintained
state, like
switches open, signals set, pumps pumping, rockets firing, gates opened,
filters passing, World War III being set off, .....

"Yes, the nuclear plant melted down and spewed radiation across three
counties, all because we couldn't allocate enough memory for another
week's worth of status logs. We could have simply returned NULL and let
the caller free the previous week's logs, freeing up the space for the
new data, but it was easier to just abort, leaving valves opened,
temperatures and pressures unmonitored, that sort of thing. So for your
own safety, would you mind not breathing until, oh, 2150? Thank you." :)

And all that because the programmer was too braindead to write a single
if.

You really have to wonder. Do these people actually just blindly assume
every file open works? Every file read or write works? Every network
connection establishes, every packet transmits?

I suspect not; I suspect they do, in fact, check these things as a matter
of habit. Memory allocation, though, invokes some weird sort of black
magic which cannot, for some reason, be dealt with in the same way.

I don't get it, and it sounds like you don't, either. Maybe one of 'em
will explain it in terms that involve some sort of consistency: if you
check one sort of resource acquisition as a matter of habit or design,
why is another sort of resource acquisition treated so much differently?
 

Kelsey Bjarnason

[snips]

Kelsey Bjarnason said:

The claim that it is not possible to check every malloc result within
complex software reminded me of a project in the mid-1990s where I was
fortunate enough to be in at the start, so I was able to have my say in
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level
4" (this is the highest warning level in Visual Studio 1.5, which we
were using for development before porting up to the mainframe for
testing).

One of the others said, in a wonderfully broad Chicago accent, "Level
FOUR? That's IMPOSSIBLE!"

This came as news to me, since I had routinely been getting clean
compiles at level 4 for some years.


Indeed. While I'll grant that some of MS's headers - particularly MFC
headers - do or did tend not to compile clean, that aside if there's a
warning, I want to know _why_. Maybe - *maybe* - it's okay and there's
nothing you can do about it (it's happened once or twice) but in general,
crank the sucker and code clean.

I've largely given up arguing the point as pertains to allocation
failure, though. I don't know if it's that some people *want* to write
bad code, or simply can't be bothered to write good code, but either way
the result is the same and they seem to have an almost religious
adherence to their view, with about as much basis for holding it as most
religious views seem to have.
 

Herbert Rosenau

1:
It is not possible to check EVERY malloc result within complex software.

This is a lie!
2:
The reasonable solution (use a garbage collector) is not possible for
whatever reasons.

Where in the standard is a garbage collector mentioned?
3:
A solution like the one proposed by Mr McLean (aborting) is not
possible for software quality reasons. The program must decide
if it is possible to just abort() or not.

Solution:

1) At program start, allocate a big buffer that is not used
elsewhere in the program. This big buffer will be freed when
a memory exhaustion situation arises, to give enough memory
to the error reporting routines to close files, or otherwise
do housekeeping chores.
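The proposal never shows the reserve being set up. Under the assumption of a fixed, arbitrarily chosen pool size, step 1 might look like this (all names here are illustrative, not from the proposal):

```c
#include <stdlib.h>

#define EMERGENCY_POOL_SIZE (256u * 1024u)  /* arbitrary size */

static void *BigUnusedBuffer;

/* Call once at program start; returns 0 if even the
   reserve itself cannot be allocated. */
int init_emergency_pool(void)
{
    BigUnusedBuffer = malloc(EMERGENCY_POOL_SIZE);
    return BigUnusedBuffer != NULL;
}
```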

Releasing such a buffer may not affect the ability of malloc() to
return memory at all.
2) xmalloc()

static int (*mallocfailedHandler)(size_t);

void *xmalloc(size_t nbytes)
{
    void *r;
restart:
    r = malloc(nbytes);
    if (r)
        return r;
    /* Memory exhaustion situation. */
    /* Release some memory to the malloc/free system. */
    if (BigUnusedBuffer)
        free(BigUnusedBuffer);
    BigUnusedBuffer = NULL;
    if (mallocfailedHandler == NULL) {
        /* The handler has not been set. This means
           this application does not care about this
           situation. We exit. */
        fprintf(stderr,
                "Allocation failure of %lu bytes\n",
                (unsigned long)nbytes);
        fprintf(stderr, "Program exit\n");

Who says that the application is able to print to stdout? Any GUI app
on my OS has neither stdin, stdout nor stderr pointing to anything
known as a (virtual) device.

        exit(EXIT_FAILURE);
    }
    /* The malloc handler has been set. Call it. */
    if (mallocfailedHandler(nbytes)) {
        goto restart;
    }
    /* The handler failed to solve the problem. */
    /* Exit without any messages. */
    exit(EXIT_FAILURE);
}

Crappy, because simply returning NULL to the caller of the function
calling malloc() is the only manageable solution for recovering from that
failure, as only the caller, or its caller, will be able to win back
enough memory to continue successfully.
4:
Using the above solution the application can abort if needed, or
make a long jump to a recovery point, where the program can continue.
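The "long jump to a recovery point" can be sketched with the standard setjmp/longjmp facilities. This is a minimal illustration with invented names, not the proposal's actual code:

```c
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf recovery_point;

/* Wrapper that jumps to the recovery point on failure. */
static void *jmp_malloc(size_t nbytes)
{
    void *p = malloc(nbytes);
    if (p == NULL)
        longjmp(recovery_point, 1);
    return p;
}

/* Establish the recovery point, then do work that may
   allocate. Returns 1 on success, 0 after recovery. */
int do_work(size_t nbytes)
{
    if (setjmp(recovery_point) != 0) {
        /* Control arrives here after a failed allocation;
           free caches, warn the user, etc., then continue. */
        return 0;
    }
    free(jmp_malloc(nbytes));
    return 1;
}
```

A real program must also make sure that no resources leak between the setjmp and the longjmp, since intervening cleanup code is skipped.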

No, it can not, because loss of human life or loss of other goods may be
the consequence of simply calling exit(). The only one who knows how to
handle the inability of malloc() is the caller of malloc(), or its parent.

The recovery handler is supposed to free memory, and to reallocate the
BigUnusedBuffer, which has been set to NULL.

No, because the one usable recovery handler is only reachable through
the return chain of the calling sequence that ends up seeing the
out-of-memory condition.

exit() is forbidden because it will leave many devices in an unmaintained
state, like
switches open, signals set, pumps pumping, rockets firing, gates
opened, filters passing, World War III being set off, .....

And all that because the programmer was too braindead to write a single
if.

Anything that can fail will fail. Catch it and take the appropriate
action on the failure. You'll save hours upon hours of maintenance, and
months of costs from standstills of factories, works, machines, .....
that happen only because some braindead programmer was too sloppy to
get his error checking right.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!
 

Richard Heathfield

Kelsey Bjarnason said:
No kidding. One wonders if they feel the same way about, oh, opening
files, or establishing network connections - is it impossible to check
every call to fopen, every call to connect? If not, why is memory
allocation so magically difficult to check?

The claim that it is not possible to check every malloc result within
complex software reminded me of a project in the mid-1990s where I was
fortunate enough to be in at the start, so I was able to have my say in
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level 4"
(this is the highest warning level in Visual Studio 1.5, which we were
using for development before porting up to the mainframe for testing).

One of the others said, in a wonderfully broad Chicago accent, "Level FOUR?
That's IMPOSSIBLE!"

This came as news to me, since I had routinely been getting clean compiles
at level 4 for some years.
 

Richard Heathfield

Kelsey Bjarnason said:

I've largely given up arguing the point as pertains to allocation
failure, though. I don't know if it's that some people *want* to write
bad code, or simply can't be bothered to write good code, but either way
the result is the same and they seem to have an almost religious
adherence to their view, with about as much basis for holding it as most
religious views seem to have.

I once took part in a long and highly detailed technical argument in
alt.comp.lang.learn.c-c++, on the subject of strncpy, in its role as the
"safe" (haha) version of strcpy. My opponent was courteous and well-informed
- a delightful combination. The argument (and it really was an *argument*,
not a row or a fight) raged, very politely, for several days.

The eventual outcome of the debate was that my opponent ***changed his
mind***.

Shock! Horror!

You'd think it would be happening all the time, wouldn't you? Well, often,
anyway. But IME it is quite rare for someone to change an entrenched
opinion as a result of reasoning about the subject. Nevertheless, it is
always something to hope for.

Of course, for such an argument to occur, it is essential that each person
taking part respects his opponent(s). I agree with you that, when things
have descended to the "checking malloc is dumb, you smell of bat guano,
and so does your warthog" level, it is time to stop.
 

Kenneth Brody

Richard Heathfield wrote:
[...]
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level 4"
(this is the highest warning level in Visual Studio 1.5, which we were
using for development before porting up to the mainframe for testing).

One of the others said, in a wonderfully broad Chicago accent, "Level FOUR?
That's IMPOSSIBLE!"

"It's not impossible. I used to bullseye womp rats in my T-16 back
home, they're not much bigger than two meters."

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 

Antoninus Twink

The eventual outcome of the debate was that my opponent ***changed his
mind***.

Shock! Horror!

You'd think it would be happening all the time, wouldn't you? Well,
often, anyway.

You certainly would - I mean, given that Heathfield is Mr Perfect, right
about every detail, how could people possibly persist in disagreeing
with him?

I wonder how many other hapless newbies have been browbeaten by
Heathfield into "changing their minds" and falling into step with his
singular worldview.

But IME it is quite rare for someone to change an entrenched opinion
as a result of reasoning about the subject. Nevertheless, it is always
something to hope for.

Look, not a trace of irony here...
Of course, for such an argument to occur, it is essential that each
person taking part respects his opponent(s). I agree with you that,
when things have descended to the "checking malloc is dumb, you smell
of bat guano, and so does your warthog" level, it is time to stop.

....nor here. Unbelievable.
 

Bartc

opened, filters passing, World War III being set off, .....

And all that because the programmer was too braindead to write a single
if.

Anything that can fail will fail. Catch it and take the appropriate action

Not everything is so critical. And if it was then perhaps C (which doesn't
even check array indexing for example) might not be the best language to
use.

Anyway part of this thread wasn't so much about automatically using exit()
on malloc() failure, but offloading the checks elsewhere. Then that might
leave the code clean enough to catch actual *bugs*.
 

jacob navia

Bartc said:
Not everything is so critical. And if it was then perhaps C (which doesn't
even check array indexing for example) might not be the best language to
use.

Anyway part of this thread wasn't so much about automatically using exit()
on malloc() failure, but offloading the checks elsewhere. Then that might
leave the code clean enough to catch actual *bugs*.

This is obvious for anyone reading what I proposed. Mr Rosenau can't
read apparently. Polemic without end.
 

Herbert Rosenau

In theory, at least, it should - as the definition of free explicitly
says it releases the memory for subsequent allocation, which means
(presumably) that a conforming implementation cannot hand it back to the
OS - it wouldn't be available for allocation if it's being used elsewhere.

In practice it does not - because immediately after free() has done
its work, the scheduler gives another thread the CPU, and that thread has
nothing better to do than to eat the released memory for its own job.
Dealing with the failure "at point of failure" isn't inherently evil.
For example, if the calling code is trying to, oh, send a packet across
the network and the connection has failed, it might be nice if it tried
to re-establish the connection before giving up. Depends on the app, but
it's not a wholly unreasonable concept.

No, because at that point the app knows nothing but a handle, and that
handle holds not enough information to reopen the session. You'll have to
inform some higher instance that the current connection is broken and let
it do the reconnect using the symbolic name, since the break may have
occurred because something in the path changed and even the IP is out of
order now.
I *think* this is the guiding logic here; try to allocate, if it doesn't
work, try to recover, if you can't, bail.

In terms of dealing with things "at point of failure", where you probably
have the best bet of actually coping with the problem, it's not so bad a
concept.

No, mostly at the point of failure you only know that there is a
failure, but you lack the information to get it repaired. Your
caller, or even its caller, will have more knowledge about the
abstractions, so it will be able to recover.
Where it becomes bad, for me, is that the failure strategy is to simply
throw in the towel, without giving the caller any chance to do anything
about the situation.

It's much simpler to undo anything you've done in the failing action,
report the failure back to the caller, and let it find its
own strategy up the chain until a more intelligent function knows
better what to do in the current situation.
In an example I used elsewhere, if my word processor can't allocate
enough memory to load a fourth document, I have at least two options:
crash and burn, or simply accept I can only work with three documents
right now.

No, there is only one option: accept that you can NOT open another
document, tell the user the fact of 'currently too low on resource
xxx', and let the user decide what he likes to do - including another
try to get what he wants.
Crashing and burning is not so good if I have three documents open with
edits I'd like to keep; I'd much prefer simply getting a failure
indicator - such as NULL instead of a valid pointer - and coping with it
in the calling code. If nothing else, I can save the existing documents
before bailing.

Crashing and burning is no idea I would even evaluate. Let the user
decide what action may give you more free resources. He may save a
document (temporarily) to give you a bit more resource to get the job
he wants done, he may decide to shut down the program completely and come
back later when he thinks the machine has more free resources, he may
simply commit any changes he has made so far to free the undo buffers,
he may shut down one or more other applications or other tasks of
your app.......
You really have to wonder. Do these people actually just blindly assume
every file open works? Every file read or write works? Every network
connection establishes, every packet transmits?

I suspect not; I suspect they do, in fact, check these things as a matter
of habit. Memory allocation, though, invokes some weird sort of black
magic which cannot, for some reason, be dealt with in the same way.

I suspect they clearly do, because they must save every possible cent on
development, and they will decline maintenance too.
I don't get it, and it sounds like you don't, either. Maybe one of 'em
will explain it in terms that involve some sort of consistency: if you
check one sort of resource acquisition as a matter of habit or design,
why is another sort of resource acquisition treated so much differently?
Look, my customers scold me for being pedantic - but on the other hand
they come back again with new jobs because I'm pedantic.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!
 

Morris Dovey

Kelsey said:
Scheduler or no, it seems to me that an implementation which does this
may in fact be non-conforming.

Eh? Please give a few hints as to your thought process here. I'm
not following (finding) your logic...
 

Kelsey Bjarnason

[snips]

In practice it does not - because immediately after free() has done its
work, the scheduler gives another thread the CPU, and that thread has
nothing better to do than to eat the released memory for its own job.

Scheduler or no, it seems to me that an implementation which does this
may in fact be non-conforming.
No, because at that point the app knows nothing but a handle, and that
handle holds not enough information to reopen the session.

As I said, it depends on the nature of the failure. If the code at that
point _does_ have the info, it can in principle at least try. If nothing
else, when giving up, it should do just that: give up. Not take the
entire freaking application with it.
No, mostly at the point of failure you only know that there is a
failure, but you lack the information to get it repaired.

Again, not necessarily. Suppose I'm writing an allocator such as
Malcolm's xalloc. Nothing wrong with his basic notion of "signal user"
or some equivalent (it could be passed a pointer to a notification
function which it could call, which in turn might try to free resources)
before trying again; it's just that if this step doesn't work, it dies
instead of reporting a failure. *That* part I have issues with.
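That is, an allocator that retries via a user-registered hook but ultimately *reports* failure rather than exiting. A sketch of the idea follows; the names are hypothetical, not Malcolm's actual xalloc:

```c
#include <stdlib.h>

/* User-registered hook; returns nonzero if it managed to
   free some memory, zero if nothing more can be released. */
static int (*low_memory_hook)(size_t wanted);

void set_low_memory_hook(int (*hook)(size_t))
{
    low_memory_hook = hook;
}

/* Retry while the hook reports progress; when it gives
   up, return NULL and let the caller decide what to do. */
void *xalloc_sketch(size_t nbytes)
{
    for (;;) {
        void *p = malloc(nbytes);
        if (p != NULL)
            return p;
        if (low_memory_hook == NULL || !low_memory_hook(nbytes))
            return NULL;
    }
}
```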
It's much simpler to undo anything you've done in the failing action,
report the failure back to the caller, and let it find its own
strategy up the chain until a more intelligent function knows better
what to do in the current situation.

Or functions, plural. No reason that if you're 9 levels down the call
tree, seven of them might not have things to do in the recovery process.
No, there is only one option: accept that you can NOT open another
document, tell the user the fact of 'currently too low on resource xxx',
and let the user decide what he likes to do - including another try to
get what he wants.

No, there are two options: do exactly that, which is what I suggested, or
do what Malcolm et al seem bent on doing: crashing and burning.

It really is an option. Careless, sloppy, ill-conceived and potentially
disastrous, but it is still an option.
Crashing and burning is no idea I would even evaluate.

Unless one has actually exhausted the alternatives: no UI, so you can't
alert the user. No disk space left, so you can't save. Etc. At some
point you do have to simply give up; I just can't agree with it being the
_first_ choice, rather than the _last_.
I suspect they clearly do, because they must save every possible cent on
development, and they will decline maintenance too.

Maybe, but good goat, talk about a case of penny wise, pound foolish.
Look, my customers scold me for being pedantic - but on the other hand
they come back again with new jobs because I'm pedantic.

Pedantically speaking, no they don't, they come back again because you
get the job done effectively. :)
 

Ioannis Vranos

Well, it may sound somewhat unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catching them?

For those not familiar with C++ something like:


void some_function(void) try
{
    /* ... */
}
catch(bad_alloc)
{
    /* Error handling code here */
}


void somefunc(void)
{
    try
    {
        int *p = malloc(sizeof *p);
    }
    catch(bad_alloc)
    {
        /* Error handling code here */
    }

    /* ... function code continues... */
}


My complete proposal is to adopt the namespace mechanism and the
exception handling mechanism of C++, in a way they will be a subset of
C++ equivalent functionalities (that is only for functions, struct
definitions and objects, and variables) - so there can be compatibility
between C and C++ interfaces, making it easier for the implementers to
provide both C and C++ compilers, and making the already existing C++
mechanisms available to C easily and quickly.
 

Morris Dovey

Kelsey said:
As I read the description of free(), it releases memory for subsequent
allocation. Bear in mind, C has no concept of multitasking and the like,
thus "subsequent allocation" *must* be for the application.

So, if my first program action is to allocate a gig of ram, which I then
free immediately, that gig is still "reserved" for my application to
reallocate. Failure to "reserve" it means it is *not* being made
available for subsequent allocation; it is being handed off to something
else - the OS - instead, in direct contradiction of what it's supposed to
do.

Ok - I think I understand, but don't recall seeing anything in
the Standard that /requires/ this reservation. Personally, I'd
hope that free()d memory would be re-absorbed by the OS (if
present) into global pool(s) so that it could be made available
to other processes/threads - and it seems to me inappropriate for
a programming language standard to dictate host memory resource
management in such a way. I guess my mileage varies on this
issue.
 

Kelsey Bjarnason

Eh? Please give a few hints as to your thought process here. I'm not
following (finding) your logic...

As I read the description of free(), it releases memory for subsequent
allocation. Bear in mind, C has no concept of multitasking and the like,
thus "subsequent allocation" *must* be for the application.

So, if my first program action is to allocate a gig of ram, which I then
free immediately, that gig is still "reserved" for my application to
reallocate. Failure to "reserve" it means it is *not* being made
available for subsequent allocation; it is being handed off to something
else - the OS - instead, in direct contradiction of what it's supposed to
do.
 

Richard Tobin

Kelsey Bjarnason said:
As I read the description of free(), it releases memory for subsequent
allocation. Bear in mind, C has no concept of multitasking and the like,
thus "subsequent allocation" *must* be for the application.

Malloc() can use any strategy it likes, and can perfectly well return
NULL even if it could somehow find sufficient memory if it used a
different strategy. So you can't tell whether it fails because it's
released the memory to other processes or is (for example) using it
for internal bookkeeping.
So, if my first program action is to allocate a gig of ram, which I then
free immediately, that gig is still "reserved" for my application to
reallocate. Failure to "reserve" it means it is *not* being made
available for subsequent allocation; it is being handed off to something
else - the OS - instead, in direct contradiction of what it's supposed to
do.

I don't believe such an interpretation was intended. It would be
utterly ludicrous, and if you really believe that's what the standard
implies you should send a defect report.

-- Richard
 

Ian Collins

Kelsey said:
As I read the description of free(), it releases memory for subsequent
allocation. Bear in mind, C has no concept of multitasking and the like,
thus "subsequent allocation" *must* be for the application.

The standard uses the term "made available for further allocation". It
doesn't say to whom.
 

santosh

Ioannis said:
Well, it may sound somewhat unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catching them?

<snip>

You may be already aware but I think the "lcc-win32" compiler implements
exceptions similar to C++. The author also claims to have implemented
operator overloading as in C++.
 

Ian Collins

santosh said:
<snip>

You may be already aware but I think the "lcc-win32" compiler implements
exceptions similar to C++. The author also claims to have implemented
operator overloading as in C++.

The author appears to turn purple at the very mention of C++! His
operator overloading implementation is nothing like C++.
 

Old Wolf

I propose implementing something combining both signal handling and
unwinding features.

sza=(1<<31)-1;

1 << 31 causes undefined behaviour (if int is 32-bit).

On a normal 2's complement system, one would expect
1 << 31 to generate INT_MIN -- and then you cause
further undefined behaviour by subtracting 1 from that value.

Perhaps you were looking for INT_MAX, or SIZE_MAX ?
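For reference, here are the portable spellings of those maxima, plus a well-defined way to build the value by hand (this sketch assumes unsigned int is at least 32 bits wide):

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

int int_max(void)     { return INT_MAX; }   /* <limits.h> */
size_t size_max(void) { return SIZE_MAX; }  /* <stdint.h>, C99 */

/* Doing the shift in unsigned arithmetic avoids the
   undefined behaviour of 1 << 31 with a 32-bit int
   (assumes unsigned int has at least 32 value bits). */
unsigned int umax_by_hand(void)
{
    return (1u << 31) - 1;
}
```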
 
