A solution for the allocation failures problem


Morris Dovey

jacob said:
Richard Heathfield wrote:

This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.

This is purely a statement of opinion. I've worked with large
real-time systems where allocation error detection and
transparent error recovery has been an absolute requirement. A
statement that it isn't possible is no more than a declaration
that the programmer is either unwilling to put forth the effort
to meet requirements, or a declaration of personal/team
incompetence.
When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for FEWER bugs. Obviously this solution is not
meant for geniuses like you and the other people here
who boast of their infallible powers when programming.

I would suggest that you discover the genius of 10 - 12 ordinary
(but competent) programmers providing peer review - and eagerly
looking for any weakness in your code. It is very much _not_ a
matter of individual genius or infallible powers!
You never (of course) make any mistake when writing the
hundreds of lines of recovery code. My solution is meant for
people that do make mistakes (like me).

I understand that you are attempting to provide a means by which
semi- or incompetent programmers can avoid both the effort
required to learn and the effort required to produce high quality
code.
This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.

This is just rubbish. Nobody "struggles with if". But it is
a proven fact that the more lines of code you have to write,
the greater the possibility of making mistakes.

*This* is correct. You have just made the best case for peer
review, the exercise of diligence, and the need for test
strategies that detect and diagnose errors so that they can be
corrected before release to the first user.
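
For concreteness, the per-allocation style the two sides are arguing
over looks something like this. A minimal sketch; dup_string is a
hypothetical name, and failure is reported by returning NULL, which
the caller must test in its turn:

#include <stdlib.h>
#include <string.h>

char *dup_string(const char *s)
{
    size_t n = strlen(s) + 1;
    char *p = malloc(n);

    if (p == NULL)        /* the per-call check under debate */
        return NULL;      /* the caller must test this, too */

    memcpy(p, s, n);
    return p;
}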
 

Ioannis Vranos

Ioannis said:
You can retrieve information about what failed from the exception itself.


More specifically, you may apply try-catch to each malloc statement
individually, to a whole block of malloc statements, to the entire
function body, or inside a caller function.

You do not need to return failure values from called functions to
their callers; that happens automatically if you do not catch the
exception.
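
Standard C has no try-catch, but the effect described here -- a failure
propagating past intermediate callers with no hand-written return-value
plumbing -- can be approximated with setjmp/longjmp. A rough sketch,
with hypothetical names:

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf oom;                /* the nearest "catch" site */

static char *make_buffer(void)
{
    char *p = malloc(1u << 20);
    if (p == NULL)
        longjmp(oom, 1);           /* "throw": unwinds past callers */
    return p;
}

static char *middle(void)          /* forwards no failure value by hand */
{
    return make_buffer();
}

int main(void)
{
    if (setjmp(oom) != 0) {        /* the "catch" clause */
        fputs("allocation failed\n", stderr);
        return EXIT_FAILURE;
    }
    char *buf = middle();
    /* ... use buf ... */
    free(buf);
    return 0;
}

Unlike C++ exceptions, though, longjmp runs no destructors or cleanup
on the way out, so resources held by intermediate callers leak unless
handled separately.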
 

Richard Heathfield

Ioannis Vranos said:
Exceptions can be used for all kinds of resource allocations.

Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.

This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).
 

jacob navia

Richard said:
Ioannis Vranos said:


Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.

In your C maybe, who knows.

In modern C, several compilers provide this facility, for instance
MSVC. Compilers for embedded systems sometimes do this too, and when
I ported lcc-win to a DSP, try/catch was a requirement, even though the
software did not support floating point.
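
For reference, the MSVC facility mentioned here is the Microsoft-specific
__try/__except extension (structured exception handling), not standard C.
A minimal sketch, with hypothetical my_alloc and MY_OOM_CODE; malloc
itself raises no exception, so the wrapper raises one by hand:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define MY_OOM_CODE 0xE0000001u   /* application-defined exception code */

static void *my_alloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
        RaiseException(MY_OOM_CODE, 0, 0, NULL);  /* the "throw" */
    return p;
}

int main(void)
{
    __try {
        char *buf = my_alloc(1u << 20);
        /* ... use buf ... */
        free(buf);
    }
    __except (GetExceptionCode() == MY_OOM_CODE
              ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        fputs("recovered from allocation failure\n", stderr);
        return EXIT_FAILURE;
    }
    return 0;
}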
This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).

It has been explained to you a thousand times that this is not the
case, but you think that repeating the same stupid story will make
it true.
 

Ioannis Vranos

Richard said:
Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.


I meant under my proposal. The situation is this (and it is a good thing
to do): C++ gets all the nice features from C (have a look at "C++0x"
and what it takes from C99), while C doesn't get any good feature from
C++, although they are siblings.


Wouldn't it be useful if C got the namespace mechanism from C++? I think
it would be useful for C programmers. I think the same applies to C++
exceptions. A minor run-time cost is added only when an exception is
thrown, and you can handle the error at whatever level of the call chain
you choose, instead of writing error handling code for every error value
a function may return, or forwarding an error value from function to
function, which I think usually means this kind of error handling is not
done completely and consistently in real-world code.

Exceptions are an elegant, high-level way of handling errors, and
try blocks add no run-time cost except a minor one, when and only
when an exception is thrown.
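
The return-value plumbing being objected to looks like this in practice.
A sketch with hypothetical names; every level must test and forward the
failure by hand:

#include <stdlib.h>

static int load_config(char **out)
{
    char *buf = malloc(4096);
    if (buf == NULL)
        return -1;            /* report the failure... */
    /* ... fill buf ... */
    *out = buf;
    return 0;
}

static int start_up(char **cfg)
{
    if (load_config(cfg) != 0)
        return -1;            /* ...and forward it, one level at a time */
    return 0;
}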

I think C99 took the wrong path from the moment I first saw it, and I
think I predicted in clc that it would not be fully implemented except
perhaps in GCC. However, even GCC hasn't implemented it completely,
which means it's a total failure. It reminds me of the Pascal case,
where two standards exist, and Pascal is more or less dead.

I am using C++ now, but I think if C adopted a few good things from C++
it wouldn't harm it.


But the bottom line is I think C99 is dead. But that's off topic.
 

Ioannis Vranos

Ioannis said:
<snip>

I think C99 took the wrong path from the moment I first saw it, and I
think I predicted in clc that it would not be fully implemented except
perhaps in GCC. However, even GCC hasn't implemented it completely,
which means it's a total failure.


Actually there may be a couple of compilers that support C99 fully.
 

santosh

Ioannis Vranos wrote:

But the bottom line is I think C99 is dead. But that's off topic.

Why is discussion of C99 status and future off-topic? We have had such
threads before, and apart from annoyance at some of jacob's comments, I
don't remember anyone saying that they were OT.
 

santosh

Ioannis said:
Ioannis Vranos wrote:



Actually there may be a couple of compilers that support C99 fully.

Yes. Comeau C++ and IBM's VisualAge claim to fully implement C99.

<snip>
 

Robert Latest

Ioannis said:
Wouldn't it be useful if C got the namespace mechanism from C++? I think
it would be useful for C programmers. I think the same applies to C++
exceptions.

The one fundamental question that you proposalists are continuously
failing to answer is: If you want features that an otherwise very C-like
language X offers, why don't you simply switch to X instead of bugging C
users about the deficiencies of the language of their choice?
I am using C++ now,

Oh. You already did. Well, good for you.

robert
 

jacob navia

Morris said:
This is purely a statement of opinion. I've worked with large
real-time systems where allocation error detection and
transparent error recovery has been an absolute requirement.

Those applications can't use any solution I have proposed.
I have said this countless times, and here I go again:

"There are applications where a GC with a catch/throw mechanism
for memory management is just not usable for security/real time/
or other requirements"


Happy?
A statement that it isn't possible is no more than a declaration
that the programmer is either unwilling to put forth the effort
to meet requirements, or a declaration of personal/team
incompetence.

Of course I am unwilling to put in the effort. That's what I am talking
about.

Why should I put in that effort when BETTER solutions exist?

Do you go running to your job?

Or do you take public transportation, or do you drive?

What?

You DRIVE!

So you are just someone who is unwilling to put in the effort needed to
get to the job!

I always RUN to my job, I arrive late, and it takes almost the whole
work day, but I AM WILLING TO MAKE THE EFFORT, you see?

I would suggest that you discover the genius of 10 - 12 ordinary
(but competent) programmers providing peer review - and eagerly
looking for any weakness in your code. It is very much _not_ a
matter of individual genius or infallible powers!

I think that peer review is a useful thing. But, as with all efforts,
it is VERY expensive (10-12 programmers doing peer review!)

And instead of reviewing the APPLICATION code, they review the
memory management code that is sprawled across the whole
application in hundreds of lines, rather than concentrated in a
single place.

Why burden all applications with all that code? There is NO NEED
to do it that way.
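
The centralized style jacob is advocating can be sketched with the Boehm
collector's API (gc.h; link with -lgc). xgcalloc is a hypothetical name,
and the failure check lives in exactly one place:

#include <gc.h>          /* Boehm GC */
#include <stdio.h>
#include <stdlib.h>

static void *xgcalloc(size_t n)   /* the single place that checks */
{
    void *p = GC_MALLOC(n);
    if (p == NULL) {
        fputs("out of memory\n", stderr);
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    GC_INIT();
    double *v = xgcalloc(1000 * sizeof *v);
    /* ... use v; no free() is needed, the collector reclaims it ... */
    v[0] = 1.0;
    return 0;
}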
I understand that you are attempting to provide a means by which
semi- or incompetent programmers can avoid both the effort
required to learn and the effort required to produce high quality
code.

Yes: by driving instead of running to the job, an otherwise not
very athletic person can arrive at the job on time. With LESS
effort than the guy who runs and is always late :)
*This* is correct. You have just made the best case for peer
review, the exercise of diligence, and the need for test
strategies that detect and diagnose errors so that they can be
corrected before release to the first user.

*This* is correct. You have just made the best case for not
wasting your time and your fellow programmers' time on useless
code, and for reviewing THE APPLICATION and not the memory management.

When I write an application I want to write THE APPLICATION,
not the memory management part for the NTH time!
 

santosh

jacob navia wrote:

When I write an application I want to write THE APPLICATION,
not the memory management part for the NTH time!

You do realise that even with a try/catch statement, code for responding
to memory failures must still be written, don't you? I'm not even sure
it would be all that much easier than the traditional method.

You do have a point, though, if one were to use a GC.
 

Morris Dovey

jacob said:
Those applications can't use any solution I have proposed.
I have said this countless times, and here I go again:

"There are applications where a GC with a catch/throw mechanism
for memory management is just not usable for security/real time/
or other requirements"

Happy?

It's not about my being happy. I'm perfectly content for you to
implement your systems using whatever methods produce the level
of quality you're after, so long as they don't adversely affect
other people's lives.

QOI becomes an issue when _any_ method drops a 911 phone call,
contributes to a mid-air crash, trashes an important file, causes
my TIVO box to forget my favorite program, etc, etc.
Of course I am unwilling to put in the effort. That's what I am talking
about.

Why should I put in that effort when BETTER solutions exist?

Ah! Here is where my lack of understanding becomes apparent.
Perhaps you can help me by explaining what is better than
fail-proof (not the same as "error-free") implementations with
transparent recovery.

My difficulty is that I keep envisioning mission-critical systems
(and even a set-top box or cell phone falls into this category)
where the recovery needs to be specific to the failure and where,
for example, the _context_ of an allocation failure is key to the
specific recovery action required.
I think that peer review is a useful thing. But, as with all efforts,
it is VERY expensive (10-12 programmers doing peer review!)

You're absolutely right - it /is/ expensive, and should be
scheduled and staffed appropriately to the cost of service/function
loss. I'll agree that the resources invested in development
should not be more than the losses they're intended to prevent.
There's a real need to maintain a sense of what's appropriate.
And instead of reviewing the APPLICATION code, they review the
memory management code that is sprawled across the whole
application in hundreds of lines, rather than concentrated in a
single place.

It /can/ be sprawled across an entire application - or not. I
think it's a design choice. I tend to isolate POSIX-specific code
into separate translation units and that doesn't seem to present
any particular problems. I also tend to isolate error recovery
code in the same way so that changes in error recovery
requirements can be more easily addressed - even though the error
detection is done where the error can be first detected.
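
A sketch of that isolation: detection stays at the point of failure,
while the recovery policy lives in its own translation unit, so changed
recovery requirements touch one file. recover.c and the names in it are
hypothetical:

/* recover.c -- the only file that changes when recovery
   requirements change */
#include <stdio.h>
#include <stdlib.h>

void recover_alloc_failure(const char *where)
{
    fprintf(stderr, "%s: allocation failed\n", where);
    /* checkpoint, shed caches, or exit, per the requirements */
    exit(EXIT_FAILURE);
}

/* at any detection site elsewhere in the application:

       p = malloc(n);
       if (p == NULL)
           recover_alloc_failure("parse_input");
*/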
Why burden all applications with all that code? There is NO NEED
to do it that way.

Because I seem to have always found myself in situations where
that approach produced the highest-quality system - and because
that QOI was what my employer/client wanted and was willing to
pay for.
Yes: by driving instead of running to the job, an otherwise not
very athletic person can arrive at the job on time. With LESS
effort than the guy who runs and is always late :)

I'm glad you put a smiley on that one - I'm fairly proud of the
fact that in nearly a half-century of programming, I've been
dependably on (or ahead of) time, within budget, and have met
(and usually exceeded) specifications. More important to me is
that I've managed to deliver really reliable systems.

BTW, it's a lot less about how you get to the job than it is
about how you do it.
*This* is correct. You have just made the best case for not
wasting your time and your fellow programmers' time on useless
code, and for reviewing THE APPLICATION and not the memory management.

When I write an application I want to write THE APPLICATION,
not the memory management part for the NTH time!

I hear you. <g>
 

Ioannis Vranos

santosh said:
jacob navia wrote:



You do realise that even with a try/catch statement, code for responding
to memory failures must still be written, don't you? I'm not even sure
it would be all that much easier than the traditional method.

You do have a point, though, if one were to use a GC.


How can you do this easily under current C?


void somefunc(void) try
{
    int *p = malloc(100 * sizeof(*p));

    /* ... */
}
catch(bad_alloc)
{
    static no_of_failures = 0;

    no_of_failures++;

    if (no_of_failures == 10)
        exit(EXIT_FAILURE);

    somefunc();
}
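
For comparison, the same give-up-after-ten-failures policy written in
current C, with no language extensions; a minimal sketch in which the
function reports failure through its return value instead:

#include <stdlib.h>

int somefunc(void)
{
    int *p;
    int failures = 0;

    while ((p = malloc(100 * sizeof *p)) == NULL)
        if (++failures == 10)
            return EXIT_FAILURE;

    /* ... */

    free(p);
    return 0;
}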
 

Ioannis Vranos

Minor code correction:

Ioannis said:
How can you do this easily under current C?


<snip>

catch(bad_alloc)
{
==> static int no_of_failures= 0;
 

Richard Harter

Ioannis Vranos said:


If you want C++, you know where to find it.

The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check whether our allocation requests succeeded and deal with it when
they didn't. This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.

Writing "check for errors and writing response code" for each
call to malloc is, of course, a viable strategy. However there
are issues to be considered. Three such are (a) the
unreliability of "fixup" code, (b) the non-uniformity of error
response, and (c) code littering.

(A) Unreliability: The code in these "failure to allocate"
clauses isn't easy to test properly, and, in the absence of an
allocator wrapper, isn't easy to test at all. This may not
matter if the only action is to write a message to stderr and
call exit, but it definitely is an issue for more elaborate
responses such as request size retries and alternate algorithms.

In my view, reliable programs (and if we are not interested in
reliable programs why bother to test at all) are developed and
maintained with test harnesses that test the alternatives.

(B) Non-uniformity: Since each "failure to allocate" test clause
is individually written there is no guaranteed uniformity of
response to error conditions.

When I develop software in C my preference is to include an error
management module tailored to the program. I find that using a
"good" error management module means that it is easier to get
meaningful responses when errors happen. Obviously this is not a
universal solution.

(C) Code littering: These little "failure to allocate" tests
litter the code, i.e., they break into the readable flow of
action of functions with tangential tests. Granted, this is not
a major issue; nonetheless such littering makes code harder to
read to some degree and requires more code writing.

All of that said, what is one to do? My suggestion is to use a
wrapper for malloc for almost all storage allocation requests.
The "failure to allocate" test and the subsequent error response
(e.g., write an error message and call exit) is in one place
rather than being scattered as multiple copies throughout the
code.

But what, the skeptic says, do we do when we want to do something
special, such as retrying with a smaller size or using an alternative,
less memory-intensive algorithm? One answer is simple and obvious:
don't use the wrapper in these special cases. A better answer is
to have a second entry into the wrapper package that does return
0 on a failure to allocate, the point being that the wrapper can
have a back door to force failure, a feature very useful in a
test harness. We all test our code in test harnesses, don't we?
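
A minimal sketch of such a wrapper, back door included; xmalloc,
xmalloc_try, and xmalloc_fail_after are hypothetical names:

#include <stdio.h>
#include <stdlib.h>

static long fail_after = -1;        /* -1: never force a failure */

void xmalloc_fail_after(long n)     /* test-harness back door */
{
    fail_after = n;
}

void *xmalloc(size_t size)          /* main entry: never returns NULL */
{
    void *p = (fail_after >= 0 && fail_after-- == 0)
              ? NULL : malloc(size);

    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (%lu bytes)\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}

void *xmalloc_try(size_t size)      /* second entry: returns 0 on failure */
{
    if (fail_after >= 0 && fail_after-- == 0)
        return NULL;
    return malloc(size);
}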

Some have argued that a wrapper that terminates the program when
there is an error is a "crash and burn" strategy that can lead to
a large loss of results, i.e., losing files, data, or
calculations. This is pretty much a strawman. Programs can
crash at any time for reasons beyond the control of the program;
if preserving results is critical, there are well-known
strategies such as checkpointing and journaling. More than that,
using wrappers and an error management package is not a "crash
and burn" strategy; it is a "black box" strategy.

Richard Harter, (e-mail address removed)
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
 

Keith Thompson

Herbert Rosenau said:
On Fri, 8 Feb 2008 00:34:14 UTC, Kelsey Bjarnason


Chapter and verse, please, where the standard requires that free()d
memory can't be given back to the OS.
[...]

C99 7.20.3.2:

The free function causes the space pointed to by ptr to be
deallocated, that is, made available for further allocation.

I'm not necessarily claiming that that's the answer to your question,
merely that if there is an answer, that's it.

The question, of course, is "further allocation" by what? Common
sense tells us that it's perfectly reasonable for the OS to reclaim
free()d memory and make it available for other
programs/processes/whatever. But since the C standard only barely
acknowledges the existence of other programs, it's not entirely
unreasonable to assume that the space must be made available for
further allocation by the same program. The standard, for the most
part, describes behavior of a single executing program. Interaction
with the environment is described in its own subsection, C99 7.20.4
(abort, atexit, exit, _Exit, getenv, system).

Making the space available for further use by the current program is
meaningful in the context of the C standard; making it available for
any other use is perhaps beyond the scope of the standard.

On the other hand, there's no way for a portable program to tell
whether free() makes the space available for the current program;
malloc() can fail for any reason, regardless of what's been free()d.
This:

void *p = malloc(100);
if (p != NULL) {
    free(p);
    p = malloc(100);
    if (p == NULL) {
        printf("OOPS!\n");
    }
}

may legally print "OOPS".

In any case, free() is certainly not *required* to make the space
available for other programs (since there may not be any other
programs). And if the free()d space doesn't happen to be at the end
of the memory space reserved for the program, it's likely to be
difficult to return it to the OS anyway (depending on the OS's memory
management system).
 

Randy Howard

Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't hear
anyone arguing that it's impossible to deal with fopen failures.

Just give them time. It won't be long before someone argues that
checking return values from fopen() is impossible to do correctly, so
don't even try.
 

Bartc

Richard Harter said:
Writing "check for errors and writing response code" for each
call to malloc is, of course, a viable strategy. However there
are issues to be considered. Three such are (a) the
unreliability of "fixup" code, (b) the non-uniformity of error
response, and (c) code littering.
(C) Code littering: These little "failure to allocate" tests
litter the code, i.e., they break into the readable flow of
action of functions with tangential tests. Granted, this is not
a major issue; none-the-less such littering makes code harder to
read to some degree and requires more code writing.

It's difficult to put this point across to those who think programmers
must be superhuman beings who cannot make mistakes or be distracted by
untidy or overscrupulous code.

The strategy of strewing low-level malloc() calls all over the place,
complete with complex error handling, seems totally wrong.

I presented a little part-solution elsewhere in this thread -- offload
trivial memory allocations to a heap manager using a preallocated block,
so that it cannot ever fail except through program error (indicating a
memory leak that can then be fixed).

The alternative, using malloc() everywhere, would I think *increase* the
chance of failure, by leaving the program open to serious malfunction due
to memory problems, and may do so for a trivial use of malloc() in a place
ill-suited to dealing with such a problem.
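
A minimal sketch of that part-solution; pool_alloc is a hypothetical
name, allocations come out of a preallocated block, nothing is freed,
and exhaustion is treated as a program error:

#include <stdio.h>
#include <stdlib.h>

static unsigned char pool[64 * 1024];   /* the preallocated block */
static size_t pool_used;

void *pool_alloc(size_t size)
{
    size_t rounded = (size + 15) & ~(size_t)15;  /* keep alignment */

    if (rounded < size || rounded > sizeof pool - pool_used) {
        /* by design this indicates a program error, e.g. a leak */
        fputs("pool_alloc: pool exhausted\n", stderr);
        abort();
    }
    pool_used += rounded;
    return &pool[pool_used - rounded];
}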
 

Randy Howard

I think that peer review is a useful thing. But, as with all efforts,
it is VERY expensive (10-12 programmers doing peer review!)

Quite the opposite. It's very cost-effective. You just need to
contrast that cost with what it takes to fix, repair and support
hundreds, thousands or millions of customers in the field that are
suddenly experiencing bugs and demanding fixes, and perhaps even
threatening legal action.
 

ymuntyan

Ioannis Vranos said:



Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.

This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).

And so I went to look for best practices, got some C code from
http://www.cpax.org.uk/prg/portable/c/libs/emgen/index.php#download

It handles malloc() failure pretty well: the function which calls
malloc() returns NULL, and its caller happily uses some default
value instead of what it's supposed to do. But it does "handle"
malloc() failure.

Sure, from such a toy program you shouldn't expect much intelligence
(though it could fail instead of saying "all is well"), but we
are talking about "easy to type" here, aren't we? Or do I miss something,
and it's really not about handling errors but about typing
"if(mp != NULL)"?

A bonus question: what happens when malloc() fails inside fgetline()
(my hypothesis: nothing, we pretend we hit EOF. But we don't "crash
and burn", which is Good)

Yevgen
 
