Johan Tibell
Keith said: It is now.
Then quoting my question in your post as if you're responding to some
claim I made makes no sense at all.
Keith said: It is now.
The only time I used a debugger in 35 years of
programming was when an assembler decoding
subroutine failed to cross a 64K barrier properly.
(I did not write that one.)
To me, it smells like "let the system catch my errors"
while you should try to avoid them in the first place.
Sloppy typing can cause errors which are not
found by your compiler/debugger.
Keith Thompson said: [...]
Richard Heathfield said: The response to a failure depends on the situation. I've covered this in
some detail in my one-and-only contribution to "the literature", so I'll
just bullet-point some possible responses here:
* abort the program. The "student solution" - suitable only for high school
students and, perhaps, example programs (with a big red warning flag).
* break down the memory requirement into two or more sub-blocks.
* use less memory!
* point to a fixed-length buffer instead (and remember not to free it!)
* allocate an emergency reserve at the beginning of the program
* use virtual memory on another machine networked to this one (this
is a lot of work, but it may be worth it on super-huge projects)
Out of curiosity, how often do real-world programs really do something
fancy in response to a malloc() failure?
I've seen suggestions that, if a malloc() call fails, the program can
fall back to an alternative algorithm that uses less memory. How
realistic is this? If there's an algorithm that uses less memory, why
not use it in the first place? (The obvious answer: because it's
slower.) Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure (and will therefore not be tested as thoroughly as the primary
algorithm)?
Depends on the situation. For example, if the program is a word
processor, then I would hope that the response to a malloc() failure
that occurs when trying to paste a humungous graphic would be to put up
a message saying "sorry, no memory to paste this picture" and keep the
rest of the document in a workable, savable condition, and not to throw
up your hands and die, trashing the still usable existing document.
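That shape of error handling can be sketched as below; the `Document` and `Image` types and the `paste_image` function are invented for this illustration, the point being only that the failing operation reports and backs out while the document stays untouched:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Invented types, for illustration only. */
typedef struct {
    unsigned char *pixels;
    size_t size;
} Image;

typedef struct {
    char *text;   /* the still-usable document content */
} Document;

/* Returns 0 on success, -1 on failure.  On failure the document is
   left unchanged and remains in a workable, savable condition. */
int paste_image(Document *doc, const Image *img)
{
    unsigned char *copy = malloc(img->size);
    if (copy == NULL) {
        fputs("sorry, no memory to paste this picture\n", stderr);
        return -1;              /* doc untouched */
    }
    memcpy(copy, img->pixels, img->size);
    /* ... attach copy to doc here; this sketch just discards it ... */
    free(copy);
    return 0;
}
```

The key property is that the failure path runs before anything in `doc` is modified, so the caller can simply report the error and carry on.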
Checking, yes. Failure strategies are another matter.
Richard said:
Skarmander said:
<grin> No, there's no harumphing going on over here. But basic resource
acquisition checking is as fundamental to robustness as steering so as not
to bump the kerb is to driving.
Absolutely. And a dead customer is a lost customer, right?
Worse, a dead customer is a lost customer whose relatives will probably [...]
Or not even that. We're back to undefined behaviour.
That's not what I meant. A program that deals with memory exhaustion by [...]
If the objective is to gather enough moisture to survive until the rescue
helicopter arrives - that is, if the objective is to complete the immediate
task so that a consistent set of user data can be saved before you bomb out
- then it's a sensible approach.
Yes, this (make sure you're in a consistent state before bailing out) is the
sense in which robustness applies to memory exhaustion.
I was assuming just the one, actually, local to the function where the
storage is needed.
My bad, I was thinking in terms of a global approach to memory allocation [...]
jacob said: Johan Tibell wrote:
Using the second form allows you to easily see the return value in
the debugger.
Johan said: I've written a piece of code that uses sockets a lot (I know that
sockets aren't portable C, this is not a question about sockets per
se). Much of my code ended up looking like this:
    if (function(socket, args) == -1) {
        perror("function");
        exit(EXIT_FAILURE);
    }
I feel that the ifs destroy the readability of my code. Would it be
better to declare an int variable (say succ) and use the following
structure?
    int succ;

    succ = function(socket, args);
    if (succ == -1) {
        perror("function");
        exit(EXIT_FAILURE);
    }
What's considered "best practice" (feel free to substitute with: "what
do most good programmers use")?
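A third pattern sometimes offered for exactly this situation is a small checked wrapper, so each call site stays on one line while the error handling lives in one place. The helper name `checked` below is invented for this sketch, assuming (as in the question) functions that return -1 on failure:

```c
#include <stdio.h>
#include <stdlib.h>

/* Invented helper: report via perror and exit if a -1-returning
   call failed; otherwise pass the return value through. */
static int checked(int rv, const char *what)
{
    if (rv == -1) {
        perror(what);
        exit(EXIT_FAILURE);
    }
    return rv;
}
```

A call site then reads `checked(function(socket, args), "function");`, which keeps the control flow flat at the cost of hiding the exit inside the helper.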
programs targeted at home computers or servers can assume that you'll
have a 99.99% success rate on a functioning system when allocating
memory < 1Kb.
Because that tiny percentage is the difference between
    p = malloc (sizeof *p * q);
and
    p = malloc (sizeof *p * q);
    if (rv = !!p)
    {
        /* Rest of function here. */
    }
Those ifs nest up and it becomes a pain to manage them.
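One common way to keep those checks from nesting is the single-exit `goto cleanup` idiom: every acquisition is checked, but failures all jump to one unwind path. A minimal sketch, with invented resource names:

```c
#include <stdlib.h>

/* Invented example: acquire two buffers with one unwind path
   instead of nested ifs.  Returns 0 on success, -1 on failure. */
int do_work(size_t n)
{
    int rv = -1;
    char *a = NULL, *b = NULL;

    a = malloc(n);
    if (a == NULL)
        goto cleanup;

    b = malloc(n);
    if (b == NULL)
        goto cleanup;

    /* ... use a and b ... */
    rv = 0;

cleanup:
    free(b);    /* free(NULL) is a no-op, so this is safe either way */
    free(a);
    return rv;
}
```

Each new resource adds one check and one `free` line rather than another level of indentation.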
Keith said: Thank you, that's an excellent example.
In general, a batch computational program will perform a series of
operations, and if any of them fails it's likely (but by no means
certain) that the best you can do is throw out the whole thing. An
interactive program, on the other hand, performs a series of tasks
that don't necessarily depend on each other, so it makes more sense to
abort one of them and continue with the others.
Andrew said: What exactly /would/ be the way to do such a thing? I ask you because
you don't like multiple returns or the break statement, both of which
would be a typical response.
Being as every other post was pretty much exactly as insulting as this,
I'd say that I wasn't wrong on any minor point! I'm glad that I haven't
had the chance to make these foolhardy changes to my actual code yet.
I've written a new interface to my error library so that it will be able
to handle memory failures gracefully, log to a runtime-determined file,
check for bad files or memory, and ensure that a proper message reaches
the user if it can't go on.
goose said: jacob navia wrote: [snip]
Using the second form allows you to easily see the return value in
the debugger.
Unless the enthusiastic compiler optimised the variable
away (it's not really needed in the above example, unless
it gets tested again).
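If the whole point of the temporary is to be inspectable in a debugger, one hedge against the optimiser is to declare it `volatile`, since accesses to a volatile object count as side effects the compiler must preserve. A small sketch (the function name is invented; `fflush` merely stands in for any call that returns -1 on error):

```c
#include <stdio.h>

int function_result_demo(void)
{
    /* volatile obliges the compiler to keep every store to `succ`,
       so it stays visible in a debugger even at high optimisation
       levels - at the cost of a real memory store per assignment. */
    volatile int succ;

    succ = fflush(stdout);   /* stand-in for a -1-on-error call */
    if (succ == -1) {
        perror("fflush");
        return -1;
    }
    return 0;
}
```

The trade-off is that the variable can no longer be kept in a register, so this is a debugging aid rather than something to leave in hot paths.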
goose said: Andrew Poelstra wrote:
Because that tiny percentage is the difference between
    p = malloc (sizeof *p * q);
and
    p = malloc (sizeof *p * q);
    if (rv = !!p)
    {
        /* Rest of function here. */
    }
Those ifs nest up and it becomes a pain to manage them.
Then don't; when your number[1] is up and the malloc(10)
call fails, exit immediately rather than have the program
behave unpredictably.
#define FMALLOC(ptr,size) (ptr=malloc (size) ? ptr : exit (-1))
...
char *p; FMALLOC(p, sizeof *p * q)
...
Tracking UB is a bloody nightmare for the maintainer!!!
Being unable to reproduce the bug is a morale-killer.
[1] When your 0.02% or whatever finally comes up.
Yes, this (make sure you're in a consistent state before bailing out) is
the sense in which robustness applies to memory exhaustion.
Your initial post was confusing because you made it look as if exiting
with an error was unacceptable, as the "high school solution", while what
you meant (I take it) was that a program should respond by working towards
a consistent state, and then (as it eventually will have to) bail out.
("Wait for more memory to become available" may be an approach for a very
limited set of circumstances, but it's not recommendable in general at
all, as it'll probably lead to deadlocks.)
I'd write that as:
#define FMALLOC(ptr, size) ((ptr)=malloc(size) ? (ptr) : exit (EXIT_FAILURE))
EXIT_FAILURE really *is* what I should've used. However I'll change
the above to:
#define FMALLOC(ptr,size) ((ptr)=malloc(size) ? (ptr) : exit (EXIT_FAILURE))
(you may not leave a space between ptr and size; the preprocessor stops
parsing at the first whitespace and uses the text it found till the first
whitespace as the macro)
goose said: EXIT_FAILURE really *is* what I should've used. However I'll change
the above to:
#define FMALLOC(ptr,size) ((ptr)=malloc(size) ? (ptr) : exit (EXIT_FAILURE))
(you may not leave a space between ptr and size; the preprocessor stops
parsing at the first whitespace and uses the text it found till the first
whitespace as the macro)
Keith said: Nope.
The lack of whitespace between FMALLOC and the '(' is significant
(with whitespace, it would be a macro taking no arguments, and the '('
would be the first token of the expansion). But there's no rule about
whitespace between "ptr," and "size".
Keith Thompson (The_Other_Keith) (e-mail address removed) <http://www.ghoti.net/~kst>