Randy Howard
Given that your system runs out of memory about every month, how many times
is it likely to need hardware repairs in 25,000 months?
You've raised hand-waving to a new level of art form.
Yes, but normally your system will run out of memory when a large amount,
say half a megabyte to hold ten seconds of audio samples, is requested. It
will fail on a request for twenty bytes only once in every 25,000 cases,
assuming both allocation requests are made and the system always fails on
one of them.
Maybe that should be made clearer. If there is a realistic chance of the
allocation failing, then the failure path should be considered a normal part
of program logic. When the request is for twenty bytes on a system with 2GB
installed, however, this becomes less sensible.
Malcolm said:.... snip ...
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
/*
failsafe malloc drop-in
Params: sz - amount of memory to allocate
Returns: allocated block (never returns NULL even on block 0)
*/
void *xmalloc(int sz)
{
    void *answer = 0;
    assert(sz >= 0);
    if(sz == 0)
        sz = 1;
    answer = malloc(sz);
    /* the rest of the original post was snipped; one plausible
       completion that preserves the "never returns NULL" contract
       is to terminate on failure */
    if(answer == NULL)
    {
        fprintf(stderr, "xmalloc: out of memory\n");
        exit(EXIT_FAILURE);
    }
    return answer;
}
CBFalconer said:Oh? Try the following:
#include <stdio.h>
#include <stdlib.h>

#define SZ 40

int main(void) {
    unsigned long count;
    void *ptr;

    count = 0;
    while ((ptr = malloc(SZ)) != NULL) count++;
    printf("Failed after %lu tries\n", count);
    return 0;
}
Richard said:Ben Bacarisse said:
.... snip ...
Oops, so you did. I even read it, actually - but I have a memory
like a... like a... sort of bowl-shaped, you use it to sift, um,
well, cooky stuff, goes in bread...
user923005 said:What happens if some other application (running at the same time as
your small RAM requestor) succeeds in getting RAM for video editing,
and then you ask for your small packet of RAM?
Not checking the return of malloc() is simply incompetent.
Assuming that something which can fail will always work is not a good
idea, and especially when checking is an ultra-simple operation.
BTW, 99.873% of all statistics about malloc() failure percentage are
made up out of thin air.
It depends if the video editor has deliberately tailored his request to the
amount of memory in the machine. If he has, all bets are off. If he hasn't,
then the chance of failing on the small packet rather than in the video
editor is quite small.
Typically something between 33% and 50% of code will be there to handle
malloc() failures, in constructor-like C functions. If malloc() fails they
destroy the partially-constructed object and return NULL to the caller.
Typically the caller has to simply terminate, or destroy itself and return
NULL to a higher level, which terminates.
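The constructor-like pattern described above might look something like this sketch (the clip_t struct and its fields are invented for illustration; the cleanup-and-return-NULL shape is the point):

```c
#include <stdlib.h>

/* Hypothetical object with two separately allocated parts. */
typedef struct {
    double *samples;
    char *name;
} clip_t;

/* Constructor-like function: on any malloc() failure it destroys
   the partially-constructed object and returns NULL to the caller. */
clip_t *clip_create(size_t nsamples)
{
    clip_t *c = malloc(sizeof *c);
    if (c == NULL)
        return NULL;
    c->samples = malloc(nsamples * sizeof *c->samples);
    if (c->samples == NULL) {
        free(c);
        return NULL;
    }
    c->name = malloc(32);
    if (c->name == NULL) {
        free(c->samples);
        free(c);
        return NULL;
    }
    return c;
}

void clip_destroy(clip_t *c)
{
    if (c) {
        free(c->name);
        free(c->samples);
        free(c);
    }
}
```

A caller that gets NULL back either terminates or propagates the NULL upward, exactly as described above.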
Try to follow the logic.
user923005 said:If you think the odds that other applications which are not
intentionally trying to allocate all of RAM can make your malloc() fail
are quite small, then I can assure you that you are
mistaken. It is not at all uncommon for people to run lots of
applications at the same time. Sometimes the applications are long-
running applications. Sometimes these applications have resource
leaks. Memory allocation failure for a large or small memory request
is not a rare occurrence. I think it is very careless to assume that
any memory allocation will succeed without testing it.
The question is, where will the failure occur? In a big allocation, or a
little allocation? With some simplifying assumptions, the answer is very
easy to calculate, and so you can work out the size of allocation that is
orders of magnitude less likely to fail than the hardware itself, that is
to say the error-handling code is vanishingly unlikely to be executed.
Richard Heathfield said:Malcolm McLean said:
So whose fault is this? The customer's fault, for running the code on a
fifty-million line log file? That depends on whether the shipped code
had a big notice on the box saying "warning - this lib was written by
overly optimistic developers who haven't properly tested it in a real
environment".
It depends on whether the code says "returns NULL on failure" or not. If
it does then, clearly, it is lying, and it is the fault of the library
vendor.
Richard Bos said:That's a massive overestimate, probably because you're going about it
the wrong way. _First_ you ask for the memory, _then_ you start
assigning the values. That way, when malloc() fails, all you have to do
is return an error to the caller.
Richard Heathfield said:Malcolm McLean said:
Granted. But the important point is this: that the allocation was for a
few lousy bytes, and yet the failure occurred very quickly - it did not
take the many thousands of hours that you were claiming it would
take.
A 20 byte allocation doesn't seem like a lot, until you do it in a loop.
So you really are claiming that in some circumstances it's acceptable
to call malloc() and blindly assume that it succeeded, if you've
confirmed that the probability of failure is sufficiently low.
Malcolm McLean said:That's true. You used to get this problem with Midi files - it was extremely
tempting to put the notes into a linked list, but it would run a late
eighties vintage PC out of memory.
Nowadays it isn't a problem, of course,
Malcolm McLean said:Most structures consist of nested arrays, sizes to be calculated at runtime.
christian.bau said:2. If your environment is such that any access to null pointers will
crash the program, and a crashing program is harmless:
I would say that a properly written wrapper that ends up calling
exit(EXIT_FAILURE) (or even abort()) on allocation failure is a better
solution in this case than letting the null pointer dereference crash
the program.
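A minimal version of such a wrapper might look like this sketch (the name xmalloc_or_die is invented; any project-specific error reporting could replace the fprintf):

```c
#include <stdio.h>
#include <stdlib.h>

/* Allocation wrapper: never returns NULL. On failure it reports
   the error and terminates via exit(EXIT_FAILURE), so callers
   need no failure path at all. A zero-byte request is allowed to
   return whatever malloc() returns, since malloc(0) may legally
   return NULL. */
void *xmalloc_or_die(size_t sz)
{
    void *p = malloc(sz);
    if (p == NULL && sz != 0) {
        fprintf(stderr, "out of memory (%zu bytes)\n", sz);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

The design choice is the one argued for above: a deliberate, diagnosable termination rather than an accidental crash on a null pointer dereference.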
I can only suppose that one of us must have amazingly unusual tasks to
handle.