When to check the return value of malloc

Herbert Rosenau

xmalloc() will never return null.
Now say that, on a fairly small desktop machine, we want to allocate some
tens of megabytes of memory for a large image. If the machine won't support
an allocation that size, there is no way that xmalloc() can return. It must
terminate the program. Which is almost certainly not what you want, because
it is quite foreseeable that a user will demand an image larger than you can
handle, and you almost certainly want a message such as "image requested too
large, please try a smaller one".

On the other hand, if a request for twenty bytes fails, we are in deep
trouble. Quite possibly the only realistic thing to do in such circumstances
is to terminate. That's what xmalloc() does by default. Remember, this will
almost never happen - once in a trillion years of operation for the example
you described; under rather different assumptions, just enough for a few
users to experience it occasionally.
No. It is suitable for when custom error-handling code adds more cost than
it is worth. Which is often when allocating trivial amounts of memory in
medium-critical applications. It is not for when you have a reasonable
expectation that the memory request will be impossible to fulfil.
Bullshit!

Anyone trained in writing failsafe programs will always handle a failing
request, whether for 1 byte or for 100 MB of memory, without letting the
program crash or simply exit().

When such a mistake may cost a lot of money, destroy databases or even
simple files, leave something in an indeterminate state, or cost human
lives, you catch each and every theoretical error. A failing malloc is
one of them.

Yes, it requires careful programming; yes, it costs a bit of time to catch
each NULL from the malloc family - but you get a program that works as it
should, without unexpected failure.

You do not know what to do when out of memory? It's simple: return the
error to the caller, giving it a chance to undo the action it is in the
middle of, and let it return to its caller in turn, giving that one the
chance to undo ........ until the complete action is undone. Clean
everything up and restart the action. Since by then the state of the whole
system has changed, the chance that the action will now complete is
realistically high.
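For illustration, a minimal sketch of that propagate-and-undo pattern in C.
All names here (track_t, track_init, and so on) are hypothetical, invented
for this example:

#include <stdlib.h>

typedef struct {
    double *samples;
    char *label;
} track_t;

/* Returns 0 on success, -1 on failure. On failure the object is left
   exactly as it was before the call: nothing is half-built. */
int track_init(track_t *t, size_t nsamples)
{
    t->samples = malloc(nsamples * sizeof *t->samples);
    if (t->samples == NULL)
        return -1;              /* report upward instead of exiting */

    t->label = malloc(64);
    if (t->label == NULL) {
        free(t->samples);       /* undo the step that did succeed */
        t->samples = NULL;
        return -1;
    }
    return 0;
}

/* The caller gets the same chance to undo its own partial work and to
   report further upward, or to retry later when memory may be free. */
int session_add_track(track_t *t, size_t nsamples)
{
    if (track_init(t, nsamples) != 0)
        return -1;
    return 0;
}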

However, when you have to write failsafe code you should make a complete
design of the whole project - every single action, every function - and not
hack around. Granted, the start of coding will be delayed significantly,
but the coding phase shrinks significantly too.




--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of the German eComStation
eComStation 1.2R in German is here!
 
Herbert Rosenau

I'd want

int user_input = get_user_input();
/* if I were sensible I'd sanity-check here, but this is a sloppy program */
int s = user_input * sizeof(record_t);
/* at this point we've got an overflow, so the compiler issues "arithmetic
   overflow" and terminates. */

Unfortunately we don't get this yet. Almost no compilers will handle
overflow nicely. However, if the result is negative, which is a sporting
chance, at least we can pick it up in the call to xmalloc(). If we are in a
loop with essentially random values being typed in by the user, the chance
of a negative eventually appearing is statistically certain.
xmalloc() is completely erroneous because it will ruin all the 413
files open for update by leaving them in an unrecoverable state! It will
destroy other data as well, since it gives pending actions no chance to
complete, successfully or not, only because it offers no chance to fall
back to a safe state.

By the way: it is quite easy to catch overflow on unsigned operations. It
is even much easier than handling overflow on signed operands. All a
programmer needs for that is brain 1.0. However, a programmer who does not
own brain 1.0 should work as a road sweeper instead of producing programs
that are buggy by design.
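As a hedged sketch of that unsigned-overflow check (record_t and the count
n are hypothetical here): unsigned arithmetic wraps around rather than
invoking undefined behaviour, so the test can be done safely with a
division before multiplying:

#include <stdint.h>
#include <stdlib.h>

typedef struct { int id; double salary; } record_t;

void *alloc_records(size_t n)
{
    /* n * sizeof(record_t) would silently wrap modulo SIZE_MAX + 1,
       so reject any n large enough to make that happen */
    if (n > SIZE_MAX / sizeof(record_t))
        return NULL;
    return malloc(n * sizeof(record_t));
}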

Programmers using signed values where the language requires unsigned
ones are braindead and should be fired on sight.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of the German eComStation
eComStation 1.2R in German is here!
 
Martien Verbruggen

There's a strong case for making programs correct. There is also a strong
case for taking code that will almost certainly never be executed out of
functions.
However the literal answer to the OP's question is "when the chance of a
malloc() failure is higher than the chance of the computer breaking down,
it is worthwhile checking the return value of malloc()".
We can calculate it - it took only a few minutes with a calculator to work
out that Flash's example - a computer running out of memory every month,
2GB of memory, a non-loop allocation of 20 bytes - means that we are
talking about one failure of that allocation in every ten million years of
operation. I can cope with one irate phone call every ten million years.

I haven't tried to repeat your calculation, but I'm pretty sure there
are some assumptions in there which you haven't mentioned, because I
don't see how you could arrive at a number for the probability without
making more assumptions than the ones you have stated.

More importantly, however, I believe you're talking about the chance of
failure for a single particular call to malloc, rather than the whole
group of unchecked mallocs that you have introduced and all invocations
of those particular mallocs.

Answering that question isn't what matters, though, and I think most
other people here are trying to deal with a different one: how likely
is it that an unchecked malloc (any one invocation of any one instance)
breaks a program or system?
But it isn't
always so easy to do the calculation.

Indeed. This is why statistics is a profession. It isn't always
straightforward to translate a real-world problem into a probability.
It isn't even always straightforward to come up with the right
question. And you also need to make sure that you correctly take
interdependencies of events into account. Once a machine runs out of
memory, every process on it that requests memory will fail. That means
that each individual malloc event is dependent in some way on all the
others.

You haven't shown us the calculation you followed to get to that number,
but it just doesn't seem at all right, especially with the commentary
you put around it.

Martien
 
Ulrich Eckhardt

Herbert said:
xmalloc() is completely erroneous because it will ruin all the 413
files open for update by leaving them in an unrecoverable
state!

Wait, I'm afraid you didn't understand its scope: if the only reasonable
answer to OOM is to terminate (and only then!), xmalloc() and friends are
the right choice. If you need to, you can still implement cleanup to some
extent using atexit() handlers, or, on implementations like Malcolm's, use
a callback before terminating. Otherwise, you apply brain 1.0 or higher,
decide that xmalloc() simply does not do what you need done, and pass the
error up the whole call chain. It is a tool with limited applicability;
competent programmers can choose it when it is useful.
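A minimal sketch of that atexit() idea, assuming xmalloc() terminates via
exit() rather than abort(); the log file and handler names are invented
for this example:

#include <stdio.h>
#include <stdlib.h>

static FILE *update_log;

static void flush_on_exit(void)
{
    if (update_log != NULL) {
        fflush(update_log);     /* push buffered updates out to disk */
        fclose(update_log);
    }
}

int main(void)
{
    update_log = fopen("updates.log", "w");
    if (update_log == NULL)
        return EXIT_FAILURE;
    atexit(flush_on_exit);      /* runs even if xmalloc() calls exit() */
    /* ... work that calls xmalloc() freely ... */
    return 0;
}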
By the way: it is quite easy to catch overflow on unsigned operations. It
is even much easier than handling overflow on signed operands.

You cannot portably handle signed overflow after the fact, because signed
overflow causes undefined behaviour. Compilers are therefore free to assume
that you (the programmer) already made sure it doesn't happen; they may
assume that an operation like 'n*sizeof x' never yields anything below
zero, and optimise away any check of whether the result is below zero! So
you need to check against overflow before it happens.

I'm not sure if that was what you meant, I'd just like to point out that
after-the-fact checks for signed overflow are broken by design.
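To make that concrete, a hedged sketch (the function names and the idea of
an int count n are hypothetical):

#include <stdint.h>
#include <stdlib.h>

void *bad_alloc(int n, size_t elem)
{
    int bytes = n * (int)elem;  /* undefined behaviour if this overflows */
    if (bytes < 0)              /* ...so the compiler may delete this test */
        return NULL;
    return malloc((size_t)bytes);
}

void *good_alloc(int n, size_t elem)
{
    /* checked before any signed arithmetic can overflow;
       elem is a sizeof, hence nonzero */
    if (n < 0 || (size_t)n > SIZE_MAX / elem)
        return NULL;
    return malloc((size_t)n * elem);
}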
All a
programmer needs for that is brain 1.0. However, a programmer who does not
own brain 1.0 should work as a road sweeper instead of producing programs
that are buggy by design.

Programmers using signed values where the language requires unsigned
ones are braindead and should be fired on sight.

....as should people who unnecessarily insult others, because a lack of
social skills makes it impossible to integrate them into a team. Sorry,
while I agree with you technically, I find your statement unnecessarily
rude.

Uli
 
Keith Thompson

Malcolm McLean said:
The objection to xmalloc() is that since it takes an int rather than a
size_t, it might break in a nasty way on large allocations. Which is
true. However it isn't intended for large allocations, which have a
realistic chance of failing. It is meant for situations where the
chance of failure is too low for it to be worth writing custom
error-handling code.

Does the documentation for xmalloc() make this sufficiently clear?

BTW, I question your implicit assumption that the chance of failure
depends only on the size of the requested allocation.
 
Keith Thompson

Eric Sosman said:
Please cite the research that measured this surprising figure.

When I first read the above (insufficiently carefully), I thought
Malcolm was claiming that 33% to 50% of code *in the program* would be
there to handle malloc() failures. I think he actually meant 33%
to 50% of the code *in a constructor-like C function*.

Given that a constructor-like function is typically going to be fairly
simple (allocate memory, initialize members, and return), 33% to 50%
doesn't seem implausible to me.
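For a sense of proportion, here is a hedged sketch of such a
constructor-like function (the pair_t type is invented for the example);
roughly half its lines exist only to unwind partial construction when
malloc() fails:

#include <stdlib.h>
#include <string.h>

typedef struct { char *name; double *salaries; } pair_t;

pair_t *pair_create(const char *name, size_t n)
{
    pair_t *p = malloc(sizeof *p);
    if (p == NULL)
        return NULL;
    p->name = malloc(strlen(name) + 1);
    if (p->name == NULL) {
        free(p);                /* undo step one */
        return NULL;
    }
    strcpy(p->name, name);
    p->salaries = malloc(n * sizeof *p->salaries);
    if (p->salaries == NULL) {
        free(p->name);          /* undo steps one and two */
        free(p);
        return NULL;
    }
    return p;
}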
 
Keith Thompson

Malcolm McLean said:
Keith Thompson said:
But that's not what you wrote upthread. What you wrote was:

[...]
The chance of the computer breaking during this period is so much
higher that there is, in this case, no point checking the malloc().

You seemed to be talking about calling malloc() and blindly assuming
that it succeeded, not about using xmalloc() to abort the program if
malloc() fails.

Perhaps that's not really what you meant.
There's a strong case for making programs correct. There is also a
strong case for taking code that will almost certainly never be
executed out of functions.
xmalloc() is an attempt to solve both issues.
However the literal answer to the OP's question is "when the chance of
a malloc() failure is higher than the chance of the computer breaking
down, it is worthwhile checking the return value of malloc()".
We can calculate it - it took only a few minutes with a calculator to
work out that Flash's example - a computer running out of memory every
month, 2GB of memory, a non-loop allocation of 20 bytes - means that we
are talking about one failure of that allocation in every ten
million years of operation. I can cope with one irate phone call every
ten million years. But it isn't always so easy to do the calculation.

So you really are claiming that in some circumstances it's acceptable
to call malloc() and blindly assume that it succeeded, if you've
confirmed that the probability of failure is sufficiently low.

I strongly disagree. Performing the calculation is just a waste of
time; I don't see how you can have any confidence in the result of
such a calculation. Consider that a malloc() failure can be caused by
something outside the control of your program. Suppose there's plenty
of memory available when your program starts, then some other program
on the system allocates all the remaining memory, then your program
tries to allocate 32 bytes and fails.

It took you a few minutes with a calculator to compute the alleged
probability. How long would it have taken you to write the code to
check the result and abort the program if it fails? Considering that
you already have your xmalloc() function, would you really spend
several minutes performing an unreliable calculation so you could save
the time it would have taken to type 'x'?
 
Malcolm McLean

Keith Thompson said:
Does the documentation for xmalloc() make this sufficiently clear?
The prototype says "int".

BTW, I question your implicit assumption that the chance of failure
depends only on the size of the requested allocation.
Imagine a lot of sticks stuck together at random, then placed against a
measuring rod, with the stick lying at the point where the rod ends
receiving a mark.
Clearly, the chance of a given stick receiving the mark is proportional to
its length.
Of course real programs aren't completely random. The OS will gobble a
chunk of memory before any application programs even launch, for instance.
But the model is close enough to reality to be useful.
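A quick simulation of that stick model, as a hedged sketch with invented
numbers (one half-megabyte request and one twenty-byte request per trial,
exhaustion striking at a uniformly random point), shows the failure ratio
converging on about 25,000:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const unsigned long big = 500000, small = 20;
    unsigned long fail_big = 0, fail_small = 0;

    for (long trial = 0; trial < 1000000; trial++) {
        /* uniform point in [0,1) with fine granularity even when
           RAND_MAX is small, by combining two rand() draws */
        double u = (rand() + rand() / (RAND_MAX + 1.0)) / (RAND_MAX + 1.0);
        unsigned long limit = (unsigned long)(u * (big + small));
        if (limit < big)
            fail_big++;     /* the rod ran out inside the big stick */
        else
            fail_small++;   /* the rod ran out inside the small stick */
    }
    printf("big: %lu  small: %lu  ratio: %.0f\n",
           fail_big, fail_small, (double)fail_big / fail_small);
    return 0;
}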
 
Malcolm McLean

Keith Thompson said:
Malcolm McLean said:
Keith Thompson said:
But that's not what you wrote upthread. What you wrote was:

[...]
The chance of the computer breaking during this period is so much
higher that there is, in this case, no point checking the malloc().

You seemed to be talking about calling malloc() and blindly assuming
that it succeeded, not about using xmalloc() to abort the program if
malloc() fails.

Perhaps that's not really what you meant.
There's a strong case for making programs correct. There is also a
strong case for taking code that will almost certainly never be
executed out of functions.
xmalloc() is an attempt to solve both issues.
However the literal answer to the OP's question is "when the chance of
a malloc() failure is higher than the chance of the computer breaking
down, it is worthwhile checking the return value of malloc()".
We can calculate it - it took only a few minutes with a calculator to
work out that Flash's example - a computer running out of memory every
month, 2GB of memory, a non-loop allocation of 20 bytes - means that we
are talking about one failure of that allocation in every ten
million years of operation. I can cope with one irate phone call every
ten million years. But it isn't always so easy to do the calculation.

So you really are claiming that in some circumstances it's acceptable
to call malloc() and blindly assume that it succeeded, if you've
confirmed that the probability of failure is sufficiently low.

I strongly disagree. Performing the calculation is just a waste of
time; I don't see how you can have any confidence in the result of
such a calculation. Consider that a malloc() failure can be caused by
something outside the control of your program. Suppose there's plenty
of memory available when your program starts, then some other program
on the system allocates all the remaining memory, then your program
tries to allocate 32 bytes and fails.

It took you a few minutes with a calculator to compute the alleged
probability. How long would it have taken you to write the code to
check the result and abort the program if it fails? Considering that
you already have your xmalloc() function, would you really spend
several minutes performing an unreliable calculation so you could save
the time it would have taken to type 'x'?
Unfortunately xmalloc() isn't standard.
In Baby X, my mini X windows toolkit, xmalloc() and xrealloc() will be used
exclusively, because each object allocates a small structure and maybe a
few other small objects, and there is no decent way of handling failures,
so it simply nags for more memory until the user gives it some or kills
something else.
In the other functions, unfortunately, it introduces a dependency. That
makes the code much less usable.

I am not in any sense tied to the xmalloc() error-handling method. If
anyone can come up with a better idea, please post. It must not return
null unless no memory is to be found, and it must be easier for the caller
to use than regular malloc().
 
Bill Reid

Malcolm McLean said:
I nodded. I meant "salaries". Which somewhat reduces the force of what I am
saying.
You've forgotten half of the challenge. To save the state, then destroy the
object.

OK, for every non-NULL pointer, write the data to...disk?...as part
of the free_workforce()?...only if freeing on failure?...I dunno...
That's why the example was slightly contrived to hold salaries in a parallel
array.

Hmmmm, but as I said, not THAT contrived, because I actually DO
stuff like that all the time.

But here's the problem: for me, in just about all the cases I can think
of right off the bat, I've GOT the data somewhere else already: it's coming
in off the net or from a disk file or something. Where, in your infinite
contrivance, is "Nemployees" coming from? Remember, we're just
talking about MALLOC here, not REALLOC, so we aren't really
dealing with any dynamically-created data, we just need to set aside
some memory for some data that we already know (or can guess)
an initial size for.

Once again, I use something like that for reading some pretty
complicated files for initializing and configuring my program(s) and
program modules, disk-saved data tables, database descriptors
with a pointer to the dynamically-allocated database table I'm
going to parse, etc. In all cases, if I should fail to get the memory
and the user or the default handler is to "give up", the program
is designed to "fail back" to the previous successful program
point at the least, but the DATA ITSELF is still safe and snug
on disk or on an FTP/Web site or sumpin', ready for another
attempt at another time.

And the amount of memory needed is either known EXACTLY
(in the case of initialization/data tables, etc.) because the file
contains the number of elements, at the tippy-top of the file, with
every one of up to a dozen or more sub-elements also explicitly
numbered (this is also a requirement to even be able to READ
the file, let alone assign the data to structures), or I (the program)
makes an educated "guess" as to size that is almost certainly
larger than what is needed (in the case of some database table
reads).
You need to save the names. They are precious data.

Where did they come from in the first place? Why can't we just
get them again later?
However you don't have
any salaries.

Yeah, like everybody else I complain about my pay...
So you've got to devise some code for "salaries missing", and
dump that to the state file.

OK, that can be done pretty easily in the context of my code
as I suggested...still wondering where this data came from that
I can't get it again, or even why I'm getting some of it before
completing the allocation for all of it (well, that part I understand
a little better)...
Then you've got to test that the save and
recovery code works, for all possible combinations of missing names, missing
salaries, missing entire structures.
OK.

The fact that you didn't spot this, despite being an experienced
programmer,

Wrong. I'm not a programmer at all, at least by named profession.
I'm just an enthusiastic young person with a sixth-grade education
and a love of data processing...
was what I expected. It was not a trick question, but it was a tricky
question. I spoilt it a bit by the typing error. It is an utter nightmare
trying to save and recover partially-constructed objects to disk.

Oh, well, that's the "fun" of it, I 'spose...
 
Richard Tobin

CBFalconer said:
It is permitted for malloc(0) to return a pointer that may not be
dereferenced. On some (probably most) malloc systems this will eat
up some memory to keep track of the size etc. of the data
allocated. This may fail, resulting in a NULL return. So you
should ALWAYS check the result of any malloc call.

And just what are you supposed to do when malloc(0) returns NULL,
since it's allowed to do that even when it succeeds?

-- Richard
 
Keith Thompson

Malcolm McLean said:
The prototype says "int".

In other words, no.

Unless, of course, "int" conveys the idea of "situations where the
chance of failure is too low for it to be worth writing custom
error-handling code". What it conveys to me is that the author was
too lazy to use size_t.
Imagine a lot of sticks stuck together at random, then placed against
a measuring rod, with the stick lying at the point where the rod ends
receiving a mark.
Clearly, the chance of a given stick receiving the mark is proportional
to its length.
Of course real programs aren't completely random. The OS will gobble a
chunk of memory before any application programs even launch, for
instance. But the model is close enough to reality to be useful.

And you think this justifies calling malloc() and blindly assuming
that it succeeded. Amazing.
 
CBFalconer

Gordon said:
.... snip ...

What's the difference between malloc(0) returning a pointer that
cannot be dereferenced (non-null) and returning a pointer that
cannot be dereferenced (null)?

The null can be used as an error signal. But not if it can be
returned from a successful call. Then you also have to do silly
things like protecting the use of every malloced pointer with lots of
complications. For example, using my ggets input function:

while (0 == ggets(&buf)) {
    if (!(temp = malloc(sizeof *temp))) {
        free(buf);
        break;
    }
    temp->data = buf;
    temp->next = root;
    root = temp;
}
/* A complete text file has been read into memory, in lines */
/* optional */ root = reverse(root);
while (root) {
    temp = root->next;     /* save the link before freeing the node */
    puts(root->data);
    free(root->data);
    free(root);
    root = temp;
}
/* prints the whole file */

and most of the possible malloc failures have been handled in ggets,
aborting further data input. No special cooking of the data is needed.

BTW, your message totally omitted attributions for quoted
material. Please do not do this, as it seriously harms
readability. The attributions are the initial lines of the form
"joe wrote:" which connect lines with the appropriate number of '>'
markers to joe.
 
CBFalconer

Richard said:
And just what are you supposed to do when malloc(0) returns NULL,
since it's allowed to do that even when it succeeds?

The easiest thing is to prevent it. Use a wrapper, such
as:

void *domalloc(size_t s)
{
    if (!s) s++;        /* promote zero-byte requests to one byte */
    return malloc(s);
}
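With that wrapper, a NULL return unambiguously means failure, even for a
zero-byte request, so the caller's test stays simple; a small hedged usage
sketch (it relies on the domalloc() defined above):

#include <stdio.h>
#include <stdlib.h>

int demo(void)
{
    void *p = domalloc(0);      /* actually allocates one byte */
    if (p == NULL) {
        fputs("out of memory\n", stderr);
        return -1;
    }
    free(p);
    return 0;
}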
 
CBFalconer

Ben said:
The ones in your book take an int but don't check for negative
sizes. I think that is unwise.

Well, we now need a meaning for a -ve size. I suggest it means
that the allocation will absorb the first n (i.e. -size) bytes of
any write to it. If anything is left to be written that will cause
an overrun error.

On reading, nothing will be returned for the first n bytes. Since
there is nothing more stored, the whole read can be ignored.

If all agree on the meaning, we can then worry about
implementation.
 
Malcolm McLean

Keith Thompson said:
Malcolm McLean said:
Keith Thompson said:
[...]
The objection to xmalloc() is that since it takes an int rather than a
size_t, it might break in a nasty way on large allocations. Which is
true. However it isn't intended for large allocations, which have a
realistic chance of failing. It is meant for situations where the
chance of failure is too low for it to be worth writing custom
error-handling code.

Does the documentation for xmalloc() make this sufficiently clear?
The prototype says "int".

In other words, no.
/*
xmalloc.c
by Malcolm McLean

a failure-free malloc() wrapper.
By default, the system will exit if memory is not available.
This behaviour can be overridden with a call to setxmallocfail().
The system will repeatedly call the handler until memory
becomes available.
*/

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* assumed default handler, per the comment above: report and exit */
static void xmalloc_default(int sz)
{
    fprintf(stderr, "xmalloc: out of memory (%d bytes)\n", sz);
    exit(EXIT_FAILURE);
}

/* out-of-memory handler, replaceable via setxmallocfail() */
static void (*emergency)(int) = xmalloc_default;

/*
failsafe malloc drop-in
Params: sz - amount of memory to allocate
Returns: allocated block (never returns NULL, even for a size of 0)
*/
void *xmalloc(int sz)
{
    void *answer = 0;

    assert(sz >= 0);
    if (sz == 0)
        sz = 1;
    while (!(answer = malloc(sz)))
        (*emergency)(sz);   /* keep calling the handler until memory appears */

    return answer;
}

Web Page:

xmalloc() is a malloc() that is guaranteed never to return NULL. Thus it
simplifies code. If you are making a request for, say, a hundred or so
bytes of memory on a modern desktop machine, statistically it is probably
more likely that the computer will break than that the memory will be
unavailable. However it is still necessary to check the return from
malloc(), partly for the sake of having a correct program, and partly
because an attacker might find a way of hacking into your program by
squeezing memory so that malloc() returns NULL, followed by an illegal
memory write.

The problem is knowing how the programmer should react when the system
runs out of memory. The approved method is to write an error report to
stderr. However, if you are writing general-purpose routines you may not
know what sort of user interface your caller is providing. A video-games
console usually has no facilities at all for error reporting, for example.
Windowing systems often pop up windows with user messages. So
general-purpose routines have to pass "out of memory" conditions up to
their callers, resulting in considerable complexity in the caller, and
often code that is very difficult to test, all for an out-of-memory
condition that is unlikely ever to happen.

Java and C++ solve this problem with exceptions. An out-of-memory
condition is passed up to what can be quite a high level, where code
handles it appropriately. The stack is automatically unwound and objects
on it destroyed. C cannot do this, at least not without prohibitive
complexity.
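The nearest C analogue is setjmp()/longjmp(), with the major caveat that
nothing on the stack is freed automatically; a hedged sketch (jmalloc and
the recovery point are invented names, not part of the xmalloc package):

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf oom_recovery;

static void *jmalloc(size_t sz)
{
    void *p = malloc(sz);
    if (p == NULL)
        longjmp(oom_recovery, 1);   /* unwind to the recovery point */
    return p;
}

int main(void)
{
    if (setjmp(oom_recovery)) {
        /* unlike C++/Java, nothing allocated below has been freed */
        fputs("out of memory, action abandoned\n", stderr);
        return EXIT_FAILURE;
    }
    char *buf = jmalloc(100);
    /* ... a deep call chain may use jmalloc() freely ... */
    free(buf);
    return 0;
}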

However, we do not always want to crash out with an error message if
malloc() fails. Sometimes the user might want the option of closing down
other programs. In debug mode we might want to print a stack trace. Or we
might want to execute an emergency state dump to avoid losing work, or
release hoarded memory from a "gravity tank", warning the user that he is
perilously close to failure.

Ultimately, if memory is not available, there is no way of providing it,
so in the last analysis a call to xmalloc() must exit if necessary. The
idea of the interface, however, is to place the decision of when to exit
upon the caller.

Thus the xmalloc() package exposes another function, setxmallocfail(),
which installs an out-of-memory handler.
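A hedged sketch of using that interface; the handler signature is assumed
from the source above, and the "gravity tank" reserve is an invented
illustration of the strategy the page mentions:

#include <stdio.h>
#include <stdlib.h>

static void *gravity_tank;          /* hoarded reserve memory */

static void on_oom(int sz)
{
    if (gravity_tank != NULL) {
        free(gravity_tank);         /* give xmalloc()'s retry a chance */
        gravity_tank = NULL;
        fprintf(stderr, "warning: memory low (%d bytes requested)\n", sz);
    } else {
        fputs("out of memory, giving up\n", stderr);
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    gravity_tank = malloc(1 << 20);     /* reserve a megabyte up front */
    setxmallocfail(on_oom);             /* install the emergency handler */
    char *p = xmalloc(100);             /* never returns NULL */
    /* ... */
    free(p);
    return 0;
}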
 
user923005

Malcolm McLean said:
Keith Thompson said:
But that's not what you wrote upthread.  What you wrote was:
   [...]
   The chance of the computer breaking during this period is so much
   higher that there is, in this case, no point checking the malloc().
You seemed to be talking about calling malloc() and blindly assuming
that it succeeded, not about using xmalloc() to abort the program if
malloc() fails.
Perhaps that's not really what you meant.
There's a strong case for making programs correct. There is also a
strong case for taking code that will almost certainly never be
executed out of functions.
xmalloc() is an attempt to solve both issues.
However the literal answer to the OP's question is "when the chance of
a malloc() failure is higher than the chance of the computer breaking
down, it is worthwhile checking the return value of malloc()".
We can calculate it - it took only a few minutes with a calculator to
work out that Flash's example - a computer running out of memory every
month, 2GB of memory, a non-loop allocation of 20 bytes - means that we
are talking about one failure of that allocation in every ten
million years of operation. I can cope with one irate phone call every
ten million years. But it isn't always so easy to do the calculation.

So you really are claiming that in some circumstances it's acceptable
to call malloc() and blindly assume that it succeeded, if you've
confirmed that the probability of failure is sufficiently low.

I strongly disagree.  Performing the calculation is just a waste of
time; I don't see how you can have any confidence in the result of
such a calculation.  Consider that a malloc() failure can be caused by
something outside the control of your program.  Suppose there's plenty
of memory available when your program starts, then some other program
on the system allocates all the remaining memory, then your program
tries to allocate 32 bytes and fails.

It took you a few minutes with a calculator to compute the alleged
probability.  How long would it have taken you to write the code to
check the result and abort the program if it fails?  Considering that
you already have your xmalloc() function, would you really spend
several minutes performing an unreliable calculation so you could save
the time it would have taken to type 'x'?

An assumption that the probability of failure for malloc() is low might
be OK on a machine with a single thread of execution. But I don't
think it makes sense when multiple programs can be active. Any code
that can be made robust ought to be made robust unless there is an
excellent argument against it. What is the argument against checking
the return of malloc()? The only one that I can imagine is "I am the
laziest programmer in the world," and it isn't a very good one.
 
Flash Gordon

Malcolm McLean wrote, On 21/01/08 10:57:
Keith Thompson said:
Malcolm McLean said:
[...]
The objection to xmalloc() is that since it takes an int rather than a
size_t, it might break in a nasty way on large allocations. Which is
true. However it isn't intended for large allocations, which have a
realistic chance of failing. It is meant for situations where the
chance of failure is too low for it to be worth writing custom
error-handling code.

Does the documentation for xmalloc() make this sufficiently clear?

The prototype says "int".

In other words, no.
/*
xmalloc.c
by Malcolm McLean

a failure-free malloc() wrapper.
By default, the system will exit if memory is not available.
This behaviour can be overridden with a call to setxmallocfail().
The system will repeatedly call the handler until memory
becomes available.
*/

This does not make it clear that it is not intended for large allocations.
/*
failsafe malloc drop-in
Params: sz - amount of memory to allocate
Returns: allocated block (never returns NULL, even for a size of 0)
*/
void *xmalloc(int sz)
{
    void *answer = 0;

    assert(sz >= 0);

Had you used size_t you could have allowed the user of your library to
specify the maximum amount allowed in any one allocation.
    if (sz == 0)
        sz = 1;
    while (!(answer = malloc(sz)))
        (*emergency)(sz);

This does not allow for the possibility of retrying with a smaller amount
of memory.
    return answer;
}

Web Page:

xmalloc() is a malloc() that is guaranteed never to return NULL. Thus it
simplifies code. If you are making a request for, say, a hundred or so
bytes of memory on a modern desktop machine, statistically it is
probably more likely that the computer will break than that the memory
will be unavailable.

This is demonstrably false, as I have more allocation failures than
hardware failures. This includes requests which are likely to be for
small amounts.

<snip>

Nothing stands out in what you quoted as saying that your routine is not
suitable for large allocations.
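A hedged sketch of the variant Flash's two objections point towards: a
size_t parameter, a caller-configurable cap on any one allocation, and a
handler that may shrink the request. All names here are invented, not part
of Malcolm's package:

#include <stdio.h>
#include <stdlib.h>

static size_t xmalloc_cap = (size_t)-1;     /* max bytes per allocation */
static void (*oom_handler)(size_t *sz);     /* may free reserves or shrink *sz */

void set_xmalloc_cap(size_t cap) { xmalloc_cap = cap; }
void set_oom_handler(void (*h)(size_t *)) { oom_handler = h; }

void *xmalloc2(size_t sz)
{
    if (sz == 0)
        sz = 1;
    if (sz > xmalloc_cap) {
        fprintf(stderr, "xmalloc2: %lu bytes exceeds cap\n",
                (unsigned long)sz);
        exit(EXIT_FAILURE);
    }
    for (;;) {
        void *p = malloc(sz);
        if (p != NULL)
            return p;
        if (oom_handler == NULL) {
            fprintf(stderr, "xmalloc2: out of memory\n");
            exit(EXIT_FAILURE);
        }
        oom_handler(&sz);       /* handler may reduce the request and retry */
    }
}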
 
Malcolm McLean

Flash Gordon said:
Malcolm McLean wrote, On 21/01/08 10:57:

This is demonstrably false, as I have more allocation failures than
hardware failures. This includes requests which are likely to be for small
amounts.
Yes, but normally your system will run out of memory when a large amount,
say half a megabyte to hold ten seconds of audio samples, is requested.
Since half a megabyte is roughly 25,000 times twenty bytes, and the chance
of an allocation being the one that hits the limit is proportional to its
size, it will fail on the request for twenty bytes only once in every
25,000 cases, assuming both allocation requests are made and the system
always fails on one of them.
Given that your system runs out of memory about every month, how many times
is it likely to need hardware repairs in 25,000 months?
Nothing stands out in what you quoted as saying that your routine is not
suitable for large allocations.
Maybe that should be made clearer. If there is a realistic chance of the
allocation failing, then the failure path should be considered a normal
part of the program logic. When the request is for twenty bytes on a
system with 2GB installed, however, this becomes less sensible.
 
