When to check the return value of malloc


Malcolm McLean

Richard Tobin said:
Well yes, ideally. Ideally we would find all bugs during development.
But since we don't, I'm not sure what your point is. Are you suggesting
that we should not try to detect mistaken assumptions later?

xmalloc() takes a signed int rather than a size_t as the size parameter.

Now it is perfectly understandable that some people will object to this
design decision. However, given that the argument is an int, the use of an
assert to ensure that it is not negative seems to me totally unexceptional.
Any argument against it is an argument against assert().
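
For concreteness, here is a minimal sketch of an xmalloc() along the lines
being discussed (a signed int size parameter, an assert() on negative sizes,
abort() on allocation failure). It is an illustration only; the actual
function under discussion may differ in its details:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: allocate memory or die; callers never see NULL. */
void *xmalloc(int size)
{
    void *p;

    assert(size >= 0);            /* catch negative sizes in debug builds */

    p = malloc((size_t)size);
    if (p == NULL && size != 0) { /* malloc(0) may legitimately return NULL */
        fprintf(stderr, "xmalloc: out of memory (%d bytes)\n", size);
        abort();
    }
    return p;
}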

Al Balmer

Malcolm McLean said:
xmalloc() takes a signed int rather than a size_t as the size parameter.

Now it is perfectly understandable that some people will object to this
design decision. However, given that the argument is an int, the use of an
assert to ensure that it is not negative seems to me totally unexceptional.
Any argument against it is an argument against assert().

It's an argument against assert() in a production program, not an
argument against assert() as a development tool.

In production, if you insist on allowing negative arguments, you can
still check and return a failure indication, without needing to abort
the program.

Richard Heathfield

Malcolm McLean said:

xmalloc() takes a signed int rather than a size_t as the size parameter.

Now it is perfectly understandable that some people will object to this
design decision. However, given that the argument is an int, the use of
an assert to ensure that it is not negative seems to me totally
unexceptional. Any argument against it is an argument against assert().

No, it's an argument against using an int when a size_t was the better
choice.

santosh

Malcolm said:
xmalloc() takes a signed int rather than a size_t as the size
parameter.

Now it is perfectly understandable that some people will object to
this design decision. However, given that the argument is an int, the
use of an assert to ensure that it is not negative seems to me totally
unexceptional. Any argument against it is an argument against
assert().

No. You could return a failure indicator and allow your program to flush
file streams and exit cleanly instead of abruptly aborting. There is no
sense in risking data loss when it could easily be avoided.
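
As a sketch of the alternative santosh describes, assuming nothing beyond the
standard library: check malloc() at the call site and use exit(), which
flushes open streams, rather than abort(), which need not:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1000000;
    double *samples = malloc(n * sizeof *samples);

    if (samples == NULL) {
        fputs("allocation failed; saving work and exiting cleanly\n", stderr);
        /* flush or close any output files here, then ... */
        exit(EXIT_FAILURE);  /* exit() flushes open streams; abort() need not */
    }

    /* ... use the buffer ... */
    free(samples);
    return 0;
}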

Flash Gordon

Richard Tobin wrote, On 23/01/08 17:59:
Well yes, ideally. Ideally we would find all bugs during development.
But since we don't, I'm not sure what your point is. Are you suggesting
that we should not try to detect mistaken assumptions later?

You should, but what you do with them might be different. On some
embedded systems the correct thing to do would be to enter an "infinite"
loop which will make a watchdog reset the processor. On other systems
you might want to display some appropriate error message and exit. Or
you might want to invoke some alternative code that was written by a
different team to try to solve the same problem.

Malcolm McLean

santosh said:
No. You could return a failure indicator and allow your program to flush
file streams and exit cleanly instead of abruptly aborting. There is no
sense in risking data loss when it could easily be avoided.

The whole point of the function is to take error-handling code which will
never[1] be executed out of calling code. So if we return an error condition
we're back to square one. We might as well just call malloc().

[1] obviously the value of "never" is fuzzy. My view is that if you expect
an allocation failure, you should call malloc() and consider a null as part
of ordinary program flow, not as an error condition.
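
To illustrate that footnote (the numbers and names here are invented): an
allocation that is expected to fail sometimes, such as an optional cache, can
use plain malloc() and treat NULL as an ordinary outcome:

#include <stddef.h>
#include <stdlib.h>

/* Hypothetical helper: try to grab a large optional cache. A NULL
   return is not an error; the caller simply runs without the cache. */
static unsigned char *try_cache(size_t nbytes, size_t *got)
{
    unsigned char *p = malloc(nbytes);

    *got = (p != NULL) ? nbytes : 0;   /* ordinary program flow */
    return p;
}

int main(void)
{
    size_t got;
    unsigned char *cache = try_cache((size_t)256 * 1024 * 1024, &got);

    if (got == 0) {
        /* no cache this run; take the slower, uncached path */
    } else {
        /* use the cache ... */
    }
    free(cache);   /* free(NULL) is a harmless no-op */
    return 0;
}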

Al Balmer

Malcolm McLean said:
The whole point of the function is to take error-handling code which will
never[1] be executed out of calling code. So if we return an error condition
we're back to square one. We might as well just call malloc().

Actually, in my opinion, it's *better* to call malloc.

CBFalconer

Malcolm said:
Yes. I didn't decide the assert() strategy, but it is the one
chosen by the standard library and is reasonable for some
applications, like video games, where it is better to continue
execution rather than admit programmer error.

A lot of compilers keep the asserts even in release mode, however.

No they don't. You do, by controlling the definition of NDEBUG.
Read the standard.
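
For reference, this is all it takes: if NDEBUG is defined at the point
<assert.h> is included, whether in the source or via a compiler option such
as -DNDEBUG, assert() expands to a no-op:

#define NDEBUG        /* or pass -DNDEBUG on the command line */
#include <assert.h>

int main(void)
{
    assert(1 == 2);   /* expands to a do-nothing expression: no check, no abort */
    return 0;
}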

Richard Tobin

Richard Heathfield said:
I think of assertions as being rather like scaffolding. Whilst a building
is being built, the scaffolding is useful and indeed vital. But if the
"finished" building collapses when you take the scaffolding away, it
wasn't a very good building, was it?

That seems like an odd analogy, since assertions don't prevent a
program from failing; they tell you when it is. If you want a
building analogy, try those devices that measure distance to tell you
whether a crack is opening up.

Incidentally, assertions can help compilers produce better code, which
is equally useful after development :) If execution continues after
an assertion, the compiler can know that the assertion is true. For a
typical assert() implementation, this just requires knowing that
abort() doesn't return.
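
A sketch of the kind of opportunity meant here; whether any particular
compiler exploits it is, of course, another matter:

#include <assert.h>
#include <stddef.h>

/* After the assert, a compiler that knows abort() does not return may
   assume n != 0 on the fall-through path, e.g. when generating code
   for the division below. */
long average(const long *a, size_t n)
{
    long sum = 0;
    size_t i;

    assert(n != 0);
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum / (long)n;   /* n is known to be non-zero here */
}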

-- Richard

Richard Tobin

Malcolm McLean said:
Now it is perfectly understandable that some people will object to this
design decision. However, given that the argument is an int, the use of
an assert to ensure that it is not negative seems to me totally
unexceptional. Any argument against it is an argument against assert().

Richard Heathfield said:
No, it's an argument against using an int when a size_t was the better
choice.

I've had several bugs (my own and others) where a miscalculation
resulted in an attempt to malloc() a negative size. If the size
parameter could be negative, you might get a more informative
error message. (Of course, after a while you get to recognise what
malloc failing with an argument of 4294967292 means.)
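
The number comes from the usual unsigned conversion: a negative int passed
where a size_t is expected wraps modulo SIZE_MAX + 1, so with a 32-bit size_t
a value of -4 becomes 4294967292. A tiny demonstration:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
    int miscalculated = -4;               /* result of a buggy size calculation */
    size_t asked = (size_t)miscalculated; /* wraps to SIZE_MAX - 3 */

    /* With a 32-bit size_t this prints 4294967292, and an allocation
       of that size will almost certainly fail. */
    printf("malloc would be asked for %zu bytes\n", asked);
    return 0;
}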

-- Richard

Richard Tobin

Flash Gordon said:
You should, but what you do with them might be different. On some
embedded systems the correct thing to do would be enter an "infinite"
loop which will make a watchdog reset the processor. On other systems
you might want to display some appropriate error message and exit. Or
you might want to invoke some alternative code that as written by a
different team to try and solve the same problem.

Oh, I quite agree. In some cases you might even want to effectively
ignore the error (e.g. if it's just going to result in some degraded
image rendering). But there's a range of programs where abort()ing is
appropriate.

-- Richard

Richard Tobin

CBFalconer said:
No they don't. You do, by controlling the definition of NDEBUG.

Surely it's obvious that Malcolm means that these compilers don't
themselves set NDEBUG in release mode?

CBFalconer said:
Read the standard.

The standard doesn't constrain how you define macros externally.
It's perfectly conforming for a particular compiler mode to result
in NDEBUG being defined.

-- Richard

Kelsey Bjarnason

[snips]

Malcolm McLean said:
xmalloc() takes a signed int rather than a size_t as the size parameter.

Yes, it does, which leads to the immediate question "What benefit is
there to allocating -3 bytes?"

If there isn't a benefit, then one must ask what the point is of allowing
negative values for the requested size. Seems kinda silly.

Malcolm McLean said:
Now it is perfectly understandable that some people will object to this
design decision. However, given that the argument is an int, the use of
an assert to ensure that it is not negative seems to me totally
unexceptional. Any argument against it is an argument against assert().

Except you *don't* ensure it is not negative. To be precise, you ensure
it is not negative *when the code is built without NDEBUG defined at the
point of including assert.h*.

Which, in many cases, falls flat on its face when the code is compiled in
release mode.

However, release or debug modes aside, it falls on its face in another
manner: it changes behaviour, fatally, if NDEBUG is defined by any means,
such as by passing it as a parameter at compile time.

If NDEBUG is not defined, the code operates in one manner: checking for
negative values and rejecting them. If NDEBUG is defined, however, it
ceases to check for negative values, with likely undesirable results.

Parameter validation is good; parameter validation which magically goes
away simply because the developer prefers to disable assertions is not
good. Your code does this.

I don't know what your compiler does. I do know I've worked with
compilers in the past which _did_ disable assertions (i.e. defined
NDEBUG) when compiling in non-debug modes. On those, your code would be
compiled without the assertion, the validation of the parameter would go
with it, and the entire point of the assertion - parameter validation -
would be lost.

Of course, if you simply used a size_t, which is the only sensible type
to use for that parameter, you wouldn't need the assertion in the first
place - the value could never be negative.

'Course, that brings up other issues. If I'm using a size_t to represent
sizes - imagine that, using the right type! - it is entirely possible the
value in my size_t variable is beyond the limit of an int - which
certainly can mean that, when passed to your function, the value is seen
as negative. So your function, even if the assert is left in, can
merrily puke on perfectly valid code, because you persist in using the
wrong types.
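
Concretely: converting a size_t value above INT_MAX to int gives an
implementation-defined result (or raises an implementation-defined signal),
and on common two's-complement systems it shows up as a negative number. A
sketch, assuming a 32-bit int:

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    size_t big = (size_t)INT_MAX + 2;  /* a perfectly valid size_t value */
    int seen = (int)big;               /* implementation-defined result */

    /* On a typical two's-complement system with 32-bit int this prints
       a negative number, so an assert(size >= 0) would reject a request
       that was legitimate in the caller's size_t arithmetic. */
    printf("size_t %zu becomes int %d\n", big, seen);
    return 0;
}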

It fails to work as intended if NDEBUG is defined; it fails to work as
expected if passed perfectly legitimate sizes. How this qualifies as a
"drop-in for malloc", which has neither of these failings, is not clear.

Kelsey Bjarnason

[snips]

Malcolm McLean said:
The whole point of the function is to take error-handling code which
will never[1] be executed out of calling code.

If the error handling code will never be called - because malloc will
never fail - then there's simply no need for your function, the whole
point of which is to cope with situations where malloc fails.

Kelsey Bjarnason

[snips]

Yes, but normally your system will run out of memory when a large
amount, say half a megabyte to hold ten seconds of audio samples, is
requested. It will fail on a request for twenty bytes only once in every
25,000 cases, assuming both allocation requests are made and the system
always fails on one of them.
Given that your system runs out of memory about every month, how many
times is it likely to need hardware repairs in 25,000 months?

I have no idea where you get this "25,000 months" nonsense from. At
least one piece of code I've worked on recently does relatively small
allocations - less than 200 bytes at a time, on average - but does
millions of 'em in the space of a couple minutes.

If the system is low on memory, the app will hit an out of memory
condition in seconds, and the app typically runs hourly.

Of course, "system runs out of memory" isn't even the only issue.
Consider a system where user or process limits can be imposed: the system
may have gigs of memory available, but the context in which the
application runs may only be allowed to allocate, say, 512MB.

The system may *never* run out of memory - the app might do so every
third run. If it's run hourly, that's every three hours, not once a
month, not once every 25,000 months.

Richard Heathfield

Richard Tobin said:
[...] assertions don't prevent a
program from failing; they tell you when it is. If you want a
building analogy, try those devices that measure distance to tell you
whether a crack is opening up.

Do you have such devices fitted in your house? None here, I assure you.

If I see an assertion failure dialog box pop up on a shipped program, my
immediate thought is "this wasn't tested properly; I will consider ceasing
to use it". Now, I agree that in this circumstance I am better off for
knowing that the program wasn't tested properly. But I'd far rather have a
program that *was* tested properly! :)

<snip>

Kelsey Bjarnason

[snips]

There's a strong case for making programs correct. There is also a
strong case for taking code that will almost certainly never be executed
out of functions.

No, there isn't.

There's a case to be made for removing code which *will never be
executed*. It's dead code; it serves no purpose.

If the best you can say about it is that it "will almost certainly never
be executed", you are admitting the possibility that it will be, at some
point - and the code in question exists for the express purpose of
dealing with those situations.

As a simple counterpoint to your view, I could discuss some of the
database code I've written. The code polls various devices, recording
the data into a database.

The DB server is on the same machine that the polling code runs on, and
it is launched, automatically, as part of the startup routines. In
essence, if the polling code is capable of being run, the DB server
should always also be up and running and available.

However, it isn't. A simple example is upgrading the DB server software;
if the polling code, which runs at scheduled times, happens to run while
the DB server is down, in the process of being upgraded, the polling code
cannot store the data to the server.

Since this is a rare case, I can, of course, simply ignore this, pretend
it all worked, pretend the data has been safely stored, and as a result I
can go ahead and zero the counters I've been polling, right?

Not blinkin' likely. For one thing, those counters represent *money*.
We record the data, we ensure it is recorded, or we *do not touch it*.
Period. End of question.

The fact that the DB being down when the polling code fires may be a rare
event is irrelevant; it is *not* an impossible event, and if it does
happen - and we use your design philosophy - we lose money. Simple as
that.

Except we don't use your design philosophy, we use one that actually
makes sense. We check for errors. We know they're rare, but we *also*
know that unless we can prove they will *never* occur, we must cope with
them - and that means leaving the error handling code in, despite the
fact it will "almost certainly never be executed". "Almost certainly" is
not "certainly".

Error handling exists for the express reason that "almost certainly
never" actually *does* happen, and if the code making that assumption
gets used widely - becomes part of a popular library, say, used by
millions of people every day - then "almost certainly never" becomes
"almost certainly, and probably damned fast".

Kelsey Bjarnason

[snips]

So you really are claiming that in some circumstances it's acceptable to
call malloc() and blindly assume that it succeeded, if you've confirmed
that the probability of failure is sufficiently low.

He's absolutely right, too. If the probability of failure is
sufficiently low - which, IMO, means "zero" - then it's acceptable to
assume malloc succeeded.

The question then becomes, in what cases can you prove the probability of
malloc failure is zero? Possibly in an embedded system where your app
has certain guarantees about memory available to it, but that's about it.

Richard Bos

Malcolm McLean said:
I'd have to check whether my Visual C++ 6.0 (now taken away from me by
Vista) defines NDEBUG in release mode or not.

I should hope not. That macro is for the user to define, not for the
implementation.

Richard
 
