When to check the return value of malloc

Richard Tobin

Surely it's obvious that Malcolm means that these compilers don't
themselves set NDEBUG in release mode?

What is release mode?

I am drawing the obvious inference that it is a compiler mode intended
for code that is ready to be released. The unix C compilers I use
don't have such a thing but the idea doesn't seem too outlandish.

-- Richard
 
Richard Tobin

Two reasons. One, large systems build faster unoptimized. Two,
following a debugger through optimized code can be very difficult.

On the other hand, there are several common errors that are only
detected when optimisation is done, so I have it turned on from the
start. Occasionally I have to turn it off temporarily to debug
something.
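
A classic instance (a minimal sketch, assuming gcc, whose
-Wuninitialized warning has historically required optimisation to be
enabled, since it depends on the data-flow analysis done by the
optimiser):

int f(int flag)
{
    int x;
    if (flag)
        x = 1;
    return x;  /* "may be used uninitialized": reported with -O, silent without */
}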

-- Richard
 
CBFalconer

Army1987 said:
assert(test() && "message");

#ifndef NDEBUG
if (!test()) {  /* the assert fires when test() is false */
    fputs("message\n", stderr);
    /* ... whatever else ... */
}
#endif

doesn't seem beyond the capabilities of the poorest C programmer.
 
Malcolm McLean

Richard Tobin said:
I'm certainly not suggesting that you should use int for a
general-purpose malloc() replacement. Rather, it would be an
argument for size_t being signed.
Really Ulrich is making an argument against fixed-size types. Unless he is
absolutely consistent in using either size_t or int in indexing that array,
either could come unstuck. size_t might be 16 bits and int 32.
If the limits were specified at declaration time, like in Ada, we wouldn't
have this problem. Nor would we if we took the Visual Basic approach of a
variable object, or the Perl approach of "it's both a string of digits and
an integer".
 
Randy Howard

Kelsey said:

And if nothing else, malloc doesn't lie about what it actually does,
and works on all legitimate size requests, if memory is, in fact,
available to meet the request.

Unfortunately, only on systems that do not overcommit memory.

malloc() will also fail on such systems. Don't believe it? Try and
malloc 2GB on a 32-bit box running one of these platforms. (you may
have a kernel mod to allow 3GB per proc, if so, extend the request size
accordingly).
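
Something like this shows it (a minimal sketch, assuming a 32-bit
build; free(NULL) is harmless, and the exact ceiling varies by
platform):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* One 2GB request: more address space than a 32-bit process can
       normally map in a single block, so it fails even where the
       kernel overcommits. */
    void *p = malloc((size_t)2 * 1024 * 1024 * 1024);
    printf("2GB malloc %s\n", p != NULL ? "succeeded" : "failed");
    free(p);
    return 0;
}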
 
Richard Heathfield

Ulrich Eckhardt said:

Some further notes:
1. I know that int is not 16 bits. ;)

Hmmm. How sure are you? I've certainly used systems where it /is/ 16 bits.
I've also used systems where it isn't.
 
Ulrich Eckhardt

Richard said:
I'm certainly not suggesting that you should use int for a
general-purpose malloc() replacement. Rather, it would be an
argument for size_t being signed.

If it was 32 bits large, why wouldn't I be allowed to allocate 3GiB?
Further, signed types don't behave in any particular way on overflow
either, they rather cause undefined behaviour and this behaviour shows[1]!
Lastly, you could argue that a malloc() replacement should take an argument
that is simply larger than size_t and then check if the conversion to
size_t is valid. Seriously, all you achieve is that allocations are capped
at a certain limit, but this limit is neither technically necessary nor
even configurable!
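
Such a wrapper might look like this (a sketch; the choice of unsigned
long long as the wider type is an assumption), and all it really does
is enforce that arbitrary cap:

#include <stdlib.h>

void *checked_malloc(unsigned long long request)
{
    /* Reject requests that don't fit in size_t. */
    if (request > (size_t)-1)
        return NULL;
    return malloc((size_t)request);
}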

Uli

[1]
This code:

int s = count * sizeof (element);
if (s < 0)
    error("overflow");

is flawed, because the check is only possibly triggered when undefined
behaviour has already been caused by signed overflow. There are popular
compilers that use this to optimise out the check.
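
A check that actually works tests the operands before multiplying, in
unsigned arithmetic (a minimal sketch; SIZE_MAX comes from <stdint.h>):

#include <stdint.h>
#include <stdlib.h>

void *alloc_elements(size_t count, size_t size)
{
    /* If count > SIZE_MAX / size, then count * size would wrap. */
    if (size != 0 && count > SIZE_MAX / size)
        return NULL;
    return malloc(count * size);
}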
 
Ulrich Eckhardt

Malcolm said:
Really Ulrich is making an argument against fixed-size types.

Am I? In case you mean types that dynamically adapt themselves to the stored
value, i.e. types that are not limited like integers in C, then yes, those
would surely avoid overflows. I wouldn't want to pay the price though,
neither for their dynamic allocation on an embedded platform nor for
checking whether the dynamic allocation behind the last multiplication
failed. That would place a really heavy burden on the required
error-handling code.
Unless he is absolutely consistent in using either size_t or int in
indexing that array, either could come unstuck. size_t might be 16 bits
and int 32.

I have yet to find a platform where size_t is smaller than int. That said, I
do use size_t instead of int to index through arrays, exactly because I
also store the size of the arrays in size_t (unless they are constants).
This allows me to raise the warning levels of the compiler without getting
too many warnings.
If the limits were specified at declaration time, like in Ada, we wouldn't
have this problem. Nor would we if we took the Visual Basic approach of a
variable object, or the Perl approach of "it's both a string of digits and
an integer".

You can well do that, though it will look extremely clumsy because C doesn't
allow you to overload operators. However, C's built-in types can't do that.

Uli
 
santosh

Ulrich said:
If it was 32 bits large, why wouldn't I be allowed to allocate 3GiB?

And we are back to Malcolm's stand that everyone must move to 64 bit
ints.
Further, signed types don't behave in any particular way on overflow
either, they rather cause undefined behaviour and this behaviour
shows[1]! Lastly, you could argue that a malloc() replacement should
take an argument that is simply larger than size_t and then check if
the conversion to size_t is valid. Seriously, all you achieve is that
allocations are topped at a certain limit, but this limit is neither
technically necessary nor is it even configurable!

size_t is fine as long as you use it correctly. Or at least I have not
encountered any notable nuisances in its usage. Yes, loops are
sometimes a bit awkward with an unsigned type and need more attention,
but that is a trivial issue.
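
The usual trap is counting down: with size_t, i >= 0 is always true,
so the test has to come before the decrement (a minimal sketch, where
n is assumed to be the array length):

size_t i;
for (i = n; i-- > 0; ) {
    /* visits a[n-1] down to a[0]; terminates because the test
       happens while i is still positive */
}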

There is no point in using a signed type for size values, since size can
never be negative. Malcolm does have a point, but it's not applicable
to C as it is.

[1]
This code:

int s = count * sizeof (element);
if (s < 0)
    error("overflow");

is flawed, because the check is only possibly triggered when undefined
behaviour has already been caused by signed overflow. There are
popular compilers that use this to optimise out the check.

Yes we have had long threads from time to time on how to preempt
overflow.
 
Malcolm McLean

Ulrich Eckhardt said:
If it was 32 bits large, why wouldn't I be allowed to allocate 3GiB?
A lot of machines won't let you install more than 2GB, despite being 32 bit.
Under the hood, they use signed rather than unsigned integers.

It is a nuisance for the policy that "everything shall be signed" - but
remember:

If you are allocating 3GB on a 4GB machine, you are taking up virtually all
available memory. It is not unreasonable to be expected to code for that
case specially.
If you need 3GB now you'll most certainly need 5GB in a few months' time. So
this awkward window won't last long and soon you'll move to the glory of 64
bit pointers. And 64 bit ints if enough committee members read this to make
that recommendation.
 
santosh

Malcolm said:
A lot of machines won't let you install more than 2GB, despite being
32 bit. Under the hood, they use signed rather than unsigned
integers.

It is a nuisance for the policy that "everything shall be signed" - but
remember:

If you are allocating 3GB on a 4GB machine, you are taking up
virtually all available memory. It is not unreasonable to be expected
to code for that case specially.
If you need 3GB now you'll most certainly need 5GB in a few months'
time. So this awkward window won't last long and soon you'll move to
the glory of 64 bit pointers. And 64 bit ints if enough committee
members read this to make that recommendation.

What is the ISO C Committee supposed to do? As hardware transitions from
32 to 64 bits, C compilers ought to move forward as well. Fixing int at
64 bits would break too much existing code. Instead, use long.
 
Richard Tobin

I'm certainly not suggesting that you should use int for a
general-purpose malloc() replacement. Rather, it would be an
argument for size_t being signed.

If it was 32 bits large, why wouldn't I be allowed to allocate 3GiB?

The existence of arguments one way doesn't preclude the existence of
arguments the other way.

-- Richard
 
Malcolm McLean

santosh said:
What is the ISO C Committee supposed to do? As hardware transitions
from 32 to 64 bits, C compilers ought to move forward as well. Fixing int
at 64 bits would break too much existing code. Instead, use long.

It will break libraries. It shouldn't break conforming code.
There's a one time cost recompiling every UNIX library, which will involve
using 32 / 64 bit types as appropriate where the C calls assembler. Then
we're in the 64 bit world, where humanity will remain for the foreseeable
future.
32 bits aren't quite enough to count everyone in the world.
 
Ulrich Eckhardt

If it was 32 bits large, why wouldn't I be allowed to allocate 3GiB?

The existence of arguments one way doesn't preclude the existence of
arguments the other way.

Ahem, Richard, in case you were trying to say anything, it totally missed me.
Please be a bit more explicit.

Thanks

Uli
 
dj3vande

Kelsey said:

And if nothing else, malloc doesn't lie about what it actually does,
and works on all legitimate size requests, if memory is, in fact,
available to meet the request.

Unfortunately, only on systems that do not overcommit memory.

malloc() will also fail on such systems. Don't believe it? Try and
malloc 2GB on a 32-bit box running one of these platforms. (you may
have a kernel mod to allow 3GB per proc, if so, extend the request size
accordingly).

Or even just keep allocating until you fail:
--------
dj3vande@goofy:~/clc (0) $ cat test-overcommit.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t i;

    for (i = 0; malloc(1024*1024); i++)
        ;

    printf("Malloc'd %lu mbytes before failing\n", (unsigned long)i);

    return 0;
}
dj3vande@goofy:~/clc (0) $ gcc -W -Wall -ansi -pedantic -O test-overcommit.c
dj3vande@goofy:~/clc (0) $ ./a.out
Malloc'd 3056 mbytes before failing
dj3vande@goofy:~/clc (0) $ time ./a.out
Malloc'd 3056 mbytes before failing

real 0m0.019s
user 0m0.004s
sys 0m0.016s
dj3vande@goofy:~/clc (0) $
--------
That's basically all of the address space available to a single
process. The system this ran on has 2GB of physical RAM.


dave
 
CBFalconer

Malcolm said:
A lot of machines won't let you install more than 2GB, despite
being 32 bit. Under the hood, they use signed rather than
unsigned integers.

Or, more likely, they simply reserve the memory with the most
significant bit set for the use of the OS.
 
Kelsey Bjarnason

[snips]

The point is to have a program that is correct.

A program designed to handle errors which cannot happen cannot be
correct; it is based on false premises.

Let's say we've got a function that sorts random data.


No, let's say we're calling an allocation function. Either it can fail,
or it can't. Since you assert the error handling code will never be
called, it follows the function - malloc - can never fail. The entire
purpose of your xmalloc is to handle cases where malloc fails.

Thus, either your function is a pointless waste of time, because malloc
will never fail, or malloc *can* fail, so the error handling code in the
caller *will* be called.

So which is it?
 
santosh

Kelsey said:
[snips]

The point is to have a program that is correct.

A program designed to handle errors which cannot happen cannot be
correct; it is based on false premises.
Let's say we've got a function that sorts random data.


No, let's say we're calling an allocation function. Either it can
fail, or it can't. Since you assert the error handling code will never be
called, it follows the function - malloc - can never fail. The entire
purpose of your xmalloc is to handle cases where malloc fails.

Thus, either your function is a pointless waste of time, because
malloc will never fail, or malloc *can* fail, so the error handling
code in the caller *will* be called.

So which is it?

I think I see what Malcolm is getting at. Basically he does not want to
implement a full error-detection and recovery scenario for small
allocations, which /he/ thinks have only a minute chance of failure.
For such allocations he uses xmalloc() which will either get you the
requested memory or call exit().

Personally I would prefer using a single function for this purpose. The
steps one would take upon allocation failure are usually the same for
most points in a typical program where the allocation is made. A custom
function that wraps around malloc() would be good for this purpose.

If really complicated things must be done for certain allocations at
certain points in the program, I see no way other than to write the
code needed.

In any case Malcolm's xmalloc() seems to me to simply be an
encapsulation of the simple code:

ptr = malloc(whatever);
if (!ptr) {
    exit(EXIT_FAILURE);
}
/* ... */
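
Spelled out as a function, such a wrapper might look like this (a
minimal sketch; the message and exit status are assumptions, not
Malcolm's actual code):

#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t size)
{
    void *ptr = malloc(size);
    if (ptr == NULL) {
        fputs("xmalloc: out of memory\n", stderr);
        exit(EXIT_FAILURE);
    }
    return ptr;  /* callers never have to check for NULL */
}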
 
Malcolm McLean

Kelsey Bjarnason said:
Thus, either your function is a pointless waste of time, because malloc
will never fail, or malloc *can* fail, so the error handling code in the
caller *will* be called.

So which is it?

You can have something that will never happen, yet could happen. Like a
monkey typing out the works of Shakespeare.
 
