malloc/new guaranteed not to fail


John Eskie

Lately I've seen a lot of C and C++ code (not my own) which doesn't do
any checking of whether memory obtained by new or malloc is valid, or
whether they return NULL pointers.
Why do most people not care about doing this kind of error checking?

When I was learning the language I was told always to check "if (p !=
NULL)" and return an error otherwise.
I still do that, but sometimes it annoys the hell out of me when I want
to allocate memory inside a constructor and have no way to return an
error code to the caller to inform it that something went wrong.

Thinking about this issue has made me resort to hacks like running the
memory allocation in a while() loop, such as:

blah *p = NULL;
while (!p)
{
p = malloc(...); /* spin until the allocation succeeds */
}

On top of the whole thing we've got the new operator, which works
differently in each compiler implementation. Some compilers throw
exceptions while others give NULL pointers. Actually I prefer NULL
pointers, because they're so much easier to handle in a while loop than
doing some exception handling.

I hope I've made my point by now, because OSes do run out of memory. Why
would I let my program crash if I can avoid it? Just for fun I recently
wrote a program that took all the memory in my OS and then started a
program which I knew did not have any error checking on the new operator.
Guess what, it crashed in no time.
In my opinion it's still better to run a while() loop and stall the program
than to have it crash, no?

I am looking for suggestions or possible implementations of memory
allocation which is guaranteed not to fail.

Thanks in advance.
-- John
 

Victor Bazarov

John Eskie said:
Lately I've seen a lot of C and C++ code (not my own) which doesn't do
any checking of whether memory obtained by new or malloc is valid, or
whether they return NULL pointers.
Why do most people not care about doing this kind of error checking?

Probably because memory is cheap, and the computers on which programs
built from the code you've seen will run are supposed to have lots of it.
Just a guess...
When I was learning the language I was told always to check "if (p !=
NULL)" and return an error otherwise.

It's not a bad style.
I still do that, but sometimes it annoys the hell out of me when I want
to allocate memory inside a constructor and have no way to return an
error code to the caller to inform it that something went wrong.

Throw an exception. It's the usual way to indicate a failure to create
an object.
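For example (a minimal sketch; the Widget class and the buffer size are
invented for illustration):

#include <new>      // std::bad_alloc

class Widget {
public:
    Widget() : buf_(new char[1024]) {}  // new throws std::bad_alloc on failure
    ~Widget() { delete[] buf_; }
private:
    char* buf_;
};

int main()
{
    try {
        Widget w;                       // either fully constructed or throws
    }
    catch (const std::bad_alloc&) {
        // report the failure; no half-built Widget ever exists
    }
    return 0;
}
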
Thinking about this issue has made me resort to hacks like running the
memory allocation in a while() loop, such as:

blah *p = NULL;
while (!p)
{
p = malloc(...);
}

To do what?
On top of the whole thing we've got the new operator, which works
differently in each compiler implementation. Some compilers throw
exceptions while others give NULL pointers.

Those that give NULL pointers are non-compliant.
Actually I prefer NULL pointers, because they're so much easier to handle
in a while loop than doing some exception handling.

Then you need to use a "nothrow" form of the new operator.
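For example (a sketch; the array size is arbitrary):

#include <new>      // std::nothrow

int main()
{
    int* p = new (std::nothrow) int[256];   // yields NULL instead of throwing
    if (p == 0) {
        return 1;                           // handle the failure here
    }
    delete[] p;
    return 0;
}
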
I hope I've made my point by now, because OSes do run out of memory. Why
would I let my program crash if I can avoid it? Just for fun I recently
wrote a program that took all the memory in my OS and then started a
program which I knew did not have any error checking on the new operator.
Guess what, it crashed in no time.

Sure. What else could it do?
In my opinion it's still better to run a while() loop and stall the program
than to have it crash, no?

I am not sure what you mean by that. I get really annoyed by programs
that do not respond when I need them to, and have to kill them manually.
How is that better than crashing?
I am looking for suggestions or possible implementations of memory
allocation which is guaranteed not to fail.

No such thing. Since memory is a limited resource, some memory
allocations are bound to fail.

Victor
 

Jeff Schwab

John said:
Lately I've seen a lot of C and C++ code (not my own) which doesn't do
any checking of whether memory obtained by new or malloc is valid, or
whether they return NULL pointers.
Why do most people not care about doing this kind of error checking?

Perhaps because "new" can throw exceptions?
When I was learning the language I was told always to check "if (p !=
NULL)" and return an error otherwise.

Were you using C?
I still do that, but sometimes it annoys the hell out of me when I want
to allocate memory inside a constructor and have no way to return an
error code to the caller to inform it that something went wrong.

Throw an exception.
Thinking about this issue has made me resort to hacks like running the
memory allocation in a while() loop, such as:

blah *p = NULL;
while (!p)
{
p = malloc(...);
}

On top of the whole thing we've got the new operator, which works
differently in each compiler implementation. Some compilers throw
exceptions while others give NULL pointers. Actually I prefer NULL
pointers, because they're so much easier to handle in a while loop than
doing some exception handling.

I hope I've made my point by now, because OSes do run out of memory. Why
would I let my program crash if I can avoid it? Just for fun I recently
wrote a program that took all the memory in my OS and then started a
program which I knew did not have any error checking on the new operator.
Guess what, it crashed in no time.

Surprise, surprise. :)
In my opinion it's still better to run a while() loop and stall the program
than to have it crash, no?

I suppose so, but I don't really like either of those options. If
availability of the process is not critical, I usually print an error
message, then try to exit as gracefully as possible. Of course, if it's
essential that the program not die, the resources needed must be
specified very carefully, and the program can try to reserve them all on
startup.
I am looking for suggestions or possible implementations of memory
allocation which is guaranteed not to fail.

You could allocate all needed memory in static blocks. Of course, then
every run of your program will use as much memory as the worst case.
Your while-loop idea may be the next closest thing to a "fail-proof"
allocator.
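The static-block idea looks something like this (a sketch; the sizes are
invented worst-case figures):

// Reserved for the whole run, whether or not it is all needed.
static char input_buffer[1024 * 1024];   // worst-case input: 1 MB
static long line_offsets[100000];        // worst-case line count

int main()
{
    // Work entirely out of the static buffers; since there is no
    // malloc/new, no allocation can fail mid-run.
    return 0;
}
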
Thanks in advance.
-- John

-Jeff
 

Nicholas Hounsome

John Eskie said:
Lately I've seen a lot of C and C++ code (not my own) which doesn't do
any checking of whether memory obtained by new or malloc is valid, or
whether they return NULL pointers.
Why do most people not care about doing this kind of error checking?

Because:
1. With virtual memory it is extremely uncommon for a program to run out
of memory.
2. new throws bad_alloc if there is no memory.
3. The code for continually checking for null pointers is wasteful of
program memory and CPU, and will cause more paging because there will be
less real code on each page.
4. It is not generally possible to do anything sensible if you do run out
of memory. In particular you cannot portably do standard I/O to tell
anyone about it, and you certainly can't do any GUI stuff.
5. Whatever you do, you cannot make the libraries that you call use the
same scheme, and they will typically contain far more code than your part
of the application.
When I was learning the language I was told always to check "if (p !=
NULL)" and return an error otherwise.

If you are writing control software for Ariane 5 then it's probably good
advice - but for the sort of software most of us write, I'm prepared to bet
a lot of money that you will hit your millionth real bug in production code
before you get a real bug report about a crash caused by lack of memory.
I still do that, but sometimes it annoys the hell out of me when I want
to allocate memory inside a constructor and have no way to return an
error code to the caller to inform it that something went wrong.

new throws bad_alloc
Thinking about this issue has made me resort to hacks like running the
memory allocation in a while() loop, such as:

blah *p = NULL;
while (!p)
{
p = malloc(...);
}

On top of the whole thing we've got the new operator, which works
differently in each compiler implementation. Some compilers throw
exceptions while others give NULL pointers. Actually I prefer NULL
pointers, because they're so much easier to handle in a while loop than
doing some exception handling.

I don't understand why, but since you need exceptions for ctors anyway I
suggest you use them for memory allocation as well.

If you insist on checking for null even with up-to-date implementations,
then just use:

p = new (std::nothrow) T;
I hope I've made my point by now, because OSes do run out of memory. Why
would I let my program crash if I can avoid it? Just for fun I recently
wrote a program that took all the memory in my OS and then started a
program which I knew did not have any error checking on the new operator.
Guess what, it crashed in no time.
In my opinion it's still better to run a while() loop and stall the program
than to have it crash, no?

No.


I am looking for suggestions or possible implementations of memory
allocation which is guaranteed not to fail.

Preallocate memory pools that you know will be sufficient and allocate
from them - but doing this across a whole application is ludicrously hard,
and impossible if you use 3rd-party libraries.
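The core of a single pool is simple enough, though (a crude sketch; a real
pool would also need alignment handling and a way to free):

#include <cstddef>

static char pool[512 * 1024];        // grabbed once, up front
static std::size_t used = 0;

void* pool_alloc(std::size_t n)
{
    if (sizeof pool - used < n)
        return 0;                    // pool exhausted: fail predictably
    void* p = pool + used;
    used += n;
    return p;
}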
 

Nicholas Hounsome

[snip]
I suppose so, but I don't really like either of those options. If
availability of the process is not critical, I usually print an error
message, then try to exit as gracefully as possible.

I don't believe that this is guaranteed to work - show me where it says
that cout or even printf is guaranteed not to try to allocate memory.

Also, apps tend to be graphical these days, and if you open a new dialog
to say you have run out of memory, it WILL try to allocate memory.
 

Jeff Schwab

Nicholas said:
[snip]
I suppose so, but I don't really like either of those options. If
availability of the process is not critical, I usually print an error
message, then try to exit as gracefully as possible.


I don't believe that this is guaranteed to work - show me where it says
that cout or even printf is guaranteed not to try to allocate memory.

I reserve some memory in a static constructor to be used in just such a
case. Of course, I don't have any guarantee that the I/O functions
don't use direct system calls to get new memory. Even if I did, there
is no guarantee the system calls would not fail.
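The reserve looks roughly like this (a simplified sketch, not my actual
code):

struct Reserve {
    char* block;
    Reserve() : block(new char[64 * 1024]) {}     // grabbed before main()
    void release() { delete[] block; block = 0; }
};

static Reserve emergency;   // i.e. the constructor of a static object

// On allocation failure: call emergency.release() first, so the
// error-reporting path has some heap to work with, then print and exit.
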
Also, apps tend to be graphical these days, and if you open a new dialog
to say you have run out of memory, it WILL try to allocate memory.

Some apps do. Mine don't. Most of the stuff I write must have a
command-line interface, to make scripting feasible. IMHO, *almost all*
programs really ought to have CLIs, for this and other reasons. Even
if I were to provide GUIs, I don't think they would be used by most of
the folks running my programs, which tend to be run as batch jobs on big
servers.

And btw, I frequently see programs run out of memory, even on heavy-duty
machines. It's an unexpected situation that can't be ruled out; exactly
the sort of situation C++ exceptions were meant to address.

-Jeff
 

Jeff Schwab

Jeff said:
Nicholas said:
[snip]
I suppose so, but I don't really like either of those options. If
availability of the process is not critical, I usually print an error
message, then try to exit as gracefully as possible.



I don't believe that this is guaranteed to work - show me where it says
that cout or even printf is guaranteed not to try to allocate memory.


I reserve some memory in a static constructor

Nit-prevention: I mean the constructor of a static variable.
 

John Eskie

Victor Bazarov said:
Throw an exception. It's the usual way to indicate a failure to create
an object.

So when I catch it, what am I supposed to do? Just print something and
call exit(1)?
To do what?
Perform a memory allocation which surely won't fail. The time required to
perform such an allocation could be infinitely long, though.
Those that give NULL pointers are non-compliant.

There are still some that do. Personally I use MS VC++.
Then you need to use a "nothrow" form of the new operator.

You're probably right but in my case it works either way.
Sure. What else could it do?
There are better ways to inform users than just crashing. One would be to
idle until the program gets enough memory. After all, Windows 2000 informed
me at the time that it was running low on resources, and the graphics
turned to fewer colors (256, if I remember right).
If I write programs that are supposed to run in the background without user
interference, the best thing I can do is make them take care of themselves.
A crash will result in a user having to restart the program, or do whatever
else should be done in such a situation.
I am not sure what you mean by that. I get really annoyed by programs
that do not respond when I need them to, and have to kill them manually.
How is that better than crashing?

It's better because it doesn't crash. Depending on the program, a crash
could result in data loss. An idling application which will resume when it
gets the needed memory later on will just keep running.
No such thing. Since memory is a limited resource, some memory
allocations are bound to fail.

Sure, but it's unrealistic to think that because all memory is used right
now, there won't be a few hundred bytes available in, say, 2 minutes from
now.

-- John
 

John Eskie

Were you using C?

In the past, yes. But I've also used this construct in C++ applications.
Throw an exception.
As written in another post, I am unsure how to write a proper handler for
it. After all, I can receive such an exception at any time, so I will have
no way to do proper cleanup of resources.
I suppose so, but I don't really like either of those options. If
availability of the process is not critical, I usually print an error
message, then try to exit as gracefully as possible. Of course, if it's
essential that the program not die, the resources needed must be
specified very carefully, and the program can try to reserve them all on
startup.

Yes, but if you've got some open file handles or similar, then you can't
really clean up the whole thing gracefully. I would say there is a lot of
effort in doing that. This is probably also the reason why I dislike
exceptions: you end up somewhere else in the program and have no way of
going back. If you just get a NULL pointer you can check for, you will be
able to return to the caller and inform it that something is wrong, and
the caller can do its own cleanup.
You could allocate all needed memory in static blocks. Of course, then
every run of your program will use as much memory as the worst case.
Your while-loop idea may be the next closest thing to a "fail-proof"
allocator.
While it's certainly possible, it's very impractical. Sometimes you don't
know how much memory you will need. A simple example is reading in a file
whose size you only find out when the program runs.

-- John
 

Nicholas Hounsome

John Eskie said:
Victor Bazarov said:
"John Eskie" <[email protected]> wrote...
[..]
There are better ways to inform users than just crashing. One would be to
idle until the program gets enough memory. After all, Windows 2000 informed
me at the time that it was running low on resources, and the graphics
turned to fewer colors (256, if I remember right).
If I write programs that are supposed to run in the background without user
interference, the best thing I can do is make them take care of themselves.
A crash will result in a user having to restart the program, or do whatever
else should be done in such a situation.

Most computers now use VM.
Most computers now run lots of processes concurrently.
Most computers now have many GB of disk, and therefore most computers tend
to have very large VM.
Consequently, I find that the only time you run out of memory is when some
process has run amok and is unintentionally using it all.
If it is your process, then waiting around for more memory, or even just
holding on to what you have got, is actually screwing up all the other
(possibly more important) processes - NOTHING does any useful work - nobody
can fix it because you can't even run the task manager or equivalent.
If it is not your process but another, then it too will run out of memory,
and there are then two possibilities:
(1) It exits - you can then get your memory, but only because it didn't
follow the same approach as you do.
(2) It follows your approach and waits for more memory. All processes in
the system do this - NOTHING does any useful work - nobody can fix it
because you can't even run the task manager or equivalent.
It's better because it doesn't crash. Depending on the program, a crash
could result in data loss. An idling application which will resume when it
gets the needed memory later on will just keep running.

If there is not enough memory to be able to run an app to kill the runaway
app, then the only option is the big red button, which will cause ALL
running apps to lose data.

More generally, how does your app know that there isn't a really important
app running that desperately needs the memory you are now idly holding?
Sure, but it's unrealistic to think that because all memory is used right
now, there won't be a few hundred bytes available in, say, 2 minutes from
now.

No - in my experience that is exactly what happens, because the memory is
consumed by a runaway app that will NEVER give it up.
 

Victor Bazarov

John Eskie said:
[snip]

So when I catch it, what am I supposed to do? Just print something and
call exit(1)?

How should I know? Initiate another garbage-collection pass, maybe. Free
up some unused memory. Swap something out to a disk file. Maintain your
memory in good standing; don't wait until it runs out.
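The standard hook for that kind of housekeeping is std::set_new_handler;
a sketch (free_some_caches is a made-up application function):

#include <new>
#include <cstdlib>

bool free_some_caches() { return false; }   // made up: a real one would
                                            // release cached memory

void on_exhaustion()
{
    if (!free_some_caches())
        std::abort();       // nothing left to release; give up cleanly
    // returning normally makes operator new retry the allocation
}

int main()
{
    std::set_new_handler(on_exhaustion);
    // ... the rest of the program ...
    return 0;
}
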
Perform a memory allocation which surely won't fail. The time required to
perform such an allocation could be infinitely long, though.

So, what's the point? There is no certainty that the memory will ever
become available. Your program gets STUCK in this loop, with no way to
tell what the heck it's doing... I see it as being as useless as crashing.
There are still some that do. Personally I use MS VC++.

That's a 7-year-old, pre-Standard compiler. You're apparently stuck
in the past and don't want to admit it. Update your tools and enjoy
doing it the right way.
You're probably right but in my case it works either way.

Yes, and I can bring water to my house by the bucket from a nearby pond.
That doesn't mean I should be doing it when I have a pump and a well.
There are better ways to inform users than just crashing. One would be to
idle until the program gets enough memory. After all, Windows 2000 informed
me at the time that it was running low on resources, and the graphics
turned to fewer colors (256, if I remember right).

Well, if THAT's possible on your system (and the user doesn't mind
waiting), sure. But when the OS is running low on memory, it probably
means that your program is not the only one trying to get memory
allocated. There are NO guarantees that your program will _ever_ resume
running.
If I write programs that are supposed to run in the background without user
interference, the best thing I can do is make them take care of themselves.
A crash will result in a user having to restart the program, or do whatever
else should be done in such a situation.

The best thing in such a situation is to make use of the available
memory instead of requiring all or nothing. There are methods for
that (swapping to disk, etc.).
It's better because it doesn't crash. Depending on the program, a crash
could result in data loss. An idling application which will resume when it
gets the needed memory later on will just keep running.

Well, you didn't read what I said, apparently. However, I said it without
enough information from you. Now that you've explained that your
applications run without user attention, I retract my question. It is
probably better (if it is at all possible and allowed by the design) to
wait than to crash.
Sure, but it's unrealistic to think that because all memory is used right
now, there won't be a few hundred bytes available in, say, 2 minutes from
now.

No, it's not unrealistic. Or, it's no more unrealistic than to expect
that memory will suddenly become available.

Besides, if you think about it, constantly polling the system to allocate
you some memory is a huge toll on the processor and the system.

A few hundred bytes, you say? If that's the margin you're operating on,
your users GOTTA upgrade their computers.

Anyway, I don't see a C++ language issue here, sorry.

Victor
 

Jeff Schwab

When I was learning the language I was told always to check "if (p !=
NULL)" and return an error otherwise.
In the past, yes. But I've also used this construct in C++ applications.
As written in another post, I am unsure how to write a proper handler
for it. After all, I can receive such an exception at any time, so I
will have no way to do proper cleanup of resources.

If clean-up of resources is performed by destructors, it will be
performed automatically as the stack is unwound.
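For example (a minimal sketch; the File wrapper is invented):

#include <cstdio>

class File {
public:
    explicit File(const char* name) : fp_(std::fopen(name, "r")) {}
    ~File() { if (fp_) std::fclose(fp_); }   // runs during unwinding too
private:
    std::FILE* fp_;
};

void work()
{
    File f("data.txt");
    // If anything here throws (bad_alloc included), ~File() still
    // closes the handle on the way out.
}
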
Yes, but if you've got some open file handles or similar, then you can't
really clean up the whole thing gracefully.

The destructors should do it for you.
I would say there is a lot of effort in doing that.

It depends on the program. I agree that writing a truly fail-safe
program can be a lot of effort. :) By the time I've found a crash-proof
OS, bought the server and peripherals (UPS etc.), checked which library
functions I can use for I/O in case of emergency and which system calls
are guaranteed not to fail, and rented space in a flood-proof,
fire-proof machine room, I'll be darned sure the specs of the system
are well-defined. In such a case, it's probably worth the extra effort
to do your allocations at the front of the program.
This is probably also the reason why I dislike exceptions: you end up
somewhere else in the program and have no way of going back.

How did you get there in the first place? Throw information about the
call stack as part of the exception. I have done this by dividing a
program into "layers," and putting an exception handler around each
crossing of a boundary between layers. Each handler may then add
information about the call stack to the exception. I should mention
that it is rare for me to deal with exceptions so catastrophic, and that
most of my programs do fail with a nasty error message when needed
memory is not available.
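In outline it looks something like this (a sketch; the exception type and
layer names are invented):

#include <stdexcept>
#include <string>

struct TracedError : std::runtime_error {
    explicit TracedError(const std::string& m) : std::runtime_error(m) {}
};

void data_layer()   // the lower layer
{
    throw TracedError("allocation failed in data_layer");
}

void logic_layer()  // handler at the boundary between layers
{
    try {
        data_layer();
    }
    catch (const TracedError& e) {
        // annotate with this layer's context, then pass it on up
        throw TracedError(std::string(e.what()) + "\n  via logic_layer");
    }
}
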
If you just get a NULL pointer you can check for, you will be able to
return to the caller and inform it that something is wrong, and the
caller can do its own cleanup.

You can do the same thing with an exception, but the exception will
allow you to return whatever sort of object is necessary to help the
caller deal with the failure. You don't have to encode everything in
"errno" or some other global variable. Also, by catching different
types of exceptions at different places along the call stack, the
calling code can deal with different types of errors in different
places. With the old "encode errors in the return value" trick common
in C, a low-level function might be responsible for handling some very
high-level errors, usually by passing the error back up the call stack.
With exceptions, the intermediate code is taken out of the loop, and
errors go straight to the places where you want to handle them. As a
bonus over setjmp or "goto error" approaches, the call stack is still
unwound properly.
While it's certainly possible, it's very impractical. Sometimes you
don't know how much memory you will need. A simple example is reading
in a file whose size you only find out when the program runs.

You don't have to know exactly how much you will use in order to allocate
it up front: you only have to know what the *worst case* is, and be
prepared for it. If you don't end up using all the memory you allocated,
so be it. Just to be clear, this is not an approach I use often, and it
is one that may make your program grossly inefficient in the general
case. It's still better than hanging for an "infinitely long" time,
though.

-Jeff
 
