bad alloc

P

Paul

On 30-Aug-11 11:14, Krice wrote:



What is so confusing about try-catch? Exceptions make dealing with
failed allocations a lot less tedious than in C.


That is the real question: what do you do when you have run out of
memory? The application could inform the user that the operation failed
because there was not enough memory and keep on running. If a program
has cached a lot of information, it could flush its caches to free some
memory and retry the operation. But chances are that it will never get
to that point. My experience is that in a typical PC environment the
system tends to become unresponsive, or worse, long before memory
allocation fails. Before that happens the user has probably restarted
his PC.

This also depends on how the memory is fragmented. Say for example a
system has 1GB RAM with 500MB free, 500MB used. Even though you have
500MB free, an attempt to allocate 40MB may fail because there is no
contiguous block of 40MB of free memory.
 
A

Adam Skutt

There's a school of thought that says that allocation failure should
simply terminate everything. This is based on the notion that, once
there's no memory, the world has ended for the code anyhow. This notion
is false in a significant number of cases (and is not in the spirit of C
or C++; if it were, malloc or new would terminate()). Why is the notion
wrong? Because normally, code goes like this: work, work, allocate
some resources, work (with those), free those, allocate more, work,
allocate, work, free, free etc. That is, for many a code path, there's
a "hill climb" where resources are allocated while working "up", and
they are deallocated, all or at least a significant part, while going
"down" (e.g. perhaps the calculation result is kept allocated). So once
code hits a brick wall going up, there's a big chance there will be
plenty of breathing room once it comes down (due to all that
freeing). IOW, there's no __immediate need__ to die. Go down, clean up
behind you and you'll be fine. Going back and saying "I couldn't do X
due to OOM" is kinda better than dying on the spot, wouldn't you say?

For most code, on most platforms, the two will be one and the same.
The OS cleans up most resources when a process dies, and most processes
have no choice but to die.

Straightforward C++ on most implementations will deallocate memory as
it goes, so when the application runs out of memory, there won't be
anything to free up: retrying the operation will cause the code to
fail in the same place. Making more memory available requires
rewriting the code to avoid unnecessarily holding on to resources that
it no longer needs.

Even when there's memory to free up, writing an exception handler that
actually safely runs under an out-of-memory condition is impressively
difficult. In some situations, it may not be possible to do what you
want, and it may not be possible to do anything at all. Moreover, you
may not have any way to detect these conditions while your code is
running. Your only recourse may be to crash and crash hard.

Even when you can safely write the exception handler, it also means
your code must have a fairly strong exception guarantee so that you
can retry the failed operation. This is difficult to do and may not
be possible, either.

If you can do all three of those things, then yes, you can attempt to
retry. The number of applications where this is possible is quite
small; the number of applications where this is prudent is even
smaller still. Otherwise, your only option is death in some fashion.
At this point, "immediate death" vs. "deferred death" may very well be
irrelevant.

In some languages, such as Java, you have no choice in the matter, as
you're not promised you'll be told about the allocation failure in such
a way that enables a retry. Heck, you're not even promised you will
fail in such a way that you can clean up after yourself.

As a result of all this, the correct school of thought is your third
school:
Finally, there's a school of thought that says that allocations should
not be checked at all.

Properly written, exception-safe C++ code will do the right thing when
std::bad_alloc is thrown, and most C++ code cannot sensibly handle
std::bad_alloc. As a result, the automatic behavior, which is to let
the exception propagate up to main() and terminate the program there,
is the correct behavior for the overwhelming majority of C++
applications. As programmers, we win anytime the automatic behavior
is the correct behavior.
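
A minimal sketch of what that looks like in practice (run() is just a
hypothetical stand-in for the real program; the only extra code is an
optional boundary in main() that turns the default terminate() into a
friendlier message):

#include <cstdio>
#include <new>

// Hypothetical placeholder for the real application: it uses containers
// and new freely, with no per-allocation checks, relying on RAII for
// cleanup during stack unwinding.
int run()
{
    // ... real work goes here ...
    return 0;
}

// The only "handling": report the failure instead of letting
// std::terminate() produce a raw abort. Even this much is optional.
int main()
{
    try {
        return run();
    }
    catch (const std::bad_alloc&) {
        std::fputs("fatal: out of memory\n", stderr);
        return 1;
    }
}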

More importantly, this advice sensibly applies to a bevy of other
programming languages with exception handling semantics similar to
C++ (e.g., Python, Java, C#), so it's sensible programming advice beyond
mere C++.

Adam
 
P

Paul

I actually have real world experience of doing something similar as I
was on the Telephony Application Call Control team for two different
smartphones.  If the user exhausts their memory running apps Telephony
would still need to be able to create a call object if there was an
incoming call so the call objects were created up-front rather than
on-demand.  This may be an over-simplification but you get the idea.
I can understand a scenario where you might want to reserve some
memory for a critical part of the program. But I don't see many
programs that would be able to predict their memory requirements up
front. Take your phone example: it would be impossible to predict
every possible user scenario, and there just wouldn't be enough memory
to create all the objects required for every possible scenario, or even
a fractional representation of every possible scenario.
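
For what it's worth, the up-front reservation idea for a critical
component might look something like the following sketch (the names and
sizes are entirely hypothetical, just to illustrate the pattern being
described):

#include <array>

// Hypothetical sketch: the objects a critical component needs are
// created up front in a fixed-size pool, so handling an incoming call
// never has to touch the heap at that point.
struct CallSlot {
    int  id     = 0;
    bool in_use = false;
    // ... whatever else a call object needs, sized at compile time ...
};

class CallPool {
public:
    CallSlot* acquire() {
        for (CallSlot& s : slots_)
            if (!s.in_use) { s.in_use = true; return &s; }
        return nullptr;                   // pool exhausted: reject, don't crash
    }
    void release(CallSlot* s) { if (s) s->in_use = false; }
private:
    std::array<CallSlot, 8> slots_{};     // allocated once, at startup
};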

I do not question your being part of this "Telephony Application Call
Control team". What I do doubt is that you had a full understanding of
the overall project or were in any way in control of that project's
design or implementation.
I picture you more in a "Call Centre" role. :)
 
P

Paul

On 30/08/2011 14:26, Paul wrote:
or do people just not care and allow the program to crash if
it runs out of memory?
Yes, that's what I do and I think many others. It's because
the try-catch syntax is confusing and because 4GB+ computers
rarely run out of memory. And if they do, how are you going
to recover from the situation with try-catch if you really
need to allocate that memory? The only time it's reasonable
to use try-catch is when something is left out from
the program (scaling).
I think that pre-STL it was pretty much standard practice to check for
memory allocation failures, for example:
// with a standard compiler, new throws rather than returning null,
// so the old-style check needs the nothrow form
float* m1 = new(std::nothrow) float[16];
if(!m1){
    // output an exit msg.
    std::exit(EXIT_FAILURE);
}
Larger programs would have memory handling routines that took care of
all the allocations, so this code was not required all over the place.
However, since the boom of the STL I think people just assume that
std::vector handles all that and they don't need to worry about it.
Perhaps, to promote using the STL containers, they were often taught
with the error checking deliberately left out, to make them appear more
straightforward to use and simpler to code.
I guess what I am really wondering is whether many people are not
making a conscious decision to ignore error checking but are instead
overlooking it altogether?
Personally I don't mind the try-catch syntax, and it's a lot better than
ifs and elses. I think it should be used more as standard practice
when coding with STL containers and not as an exception (no pun) to the
norm, which so often seems to be the case.
When I learned C++ I was taught that the return from new should always
be checked for null. This was put across as an important point, but it
was usually said that, for simplicity, any further examples would skip
the error checking. Should a similar point not apply to using STL
containers and try-catch blocks?
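
As an illustration of that point (a minimal sketch of my own, not code
from the thread), the earlier snippet becomes something like:

#include <cstdlib>
#include <iostream>
#include <new>
#include <vector>

int main()
{
    std::vector<float> samples;
    try {
        samples.resize(16);                  // may throw std::bad_alloc
    }
    catch (const std::bad_alloc&) {
        std::cerr << "allocation failed\n";  // report instead of crashing
        return EXIT_FAILURE;
    }
    // ... use samples ...
    return EXIT_SUCCESS;
}
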
The only sane remedy for most cases of allocation failure is to
terminate the application which is what will happen with an uncaught
exception.
I don't agree with this.

Of course you don't agree with this due to your obvious lack of real
world experience.
And real world experience means that you think a memory allocation
failure means there is no option but to crash out?
This will either happen automatically or can be done inside an
overloaded operator new.


Define "terminated properly" when nothing can be guaranteed as we have
run out of memory.
How do you know you have run out of memory? This is not the only cause
of allocation failure.
But if that were the case you could have the decency to inform the user
that his/her 2GB of RAM isn't enough to run your incredibly useless
program and that therefore it is closing on them. Perhaps even give a
message such as "try again when you've got more RAM, loser".
Again your lack of experience is showing.
How does this address the question... What is the point of a throw if
it's not being caught?
 
P

Paul

For most code, on most platforms, the two will be one and the same.
The OS cleans up most resources when a process dies, and most processes
have no choice but to die.

Straightforward C++ on most implementations will deallocate memory as
it goes, so when the application runs out of memory, there won't be
anything to free up: retrying the operation will cause the code to
fail in the same place.  Making more memory available requires
rewriting the code to avoid unnecessarily holding on to resources that
it no longer needs.

Even when there's memory to free up, writing an exception handler that
actually safely runs under an out-of-memory condition is impressively
difficult.  In some situations, it may not be possible to do what you
want, and it may not be possible to do anything at all.  Moreover, you
may not have any way to detect these conditions while your code is
running. Your only recourse may be to crash and crash hard.

Even when you can safely write the exception handler, it also means
your code must have a fairly strong exception guarantee so that you
can retry the failed operation.  This is difficult to do and may not
be possible, either.

If you can do all three of those things, then yes, you can attempt to
retry.  The number of applications where this is possible is quite
small; the number of applications where this is prudent is even
smaller still.  Otherwise, your only option is death in some fashion.
At this point, "immediate death" vs. "deferred death" may very well be
irrelevant.

In some languages, such as Java, you have no choice in the matter, as
you're not promised you'll be told about the allocation failure in such
a way that enables a retry.  Heck, you're not even promised you will
fail in such a way that you can clean up after yourself.

As a result of all this, the correct school of thought is your third
school:


Properly written, exception-safe C++ code will do the right thing when
std::bad_alloc is thrown, and most C++ code cannot sensibly handle
std::bad_alloc.  As a result, the automatic behavior, which is to let
the exception propagate up to main() and terminate the program there,
is the correct behavior for the overwhelming majority of C++
applications.  As programmers, we win anytime the automatic behavior
is the correct behavior.

More importantly, this advice sensibly applies to a bevy of other
programming languages with exception handling semantics similar to
C++ (e.g., Python, Java, C#), so it's sensible programming advice beyond
mere C++.
Sensible programming advice?
I can't believe what I am seeing; this just seems to confirm that my
suspicions are correct, that is, that a large number of programmers are
of the opinion that allocations should go unchecked and the program
just allowed to crash out.
Although I have only seen 3 people supporting this crazy reasoning, it
seems to me like an excuse for laziness. Admit it, the truth is that
you can't be bothered to worry about exception handling and/or you
don't care about it, and rather than deliver a quality program you
would rather hack out a pile of tripe in half the time and with half
the effort.

Since you have no qualms about delivering a program that is pretty
much guaranteed to crash now and then, may I ask you what you consider
important when you approach a programming project?
 
P

Paul

You don't know what you are talking about.
Oh yes I do.
A phone has limited memory. You cannot pre-allocate all the objects a
user *may* require to run all apps on his/her phone.

video jukebox, photo editor, soundblaster master etc etc.
Correct; no one man knows all the fine details of all the components
that make up a smartphone SW project; however I was quite familiar with
the areas that I was responsible for.  I was a C++ developer for over
four years for a major mobile phone company, responsible for software
design, implementation and maintenance.  I was a member of the tiger
team for the first ever mobile phone to be called a "smartphone"; I will
let you be the judge of whether that counts as being in "control" of
design or implementation.
The "tiger team"?
I think you probably had frosties every morning before going to work
in a call centre.
Interestingly, EPOC32 and then Symbian OS (what we were using) did not
support C++ exceptions at all; instead we had to rely on a more
intrusive and ugly mechanism.
Ah so now you are telling us you designed, implemented and maintained
the Symbian OS?
 
P

Paul

It is quite obvious that you don't


Why are you so dense?  Where did I say that?  I said the Telephony App
could create all the objects it needs up front, not other less critical
apps.




Obviously these apps are not critical so wouldn't be required to create
any objects up front.
But according to your programming style, if these apps did not check
for allocation errors the phone would crash regularly, and thus incoming
calls could be missed during phone reboots.
So what was the point of preallocating call objects in case of
incoming calls?
 
N

Noah Roberts

Properly written, exception-safe C++ code will do the right thing when
std::bad_alloc is thrown, and most C++ code cannot sensibly handle
std::bad_alloc.  As a result, the automatic behavior, which is to let
the exception propagate up to main() and terminate the program there,
is the correct behavior for the overwhelming majority of C++
applications.  As programmers, we win anytime the automatic behavior
is the correct behavior.

What if you're within a try block well before main in the call stack?
What if the catch for that block attempts to do something like save
what the user's working on, or make some other seemingly reasonable
attempt at recovery?

A lot of people will catch std::exception. I don't see a big problem
with this, but I do see potential in what you're talking about to cause
problems with that. If you get a bad_alloc and then attempt to do
something like write to disk or whatnot as a response to that
exception, you may get the same problem again, only now you're writing
to disk, something that's inherently irreversible from the program's
point of view. This could result in corruption of important user data
in some cases.
 
N

Noah Roberts

But according to your programming style, if these apps did not check
for allocation errors the phone would crash regularly thus incoming
calls could be missed during phone reboots.

I'm confused now. I thought that was YOUR argument. Is this an
example of a self-induced reductio ad absurdum?
 
A

Adam Skutt

What if you're within a try block well before main in the call stack?

Doesn't change anything and why would it?
What if the catch for that block attempts to do something like save
what the user's working on, or make some other seemingly reasonable
attempt at recovery?

It will almost certainly fail. But that's OK, because that could happen
anyway. Consider entry due to another exception, with std::bad_alloc
being thrown somehow during the write-out process. You're stuck with the
exact same result: the attempt to save the state fails.

If anything, this is why catch blocks (especially for root exceptions)
shouldn't try to do much at all. In many cases, the exception lacks
the information to determine what behaviors are safe after the
exception has occurred. Determining what behavior is safe is often
quite difficult. std::bad_alloc is not unique in this regard.
A lot of people will catch std::exception.  I don't see a big problem
with this, but I do see potential in what you're talking about to cause
problems with that.  If you get a bad_alloc and then attempt to do
something like write to disk or whatnot as a response to that
exception, you may get the same problem again, only now you're writing
to disk, something that's inherently irreversible from the program's
point of view.  This could result in corruption of important user data
in some cases.

Who's to say I wasn't already in the process of writing to disk when
bad_alloc was initially raised? The argument 'catch blocks can have
side effects too' is not an argument against letting bad_alloc bubble
up.

Adam
 
G

Goran

Well, reserve would cause the vector to allocate space. Thus you only
need to put the reserve operation in the try-catch block, and if this
doesn't throw you know the allocation was successful and the program
can continue.

Not necessarily. For example:

// Adds stuff to v, return false if no memory
bool foo(vector<string>& v)
{
    try { v.reserve(whatever); }
    catch (const bad_alloc&) { return false; }
    // Yay, we can continue!
    v.push_back("whatever");
    return true;
}

The above is a hallmark example of wrong C++. Suppose that reserve
worked, but OOM happened when constructing/copying a string object:
foo does not return false, but throws. IOW, foo lies about what it
does.

The problem? The programmer set out to guess all failure modes and
failed. My contention is: the programmer will, by and large, fail to
guess all possible failure modes. Therefore, the programmer is best off
not doing that, but thinking about handling code/data state in the face
of unexpected failures (that is, applying exception safety stuff).
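
One way to read "apply exception safety stuff" in code (an illustrative
sketch of my own, not from the post): give the operation the strong
guarantee and let any exception propagate with its full information.

#include <string>
#include <vector>

// Sketch: build into a temporary and commit with a no-throw swap. If
// any allocation or copy throws (the copy, the push_back), v is left
// untouched and the exception carries the real reason upward.
void add_stuff(std::vector<std::string>& v)
{
    std::vector<std::string> tmp(v);   // may throw; v unchanged if it does
    tmp.push_back("whatever");         // may throw; v still unchanged
    v.swap(tmp);                       // no-throw commit
}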

BTW, given that the programmer will fail to guess all failure modes,
the programmer could wrap each function in a try/catch. That, however:

1. is a massive PITA
2. will fail to propagate good information about what went wrong, and
that is almost just as bad as not reporting an error at all.

Goran.
 
G

Goran

For most code, on most platforms, the two will be one and the same.
The OS cleans up most resources when a process dies, and most processes
have no choice but to die.

I disagree (obviously). Here's the way I see it: it all depends on the
number of functions the program performs. A simple program that only
does one thing (e.g. a "main" function with no "until something external
says stop" in the call chain) benefits slightly in C++ from the "die on
OOM" approach (in C, or something else without exceptions, the benefit
is greater because there, error checking is very labor-demanding). In
fact, it benefits from a "die on any problem" approach.

Programs that perform more than one function are at a net loss with the
"die on OOM" approach, and the loss is bigger the more functions there
are (and the more important they are). Imagine an image processing
program. So you apply a transformation, and that OOMs. You die, and your
user loses his latest changes that worked. But if you go back down the
stack, clean up all the resources the transformation needed and say
"sorry, OOM", he could have saved (heck, you could have done it for the
user, given that we hit OOM). And... dig this: trying to do the same at
the spot where you hit OOM is a __mighty__ bad idea. Why? Because
memory, and other resources, are likely already scarce, and an attempt
to do anything might fail due to that.

Or imagine an HTTP server. One request OOMs, you die. You terminate
and restart, and you cut off all other concurrent request processing:
not nice, nor necessary. And so on.
Straightforward C++ on most implementations will deallocate memory as
it goes, so when the application runs out of memory, there won't be
anything to free up: retrying the operation will cause the code to
fail in the same place.  Making more memory available requires
rewriting the code to avoid unnecessarily holding on to resources that
it no longer needs.

That is true, but only if peak memory use is actually needed to hold
program state (heap fragmentation plays its part, too). My contention
is that this is the case much less often than you make it out to be.
Even when there's memory to free up, writing an exception handler that
actually safely runs under an out-of-memory condition is impressively
difficult.

I disagree with that, too. First off, when you actually hit the
top-level exception handler, chances are you will have freed some
memory. Second, OOM-handling facilities are already made not to allocate
anything. E.g. bad_alloc will not try to do so in any implementation
I've seen. I've also seen OOM exception objects pre-allocated
statically in non-C++ environments, too (what else?). There is
difficulty, I agree with that, but it's actually trivial: keep in mind
that, once you hit that OOM handler (most likely, some top-level
exception handler not necessarily tied to OOM), you have all you might
need prepared upfront. That's +/- all. For a "catastrophe", prepare the
required resources upfront.
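
One common way of preparing those resources upfront (an illustrative
sketch of my own, not something described in the thread) is a reserve
block that the top-level handler releases before it reports:

#include <cstdio>
#include <memory>
#include <new>

// Sketch: grab an emergency reserve at startup; drop it in the OOM
// handler so the reporting/cleanup path has some headroom, then report
// without allocating anything.
static std::unique_ptr<char[]> emergency_reserve(new char[64 * 1024]);

int main()
{
    try {
        // ... the real work, which may throw std::bad_alloc ...
    }
    catch (const std::bad_alloc&) {
        emergency_reserve.reset();           // give memory back to the heap
        std::fputs("out of memory, shutting down\n", stderr);
        return 1;
    }
    return 0;
}
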
Properly written, exception-safe C++ code will do the right thing when
std::bad_alloc is thrown, and most C++ code cannot sensibly handle
std::bad_alloc.  As a result, the automatic behavior, which is to let
the exception propagate up to main() and terminate the program there,
is the correct behavior for the overwhelming majority of C++
applications.  As programmers, we win anytime the automatic behavior
is the correct behavior.

Yeah, I agree that one cannot sensibly "handle" bad_alloc. It can
sensibly __report__ it, though. The thing is, though, that a vaaaast
majority of exceptions code can't "handle" either. It can only report
them and, in rare cases, retry upon some sort of operator's reaction
(like checking the network and retrying saving a file to a share). That
makes OOM much less special than any other exception, and less of a
reason to terminate.

Goran.
 
G

Goran

Doesn't change anything and why would it?


It will almost certainly fail.  But that's OK, because that could happen
anyway.  Consider entry due to another exception, with std::bad_alloc
being thrown somehow during the write-out process.  You're stuck with
the exact same result: the attempt to save the state fails.

Why would bad_alloc be thrown while writing to disk? Guess: because the
writing is done in such a way as to modify program state. That doesn't
seem all that logical.

(Repeating myself) there are two factors at play:

1. walking down the stack upon OOM (or another exception) normally frees
resources.
2. a top-level error handler is not a place to do anything
resource-sensitive, exactly due to the OOM possibility.

It's not as complicated as you make it out to be. Going "nice" in case
of OOM might not be worth it in all cases, but it is not an end-all
response to all concerns.

Goran.
 
A

Asger-P

Hi Paul

Please, please, please, please, please, please
delete some of all that unneeded quotation; it is such
a waste of time, all that scrolling...

P.s. You are of course not the only one.

Thanks in advance
Best regards
Asger-P
 
N

none

Why would bad_alloc be thrown while writing to disk? Guess: because the
writing is done in such a way as to modify program state. That doesn't
seem all that logical.

(Repeating myself) there are two factors at play:

1. walking down the stack upon OOM (or another exception) normally frees
resources.
2. a top-level error handler is not a place to do anything
resource-sensitive, exactly due to the OOM possibility.

It's not as complicated as you make it out to be. Going "nice" in case
of OOM might not be worth it in all cases, but it is not an end-all
response to all concerns.

I absolutely agree with Goran and disagree that terminate on OOM is
*always* the best approach. There may be programs where it is the
best approach but it is far from always the case.

A concrete example:

A network server using a standard pattern of one listener/producer and
multiple worker/consumer threads. The listener receives a job request
and hands the processing of the job to one of the worker threads.

It is very much possible that processing one particular job might
actually require too much memory for the system. The correct thing to
do in that case is to stop processing this one oversized job, release
all the resources acquired to process it, mark it as an error and
continue processing further jobs.
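
A sketch of that per-job boundary (my own illustration; Job, process()
and the status values are hypothetical names):

#include <exception>
#include <new>
#include <string>

// Hypothetical job type and result codes, for illustration only.
struct Job { std::string payload; };
enum class JobStatus { Done, FailedOutOfMemory, FailedOther };

JobStatus process(Job& job)
{
    // ... the real work; may allocate heavily and throw std::bad_alloc ...
    (void)job;
    return JobStatus::Done;
}

// Worker-side boundary: an oversized job is marked as failed, its
// resources are released during unwinding, and the server keeps
// serving the other jobs.
JobStatus run_one(Job& job)
{
    try {
        return process(job);
    }
    catch (const std::bad_alloc&) {
        return JobStatus::FailedOutOfMemory;
    }
    catch (const std::exception&) {
        return JobStatus::FailedOther;
    }
}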

Since this is a persistent server that needs to be up and alive 24/7,
it would be totally inappropriate to permanently terminate the server.
Even if there were an additional monitoring process that restarts the
server if it dies, this would not be a good thing, because you would
lose the current progress in the running worker threads and would also
kill the current external client connections.

Yan
 
N

none

This also depends on how the memory is fragmented. Say for example a
system has 1GB RAM with 500MB free, 500MB used. Even though you have
500MB free, an attempt to allocate 40MB may fail because there is no
contiguous block of 40MB of free memory.

Google "Virtual Memory". The two of you are talking about totally
unrelated things. And given that all modern OSes use virtual memory, the
described situation will never happen as such. But once the app starts
swapping, things get very slow.
 
P

Paul

On Aug 30, 6:38 pm, Paul <[email protected]> wrote:
I'm confused now.  I thought that was YOUR argument.  Is this an
example of a self-induced reductio ad absurdum?
My argument was that bad_alloc exceptions should be handled. I don't
see a reason to ignore them when all the mechanics are in place to
catch such exceptions and handle them in some appropriate way.
 
P

Paul

Google "Virtual Memory".  The two of you are talking about totally
unrelated things.  And given that all modern OSes use virtual memory,
the described situation will never happen as such.  But once the app
starts swapping, things get very slow.
I was talking about fragmentation here.

If a large allocation fails it is often not the case that the system
is completely OOM; there will usually be a few free bytes scattered
around in fragmented memory.

For example, if a 1KB allocation fails, this doesn't mean an 8-byte
allocation will fail. Nor will four allocations of 250 bytes
necessarily fail.

Example, u=used, f=free:
[uuuuuuuuffffuuuuuuuuffffuuuuffff]
The memory above shows 12 bytes free, but any allocation over 4 bytes
will fail.

allocate(8) // will throw an exception

allocate(4) // ok, 12 bytes free
allocate(4)
allocate(4)
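
Coding defensively for that case might look something like the
following sketch (purely illustrative, and whether the situation can
actually arise depends on the platform, as the virtual memory reply
above points out):

#include <cstddef>
#include <new>
#include <vector>

// Sketch: try one contiguous block; if that fails, fall back to several
// smaller pieces. The caller owns and later delete[]s every returned
// pointer.
std::vector<char*> allocate_in_pieces(std::size_t total, std::size_t piece)
{
    std::vector<char*> blocks;
    blocks.reserve(total / piece + 1);           // bookkeeping space up front
    if (char* p = new (std::nothrow) char[total]) {
        blocks.push_back(p);                     // got it in one go
        return blocks;
    }
    for (std::size_t got = 0; got < total; got += piece) {
        char* p = new (std::nothrow) char[piece];
        if (!p)                                  // genuinely out of memory
            break;
        blocks.push_back(p);
    }
    return blocks;
}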
 
