bad alloc

Paul

On 30/08/2011 14:54, Paul wrote:
On 30/08/2011 14:26, Paul wrote:
I think that pre-STL it was pretty much standard practice to check
for memory allocation failures for example:
float* m1 = new(std::nothrow) float[16]; // with std::nothrow, new returns NULL on failure
if (!m1) {
    // output an exit msg.
    exit(1);
}

which is exactly what not handling a bad_alloc exception would do...

How does this address the question...What is the point of a throw if
[it's] not being caught?

to invoke destructors. Go and look up RAII.

And what good would invoking any destructor do?
Your program is about to crash; you want to call destructors but not
catch the exception? That doesn't make any sense.
 

Paul

Crash the system?  Again: Does a crashing Windows app crash Windows?
No.  Why would a phone be any different?  Try engaging your brain.
You have no idea what will happen if you don't catch the exception and
close the program safely.
Many an application has crashed a system or caused a BSOD.
 

Adam Skutt

Nonsense; C++ exceptions are not just for unrecoverable fatal errors;

And I've never said otherwise. It doesn't change the fact that if you
raise an exception, the default behavior of the program will be to
terminate unless you explicitly take action to prevent it, i.e., write
a catch handler. Since that's the default, it's perfectly reasonable
to assume the proper response to an arbitrary exception X is to
terminate, especially absent directions on how to properly handle the
exception!

There are plenty of exception handling schemes, and languages that
implement them, that provide other behavior such as forcing the
programmer to explicitly handle some or all exceptions. C++ doesn't
use them, and I don't believe that's by accident.
if that were the case we wouldn't need try/catch or even exceptions as we
would just call abort() instead.

However, even if exceptions were for unrecoverable fatal errors, this
doesn't ipso facto follow. Even abort() has a mechanism (albeit a
crude one) for performing processing after it is called.

Adam
 

Goran

So you're resolved to contradict yourself in the same paragraph now?
You're simply factually wrong: doing I/O always involves a change in
the state of the process, period.  If it doesn't, then the Haskell
folk have a lot of explaining to do.

Please explain how e.g.

write(f, some_char_ptr, strlen(some_char_ptr));

changes program state. Or fprintf(...), for that matter (all sorts of
conversions possible there).
Funny, the people who've actually written C++ logging APIs sure as
hell don't seem to agree with you.

But they are not in the business of writing nothrow code, nor should
they be. They are in the business of writing rich infrastructure
for logging. That's bound to have all sorts of failure points.
I think it's a way out for all applications, and as your reliability
needs go up, your odds of terminating at any sort of failure go up
substantially as well.  The safest way to proceed after an abnormal
condition is to restart from a known state, and the safest way to do
that is to restart the process, possibly even the whole computer.
This is what many life-critical and safety-critical systems do.


Then you're hopelessly ignorant.

Surely you meant: "In the presence of a compelling argument, kids often
go for insults"?
I'd love to see a generalized algorithm for removing freestore
allocations from programs, but I'm quite confident you will not be
providing it.

I never, not once, claimed that there should be such a thing. I have
no idea why you are fixating on this.
No it doesn't, it only requires it to know what is an expensive
operation that is undesirable to repeat.  That's reasonably easy most
of the time.




What "transient peaks"?  

See that temporary string produced by operator+? There's one. And they
are __everywhere__.
That's not even C++ and there's not enough
context to see what the hell you're talking about.

Yes, there is, but you are reduced to being argumentative.
I don't see any
memory there that could be freed by an OOM handler to enable the
operation to be retried.  

I never, not once said that operation should be retried. I never, not
once, said that OOM handler should try to free any memory. Go back and
check it out.
You have to show the presence of allocated
memory that is not necessary to complete whatever computation you're
attempting.

I don't have to show that, because I never, in any way, shape or form,
implied that I wanted to complete attempted computation. On the
contrary, I claimed in my first post, that operation should be rolled
back and that it is __likely__ that said rollback (stack unwind) will
have freed some resources. See that image processing that failed? The
thing is, because it was expensive, there's a massive chance it has
been allocating for a while, failed somewhere in the middle, likely
while trying to allocate quite a chunk off the heap. When bad_alloc
was thrown, all that was freed. When catch was reached, chances are,
there's plenty of heap. And you want to kill the process. Yeah, that's
some way to go.
Given that's the default behavior, it's a damn good starting
assumption!

If you rely on terminate(), you lose useful error info that was in the
exception object (well, it should be, at least; I suspect that's not how
you write your exceptions). Not nice. And, bar C++, no language
runtime terminates e.g. a console program by a hard terminate.
If it weren't the thinking of language developers, they'd follow the
exception handling semantics of Eiffel and similar languages.

You know, I never much liked Don Quixote when I read it in school, so
I think I have no interest in conversing with you further, Mr.
Quixote.

Shame. It's a good book. Not related to this though.

Goran.
 

Paul

You don't know what you are talking about; for a clue see:

http://en.wikipedia.org/wiki/User_space

This is a crap attempt to justify your support for sloppy programming
techniques.
If you allow exceptions to go uncaught you have no control over your
program and any other resources it may be using; these resources are
not limited to userspace. And since when was C++ unsuitable for kernel
mode programming?
 

Paul

Again you don't know what you are talking about as you don't appear to
know what an "application" is.  I will tell you: on most modern
platforms an "application" is a program that runs in userland.  An
application crashing in userland should not "crash the system".

C++ is of course suitable for kernel mode programming but we are not
talking about kernel mode programming.
Your very own words prove how incompetent you are.
You say an application "should not" crash the system but in reality
you have no clue whether it will crash the system or not.

We are talking about C++; we are not talking about C++ *only* in a
usermode architecture. Your attempt to confine the bounds of this
discussion to usermode is overwhelmingly splattered in
bullshit.
Get a life and wake up: a C++ program that is *expected* to crash
because the programmer is too much of a noob to implement exception
handling is a pile of shit, and if that's the sort of programs you are
producing then they aren't worth the filespace they occupy.
 

Paul

Again you don't know what you are talking about as you don't appear to
know what an "application" is.  I will tell you: on most modern
platforms an "application" is a program that runs in userland.  An
application crashing in userland should not "crash the system".

C++ is of course suitable for kernel mode programming but we are not
talking about kernel mode programming.
Bollocks, we are talking about C++ programs in general.

And even if we were just considering applications that run in some
user space on a given OS, it is still not guaranteed what will happen
when an application crashes, which is probably why you use the term
"should not" crash the system. In reality you don't have a clue what
will happen.
 

James Kanze

On 30/08/2011 14:26, Paul wrote:
[...]
The only sane remedy for most cases of allocation failure is to
terminate the application which is what will happen with an uncaught
exception.

That's probably true for most applications, but not all. And
even if you do terminate the application, you might want some
sort of clean-up involved. Most of the applications I've worked
on have replaced the new_handler, so it calls exit(), rather
than aborting (and thus, destructors of static variables are
called, along with any functions registered with atexit).
Alternatively, a single try/catch in main can be used.
Rule of thumb for C++ exception aware code: liberally sprinkle
throws but have very few try/catches.

Agreed. Even if you have to continue when out of memory occurs
(e.g. because it can occur because a user request is too
complex), you'll still catch bad_alloc at a very high level, and
abort the request. Except (as I'm sure you know), catching
bad_alloc isn't always reliable. If the "request" which causes
you to run out of memory is a result of trying to grow the
stack, for example, there's no system on which you can catch it,
and on some widespread systems, the system will tell you that
there is still memory when there isn't, and then crash the
program when it tries to access that memory.
 

Joshua Maurice

No, that doesn't guarantee you a thing, especially if your goal is to
retry the operation you failed in the first place.  You'll simply
reallocate the same amount of memory, again, and fail in the same
place, again.  The OOM handler must go out of its way to ensure
additional memory gets freed if a retry is desirable (generally, if
anything other than termination is desirable).

Again, I don't think anyone is suggesting a naive retry of a failed
job on OOM. We're suggesting logging the error first, and possibly
more sophisticated schemes of retry.

[...]
Turning off overcommit doesn't ensure your process stays alive.  Even
when it does, you may have traded the survival of your process for the
loss of the whole system.


fork()/exec() is not the primary reason for overcommit support in
Linux and other operating systems.  Overcommit comes about because
many application do use allocation strategies that ask for more memory
from the operating system than they ever use.  fork()/exec() is really
a corner case.

Interesting. I've tried to google for more information on this topic,
but there's not much I can find offhand about the actual initial
motivations for overcommit, and the current motivations for leaving
overcommit in.

Furthermore, the current situation is even more broken than I thought.
It seems there's no good way to limit (virtual) memory usage on a per-
user basis in Linux (and presumably other unix-like OSes (?)). Thus,
your options are 1- overcommit and the OOM killer on a relatively
random process, or 2- no overcommit and an OOM failure return code or
exception from malloc et al in a relatively random process. This is
completely broken. I often wonder how such states can persist for so
long. It's not just me, right? Other people do see how this is broken,
right? Why is no one fixing this? It can't be that hard to implement
per-user limits.

With this in mind, I give slightly more credence to your approach, or
at least I'm more open minded as my approach has significant
limitations as well.
No, that does not follow at all. You need to consider the goal of the
exception handler: succeed no matter what state the application is in
vs. the goals of the rest of the program: succeed only if possible.
The latter means no memory allocation.

Why do you claim that those are their goals? Has a stakeholder ever
given you such requirements?
And a non-starter, especially for a C++ newsgroup!

The distinction is that you were discussing strategies for other
languages, which is why I rightly noted that it's irrelevant to the
conversation and the newsgroup. I will now perhaps acknowledge that
you were trying to shed some light on the current situation by
referencing how other languages do it (in an omitted quote). I was
possibly incorrect for jumping to "off topic".

However, in the above quote, I talked about using non-portable
mechanisms to achieve some degree of reliability, which is relevant to
C++ and this newsgroup. This newsgroup is not purely C++ standard -
that's comp.std.c++. This newsgroup does have a strong bias towards
portable programming, but if the only reasonable solution is to use
POSIX or win32 APIs, then it is appropriate to suggest that here.
 

Joshua Maurice

Interesting. I've tried to google for more information on this topic,
but there's not much I can find offhand about the actual initial
motivations for overcommit, and the current motivations for leaving
overcommit in.

Furthermore, the current situation is even more broken than I thought.
It seems there's no good way to limit (virtual) memory usage on a per-
user basis in Linux (and presumably other unix-like OSes (?)). Thus,
your options are 1- overcommit and the OOM killer on a relatively
random process, or 2- no overcommit and an OOM failure return code or
exception from malloc et al in a relatively random process. This is
completely broken. I often wonder how such states can persist for so
long. It's not just me, right? Other people do see how this is broken,
right? Why is no one fixing this? It can't be that hard to implement
per-user limits.

With this in mind, I give slightly more credence to your approach, or
at least I'm more open minded as my approach has significant
limitations as well.

Heck, I don't know what the fix is. I do know this is broken, and it
seems there's got to be something better than the current situation.
 

James Kanze

[...]
Providing logging as a no-throw operation is a logical impossibility
unless it is swallowing the errors for you. I/O can always fail,
period. Even when you reserve the descriptor and the buffer.
Moreover, it's generally impossible to detect failure without actually
performing the operation!

If you can't write your output, then logging will fail. But
that's a different problem from running out of memory.
Sure, if you're using read(2) and write(2) (or equivalents) and have
already allocated your buffers, then being out of memory won't require
any additional allocations on the part of your process. Of course,
performing I/O requires more effort than just the read and write
calls, and many (most?) people don't write code that uses such low-
level interfaces. Those interfaces (e.g., C++ iostreams) frequently do
not make it easy or even possible to ensure that any given I/O
operation will not cause memory allocation to occur.

It's very simple to ensure that a write to an ostream doesn't do
any allocations, if you design your streambuf correctly.
Never mind that data is often stored in memory in a different format
from how it is stored on disk, converting between these formats often
requires allocating memory.

Yes, but if you're logging an out of memory condition, you don't
need any of those conversions (or you know which ones you need,
and you can use static or pre-allocated memory for them).
If you truly believe the fact that
read(2) and write(2) do no allocations is somehow relevant in this
discussion, then you are truly clueless.

Either that, or he knows how to implement robust logging.
Although most applications don't need it, I have worked on one
or two where we had to return an "insufficient resources" error
on OOM, and continue handling further requests. The most
difficult problem was ensuring that the OOM didn't cause a stack
overflow (this was on a single threaded Unix system), not
logging the error or handling further requests.

[...]
What transient peaks?

Those due to handling a specific request. One obvious example
is parsing filters in LDAP; the filter can contain an
arbitrarily complex expression, which must be represented in
memory. If you run out of memory to represent it, you abort the
request (freeing the memory) with an "insufficient resources"
error. That doesn't mean that you can't handle more reasonable
requests.

[...]
If the operating system's virtual memory allows for memory allocation
by other processes to cause allocation failure in my own, then
ultimately I may be forced to crash anyway. Many operating systems
kernel panic (i.e., stop completely) if they reach their commit limit
and have no way of raising the limit (e.g., adding swap automatically
or expanding an existing file). Talking about other processes when
all mainstream systems provide robust virtual memory systems is
tomfoolery.

All mainstream systems except Linux (and I think Windows, and
some versions of AIX, and I think some versions of HP/UX as
well), you mean. The default configuration of Linux will start
killing random processes when memory gets tight (rather than
returning an error from the system request for memory).
 

Joshua Maurice

All mainstream systems except Linux (and I think Windows, and
some versions of AIX, and I think some versions of HP/UX as
well), you mean.  The default configuration of Linux will start
killing random processes when memory gets tight (rather than
returning an error from the system request for memory).

I agree this sounds nice in theory, but in current practice it
doesn't work out, from what I understand. I have made a post or
two about it else-thread. You have no guarantee that the "misbehaving
process" or the "process which is doing a complex LDAP thing" is the
one that is going to get the out-of-memory NULL return from
malloc. Another process, like an important system process, may also try
right then to allocate memory, and thus fail, which is bad for the
entire OS and all processes running. It's the same problem. An abusive
process can cause an OOM killer on another process with overcommit on,
and that same abusive process can cause a malloc failure in another
process with overcommit off. For most processes which are not the ones
being "abusive" but merely innocent bystanders, including system
processes, I suspect they will behave similarly. With overcommit on,
they will be killed with great prejudice. With overcommit off, when
they get the malloc error, most will respond just the same and die a
quick death.

To cut off a pre-emptive argument, I don't think it would work in
practice to say "Oh, critical components need to pre-allocate memory",
as that is unreasonable and will not actually happen. We need a
different solution.

PS: The obvious solution to me appears to be per-user virtual memory
limits, but I'm not sure if that would actually solve anything in
practice. I need more information and more time to consider.
 

Adam Skutt

If you can't write your output, then logging will fail.  But
that's a different problem from running out of memory.

Not particularly. Our goal here is to never fail. It's by definition
impossible.
It's very simple to ensure that a write to an ostream doesn't do
any allocations, if you design your streambuf correctly.

Writing my own streambuf is more work than I ever want to do, and
really more work than anyone should ever have to do, in order to
handle an exception. Which was my whole point initially. C++ doesn't
provide the facilities out of the box for doing anything other than
some sort of termination on an OOM condition.
Yes, but if you're logging an out of memory condition, you don't
need any of those conversions (or you know which ones you need,
and you can use static or pre-allocated memory for them).

Yes, but that's not the only behavior that was advocated in an OOM
condition. Attempting to save program state was advocated, and that
may well require converting state. In fact, in context, it started as
discussion about trying to save after std::bad_alloc was thrown, not
just merely log the OOM condition.
Those due to handling a specific request.  One obvious example
is parsing filters in LDAP; the filter can contain an
arbitrarily complex expression, which must be represented in
memory.  If you run out of memory to represent it, you abort the
request (freeing the memory) with an "insufficient resources"
error.  That doesn't mean that you can't handle more reasonable
requests.

It doesn't mean you can, either. And it hardly justifies the effort
involved in handling the OOM condition.
All mainstream systems except Linux (and I think Windows, and
some versions of AIX, and I think some versions of HP/UX as
well), you mean.  The default configuration of Linux will start
killing random processes when memory gets tight (rather than
returning an error from the system request for memory).

I'm not sure how what you wrote has anything whatsoever to do with
what I said.

Adam
 

Adam Skutt

Again, I don't think anyone is suggesting a naive retry of a failed
job on OOM.

That's precisely what you suggested when you claimed that all we had
to do is rollback down the stack!
We're suggesting logging the error first, and possibly
more sophisticated schemes of retry.

Then you need to define these "more sophisticated schemes", and define
such a scheme that's actually worth it to implement most of the time.
Interesting. I've tried to google for more information on this topic,
but there's not much I can find offhand about the actual initial
motivations for overcommit, and the current motivations for leaving
overcommit in.
Furthermore, the current situation is even more broken than I thought.
It seems there's no good way to limit (virtual) memory usage on a per-
user basis in Linux (and presumably other unix-like OSes (?)).

Yes, there is. You can set the maximum size of the VAS per-process,
and the maximum number of processes, which provides a hard upper
bound. If you want finer grained controls, you have to patch the
Linux kernel; there are various patches out there that can accomplish
what you want. Some other UNIX systems (e.g., Solaris) provide finer
grain controls out of the box.

There are other ways, such as containers and virtualization, to
accomplish similar feats. They're rarely worth it.
Thus,
your options are 1- overcommit and the OOM killer on a relatively
random process, or 2- no overcommit and an OOM failure return code or
exception from malloc et al in a relatively random process. This is
completely broken.

Compared to what alternative? The traditional alternative is a kernel
panic, and that's not necessarily better (nor worse). For better or
ill, we've become accustomed to the assumption we can treat memory as
an endless resource. Most of the time, that assumption works out
pretty well. When it falls apart, it's not shocking the resulting
consequences are pretty terrible.
I often wonder how such states can persist for so
long. It's not just me, right? Other people do see how this is broken,
right? Why is no one fixing this? It can't be that hard to implement
per-user limits.

Because per-user limits don't fix the problem, unless you limit every
user on the system in such a fashion to never exceed your commit
limit. Even then, you're still not promised the "memory hog" is the
one that's going to be told no more memory. That's why we don't
bother: identifying the "memory hog" is much too hard for a
computer.

It may well be a broken situation, but there's also not a good
solution.
Why do you claim that those are their goals? Has a stakeholder ever
given you such requirements?

Yes, plenty of people here seem to be convinced handling OOM (by not
crashing) is a requirement for writing "robust" software; that means
being able to perform some sort of action (such as logging) and
continue onward after the OOM condition. However, some people here
also seem to believe that performing I/O doesn't affect program state,
so their requirements are probably not worth fulfilling.

Still, this plays into my larger point: even if you have a situation
where you can respond to an OOM condition in some meaningful fashion
other than termination, you're still not assured of success. Put more
plainly, handling OOM doesn't ipso facto ensure additional
robustness. You need to be able to handle the OOM condition and have
a reasonable assurance that your response will actually succeed. It
may be an overstatement on my part to say, "Succeed no matter what",
but making the software more "robust" certainly means tending closer
to that extreme than the opposite.

As a concrete example, writing all this OOM handling code does me no
good if when my process finally hits a OOM condition, my whole
computer is going to die anyway. For many applications, this is
one of two reasons why they'll ever see an OOM condition.

Of course, the reality is that all of this rarely makes software more
robust, because actual robust systems generally are pretty tolerant of
things like program termination. As I've said before, frequently it's
even preferable to terminate, even when it would be possible to
recover from the error. This makes the value proposition of handling
OOM even less worthwhile.
However, in the above quote, I talked about using non-portable
mechanisms to achieve some degree of reliability, which is relevant to
C++ and this newsgroup.

Yes, I know. And my whole point from the start is that handling OOM
is too much of a pain to bother. Having to give up writing portable
code definitely falls under "too much of a pain to bother" for lots of
code and lots of programmers. None of this should be controversial in
the least or require this much discussion.

Adam
 

Ian Collins

Yes, plenty of people here seem to be convinced handling OOM (by not
crashing) is a requirement for writing "robust" software; that means
being able to perform some sort of action (such as logging) and
continue onward after the OOM condition. However, some people here
also seem to believe that performing I/O doesn't affect program state,
so their requirements are probably not worth fulfilling.

Still, this plays into my larger point: even if you have a situation
where you can respond to an OOM condition in some meaningful fashion
other than termination, you're still not assured of success. Put more
plainly, handling OOM doesn't ipso facto ensure additional
robustness. You need to be able to handle the OOM condition and have
a reasonable assurance that your response will actually succeed. It
may be an overstatement on my part to say, "Succeed no matter what",
but making the software more "robust" certainly means tending closer
to that extreme than the opposite.

I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error. If you
require an application to be "robust" then you can use an external
entity to manage and if necessary, restart it. This is common practice
for system processes.

I have done this in the past where an application ran in a resource
limited Solaris zone and was monitored from outside of the zone. On a
general purpose system, it is difficult and often impossible for an
application to determine how much free memory the system will be able to
provide. On systems with overcommit any guess will be too optimistic,
and on systems with processes that release memory on demand, too pessimistic.

In a resource constrained environment, critical processes can perform
all their allocations up front before any other processes start.
As a concrete example, writing all this OOM handling code does me no
good if when my process finally hits a OOM condition, my whole
computer is going to die anyway. For many applications, this is
one of two reasons why they'll ever see an OOM condition.

I agree.
Of course, the reality is that all of this rarely makes software more
robust, because actual robust systems generally are pretty tolerant of
things like program termination. As I've said before, frequently it's
even preferable to terminate, even when it would be possible to
recover from the error. This makes the value proposition of handling
OOM even less worthwhile.

Indeed, robustness goes beyond a single application.
 

Joshua Maurice

Yes, there is.  You can set the maximize size of the VAS per-process,
and the maximum number of processes, which provides a hard upper
bound.

Which is almost useless for basically anything. Any user or job is
going to have a small number of very large processes, and a bunch of
small processes. Setting a per-process limit and a number-of-process
limit doesn't work.
If you want finer grained controls, you have to patch the
Linux kernel; there are various patches out there that can accomplish
what you want.  Some other UNIX systems (e.g., Solaris) provide finer
grain controls out of the box.

There are other ways, such as containers and virtualization, to
accomplish similar feats.  They're rarely worth it.


Compared to what alternative?  The traditional alternative is a kernel
panic, and that's not necessarily better (nor worse).  For better or
ill, we've become accustomed to the assumption we can treat memory as
an endless resource.  Most of the time, that assumption works out
pretty well.  When it falls apart, it's not shocking the resulting
consequences are pretty terrible.


Because per-user limits don't fix the problem, unless you limit every
user on the system in such a fashion to never exceed your commit
limit.  Even then, you're still not promised the "memory hog" is the
one that's going to be told no-more memory. That's why we don't
bother: identifying the "memory hog" is much too hard for a
computer.

It may well be a broken situation, but there's also not a good
solution.

You could reserve some commit for the kernel, enough so that it won't
die on OOM. That's a solution, maybe. I won't accept right now the
claim that there's nothing you can do to stop a misbehaving user
process from killing vital protected system processes.
Yes, I know.  And my whole point from the start is that handling OOM
is too much of a pain to bother.  Having to give up writing portable
code definitely falls under "too much of a pain to bother" for lots of
code and lots of programmers. None of this should be controversial in
the least or require this much discussion.

It depends on the goals. Here you might be right. To take a silly
extreme, threading is an example where you have to go to "non
portable" functions to get it to work, but it's definitely worth it,
and people do it all the time. I just disagree with your blanket
assertion that ignores the cost benefit analysis that must go into any
reasonable coding decision.
 

Nick Keighley

You have no idea what will happen if you don't catch the exception and
close the program safely.
Many an application has crashed a system or caused a BSOD.

Not since Windows NT.
Even Microsoft started writing proper OSes after that.
 

Nick Keighley

trim your posts for $DEITY's sake!


by using the term "application" you are implicitly talking about user
mode development.

Do kernels really call new? Do they throw C++ exceptions?

<expletive>, we are talking about C++ programs in general.

I thought you were talking about "applications"? In fact the
discussion was about applications on phones. Doesn't sound very
kernel-like to me.
ANd even if we were just considering applications that run in some
user space on a given OS it is still not guaranteed what will happen
when an application crashes which is probably why you use the term
"should not" crash the system. In reality you don't have a clue what
will happen.

in a well written OS a crash in a user mode program will not harm the
OS. In this case we are talking about an exit due to an unhandled
exception. From an OS's point of view this is pretty well-defined
behaviour. Likely the user program called abort(), which is pretty much
a "kill me now!" request.

Do your kernel programs call abort()? What happens then?
 

Nick Keighley

On 30/08/2011 14:54, Paul wrote:
On 30/08/2011 14:26, Paul wrote:
I think that pre-STL it was pretty much standard practice to check
for memory allocation failures for example:
float* m1 = new(std::nothrow) float[16]; // with std::nothrow, new returns NULL on failure
if (!m1) {
    // output an exit msg.
    exit(1);
}
which is exactly what not handling a bad_alloc exception would do...
Rule of thumb for C++ exception aware code: liberally sprinkle throws
but have very few try/catches.
What is the point in having throws if you're not going to catch them?
Again your lack of experience is showing.
How does this address the question...What is the point of a throw if
[it's] not being caught?
to invoke destructors. Go and look up RAII.

And what good would invoking any destructor do?
Your program is about to crash; you want to call destructors but not
catch the exception? That doesn't make any sense.

depends what the destructors do. They could close log files, release
databases etc. etc.
 
