There are numerous C++ examples and code snippets which use STL
allocators in containers such as STL vector and string. It has come to
my attention that nobody ever seems to care about checking whether the
allocation has been successful. Since we have exception handling, why
is it not used very often in practice? Surely it should be common
practice to "try" all allocations, or do people just not care and
allow the program to crash if it runs out of memory?
Good-style exception-aware C++ code does __NOT__ check for allocation
failures. Instead, it's written in such a manner that said (and other)
failures don't break it __LOGICALLY__. This is done through careful
design and observance of the exception safety guarantees
(http://en.wikipedia.org/wiki/Exception_handling#Exception_safety) of
the code. A simple example:
FILE* f = fopen(...);
if (!f) throw whatever();
vector<int> v;
v.push_back(2);
fclose(f);
This snippet should have the no-leak (basic) exception safety
guarantee, but it doesn't: the FILE* can leak if an exception is
thrown between the vector's creation and the fclose call. For example,
push_back will throw if there's no memory for the vector's internal
storage. To satisfy the no-leak guarantee, the above should be:
FILE* f = fopen(...);
if (!f) throw whatever();
try
{
    vector<int> v;
    v.push_back(2);
    fclose(f);
}
catch(...)
{
    fclose(f);
    throw;
}
The above is pretty horrible, hence one would reach for RAII and use
fstream in lieu of FILE*, and no try/catch would be needed for the
code to function correctly.
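For illustration, a rough sketch of how that snippet might look with
RAII (std::ifstream shown; the file name and the work done are just
placeholders):

#include <fstream>
#include <stdexcept>
#include <vector>

void example()
{
    std::ifstream f("input.txt");   // hypothetical file name
    if (!f) throw std::runtime_error("open failed");
    std::vector<int> v;
    v.push_back(2);                 // may throw bad_alloc...
}                                   // ...but f is closed on every exit path

No catch block is needed for the no-leak guarantee; the stream's
destructor closes the file during stack unwinding.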
Another example:
container1 c1;
container2 c2;
c1.add(stuff);
c2.add(stuff);
Suppose that "stuff" needs to be in both c1 and c2, otherwise
something is wrong. If so, the above needs the strong exception safety
guarantee. A correction would be:
c1.add(stuff);
try
{
    c2.add(stuff);
}
catch(...)
{
    c1.remove(stuff);
    throw;
}
Again, writing this is annoying, and for this sort of thing there's an
application of RAII in a trick called a "scope guard". Using a scope
guard, this would turn out as:
c1.add(stuff);
ScopeGuard guardc1 = MakeObjGuard(c1, &container1::remove, ByRef(stuff));
c2.add(stuff);
guardc1.Dismiss();
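The ScopeGuard, MakeObjGuard and ByRef names above presumably come
from the Alexandrescu/Marginean ScopeGuard library. For readers
without it, here is a minimal hand-rolled sketch of the same idea
(using a C++11 lambda rather than the original library):

#include <utility>

// Runs the stored cleanup action on destruction unless dismissed.
// The cleanup action itself is assumed to be a no-throw operation.
template <class F>
class scope_guard {
    F f_;
    bool active_;
public:
    explicit scope_guard(F f) : f_(std::move(f)), active_(true) {}
    scope_guard(scope_guard&& other)
        : f_(std::move(other.f_)), active_(other.active_)
    { other.active_ = false; }
    ~scope_guard() { if (active_) f_(); }
    void dismiss() { active_ = false; }
    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;
};

template <class F>
scope_guard<F> make_scope_guard(F f)
{ return scope_guard<F>(std::move(f)); }

// Usage, mirroring the c1/c2 example:
//   c1.add(stuff);
//   auto guard = make_scope_guard([&] { c1.remove(stuff); });
//   c2.add(stuff);
//   guard.dismiss();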
Similar examples can be made for other exception safety levels, but
IMO the above two cover the vast majority of cases.
I think if people were more conscious of this error checking, the
reserve function would be used more often.
I am quite convinced that this is the wrong way to reason about error
checking in exception-aware code. First off, using reserve lulls you
into a false sense of security. So space is reserved for the vector,
and elements will be copied into it. What if a copy constructor or
assignment throws in the process? Code that is sensitive to exceptions
will still be at risk. Second, it pollutes the code with gratuitous
snippets no one cares about. There's a better way, see above.
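To see why reserve isn't enough, here's a sketch with a made-up
element type (not code from the post): even with capacity secured up
front, inserting an element can still throw if the element's copy
constructor throws.

#include <iostream>
#include <stdexcept>
#include <vector>

// Hypothetical element type whose copy constructor throws.
struct Widget {
    Widget() {}
    Widget(const Widget&) { throw std::runtime_error("copy failed"); }
};

int main()
{
    try {
        std::vector<Widget> v;
        v.reserve(10);      // storage for 10 elements is secured, but...
        Widget w;
        v.push_back(w);     // ...copying the element can still throw
    }
    catch (const std::exception& e) {
        std::cerr << "insertion failed despite reserve: " << e.what() << '\n';
    }
}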
What's wrong with that reasoning? The idea that one needs to make sure
that each possible failure mode is looked after individually. That is
possible, but __extremely__ tedious. Instead, one should think in this
manner: here are the pieces of code that might throw (that should be
the vast majority of the total code). If they throw, what will go
wrong with the code (internal state, resources, etc.)? E.g. the FILE*
will leak, c1 and c2 won't be in sync... For those things, an
appropriate cleanup action should be taken (in the vast majority of
cases, said cleanup action is going to be "apply RAII"). Said cleanup
action must be a no-throw operation (hence the use of destructors with
RAII). There should also be a clear idea of where the no-throw areas
are, and they should be a tiny fraction of the code (in C++, these are
typically primitive-type assignments, C-function calls and uses of a
non-throwing swap).
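As an aside on where a non-throwing swap fits in, here is a sketch of
the standard copy-and-swap idiom (not code from the post): all the
throwing work is done on a copy first, and the only operation applied
to the real object is a no-throw swap at the end, which gives the
assignment the strong guarantee.

#include <vector>

// Hypothetical class: assignment does its throwing work on a
// temporary, then commits with a no-throw swap.
class Table {
    std::vector<int> data_;
public:
    void swap(Table& other) { data_.swap(other.data_); }  // no-throw
    Table& operator=(const Table& other)
    {
        Table tmp(other);   // may throw; *this is untouched if it does
        swap(tmp);          // no-throw commit
        return *this;
    }
};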
There's a school of thought that says that allocation failure should
simply terminate everything. This is based on the notion that, once
there's no memory, the world has ended for the code anyhow. That
notion is false in a significant number of cases (and is not in the
spirit of C or C++; if it were, malloc or new would just call
terminate()). Why is the notion wrong? Because normally, code goes
like this: work, work, allocate some resources, work (with those),
free those, allocate more, work, allocate, work, free, free, etc. That
is, for many a code path, there's a "hill climb" where resources are
allocated while working "up", and they are deallocated, all or at
least a significant part of them, while going "down" (perhaps only the
calculation result is kept allocated). So once code hits a brick wall
going up, there's a big chance there will be plenty of breathing room
once it comes back down (due to all that freeing). IOW, there's no
__immediate need__ to die. Go down, clean up behind you, and you'll be
fine. Going back and saying "I couldn't do X due to OOM" is kinda
better than dying on the spot, wouldn't you say?
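A sketch of what "go down and report" might look like in practice (the
function and the sizes are made up; assumes a 64-bit platform): let
bad_alloc unwind to an operation boundary, where the intermediate
storage has already been released, and report the failure there.

#include <iostream>
#include <new>
#include <vector>

// Returns false instead of dying when the allocation fails.
bool build_huge_table(std::vector<double>& out, std::size_t n)
{
    try {
        std::vector<double> tmp(n);  // may throw std::bad_alloc
        // ... fill tmp with results ...
        out.swap(tmp);               // no-throw commit of the result
        return true;
    }
    catch (const std::bad_alloc&) {
        // Unwinding has already released tmp, so there's breathing
        // room again by the time we get here.
        return false;
    }
}

int main()
{
    std::vector<double> table;
    if (!build_huge_table(table, 1000000000000ULL))  // ~8 TB, likely to fail
        std::cerr << "couldn't build the table due to OOM\n";
}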
Finally, there's a school of thought that says that allocations should
not be checked at all. IMO, that school comes from programming under
systems with overcommit, where, upon OOM conditions, an external force
(the OOM killer) is brought in to terminate the code __not__ at the
spot of the allocation, but at the spot where the allocated memory is
used. Indeed, if that is the observed behavior, then any checking for
allocation failure is kinda useless. However, this behavior is not
prescribed by the C or C++ language and is therefore outside the scope
of this newsgroup ;-).
Goran.