Minimizing Dynamic Memory Allocation

Christof Donat

Hi,
Because that's about the only sane way to deal with memory
exhaustion.

*That* depends largely on the application. [...]

Hm, I'd like to see any application that, from a modern point of view, can
deal safely with a std::bad_alloc other than by terminating.

I have not done things like that in C++ up to now, but I once wrote a Java
application with a dynamically sized cache that could use all available
heap until the memory was needed somewhere else. Such an application
might have a use for catching std::bad_alloc in order to ask the cache to
free some memory and try again.
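
In C++, that might look roughly like the sketch below. The Cache class and
its release_some() member are invented for the illustration, and whether the
retry actually helps depends on the allocator and the operating system.

    #include <cstddef>
    #include <new>       // std::bad_alloc
    #include <vector>

    // Hypothetical cache that can give memory back on request.
    class Cache {
    public:
        bool release_some()                 // drop one cached block, if any
        {
            if (blocks_.empty())
                return false;
            blocks_.pop_back();
            return true;
        }
    private:
        std::vector<std::vector<char> > blocks_;
    };

    // Keep retrying an allocation, shrinking the cache on each failure.
    char* allocate_with_retry(std::size_t n, Cache& cache)
    {
        for (;;) {
            try {
                return new char[n];         // may throw std::bad_alloc
            } catch (std::bad_alloc const&) {
                if (!cache.release_some())  // nothing left to free:
                    throw;                  // give up, propagate the error
            }
        }
    }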

Christof
 
Michael DOUBEZ

I'm positive :)
Read James' recent post in this thread for details. (around "A good
general rule is to require any single class to be responsible for at
most one resource. ...")

See also the "Single responsibility principle"

Requiring that a class be responsible for at most one resource
has nothing to do with the Single Responsibility Principle (SRP).

The first is a sound practice for writing exception-safe code: managing
multiple resources in an exception-safe way tends to be tedious and error
prone. The SRP, on the other hand, is an OO principle for enhancing reuse
and refactoring of code; it is related to separation of concerns (i.e.
obtaining a good functional decomposition).
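
To make the distinction concrete, here is a minimal sketch (class names
invented): a class that directly owns two raw buffers leaks the first one if
the second allocation throws, while a class whose members each manage a
single resource gets exception safety for free.

    #include <cstddef>
    #include <vector>

    // Error prone: one class directly owns two raw buffers.  If the second
    // new[] throws, the first buffer is leaked, because the destructor of a
    // partially constructed object never runs.
    class TwoBuffersUnsafe {
    public:
        explicit TwoBuffersUnsafe(std::size_t n)
            : first_(new char[n]), second_(new char[n]) {} // leak if 2nd throws
        ~TwoBuffersUnsafe() { delete[] first_; delete[] second_; }
    private:
        char* first_;
        char* second_;
    };

    // Exception safe: each member manages exactly one resource, so members
    // already constructed are destroyed automatically if a later member's
    // construction throws.
    class TwoBuffersSafe {
    public:
        explicit TwoBuffersSafe(std::size_t n) : first_(n), second_(n) {}
        // no user written destructor needed
    private:
        std::vector<char> first_;
        std::vector<char> second_;
    };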
 
James Kanze

* James Kanze:
[snip]
To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class'
constructor.
This is just dumb advice.
And it contradicts the advice immediately above: "have a
clear understanding of resource acquisition".
Use types that manage their own memory.
Which is exactly what the "advice" you just called dumb said.
Nope.
The "advice" in the proposed guidelines was to define a
destructor in every class.
Where did you see that?
Top of this quoted section. :)

The quoted section doesn't say that the user has to define a
destructor. It says that the class must "include a destructor
[user defined or implicit] that releases any memory allocated by
the class' constructor." In other words, the class author
should ensure that any memory allocated by the class's
constructor is released in the destructor---there's nothing
there about having to do so explicitly, just ensuring that it is
done.

Which isn't really very good advice, since most of the time, if
the constructor allocates and destructor releases, you really
shouldn't be using dynamic memory anyway. Of the three reasons
for using dynamic memory:

-- lifetime doesn't correspond to a standard lifetime: if the
constructor allocates, and the destructor releases, it's
exactly the standard lifetime of a class member;

-- size unknown at compile time: that's what std::vector et al.
are for;

-- type unknown at compile time (a polymorphic delegate, for
example); boost::scoped_ptr would certainly be worth
considering.

Judging by the software I've seen (in many different domains,
but certainly not all), I'd say that the last reason is the
least frequent.
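
For the second and third cases, a sketch of the sort of thing I mean (the
Strategy and Holder names are just for the example):

    #include <cstddef>
    #include <vector>
    #include <boost/scoped_ptr.hpp>

    // Invented polymorphic delegate type for the example.
    class Strategy {
    public:
        virtual ~Strategy() {}
        virtual double apply(double x) const = 0;
    };

    class Holder {
    public:
        // Size unknown at compile time: std::vector manages that memory.
        // Type unknown at compile time: boost::scoped_ptr owns the delegate.
        Holder(std::size_t n, Strategy* s)
            : data_(n), strategy_(s) {}
        // No user written destructor: each member releases its own memory.
    private:
        std::vector<double>          data_;
        boost::scoped_ptr<Strategy>  strategy_;
    };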

The rule is also stupid because, of course, classes which do
allocate memory which they own (like std::vector) don't
necessarily do so only in the constructor. It (sort of) implies
that std::vector could skip the destructor, since it allocated
the memory in push_back(), and not in the constructor. (I'm
pretty sure that that's not what was meant, but that's what it
literally says.)
The statement about "all classes should" is in a coding
guideline.

And the following word is "include", followed by a description
of what the destructor should do. Obviously, all classes
include a destructor, but not all include a destructor "that
releases any memory allocated by the class' constructor". So the
coding guideline is to ensure that the destructor it includes does
release any memory allocated by the class' constructor. By
whatever means appropriate---making the member a
boost::scoped_ptr<T> instead of a T* would be one means. (Note
that if the class contains a boost::scoped_ptr, it likely needs
a user defined destructor anyway. But that's another issue.)
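
For example, something along these lines (just a sketch, with an invented
Impl class). The reason a user defined destructor is likely needed anyway is
presumably that boost::scoped_ptr's deleter requires the complete type, so
when the pointee is only forward declared in the header, the destructor has
to be defined where the type is complete:

    // widget.h -- the pointee is only forward declared here
    #include <boost/scoped_ptr.hpp>

    class Impl;

    class Widget {
    public:
        Widget();
        ~Widget();                      // user declared: must be defined
                                        // where Impl is a complete type
    private:
        boost::scoped_ptr<Impl> impl_;  // instead of a raw Impl*
    };

    // widget.cpp
    class Impl { /* ... */ };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() {}                // scoped_ptr releases the memory here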
[snip]
What about a server whose requests can contain arbitrary
expressions (e.g. in ASN.1 or XML---both of which support
nesting)? The server parses the requests into a tree; since
the total size and the types of the individual nodes aren't
known at compile time, it must use dynamic allocation. So
what happens when you receive a request with literally
billions of nodes? Do you terminate the server? Do you
artificially limit the number of nodes so that you can't run
out of memory? (But the limit would have to be unreasonably
small, since you don't want to crash if you receive the
requests from several different clients, in different
threads.) Or do you catch bad_alloc (and stack overflow,
which requires implementation specific code), free up the
already allocated nodes, and return an "insufficient
resources" error.
I haven't done that, and as I recall it's one of those things
that can be debated for years with no clear conclusion. E.g.
the "jumping rabbit" (whatever that is in French)

(Do you mean "chaud lapin"? That's not "jumping rabbit", but "hot
rabbit". With definite lubricious overtones.)
maintained such a never-ending thread over in clc++m. But I
think I'd go for the solution of a sub-allocator with simple
quota management.

That's also a possible solution. I'm certainly not saying that
handling bad_alloc is the only possible solution. Depending on
other factors, it may be the easiest or most appropriate,
however. E.g. if you're using some sort of explicit stack,
rather than a recursive parser, or if you can also arrange to
get a bad_alloc or some other error message on stack overflow.
And you're on a correctly configured OS, which will correctly
report insufficient memory.
After all, when it works well for disk space management on
file servers, why not for main memory for this?

Different number of users? Different size of available
resources? Different use patterns? It might work, but there
are enough differences that you can't suppose that it will (or
that it will be the best solution).
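
The sort of thing I understand by a quota managing sub-allocator would be
roughly the following (names invented, no thread safety, and the underlying
operator new can of course still throw):

    #include <cstddef>
    #include <new>
    #include <stdexcept>

    // Hypothetical per-request allocator with a simple byte quota.
    class QuotaAllocator {
    public:
        explicit QuotaAllocator(std::size_t quota) : quota_(quota), used_(0) {}

        void* allocate(std::size_t n)
        {
            if (used_ + n > quota_)
                throw std::runtime_error("request exceeds its memory quota");
            void* p = ::operator new(n);  // may still throw std::bad_alloc
            used_ += n;
            return p;
        }

        void deallocate(void* p, std::size_t n)
        {
            ::operator delete(p);
            used_ -= n;
        }

    private:
        std::size_t quota_;
        std::size_t used_;
    };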
Disclaimer: until I've tried on this problem a large number of
times, and failed a large number of times with various aspects
of it, I don't have more than an overall gut-feeling "this
should work" idea; e.g. I can imagine e.g. Windows finding
very nasty ways to undermine the scheme... :)

The biggest problem with trying to catch bad_alloc is that some
systems undermine it. Linux, for example, unless you configure
it specially. Maybe Windows as well; the one time I
experimented with it under Windows (but that was Windows NT with
VC++ 6.0---a long time ago), rather than getting bad_alloc, the
system suspended my process and brought up a pop-up window
suggesting that I terminate other processes. Not a very useful
reaction for a server, where there's no one in front of the
screen, and the connection will time out if I don't respond in a
limited time.
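
Concretely, the structure I have in mind is roughly the following. It is
only a sketch: the node and parser types are invented stand-ins, and it
assumes the OS actually delivers bad_alloc rather than overcommitting.

    #include <new>
    #include <string>
    #include <vector>

    // Invented stand-ins for the parsed request: the nodes are kept in a
    // flat vector here just to keep the sketch short; what matters is that
    // destroying the RequestTree releases every node built so far.
    struct Node {
        std::string label;
    };

    struct RequestTree {
        std::vector<Node> nodes;
    };

    // Invented parser: one node per input character, so that a huge request
    // really can exhaust memory.
    RequestTree parse_request(std::string const& raw)
    {
        RequestTree tree;
        for (std::string::size_type i = 0; i != raw.size(); ++i) {
            Node node;
            node.label.assign(1, raw[i]);
            tree.nodes.push_back(node);     // may throw std::bad_alloc
        }
        return tree;
    }

    std::string handle_request(std::string const& raw)
    {
        try {
            RequestTree tree = parse_request(raw);
            // ... evaluate the request and build the real reply ...
            return "ok";
        } catch (std::bad_alloc const&) {
            // Whatever parse_request had built is destroyed during stack
            // unwinding, so those nodes are already back in the free store.
            return "error: insufficient resources";
        }
    }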
 
Alf P. Steinbach

* James Kanze:
* James Kanze:
[snip]
To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class'
constructor.
This is just dumb advice.
And it contradicts the advice immediately above: "have a
clear understanding of resource acquisition".
Use types that manage their own memory.
Which is exactly what the "advice" you just called dumb said.
Nope.
The "advice" in the proposed guidelines was to define a
destructor in every class.
Where did you see that?
Top of this quoted section. :)

The quoted section doesn't say that the user has to define a
destructor. It says that the class must "include a destructor
[user defined or implicit] that releases any memory allocated by
the class' constructor." In other words, the class author
should ensure that any memory allocated by the class's
constructor is released in the destructor---there's nothing
there about having to do so explicitly, just ensuring that it is
done.

As I see it your interpretation of the wording is meaningless, but we'll
probably not agree on that, so...


[snip]
The biggest problem with trying to catch bad_alloc is that some
systems undermine it. Linux, for example, unless you configure
it specially. Maybe Windows as well; the one time I
experimented with it under Windows (but that was Windows NT with
VC++ 6.0---a long time ago), rather than getting bad_alloc, the
system suspended my process and brought up a pop-up window
suggesting that I terminate other processes. Not a very useful
reaction for a server, where there's no one in front of the
screen, and the connection will time out if I don't respond in a
limited time.

The reasonable approach for allocating a large chunk of memory for e.g. a
document or for a sub-allocator would IMHO be to use new(std::nothrow), because
the possibility of allocation failure is then part of the normal case outcome.
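
Something like this (just a sketch):

    #include <cstddef>
    #include <new>       // std::nothrow

    // Try to grab a large working buffer up front; a null result is a
    // normal, expected outcome rather than an exception.
    char* acquire_pool(std::size_t bytes)
    {
        char* p = new (std::nothrow) char[bytes];
        if (p == 0) {
            // Allocation failure is part of the normal case here: fall back
            // to a smaller pool, tell the user the document is too large, etc.
        }
        return p;
    }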

For Windows one would probably have to call SetErrorMode at app startup.

The documentation doesn't mention whether it affects out-of-memory handling but
the documentation is in general incomplete, and it's not unlikely.


Cheers & hth.,

- Alf
 
