Minimizing Dynamic Memory Allocation


ma740988

I'm perusing the coding standards put together by hired help and I'm
having a hard time making it pass section 2.1. Section 2.1 reads

-------
Section 2.1 Minimizing Dynamic Memory Allocation
Avoid the use of C's malloc and free. Instead use (sparingly) C++'s
new and delete operators. To avoid memory leaks, a clear
understanding of resource acquisition (allocation and release) is
important. To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class' constructor. To
ensure that destructors are called, class constructors should be
declared

foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }

and not

foo::foo ( a, b ) { a = ...; b = ...; ... }

because failure in the constructor after initialization of b will call
the destructors for a and b in the first definition, but not in the
second; thus in the second there is a potential memory leak.

Finally, always set a 'new_handler' using the built-in function
set_new_handler. The default new_handler terminates the program when
it cannot satisfy a memory request. Program termination at a critical
time may be disastrous.
-------
For starters, I'd be lying if I said I've never opted for the 'and
not' condition on occasions.
Number 2: Why would a failure in the second definition bypass the
destructors for a and b? If I don't have a fully constructed foo
_after_ the initialization of b, the destructors for a and b ought to
be called in both definitions.

Thanks
 

Kai-Uwe Bux

ma740988 said:
I'm perusing the coding standards put together by hired help and I'm
having a hard time making it pass section 2.1. Section 2.1 reads

-------
Section 2.1 Minimizing Dynamic Memory Allocation
Avoid the use of C's malloc and free. Instead use (sparingly) C++'s
new and delete operators. To avoid memory leaks, a clear
understanding of resource acquisition (allocation and release) is
important. To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class' constructor. To
ensure that destructors are called, class constructors should be
declared

foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }

and not

foo::foo ( a, b ) { a = ...; b = ...; ... }

because failure in the constructor after initialization of b will call
the destructors for a and b in the first definition, but not in the
second; thus in the second there is a potential memory leak.

I disagree with this coding guideline. As far as memory leaks and pointer
members are concerned, the "and not" option seems the way to go. Consider

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

This leaks the pointee for a if allocation for b throws: the destructor for
the pointer a will be called, but that destructor does not deallocate the
memory pointed to. You get a similar problem with class pointers instead of
int pointers when the constructors for the b pointee throws.

In order to avoid this, auto_ptr is the intended way:

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j ) {
        std::auto_ptr<int> aa ( new int ( i ) );
        std::auto_ptr<int> bb ( new int ( j ) );
        a = aa.release();
        b = bb.release();
    }
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

Note: you do not need to initialize the pointers a and b to 0 because if the
constructor for foo fails, the destructor ~foo() will not be called, which
means we don't delete possibly non-null pointers.

I don't see how you can initialize the pointers a and b in the initializer
list to their final values _without_ the problem of a possible leak (unless
you use a smart pointer of some flavor).
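With a smart pointer of some flavor, it might look like this (a sketch only: std::unique_ptr stands in for the C++03-era auto_ptr, since auto_ptr was later removed from the language, and the class name foo is reused purely for illustration):

```cpp
#include <memory>

// Sketch: with smart-pointer members, the initializer list is
// leak-safe. If "new int ( j )" throws, the already-constructed
// member a is destroyed during stack unwinding and its pointee
// is released automatically.
class foo {
    std::unique_ptr<int> a;
    std::unique_ptr<int> b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    int sum () const { return *a + *b; }
    // no user-written destructor: each member cleans up itself
};
```

Here the members are initialized to their final values in the initializer list, and no leak is possible.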

Finally, always set a 'new_handler' using the built-in function
set_new_handler. The default new_handler terminates the program when
it cannot satisfy a memory request.

Incorrect: by default a new expression throws an exception if memory
allocation fails. Only if that exception goes uncaught is the program
terminated.
Program termination at a critical time may be disastrous.

BTW: The program running in an inconsistent state is probably more
disastrous (do you want a runaway program in a bank to buy tons of
stock options, or to crash?).
-------
For starters, I'd be lying if I said I've never opted for the 'and
not' condition on occasions.
Number 2: Why would a failure in the second definition bypass the
destructors for a and b? If I don't have a fully constructed foo
_after_ the initialization of b, the destructors for a and b ought to
be called in both definitions.

Yes: I think if a and b are of class type, the constructors for the
subobjects a and b will _always_ be called before the body of the
foo-constructor is entered. Only, if you omit the initializer list, the
default constructors will be called. If the constructor for foo fails in
the body, all constructed subobjects will be destroyed properly.
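This can be observed directly; a small sketch (all names invented) uses a counting member to show that both subobjects are destroyed when the constructor body throws:

```cpp
#include <stdexcept>

// Demo: count live subobjects to observe that members constructed
// before a constructor-body throw are destroyed during unwinding.
struct Tracker {
    static int live;
    Tracker()  { ++live; }
    ~Tracker() { --live; }
};
int Tracker::live = 0;

struct Foo {
    Tracker a;
    Tracker b;
    Foo() { throw std::runtime_error("failure in ctor body"); }
    // a and b are fully constructed before the body runs, so both
    // destructors execute when the exception propagates.
};

bool subobjects_destroyed() {
    try {
        Foo f;
    } catch (std::runtime_error&) {
        // expected: construction of Foo itself failed
    }
    return Tracker::live == 0;   // both a and b were destroyed
}
```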


Best

Kai-Uwe Bux
 

Alf P. Steinbach

* ma740988:
I'm perusing the coding standards put together by hired help

Buy Andrei & whoever-it-was' C++ coding guidelines book, one per developer.

and I'm
having a hard time making it pass section 2.1. Section 2.1 reads

Good advice for general C++, but of course there are no absolutes, e.g.,
sometimes some library routine will require you to 'malloc' some argument or
'free' some result.

Instead use (sparingly) C++'s
new and delete operators.

Also good advice, including that word 'sparingly'.

To avoid memory leaks, a clear
understanding of resource acquisition (allocation and release) is
important.

This is true.

To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class' constructor.

This is just dumb advice.

Use types that manage their own memory.

Use smart pointers.

To
ensure that destructors are called, class constructors should be
declared

foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }

and not

foo::foo ( a, b ) { a = ...; b = ...; ... }

because failure in the constructor after initialization of b will call
the destructors for a and b in the first definition, but not in the
second; thus in the second there is a potential memory leak.

This is incorrect.

However, it's generally a good idea to use initializer lists rather than
assignments.

See the FAQ.

Finally, always set a 'new_handler' using the built-in function
set_new_handler.

This is good advice.

The default new_handler terminates the program when
it cannot satisfy a memory request.

This is incorrect.

Program termination at a critical
time may be disastrous.

And this is just dumb.

The reason you should set a new_handler is in order to override the default
exception throwing behavior and instead let it terminate the program.

Because that's about the only sane way to deal with memory exhaustion.
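As a minimal sketch of that advice (the handler body here is purely illustrative), installing a handler that logs and terminates replaces the default throwing behavior with failure at a known point:

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Illustrative handler: log the condition and terminate, rather
// than letting std::bad_alloc propagate from an arbitrary point.
void out_of_memory() {
    std::fputs("out of memory -- shutting down\n", stderr);
    std::abort();
}

void install_handler() {
    // std::set_new_handler returns the previously installed handler
    std::set_new_handler(out_of_memory);
}
```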

-------
For starters, I'd be lying if I said I've never opted for the 'and
not' condition on occasions.
?


Number 2: Why would a failure in second condition bypass the
destructor for a and b? If I don't have a fully constructed foo
_after_ the initialization of b, the destructor for a and b ought to
be called for both definitions.

Right.


Cheers, & hth.,

- Alf
 

pasa

IMO that section is at least fishy. I suggest you read other
standards, especially ones with rationale and references. (Sutter/
Alexandrescu: C++ Coding Standards is a good start)

Just a few fast comments:
- using new instead of malloc does not reduce the number of dynamic
allocations a single bit; the preference for new is based on different
stuff, and the real motivation is to use neither -- but string,
vector, list, etc.

- writing destructors is strictly discouraged -- instead you shall use
RAII members and empty/autogenerated dtors

- in the second example, the dtors of a and b are definitely called.

- set_new_handler is hyper-advanced stuff for those who can write
project standards, not just read them.
 

anon

Kai-Uwe Bux said:
ma740988 wrote: [snip]
class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

This leaks the pointee for a if allocation for b throws: the destructor for
the pointer a will be called, but that destructor does not deallocate the
memory pointed to. You get a similar problem with class pointers instead of
int pointers when the constructors for the b pointee throws.

In order to avoid this, auto_ptr is the intended way:

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j ) {
        std::auto_ptr<int> aa ( new int ( i ) );
        std::auto_ptr<int> bb ( new int ( j ) );
        a = aa.release();
        b = bb.release();
    }
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

I was wondering : why didn't you suggest this right away?

class foo {
    std::auto_ptr< int > a;
    std::auto_ptr< int > b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
    }
};
 

James Kanze

* ma740988:

[...]
Good advice for general C++, but of course there are no
absolutes, e.g., sometimes some library routine will require
you to 'malloc' some argument or 'free' some result.

Then apply rule 0 (or what should be rule 0 in any reasonable
coding guidelines): any other rule in the guideline can be
violated if there is a justifiable reason for doing so. This
reason must be documented in the code (comment, etc.), and
approved by...

They're called guidelines, and not laws, for a reason. In the
end, if it passes code review, it's OK. Violating a guideline
is a justifiable reason for the reviewers to reject it, but
they aren't required to do so, if they're convinced that the
violation was justified.

Things that can be automatically checked (like use of
malloc/free) may require some sort of special comment to turn
the checking off in a particular case.
Also good advice, including that word 'sparingly'.

For a definition of 'sparingly' which depends on the
application. I can easily imagine numeric applications where no
explicit use of new or delete is appropriate. And I've worked
on server applications where almost everything was allocated
dynamically, by necessity. The real rule is more complex; it
implies understanding what you are doing. Some simple rules can
be applied, however:

-- Never use dynamic allocation if the lifetime corresponds to
a standard lifetime and the type and size are known at
compile time.

-- If the object type doesn't require identity, and the
lifetime does not correspond to a standard lifetime, copy it
rather than using dynamic allocation, unless the profiler
says otherwise.
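Those two rules can be sketched roughly as follows (function names are invented for the example): a known type, size, and scoped lifetime means a plain automatic object, and a size known only at run time can be left to a container:

```cpp
#include <string>
#include <vector>

// Rule 1: type and size known at compile time, standard lifetime
// -> automatic object, no new/delete anywhere.
std::string make_greeting(const std::string& name) {
    std::string s;
    s = "hello, " + name;
    return s;            // no identity required: just copy it out
}

// Rule 2 (and the size-unknown case): n is known only at run time,
// so let a standard container own the dynamic memory.
std::vector<int> first_squares(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i)
        v.push_back(i * i);
    return v;
}
```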
This is true.
This is just dumb advice.

And it contradicts the advice immediately above: "have a clear
understanding of resource acquisition".
Use types that manage their own memory.

Which is exactly what the "advice" you just called dumb said.

There are two (and I think only two) cases where dynamic
allocation is appropriate: the lifetime of the object doesn't
correspond to a standard lifetime, or the size or type aren't
known at compile time. In the second case, having a type which
manages its own memory, and frees it in the destructor, is
definitely the preferred solution. In the first case (by far the
most prevalent in my applications), it really doesn't mean
anything.
Use smart pointers.

Whatever? Why does making the managing class look like a
pointer change anything? (There are cases where smart pointers
are appropriate, but they don't solve all problems.)
This is incorrect.
However, it's generally a good idea to use initializer lists
rather than assignments.
See the FAQ.

See also the response by Kai-Uwe. Just using initializer lists
can make things worse.

A good general rule is to require any single class to be
responsible for at most one resource. If a class needs several,
it should spawn the management of each off into a separate base
class or member. (There are times when this simplifies life
even when the derived class only needs a single resource. See
the g++ implementation of std::vector, for example.)
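A minimal sketch of that rule (all names hypothetical): each resource gets its own self-managing member, so the outer class needs no destructor, and a throw while acquiring the second resource still frees the first:

```cpp
#include <cstddef>
#include <memory>
#include <string>

// Hypothetical class needing two resources: each is spawned off
// into its own single-responsibility owner (here just unique_ptr).
class Buffer {
    std::unique_ptr<int[]>       data_;  // resource #1, self-managing
    std::unique_ptr<std::string> tag_;   // resource #2, self-managing
public:
    Buffer(std::size_t n, const std::string& tag)
        : data_(new int[n])
        , tag_(new std::string(tag))
    {}
    const std::string& tag() const { return *tag_; }
    // no ~Buffer(): each member releases its own resource, and if
    // constructing tag_ throws, data_ is still freed during unwinding.
};
```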
This is good advice.
This is incorrect.
And this is just dumb.

More to the point, it doesn't mean anything. And it's not
implementable, at least not reasonably. If the OS crashes, for
example, the program will terminate, or it may terminate because
of a hardware failure. The total system must be so designed
that this isn't "disastrous" (i.e. it doesn't result in a loss
of lives or destruction of property).
The reason you should set a new_handler is in order to
override the default exception throwing behavior and instead
let it terminate the program.
Because that's about the only sane way to deal with memory
exhaustion.

*That* depends largely on the application. In many cases,
you're right, but certainly not all, and there are applications
which can (and must) recover correctly to insufficient memory.
(It's tricky to get right, of course, since you also have to
protect against things like stack overflow and buggy systems
which tell you you've got memory when you haven't.)
 

Christof Donat

Hi,
- writing destructors is strictly discouraged

Huh, that is new to me. Where did you get that from?
-- instead you shall use
RAII members and empty/autogenerated dtors

Are you sure you really are familiar with all the implications of RAII?

Christof
 

pasa

- writing destructors is strictly discouraged
Huh, that is new to me. Where did you get that from?
Are you sure you really are familiar with all the implications of RAII?

I'm positive :)
Read James' recent post in this thread for details. (around "A good
general rule is to require any single class to be responsible for at
most one resource. ...")

See also the "Single responsibility principle"
 

Bart van Ingen Schenau

I'm positive :)
Read James' recent post in this thread for details. (around "A good
general rule is to require any single class to be responsible for at
most one resource. ...")

See also the "Single responsibility principle"

It seems you seriously misunderstand what it means to manage a
resource.
If a class is responsible for managing a resource (a single resource
preferably), then that class most definitely needs a destructor to
clean up after itself.

The only classes that don't need a user-written destructor are those
that don't need to do any cleanup when they are destroyed. This is
typically because those classes don't manage any resources.
Depending on what you are writing, this could be a majority of the
classes, but it still isn't enough to say that you should try to avoid
writing destructors.

Bart v Ingen Schenau
 

pasa

And you miss the separation of responsibilities.
We have RAII/RIID/manager/etc. classes -- whose responsibility is to
manage a single resource. Examples: auto_ptr, string, vector, CFile,
CSingleLock, shared_ptr. They normally reside in libraries, ready to
use. Occasionally there is some really new resource to cover, but
that is the exception.

Then there are the normal user classes this whole discussion is about.
They, instead of attempting to manage resources directly, shall use
the classes mentioned before. As members, base classes, locally, etc.
Do that, and you need not write a destructor (or copy ctor, or op=)
because cleanup is done properly.

Most real-life problems come when a user class instead starts using
raw pointers or raw resources, obviously messing up somewhere the
first 5+ times, only to eventually discover using a manager class.

Care to point out what I miss? ;-)
 

Kai-Uwe Bux

anon said:
Kai-Uwe Bux said:
ma740988 wrote: [snip]
class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

This leaks the pointee for a if allocation for b throws: the destructor
for the pointer a will be called, but that destructor does not deallocate
the memory pointed to. You get a similar problem with class pointers
instead of int pointers when the constructors for the b pointee throws.

In order to avoid this, auto_ptr is the intended way:

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j ) {
        std::auto_ptr<int> aa ( new int ( i ) );
        std::auto_ptr<int> bb ( new int ( j ) );
        a = aa.release();
        b = bb.release();
    }
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

I was wondering : why didn't you suggest this right away?

class foo {
    std::auto_ptr< int > a;
    std::auto_ptr< int > b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
    }
};

Because of the context you snipped: I wanted to show how to initialize
pointer members and not how to replace those by other types that do their
own memory management. Clearly, the coding guidelines steered me too far
away from RAII. After all, in that case, I would just do:

class foo {
    int a;
    int b;
public:
    ...
};

Nonetheless, you are right that looking for the right smart pointer (or any
other class managing its own memory) can be a good idea.


Best

Kai-Uwe Bux
 

peter koch

Hi,


Huh, that is new to me. Where did you get that from?


Are you sure you really are familiar with all the implications of RAII?
Actually I agree more or less with the OP's statement. Not because you
should not write destructors in general but because the principle that
classes should do only one thing typically leads to that design.

/Peter
 

pasa

Several posts have overreacted to the original, slightly vague, post.

IMNSHO a vague guideline has negative value. The one we're talking
about is worse: it is strictly incorrect in its statements (i.e.
claiming dtors are not called).

Sure you (and I and many other mentors) can write down what it
*should* (or even was trying to) mean, but the readers have the actual
text.

It did confuse the original poster and was fairly criticised by others
-- let's admit it missed the mark set for a "good guideline": something
providing clear leading, raising no questions or eyebrows.
 

tonytech08

- set_new_handler is hyper-advanced stuff fro those who can write
project standards, not just reading them.

If _I_ "get it", I assure you it is not "hyper-advanced". In the
simplest scenario, set_new_handler is used to avoid exceptions in
codebases or application programs where exceptions are an off-limits
subsystem. Simple as pie. (Or "bad analogy", depending on "which side
of the tracks" your mind is on? ;)).
 

Alf P. Steinbach

* James Kanze:
* ma740988:
[...]
and I'm having a hard time making it pass section 2.1.
Section 2.1 reads
Good advice for general C++, but of course there are no
absolutes, e.g., sometimes some library routine will require
you to 'malloc' some argument or 'free' some result.

Then apply rule 0 (or what should be rule 0 in any reasonable
coding guidelines): any other rule in the guideline can be
violated if there is a justifiable reason for doing so. This
reason must be documented in the code (comment, etc.), and
approved by...

They're called guidelines, and not laws, for a reason. In the
end, if it passes code review, it's OK. Violating a guideline
is a justifiable reason for the reviewers to reject it, but
they aren't required to do so, if they're convinced that the
violation was justified.

Code reviews are less often performed in the real world than you seem to think.

On the other hand, your idea of a rule 0 in any coding guideline is a very good one.

For if programming could be reduced to a simple set of mechanical rules it would
have been so reduced, long ago.

Things that can be automatically checked (like use of
malloc/free) may require some sort of special comment to turn
the checking off in a particular case.

It seems you're envisioning some automated code-guideline-conformity checker.

Given the level of the OP's firm's new (proposed) guidelines that's an entirely
unreasonable supposition.

There's also the point that using such a checker would amount to having
given up on quality and instead focusing on enforcing unthinking rule
conformity, and that kind of conformity is not a good idea when what you
need is intelligent, creative staff.

For a definition of 'sparingly' which depends on the
application. I can easily imagine numeric applications where no
explicit use of new or delete is appropriate. And I've worked
on server applications where almost everything was allocated
dynamically, by necessity.

Which does not mean it's a good idea to pepper the code with 'new' calls.

Centralize that functionality.

The real rule is more complex; it
implies understanding what you are doing. Some simple rules can
be applied, however:

-- Never use dynamic allocation if the lifetime corresponds to
a standard lifetime and the type and size are known at
compile time.

boost::scoped_ptr, bye bye. :)

But I'm sure you're thinking of some given application domain familiar to you.

-- If the object type doesn't require identity, and the
lifetime does not correspond to a standard lifetime, copy it
rather than using dynamic allocation, unless the profiler
says otherwise.




And it contradicts the advice immediately above: "have a clear
understanding of resource acquisition".


Which is exactly what the "advice" you just called dumb said.

Nope.

The "advice" in the proposed guidelines was to define a destructor in every
class. That's dumb on its own, as opposed to using resource-specific management
wrappers such as smart pointers and standard library container classes. And it's
even more dumb, downright catastrophic advice, when the author of the guidelines
isn't familiar with rule of 3 and forgets to mention it here.

The opposite of that approach is to use types that manage their own memory,
i.e., the opposite of mindless duplication and continuous re-invention of
resource management is to centralize it.

There are two (and I think only two) cases where dynamic
allocation is appropriate: the lifetime of the object doesn't
correspond to a standard lifetime, or the size or type aren't
known at compile time. In the second case, having a type which
manages its own memory, and frees it in the destructor, is
definitely the preferred solution. In the first case (by far the
most prevalent in my applications), it really doesn't mean
anything.


Whatever? Why does making the managing class look like a
pointer change anything?

Making the managing class look like a pointer does change a few things, mainly
because of the language's operator-> support, but that change has nothing to do
with what was discussed.

(There are cases where smart pointers
are appropriate, but they don't solve all problems.)

Right, ANY LANGUAGE FEATURE OR TECHNIQUE is appropriate in some cases and
inappropriate in some other cases.

See also the response by Kai-Uwe. Just using initializer lists
can make things worse.

It seems somebody has argued that "just using initializer lists" will correct
"things", presumably some problem.

But why are you remarking on that to me?

A good general rule is to require any single class to be
responsible for at most one resource. If a class needs several,
it should spawn the management of each off into a separate base
class or member. (There are times when this simplifies life
even when the derived class only needs a single resource. See
the g++ implementation of std::vector, for example.)

As a general rule I agree with that. Good advice.

More to the point, it doesn't mean anything. And it's not
implementable, at least not reasonably. If the OS crashes, for
example, the program will terminate, or it may terminate because
of a hardware failure. The total system must be so designed
that this isn't "disastrous" (i.e. it doesn't result in a loss
of lives or destruction of property).



*That* depends largely on the application. In many cases,
you're right, but certainly not all, and there are applications
which can (and must) recover correctly to insufficient memory.
(It's tricky to get right, of course, since you also have to
protect against things like stack overflow and buggy systems
which tell you you've got memory when you haven't.)

Hm, I'd like to see any application that, from a modern point of view, can deal
safely with a std::bad_alloc other than by terminating.

However, if you look up old clc++m threads you'll perhaps find the long one
where I argued with Dave Abrahams that one reasonable approach, for some
applications, is to use "hard" exceptions in order to clean up upwards in the
call stack before terminating (the problem being that C++ lacks support for
"hard" exceptions, so that the scheme would have to rely on convention, which is
always brittle).

E.g. it might be possible to save an open document to disk using only finite
pre-allocated resources.

But still I think the only sane way to deal with memory exhaustion is to
terminate, whether it happens more or less directly at the detection point or
via some "hard" exception.

Then it may perhaps be possible to do a "bird of Phoenix" thing, as e.g. the
Windows Explorer shell often does, or perhaps just a reboot. Or one might have
three machines running and voting, as in the space shuttle. Or whatever, but
continuing on when memory has been exhausted is, IMHO, simply a Very Bad Idea.


Cheers, :)

- Alf
 

Bart van Ingen Schenau

And you miss the separation of responsibilities.
We have RAII/RIID/manager/etc classes -- that have responsibility to
manage a single resource. Examples: auto_ptr, string, vector, CFile,
CSingleLock, shared_ptr.   They normally reside in libraries ready to
use. Occasionally there is some really new resource to cover, but
it is more the exception.

No, I don't miss the separation of responsibilities. But I try to
write my coding standards in such a way that they can be applied
equally to both the common, non-resource owning, user classes and the
more exceptional resource manager classes.

Although every coding standard I know has a clause that you may break
the rules if you have a sufficiently good reason, you still have to
explain yourself in a code review and might even have to get a formal
management sign-off on the deviation. That will be enough of a hassle
for some programmers that they will just not write their resource
managers in a RAII fashion.
For that reason I won't actively discourage the writing of
destructors, but instead encourage practices like separation of
responsibilities, re-use of existing components and creating classes
with RAII semantics.

Bart v Ingen Schenau
 

James Kanze

* James Kanze:
* ma740988:
[...]
and I'm having a hard time making it pass section 2.1.
Section 2.1 reads
-------
Section 2.1 Minimizing Dynamic Memory Allocation
Avoid the use of C's malloc and free.
Good advice for general C++, but of course there are no
absolutes, e.g., sometimes some library routine will require
you to 'malloc' some argument or 'free' some result.
Then apply rule 0 (or what should be rule 0 in any reasonable
coding guidelines): any other rule in the guideline can be
violated if there is a justifiable reason for doing so. This
reason must be documented in the code (comment, etc.), and
approved by...
They're called guidelines, and not laws, for a reason. In the
end, if it passes code review, it's OK. Violating a guideline
is a justifiable reason for the reviewers to reject it, but
they aren't required to do so, if they're convinced that the
violation was justified.
Code reviews are less often performed in the real world than
you seem to think.

Probably. But without code reviews, there's no point in having
coding guidelines to begin with.
On the other hand, your idea of a rule 0 in any coding
guideline is a very good one.

It's interesting, because it is one of the first things I read
about coding guidelines.
For if programming could be reduced to a simple set of
mechanical rules it would have been so reduced, long ago.

Yep. And we'd be out of work (or at least paid a lot, lot
less):).
It seems you're envisioning some automated
code-guideline-conformity checker.

Yes and no. If some of the conformity checking is automated,
then you need some special handling for cases where rule 0 was
applied. (I've actually seen it done.)
Given the level of the OP's firm's new (proposed) guidelines
that's an entirely unreasonable supposition.

I have no idea what is going on in the OP's organization. But
I've seen some very badly organized places try to use automated
tools to impose bad rules. My point is just that if automated
checking is envisaged, then some provision for rule 0 must be
made.
There's also the point that using such a checker would amount
to have given up on quality and instead focusing on enforcing
unthinking rule conformity, and that kind of conformity is not
a good idea when what you need is intelligent, creative staff.

Some degree of rule conformity is a good idea, although it
certainly doesn't replace an intelligent, creative staff. And
it's always a good idea to automate what you can. What little
experience I've had with automated rule checking has been very
positive, but it was with a very good organization, which
understood the software development process very well. I've no
doubt that it can be seriously misused, however.
Which does not mean it's a good idea to pepper the code with
'new' calls.
Centralize that functionality.

Agreed. There should normally be few instances of new
expressions, even in applications where most objects are
dynamically allocated.

I'm sure that there are exceptions to even this, of course. My
real point is not that you are wrong, per se, but that the
definition of 'sparingly' will depend on the application.
boost::scoped_ptr, bye bye. :)

If the type and size are not known at compile time, it's
actually a very good solution. If the type and size are known,
and the lifetime corresponds to the scope, why would you
dynamically allocate?
But I'm sure you're thinking of some given application domain
familiar to you.

In this case, I think it's a general rule, applicable to all
applications. If the type and size are known at compile time,
and the lifetime corresponds to standard lifetime (i.e. a
lifetime defined by the language standard), then you should not
allocate dynamically.
The "advice" in the proposed guidelines was to define a
destructor in every class.

Where did you see that? I don't see anything about "defining a
destructor". Just ensuring that the class has one (which all
classes do), and that it deletes any memory allocated in the
constructor. The obvious way of achieving that would be for all
of the pointers to memory allocated by the class to be scoped_ptr.
That's dumb on its own, as opposed to using resource-specific
management wrappers such as smart pointers and standard
library container classes. And it's even more dumb, downright
catastrophic advice, when the author of the guidelines isn't
familiar with rule of 3 and forgets to mention it here.

Hopefully, there would be other rules, concerning when and how
to support copy and assignment. Given the way this rule is
formulated, however, I can understand your doubts about this.
But they would be "other rules", and not necessarily part of
this one.
The opposite of that approach is to use types that manage
their own memory, i.e., the opposite of mindless duplication
and continuous re-invention of resource management is to
centralize it.

The necessity of requiring memory management to be pushed off
into a separate class doesn't seem correct to me. What's
important is to use standard and well understood idioms
(patterns). Thus, if you're using the compilation firewall, the
standard idiom is to provide a non-inline destructor which does
the delete---any attempt to use smart pointers is likely to get
you into problems with instantiating templates over an
incomplete type. (Such problems can be avoided by careful
choice of the pointer type, or by implementing one of your own.
But why bother?)
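The compilation-firewall case might be sketched like this (names invented; in a real project the two halves live in a header and a source file, and a real version would also suppress copying per the rule of three):

```cpp
// --- widget.h (sketch) ---
class Widget {
public:
    Widget();
    ~Widget();           // non-inline: defined where Impl is complete
    int value() const;
private:
    struct Impl;         // incomplete type in the header
    Impl* impl_;         // raw pointer; the ctor/dtor pair manages it
};

// --- widget.cpp (sketch) ---
struct Widget::Impl {
    int value;
    Impl() : value(42) {}
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() { delete impl_; }  // Impl is a complete type here
int Widget::value() const { return impl_->value; }
```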

Again: there are three basic reasons to use dynamic allocation:
the size isn't known at compile time, the type isn't known at
compile time, and the lifetime of the object doesn't correspond
to a standard lifetime. The first is (usually, at least) best
handled by using a standard container, e.g. std::vector. The
second is (usually) best handled by a smart pointer (e.g.
boost::scoped_ptr), and the third needs explicit management---it
may be possible to design a smart pointer to handle it in
explicit cases, but not generally.
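The first two cases can be sketched briefly (std::unique_ptr stands in here
for the boost::scoped_ptr mentioned above; the Shape hierarchy and function
names are invented for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const { return 3.14159 * r * r; }
};
struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const { return s * s; }
};

// Size unknown until run time: a standard container allocates for us.
std::vector<double> makeReadings(std::size_t n) {
    return std::vector<double>(n, 0.0);
}

// Type unknown until run time: a smart pointer owns the polymorphic object.
std::unique_ptr<Shape> makeShape(bool round) {
    if (round) return std::unique_ptr<Shape>(new Circle(1.0));
    return std::unique_ptr<Shape>(new Square(2.0));
}
```

In neither case does the calling code ever write delete, which is exactly
the point being made above.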

Beyond that, there are a few special idioms which require
dynamic allocation for other reasons---things like the
compilation firewall idiom (which is really a variant of not
knowing the exact type, or the details of the type, at least
when compiling client code) or the singleton pattern (but
usually, you're also trying to obtain a non-standard lifetime,
e.g. the destructor is never called---otherwise, you use a local
static). But they're just that, established and well known
patterns.
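For the singleton case, the difference between a standard and a non-standard
lifetime is just where the instance lives (Registry is a made-up name):

```cpp
#include <cassert>

class Registry {
public:
    // Local static: standard lifetime, destructor runs at program exit.
    static Registry& instance() {
        static Registry theInstance;   // constructed on first use
        return theInstance;
    }
    // Heap variant: the destructor is deliberately never called, which
    // sidesteps order-of-destruction problems at shutdown.
    static Registry& leakyInstance() {
        static Registry* p = new Registry;
        return *p;
    }
private:
    Registry() {}
    Registry(Registry const&);
    Registry& operator=(Registry const&);
};
```

Only the second variant needs dynamic allocation, and only because a
non-standard lifetime is wanted; otherwise the local static suffices.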

[...]
As a general rule I agree with that. Good advice.

Finally:).

I think our only difference (here, at least) is that you seem to
be suggesting that this class should usually look like a
pointer. Where as I find that the cases where it should look
like a pointer aren't all that frequent.
Hm, I'd like to see any application that, from a modern point
of view, can deal safely with a std::bad_alloc other than by
terminating.

A well written, robust LDAP server. There's absolutely no
reason why it should crash or otherwise terminate just because
some idiot client sends an impossibly complicated request. And
there's really no reason why it should artificially limit the
complexity of the request.
However, if you look up old clc++m threads you'll perhaps find
the long one where I argued with Dave Abrahams that one
reasonable approach, for some applications, is to use "hard"
exceptions in order to clean up upwards in the call stack
before terminating (the problem being that C++ lacks support
for "hard" exceptions, so that the scheme would have to rely
on convention, which is always brittle).
E.g. it might be possible to save an open document to disk
using only finite pre-allocated resources.
But still I think the only sane way to deal with memory
exhaustion is to terminate, whether it happens more or less
directly at the detection point or via some "hard" exception.
Then it may perhaps be possible to do a "bird of Phoenix"
thing, as e.g. the Windows Explorer shell often does, or
perhaps just a reboot. Or one might have three machines
running and voting, as in the space shuttle. Or whatever, but
continuing on when memory has been exhausted is, IMHO, simply
a Very Bad Idea.

What about a server whose requests can contain arbitrary
expressions (e.g. in ASN.1 or XML---both of which support
nesting)? The server parses the requests into a tree; since the
total size and the types of the individual nodes aren't known at
compile time, it must use dynamic allocation. So what happens
when you receive a request with literally billions of nodes? Do
you terminate the server? Do you artificially limit the number
of nodes so that you can't run out of memory? (But the limit
would have to be unreasonably small, since you don't want to
crash if you receive the requests from several different clients,
in different threads.) Or do you catch bad_alloc (and stack
overflow, which requires implementation specific code), free up
the already allocated nodes, and return an "insufficient
resources" error?
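A toy version of that last recovery path (handleRequest and the flat
"tree" are invented for illustration; a real server would release an actual
node tree as the stack unwinds, and would also need the implementation
specific stack-overflow handling mentioned above):

```cpp
#include <cstddef>
#include <new>
#include <string>

// Builds a request's worth of nodes; reports failure instead of dying.
std::string handleRequest(std::size_t nodeCount) {
    try {
        int* nodes = new int[nodeCount];  // may throw std::bad_alloc
        delete[] nodes;
        return "ok";
    } catch (std::bad_alloc const&) {
        // Whatever was already allocated is freed during unwinding;
        // the server stays up and tells the client it ran out.
        return "insufficient resources";
    }
}
```

The server keeps serving other clients; only the pathological request fails.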

It's not a typical case, and most of the time, I agree with you.
But such cases do exist, and if the OP's organization is dealing
with one, then handling bad_alloc can make sense.
 

James Kanze

On Jan 26, 2:07 pm, (e-mail address removed) wrote:

And that class requires a user-defined destructor. Writing
destructors is encouraged, not discouraged.
It seems you seriously misunderstand what it means to manage a
resource. If a class is responsible for managing a resource
(a single resource preferably), then that class most
definitely needs a destructor to clean-up after itself.
The only classes that don't need a user-written destructor are
those that don't need to do any cleanup when they are
destroyed. This is typically because those classes don't
manage any resources.

Or make any changes to program state. Most of my destructors
are concerned with roll back, not freeing resources.
Depending on what you are writing, this could be a majority of
the classes, but it still isn't enough to say that you should
try to avoid writing destructors.

Note too that the compiler generated destructor is inline. I
tend to write a lot of destructors because of this as well.
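A destructor concerned with roll-back rather than freeing memory might
look like this (ScopedFlag and its members are a made-up example):

```cpp
#include <cassert>

// Sets a flag on entry and rolls it back on scope exit,
// unless commit() was called first.
class ScopedFlag {
    bool& flag_;
    bool  committed_;
public:
    explicit ScopedFlag(bool& f) : flag_(f), committed_(false) { flag_ = true; }
    void commit() { committed_ = true; }
    ~ScopedFlag() { if (!committed_) flag_ = false; }  // roll back the change
private:
    ScopedFlag(ScopedFlag const&);            // copying disabled
    ScopedFlag& operator=(ScopedFlag const&);
};
```

If an exception propagates out of the scope before commit() is reached,
the destructor undoes the state change automatically.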
 

James Kanze

And you miss the separation of responsibilities.
We have RAII/RIID/manager/etc classes -- that have
responsibility to manage a single resource. Examples:
auto_ptr, string, vector, CFile, CSingleLock, shared_ptr.
They normally reside in libraries ready to use. Occasionally
there is some really new resource to cover, but that is the
exception.
Then there are the normal user classes this whole discussion
is about. They, instead of attempting to manage resources
directly, shall use the classes mentioned before. As members,
base classes, locally, etc. You do that, and need not write a
destructor (copy ctor, op=) because cleanup is properly done.
Most real-life problems come when a user class starts using
raw pointers or raw resources directly, inevitably messing
something up the first 5+ times, only to eventually discover
it should have used a manager class.
Care to point out what I miss? ;-)

The fact that library classes only handle general solutions, and
rarely cover the exact case you need. As time goes on, your
personal library will become more and more complete, for your
specific application domain and programming style.

Some things are well covered by the library: if the size isn't
known at compile time, std::vector, std::string et al. seem to
cover most of the cases. If the type isn't known, then you can
consider boost::scoped_ptr, but this still requires that the
pointed-to type be completely defined in the header file,
whereas a simple pointer doesn't, and if there is only one such
pointer, it really doesn't buy us that much. And in many cases,
clean up will be more complicated than simply calling delete.
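Both sides of this can be illustrated briefly (Record and File are invented
names): a class whose members manage themselves needs no destructor at all,
while a raw resource with non-trivial cleanup gets one small wrapper of its own.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// No user-written destructor: every member cleans up after itself.
struct Record {
    std::string name;
    std::vector<int> values;
};

// Cleanup is more than delete: a C FILE* needs fclose, so one small
// wrapper supplies the destructor and everyone else holds the wrapper.
class File {
    std::FILE* fp_;
    File(File const&);               // copying disabled (rule of three)
    File& operator=(File const&);
public:
    File(char const* name, char const* mode) : fp_(std::fopen(name, mode)) {}
    ~File() { if (fp_) std::fclose(fp_); }
    bool isOpen() const { return fp_ != 0; }
};
```

The destructor is written exactly once, in the wrapper, instead of in every
class that happens to hold the resource.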
 

Alf P. Steinbach

* James Kanze:
If the type and size are not known at compile time, it's
actually a very good solution. If the type and size are known,
and the lifetime corresponds to the scope, why would you
dynamically allocate?

Sometimes just for convenience. But it may also be that the class in question is
designed for dynamic allocation only, e.g. a window object class when you're
creating the "main window" (though in this case a standard smart pointer is
usually not good enough).


[snip]
Where did you see that?

Top of this quoted section. :)

I don't see anything about "defining a destructor".

The statement about "all classes should" is in a coding guideline.


[snip]
I think our only difference (here, at least) is that you seem to
be suggesting that this class should usually look like a
pointer. Where as I find that the cases where it should look
like a pointer aren't all that frequent.

Huh, where did you get that impression.


[snip]
What about a server whose requests can contain arbitrary
expressions (e.g. in ASN.1 or XML---both of which support
nesting)? The server parses the requests into a tree; since the
total size and the types of the individual nodes aren't known at
compile time, it must use dynamic allocation. So what happens
when you receive a request with literally billions of nodes? Do
you terminate the server? Do you artificially limit the number
of nodes so that you can't run out of memory? (But the limit
would have to be unreasonably small, since you don't want to
crash if you receive the requests from several different clients,
in different threads.) Or do you catch bad_alloc (and stack
overflow, which requires implementation specific code), free up
the already allocated nodes, and return an "insufficient
resources" error?

I haven't done that, and as I recall it's one of those things that can be
debated for years with no clear conclusion. E.g. the "jumping rabbit" (whatever
that is in French) maintained such a never-ending thread over in clc++m. But I
think I'd go for the solution of a sub-allocator with simple quota management.
After all, if it works well for disk space management on file servers, why not
for main memory here? Disclaimer: until I've tried this problem a large
number of times, and failed a large number of times with various aspects of it,
I don't have more than an overall gut-feeling "this should work" idea; I
can imagine, e.g., Windows finding very nasty ways to undermine the scheme... :)
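A first cut at that quota idea might be nothing more than explicit
accounting (Quota and its member names are invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

// Charges each allocation against a per-request budget; a request
// that exceeds its quota fails cleanly instead of exhausting memory
// for everyone else on the server.
class Quota {
    std::size_t remaining_;
public:
    explicit Quota(std::size_t budget) : remaining_(budget) {}
    bool charge(std::size_t n) {
        if (n > remaining_) return false;   // over quota: refuse the request
        remaining_ -= n;
        return true;
    }
    void refund(std::size_t n) { remaining_ += n; }
    std::size_t remaining() const { return remaining_; }
};
```

A real sub-allocator would wire this into the node-allocation path, but the
policy decision, refuse rather than crash, is already visible here.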


Cheers,

- Alf
 
