Article on possible improvements to C++


Nick Keighley

NickKeighley said:
On 21 Nov, 00:02, Paavo Helde <[email protected]> wrote:
[...] all exceptions propagating
out of a library should be derived from std::exception.
which rules out using much of boost

Like which? All Boost exceptions I have seen have been derived from
std::exception. But I admit I have used only a few libraries.

I'd encountered something called boost::exception which didn't derive
from std::exception. Reading the boost documentation more closely, it
appears you are supposed (or encouraged) to derive from both
std::exception and boost::exception.
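That "derive from both" pattern reads roughly like this. A minimal sketch: boost::exception is stubbed out here so the snippet stands alone (real code would include <boost/exception/exception.hpp>), and parse_error is a made-up example type.

```cpp
#include <exception>

// Stub standing in for the real boost::exception base class, so this
// sketch is self-contained.
namespace boost { struct exception { virtual ~exception() {} }; }

// Boost encourages deriving virtually from both bases, so handlers
// written against either std::exception or boost::exception catch it.
struct parse_error : virtual std::exception, virtual boost::exception {
    const char* what() const noexcept { return "parse error"; }
};
```

A handler written only against std::exception then still catches it: `catch (const std::exception& e) { log(e.what()); }`.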

This is even easier. If you are using a particular exception type, you
can throw and catch it exactly in the points you like.

Oh I know. I was just pointing out that the raison d'être of exceptions
was *not* to produce a human-readable message.
This has nothing
to do with the original problem raised in the thread, at least not as
far as I have understood it, namely that by using multiple libraries one
often does not know exactly which exceptions might arise, and how to deal
with them.

you'd hope the library would document it somehow.

Obviously, if you are using a custom exception just to unwind
the stack, this is no problem.

what's the difference between a "library exception" and a "custom
exception"?
Yes, if the exception is derived from std::exception. It gets more
difficult when it isn't.

there's all sorts of nasty things people do. I liked the one that made
all the exception data private. With no access methods. Were you
supposed to trap it in the debugger?

define "near". What if I want to [break? (I've no idea what word I meant to use here!)]
out of some deeply nested
transaction? The call can't proceed and all the associated resources
need to be freed- some of which are actually physical resources like
radio channels or comms links.

This is fine, and this is exactly what exceptions are meant for.

What I meant by "near" was that it should be in borders of a single
library,

I don't agree

or more generally in the borders of code you directly control.
If the exception passes through another library which you don't have much
control over, it might not get through reliably.

It will if the library is written sanely. Don't catch anything you
don't know how to handle [The Ferret Catchers Handbook]
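The "catch only what you can handle" rule looks like this in practice; load_config and its fallback behaviour are hypothetical, for illustration only.

```cpp
#include <stdexcept>
#include <string>

// Catch only what you know how to handle; let everything else propagate.
std::string load_config(bool missing) {
    try {
        if (missing)
            throw std::runtime_error("config not found");
        return "config";
    } catch (const std::runtime_error&) {
        // A missing config is something we know how to handle here:
        // fall back to defaults. std::bad_alloc, logic errors, and
        // anything else pass through to the caller untouched.
        return "defaults";
    }
}
```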
 

Nick Keighley

"NickKeighley" <[email protected]>
I keep on seeing things like this. How exactly does RAII deal with
allocations that don't follow a simple [stack based] model?

Insert some example of the model you think problematic.

anything that doesn't follow a stack model. A graphical editor. A
mobile phone system. You are at the mercy of the end user as to when
objects are created and destroyed. With the phone system people can
drive into tunnels or JCBs (back-hoes) can dig up comms links (that
really happened once).

"Remove" is not the way.  You start from nothing -- that is assumed
leak-free ;-)  and add all the stuff in a way that can't leak.  So your
program is always leak-free.

ah, my computer science lecturer used to tell us that a blank piece of
paper had no bugs. Programmers then just went on to add bugs.

The style is like:
 - 'delete' is forbidden in "client" code.  It is the privilege of the few
library classes that serve as managers.   Like auto_ptr.

and who holds the auto-ptr?
 - The result of every new goes immediately to one such manager.
That's about it.

these things always seem to solve the easy cases (ok they used to be
the very hard cases!) but not the hard cases.

 Certainly there are a few cases where ownership is
transferred --

a few!

so look at what happens to the result of release(), Detach() and
similar calls; that is IME natural.
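The style described above can be sketched like this; Widget and Registry are hypothetical, and unique_ptr stands in for the auto_ptr of the thread's era.

```cpp
#include <memory>
#include <utility>

struct Widget { int id; explicit Widget(int i) : id(i) {} };

// "The result of every new goes immediately to one such manager":
// new never leaks out bare, it goes straight into a smart pointer.
std::unique_ptr<Widget> make_widget(int id) {
    return std::unique_ptr<Widget>(new Widget(id));
}

// A hypothetical owning manager. Ownership is transferred in, and the
// one place a delete happens is inside the manager's destructor chain.
struct Registry {
    std::unique_ptr<Widget> owned;
    void adopt(std::unique_ptr<Widget> w) { owned = std::move(w); }
    // ~Registry runs ~unique_ptr runs delete: client code never says delete.
};
```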

Same thing applies to other resources: files, handles, GDI resources, locks,
transactions.

except for transactions, yes
As an example, you may look at some old Petzold examples to see the
struggle with the raw WIN API in pure C -- and see how the same thing looks
using MFC's CFont, CBrush and similar wrappers.

I've wrappered Win32 in C++. Yes, RAII makes life easier.

The difference in readability and clarity is incredible. As a side
effect, DBWIN no longer explodes on any random program reporting a
zillion lost resources.

If you want a very simple example,

no, I don't want a simple example

think of a program that processes text, manipulating strings all the
time. A C++ solution would use std::string (or any of the much better
string classes), doing all the passing-around, cutting, concatenating,
etc.

consider a text editor that allows the user to delete text. Who
deletes the string that holds the deleted stuff, and when? What if the
editor has Undo/Redo?
Without having a single new or other allocation in the *client* code of the
program.

the client code is gonna have to do something to trigger the new. Call
a factory for instance.
While obviously doing a zillion allocations and deallocations.
Can you describe a way to introduce a leak?

forget to call the thing that triggers the delete.

Sure you can, and many of us do it in practice.  C++ has destructors that
are called automatically at well defined points

the points are not always so well defined.

-- and that automation can
reliably be used to do the deletes you need.   All of them.

And, unless you start doing WTF things deliberately just to prove idiots'

I'm not *trying* to break things.

endless resources, destructors will be called matching constructors, and
when leaving a scope by *any* means.    

but leaving scope is *not* the correct time to delete some objects!

CallManager::process_event (EventAP event)
{
    CallAP call = 0;

    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }

    call->process_event (event);
}

when this finishes the call should very definitely not be destroyed!
The CallManager (or some class this is delegated to) has to track what
goes into CallList (the list of currently active calls) and also be
careful about removing things from it when they are destroyed.


So the programmer's responsibility
is just to not leave non-managed resources around.

oh, *that* all!

(Certainly for certain tasks you can insert GC too; I didn't work with
such a problem yet, but have read success stories.)

I've never used garbage collection in C++ either.


--
Nick Keighley

The world you perceive is a drastically simplified model of the real
world.
(Herbert Simon)
 

James Kanze

OTOH, the real way to make correct code is definitely not
going by that info but through using consistent RAII-like
handling, and reviews enforcing it.
I keep on seeing things like this. How exactly does RAII
deal with allocations that don't follow a simple [stack
based] model?
Insert some example of the model you think problematic.
anything that doesn't follow a stack model. A graphical
editor. A mobile phone system. You are at the mercy of the end
user as to when objects are created and destroyed. With the
phone system people can drive into tunnels or JCBs (back-hoes)
can dig up comms links (that really happened once).

Certainly. RAII doesn't apply to entity objects (which means
most dynamically allocated memory). On the other hand, it's
very useful, for enforcing transactional semantics within the
transaction which handles the events: it probably applies to 95%
or more semaphore locks, for example.
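The semaphore-lock case is the classic scoped-guard use of RAII: the guard acquires in its constructor and releases in its destructor, so the lock is released on every exit path, exceptions included. Semaphore here is a hypothetical stand-in for whatever lock primitive the system actually uses.

```cpp
// Hypothetical lock primitive; the counter is only instrumentation.
struct Semaphore {
    int holders;
    Semaphore() : holders(0) {}
    void acquire() { ++holders; }
    void release() { --holders; }
};

// Non-copyable guard: acquire on construction, release on destruction.
class ScopedLock {
    Semaphore& s_;
public:
    explicit ScopedLock(Semaphore& s) : s_(s) { s_.acquire(); }
    ~ScopedLock() { s_.release(); }
private:
    ScopedLock(const ScopedLock&);
    ScopedLock& operator=(const ScopedLock&);
};

void transaction(Semaphore& s, bool fail) {
    ScopedLock hold(s);   // acquired here
    if (fail) throw 42;   // released even on this path
}                         // released here on the normal path
```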
ah, my computer science lecturer used to tell us that a blank
piece of paper had no bugs. Programmers then just went on to
add bugs.

It's true, and the secret to quality programming is to not add
bugs. That's why people use code review and unit tests and any
number of other techniques (like not writing overly complicated
code to begin with).
and who holds the auto-ptr?

It's a stupid rule anyway. It doesn't work in practice. The
real rule for memory management is not to use dynamic allocation
at all, except when the object lifetime is explicit (e.g. a call
in a telephone system). And of course then, your design (or
more directly, your requirements specification) determines when
the object should be deleted.
these things always seem to solve the easy cases (ok they used
to be the very hard cases!) but not the hard cases.
except for transactions, yes
I've wrappered Win32 in C++. Yes, RAII makes life easier.
no, I don't want a simple example
consider a text editor that allows the user to delete text.
Who deletes the string that holds the deleted stuff and when.

That's probably not a good example. The text buffer holds the
text before it's deleted, and the deleted text itself is never a
separate object, unless...
What if the editor has Undo/Redo?

Then you save the deleted text in a redo record. Which will be
deleted when the requirements specifications says it should be
deleted.
the client code is gonna have to do something to trigger the
new. Call a factory for instance.

Which comes down to the same. Sometimes the factory method is
justified---it may be preferable to check pre-conditions
beforehand, or to register the created object with a transaction
(so it can be correctly deleted in case of rollback).
forget to call the thing that triggers the delete.
the points are not always so well defined.
I'm not *trying* to break things.
but leaving scope is *not* the correct time to delete some
objects!

If it is the correct time, then you don't want dynamic
allocation to begin with.
CallManager::process_event (EventAP event)
{
    CallAP call = 0;
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event (event);
}
when this finishes the call should very definitely not be
destroyed! The CallManager (or some class this is delegated
to) has to track what goes into CallList (the list of currently
active calls) and also be careful about removing things from
it when they are destroyed.

I'm not sure what CallAP is, but this looks like a familiar
pattern (except that I'd probably keep the newly created call
object in an auto_ptr until I'd successfully returned from
CallList::add_call). And of course, if the event is "hang up",
and that brings the connection count in the call down to zero,
it is the call itself (in a function called from
Call::process_event) which will do the delete.
 

Joshua Maurice

OTOH, the real way to make correct code is definitely not
going by that info but through using consistent RAII-like
handling, and reviews enforcing it.
I keep on seeing things like this. How exactly does RAII
deal with allocations that don't follow a simple [stack
based] model?
Insert some example of the model you think problematic.
anything that doesn't follow a stack model. A graphical
editor. A mobile phone system. You are at the mercy of the end
user as to when objects are created and destroyed. With the
phone system people can drive into tunnels or JCBs (back-hoes)
can dig up comms links (that really happened once).

Certainly.  RAII doesn't apply to entity objects (which means
most dynamically allocated memory).  On the other hand, it's
very useful, for enforcing transactional semantics within the
transaction which handles the events: it probably applies to 95%
or more semaphore locks, for example.
ah, my computer science lecturer used to tell us that a blank
piece of paper had no bugs. Programmers then just went on to
add bugs.

It's true, and the secret to quality programming is to not add
bugs.  That's why people use code review and unit tests and any
number of other techniques (like not writing overly complicated
code to begin with).
and who holds the auto-ptr?

It's a stupid rule anyway.  It doesn't work in practice.  The
real rule for memory management is not to use dynamic allocation
at all, except when the object lifetime is explicit (e.g. a call
in a telephone system).  And of course then, your design (or
more directly, your requirements specification) determines when
the object should be deleted.

A new way I've been thinking about RAII is the following: All "free
resource calls", like delete, release mutex, close connection, etc.,
should be made only inside destructors. Specifically, all resources at
all times should have a clearly identified owner who frees the
resource in its destructor, or the resource is a stack object. It's
easy to extend this idea to shared ownership. Optionally, you can free
resources early as long as the resource still has an owner which would
still free the resource if you "accidentally" commented out the early
release.

The idea is that maintaining the invariant of "ownership" using
destructors produces easier to follow code, and less leaky code. RAII
is all about ownership responsibilities. I haven't taken enough time
to look through all examples, so please treat this as a tentative idea
from myself. Obviously there will be exceptions to this rule, I think.
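A minimal sketch of that invariant: every resource has exactly one owner whose destructor frees it, and an early release is optional because the owner would free the resource anyway. Connection and its live counter are made up for illustration.

```cpp
#include <memory>

// Hypothetical resource; the static counter is only instrumentation
// so the example can show that nothing leaks.
struct Connection {
    static int live;
    Connection()  { ++live; }
    ~Connection() { --live; }   // the owner's destructor is the backstop
};
int Connection::live = 0;

void handle_request(bool release_early) {
    std::unique_ptr<Connection> conn(new Connection);  // clear single owner
    if (release_early)
        conn.reset();  // optional early drop; "commenting it out" is safe,
                       // the destructor below would free it anyway
    // ... rest of the request ...
}   // leaving scope by *any* means runs ~unique_ptr
```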
 

Balog Pal

A new way I've been thinking about RAII is the following: All "free
resource calls", like delete, release mutex, close connection, etc.,
should be made only inside destructors. Specifically, all resources at
all times should have a clearly identified owner who frees the
resource in its destructor, or the resource is a stack object. It's
easy to extend this idea to shared ownership. Optionally, you can free
resources early as long as the resource still has an owner which would
still free the resource if you "accidentally" commented out the early
release.
The idea is that maintaining the invariant of "ownership" using
destructors produces easier to follow code, and less leaky code. RAII
is all about ownership responsibilities. I haven't taken enough time
to look through all examples, so please treat this as a tentative idea
from myself. Obviously there will be exceptions to this rule, I think.

That's practically the same thing I was talking about. Not stupid at all,
and it has passed the test of real life in practice. Possibly my use of
"client" code is not clear -- I keep the handlers themselves (that have the
dtors) as "library" code. Which has a different life cycle. Maybe
"framework" would be a better name.

Early drop can be done through the .reset() interface (or its equivalent in
the manager), and commenting it out just results in keeping the thing a
little longer.

I don't get what would be the problem with non-stack-frame limited
resources -- the manager may be at some outer block, or a member of the
class, but eventually it will bite the dust too.
 

dragan

Joshua said:
dragan wrote:

[snipped]
You will need quite a bit of code to properly destroy automatic
objects (which is quite often the required part of "error
handling". If handling of an error does not require stepping way
back, that error can IMHO be renamed to "yet another condition
arising in the normal course of given business").
Another simple answer from moi: RAII. Problem solved. RAII and
exceptions are orthogonal concepts. RAII works as well with other
error handling strategies as it does with exceptions.

RAII is a great idea but you need something to kick off the
destructors. It may be an important component of an error handling
strategy, but
an exception or something else is needed to actually kick off object
destruction if error processing requires changing the context
outside of normal flow control operation.
a robust context-switching mechanism is
needed (longjmp/setjmp, EPOC32/Symbian's "Leaves" with home-made
cleanup stack etc -- you name it).
I wouldn't call setjmp/longjmp a "context switching mechanism", but
I know what you meant. setjmp/longjmp is not an option in C++
because destructors aren't called during "the unwinding" of the
"call stack".

Exactly my point. It won't happen automatically; you will have to
hand-craft some stupid mechanism like the EPOC32/Symbian cleanup stack
and never forget to put your important objects on it.

Uh... I think you missed the boat on this one. I think dragan meant
"return error codes" when he said "other error handling strategies".

I meant ALL other handling strategies, not just "return an error code".
(But then again, how many are there?).
(This includes logging the specific error then returning a generic
"failure" error code. It's still returning an error code.) Any other
kind of error handling is extremely exotic.

Not hardly. To say that C++ exceptions is one and returning an error code is
two, and everything else is "exotic", is probably wrong (depending on your
definition of 'exotic').
He meant that RAII works quite well if you use only error return codes
and do not use exceptions, and he is correct. What you do to "kick off
the destructors" is exit the current scope, generally by returning an
error code (or success code, or any kind of returning).

Yes, it is that easy: the C++ scoping mechanisms guarantee local
object destructor calls. Given that, you can build an alternative to
exceptions around them. The other guy, though, seemed to be starting with
a "world" where exception mechanisms are the norm (C++) and was oblivious
to the orthogonality of the scoping mechanisms from the exception
mechanism.
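That interplay, RAII driven purely by error-code returns with no exceptions anywhere, can be sketched like this; FileGuard, its counter, and the error codes are hypothetical.

```cpp
// Hypothetical handle wrapper; the static counter is instrumentation
// so the example can show that every return path closes the handle.
struct FileGuard {
    static int open_count;
    FileGuard()  { ++open_count; }   // "open" the file
    ~FileGuard() { --open_count; }   // "close" it, on every exit path
};
int FileGuard::open_count = 0;

// Plain error-code style: each early return still runs ~FileGuard,
// so cleanup and error reporting stay orthogonal.
int parse_file(bool bad_header, bool bad_body) {
    FileGuard f;                     // owns the handle for this scope
    if (bad_header) return 1;        // destructor still runs here
    if (bad_body)   return 2;        // ...and here
    return 0;                        // ...and here
}
```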

(At least that's what I remember without going back and reading what I said.
I don't really like long threads because that indicates lack of
communication, or teaching the newbies, or posturing for "advantage", or
"religious issue", or...).

I think C++ may indeed be "dumbing down" programmers. (I'll start that
separate thread so this one can rest in peace). (Posted new thread topic).
I might have to disagree. Is "returning an error code" contained in
"context switching"? Returning an error code is a perfectly fine way
to handle errors, and returning an error code is not in the context of
programming what someone would call "context switching".

He used 'context switching' incorrectly or something. Not to be curbing
though (to the other guy): keep on researching, dude! I don't believe anyone
can learn faster than by making mistakes, BUT, I suggest you analyze others'
mistakes rather than go through all the hard knocks on your own. The
difference between man and animal is supposedly accumulated knowledge across
generations. All evidence to the contrary: knowledge has historically taken
a back seat to oppression and politics. While you have more than I did, and
I did envision a Wikipedia-like thing when I was a child, imagine the future
possibilities given that information is freer now than ever. The trend
is positive, but lame: evolution of animals seems to be on the order of the
purported differentiation between people and animals. Held back by whom?
 

Nick Keighley

As I'm not a C++ expert I was wondering if this post (and others in
this thread) was going to earn me a Larting. I do however have
opinions (opinions are like...) on software development, and I have
developed and maintained C++ systems.

I just get bugged by the "RAII solves all memory management problems"
line and the implicit assumption that all objects follow a simple LIFO
allocation/deallocation protocol. Don't these people ever tackle real
world problems!
OTOH, the real way to make correct code is definitely not
going by that info but through using consistent RAII-like
handling, and reviews enforcing it.
I keep on seeing things like this. How exactly does RAII
deal with allocations that don't follow a simple [stack
based] model?
Insert some example of the model you think problematic.
anything that doesn't follow a stack model. A graphical
editor. A mobile phone system. You are at the mercy of the end
user as to when objects are created and destroyed. With the
phone system people can drive into tunnels or JCBs (back-hoes)
can dig up comms links (that really happened once).

Certainly.  RAII doesn't apply to entity objects (which means
most dynamically allocated memory).  On the other hand, it's
very useful, for enforcing transactional semantics within the
transaction which handles the events: it probably applies to 95%
or more semaphore locks, for example.

ah, thanks. Entity objects... is that what you call 'em. And yes, RAII
fits very nicely with transactions. If the transaction can't complete
then all the things it allocated are safely freed.
It's true, and the secret to quality programming is to not add
bugs.  

that's why it stuck in my head. Even if it's an ideal not entirely
reachable ("never make mistakes"!) it's worth keeping at the back of
your mind. Avoid "oh, it doesn't matter, I can always clean it up
later".
That's why people use code review and unit tests and any
number of other techniques (like not writing overly complicated
code to begin with).
yes



It's a stupid rule anyway.  It doesn't work in practice.

oh, goody so I wasn't so far out in the out-field

 The
real rule for memory management is not to use dynamic allocation
at all, except when the object lifetime is explicit (e.g. a call
in a telephone system).

if things are of variable size? Oh yes, use std::vector (or another
suitable container).

I'm beginning to form the rule of thumb that all uses of new[] are
errors. Or at least need looking at very hard.
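A tiny illustration of that rule of thumb: the container replaces the new[]/delete[] pair entirely. join is a made-up helper, not from the thread.

```cpp
#include <string>
#include <vector>
#include <cstddef>

// Variable-sized storage lives in a container that owns its memory;
// nobody has to remember a matching delete[].
std::string join(const std::vector<std::string>& parts) {
    // Old style would be: char* buf = new char[total]; ... delete[] buf;
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i) out += ',';
        out += parts[i];
    }
    return out;   // the string frees itself; no leak on any path
}
```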
 And of course then, your design (or
more directly, your requirements specification) determines when
the object should be deleted.

yes. There is no doubt when a call finishes. Well /almost/ no doubt,
some group calls get a little hairy.

<snip>

[simple example] (too simple if you ask me)
consider a text editor that allows the user to delete text.
Who deletes the string that holds the deleted stuff and when[?]

That's probably not a good example.  The text buffer holds the
text before it's deleted, and the deleted text itself is never a
separate object, unless...

sorry, "who deletes the text buffer?"

Then you save the deleted text in a redo record.  Which will be
deleted when the requirements specifications says it should be
deleted.

but it's no longer solved by RAII alone

Which comes down to the same.

exactly, the client code has to cause the object to be created somehow

 Sometimes the factory method is
justified---it may be preferable to check pre-conditions
beforehand, or to register the created object with a transaction
(so it can be correctly deleted in case of rollback).

sorry, the points at which delete is called are well defined with
RAII; it's just that the point at which an "entity object" is released
is not specified by a LIFO protocol, and hence RAII is not a complete
solution.
If it is the correct time, then you don't want dynamic
allocation to begin with.
thanks



I'm not sure what CallAP is,

stupidly I went over to the Hungarian side of programming nature. I
meant of course std::auto_ptr<Call>. The example was cobbled together
rather quickly and may be buggy. Second attempt:

CallManager::process_event (std::auto_ptr<Event> event)
{
    std::auto_ptr<Call> call(0);
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event (event);
}
but this looks like a familiar
pattern (except that I'd probably keep the newly created call
object in an auto_ptr until I'd successfully returned from
CallList::add_call).  And of course, if the event is "hang up",
and that brings the connection count in the call down to zero,
it is the call itself (in a function called from
Call::process_event) which will do the delete.

I was assuming that happened inside
call->process_event(event);
 

Nick Keighley


this is a good summary

ok, fine


less sure about this.

there are objects with very long lifetimes. Lifetimes comparable but
not necessarily equal to the lifetime of the system. For instance
calls in some mobile radio systems can be of unlimited duration. But
not all calls are like this. So clearing/hanging up calls must be
properly managed or all sorts of precious resources will leak away.

I can't see any way to do this apart from having objects
(CurrentCallList) that are effectively global.

so the CurrentCallList in my example gets destructed when the program
terminates and any calls left in are destructed then. Which might be a
bit late. I suppose it would close the call log and tell the user his
call has terminated.

objects that don't follow a LIFO life cycle. Which is pretty much
anything in the real world!

That's practically the same thing I was talking about. Not stupid at all,
and it has passed the test of real life in practice.

managed equipment, calls, circuits, channels, routes, messages, logged
in users, drawable objects in animations, etc. etc. etc.
Possibly my use of "client"
code is not clear -- I keep [] the handlers themselves (that have the
dtors) as "library" code.   Which has a different life cycle. Maybe
"framework" would be a better name.

don't follow you

Early drop can be done through the .reset() interface (or its equivalent in
the manager), and commenting it out just results in keeping the thing little
longer.

if the system never terminates, that means the object is kept forever.
So as soon as we've used all the available radio channels the system
hangs.


I don't get what would be the problem with non-stack-frame limited
resources -- the manager may be at some outer block, or a member of the
class, but eventually it will bite the dust too.

oh, it will bite the dust eventually, but system termination time (weeks
later? months later?) is way too late. There are well defined times
when a call terminates (the user hangs up, we lose contact with him,
something breaks) - BUT THEY DON'T FOLLOW A LIFO LIFE CYCLE!

And I submit, many real world things don't!
 

Nick Keighley

"NickKeighley" <[email protected]> ha scritto nel messaggionews:daae4055-a4bc-4cce-8894-

"library exception" has to be minimize in a library (until 0.0 if possible)
because the programmer that use the library has to choose of what to do
in an error condition (and not the programmer that write the library)

well, I tend to write libraries to implement my application. Are the
exceptions these libraries throw "library exceptions" or "custom
exceptions"? I think I draw the library/application line more blurrily
than you. Third-party libraries are plainly more "library-like" than
some app-specific libraries.
 

Balog Pal

Nick Keighley said:
As I'm not a C++ expert I was wondering if this post (and others in
this thread) was going to earn me a Larting. I do however have
opinions (opinions are like...) on software development, and I have
developed and maintained C++ systems.
I just get bugged by the "RAII solves all memory management problems"
line and the implicit assumption that all objects follow a simple LIFO
allocation/deallocation protocol. Don't these people ever tackle real
world problems!

Not sure how you arrived at the everything-is-LIFO assumption.

I was saying to use managers -- and that managers can be non-local. After
all, we don't have so many places, so if you have a manager it can be:

- local in a block in the immediate function
- local in a block in a function somewhere up the call chain
- a global(/singleton)

you put the responsibility for deallocation on the manager, and it will
eventually happen for all of them. There is a chance for pseudo-leaks,
which we probably haven't discussed yet. Like when you have a list with a
long life, keep adding nodes and never remove any. Stuff can get stuck in
managers that are global or live very high up (say, right in main()).
Getting the resources released only at program exit may not be good
enough, and acts like a regular leak in all practical terms. No silver
bullets. I think that is hardly news to anyone reading this group.

And pseudo-leaks are damn hard to discuss in the abstract, in a forum: how
do you tell whether some such node is still needed in a system or not?
Except by the particular design? ;-)

The point is that the managers in the latter category are not flooding the
app -- you have only a few of them, hopefully, and can pay close attention
to their content. And normally they are helped out by more local RAII
sub-managers.

Many real systems work by processing 'requests', serving events from
outside. Those have a well defined point of entry, which is fortunately a
function/stack frame too. And you can use that to manage all the stuff
that is local to that request. When it ends, you can be sure the app's
state is as before. Or changed *only* as expected by the request itself.

The more problematic RL cases are when a request creates a "session" to
work with and must carry its state further. So you will have a session
manager upwards. Normally some of the further events will bring the
session to its final state where it can be disposed of -- then certainly
releasing all associated resources. If driven by outside events, certainly
only a proper design can make sure that happens consistently.
oh, goody so I wasn't so far out in the out-field

Still it is not at all stupid, and it works perfectly fine in my practice.
Possibly my separation of things into "client" and "library" (aka
framework, support, etc.) is not clear and misleads judgement.

Most of the power of C++ comes from the fact that it gives superior tools
to create a quasi-language in which you can most directly and expressively
state the real aim of the program. To me the client or application code is
what is written in that latter language. That definitely has no place for
the keyword delete.
if things are of variable size? Oh yes, use std::vector (or another
suitable container).

Like in these very cases. Client code shall in no way attempt to implement
containers, strings, etc. -- but use them for good. :) It asks for dynamic
stuff in an indirect way, like resize, or invoking factory calls.

Eliminating dynamic memory use is rare -- probably extinct outside
embedded.
I'm beginning to form the rule of thumb that all uses of new[] are
errors. Or at least need looking at very hard.

Absolutely. delete[] was the first thing I blacklisted, and it certainly
dragged new[] along with it. And I haven't met a single case where it was
needed in the last 15 years.

[simple example] (too simple if you ask me)
think a program that processes text manipulating strings all
its time. A C++ solution would use std::string, (or any of
the much better string classes). Doing all the
passing-around, cutting, concatenating, etc.
consider a text editor that allows the user to delete text.
Who deletes the string that holds the deleted stuff and when[?]
That's probably not a good example. The text buffer holds the
text before it's deleted, and the deleted text itself is never a
separate object, unless...
sorry, "who deletes the text buffer?"

In a single-doc editor the text buffer can be permanent.
(I guess a text editor is not a good forum example; the questions you
raised can be easily answered with a multitude of alternative designs, all
with pros & cons, yet trivially ensuring resource control -- but it would
take a book-worth amount of words. If really needed, pick some sub-part,
or narrow it down.)
but it's no longer solved by RAII alone

Undo also can be done with several approaches, like keeping snapshots of
the current state, or recording requests to play forward or back.

What do you mean by "RAII alone"? Possibly we speak on distinct tracks
because you fill that term with different content...

The meaning tied to the original mosaic is actually rarely meant
literally -- the most common use is RRID (resource release is
destruction), leaving allocation freeform -- still routinely referred to
as RAII. :-o

As I use it, RAII stands for an even more relaxed form, one that allows
the full interface auto_ptr has, including reset() and release() -- it
just makes sure dedicated ownership is maintained, and destruction of a
manager does mean release of anything owned.


Maybe we could call it Controlled Resource Management, or Controller-based
RM, or something like that, but it would end up unused like RRID, and I
have not seen in practice RAII's tendency to force restrictions.

IMO, discounting UB and other blacklisted cases, they are.

sorry, the points at which delete is called are well defined with
RAII; it's just that the point at which an "entity object" is released
is not specified by a LIFO protocol, and hence RAII is not a complete
solution.

If you mean the "is-initialization" part, sure, it covers only a subset of
cases.





CallManager::process_event (std::auto_ptr<Event> event)
{
    std::auto_ptr<Call> call(0);
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event(event);
}
<<

This does not make much sense to me; if call_list is the manager/owner of
calls, you don't use auto_ptr like that, and you especially don't place the
result of lookup in an auto_ptr. There must be exactly one owner. Maybe you
meant shared_ptr, but that looks like overkill too.

And this is a good example what client code shall NOT do by any means --
leave call management with the call manager. So the whole thing becomes a
one-liner:

CallList::get_call(event).process_event(event);

get_call within the manager can see whether it is an existing or a new call,
create as necessary, and return one by reference. (This example implies no NULL
possibility; if there is one, make it

if(Call * pCall = CallList::get_call(event))
pCall->process_event(event);
 
B

Balog Pal

Nick Keighley said:
there are objects with very long lifetimes. Lifetimes comparable but
not necessarily equal to the lifetime of the system. For instance,
calls in some mobile radio systems can be of unlimited duration. But
not all calls are like this. So clearing/hanging up calls must be
properly managed or all sorts of precious resources will leak away.
I can't see any way to do this apart from having objects
(CurrentCallList) that are effectively global.

See my other post.

so the CurrentCallList in my example gets destructed when the program
terminates and any calls left in are destructed then. Which might be a
bit late. I suppose it would close the call log and tell the user his
call has terminated.

Exactly what I just said. But... what can you do about it? Either the call
needs to linger, in which case this is the correct thing, or it must be
terminated, which was due well before exit. That is likely a leak in the
*design* (like at the abstract/UML level). If not, it's a bug in the translation
of design to code.

objects that don't follow a FIFO life cycle. Which is pretty much
anything in the real world!

My observation is that the "is-init" part, which truly enforces FIFO, is a pretty
rare kind of use. Partly due to some widespread library designs.

MFC uses 2-phase init extensively, and even the true RAII cases are broken
by the interface; see CSingleLock -- who in their right mind would make it take
= false as the default param?

Still RRID handles most of the practical cases for good -- and the order is
many times not as direct.

managed equipment, calls, circuits, channels, routes, messages, logged
in users, drawable objects in animations, etc. etc. etc.

LOL, made a ton of all those and more :)

Possibly my use of "client"
code is not clear -- I keep [] the handlers themselves (that have the
dtors) as "library" code. Which has a different life cycle. Maybe
"framework" would be a better name.
don't follow you

Too bad. More specifically?

if the system never terminates that means the object is kept forever.
So as soon as we've used all the available radio channels the system
hangs.

See earlier and other post.
oh it will bite the dust eventually but system termination time (weeks
later? months later?) is waay too late. There are well defined times
when a call terminates (the user hang up, we lose contact with him,
something breaks)- BUT THEY DONT FOLLOW A FIFO LIFE CYCLE!

It never occurred to me that they are ;-) The important part is they do follow a
LIFE CYCLE. You maintain that. Most importantly at the paper design. I saw
way more problems messed up right there, which leaves no chance for the
implementation to be correct. If your system allows (or overlooks) endless
calls, and it is a permanent system, yes, it will eventually run out of
juice. Protocol steps that wait for a trigger from outside have attached timers
for a good reason...

(Actually one of my main lines of work is doing all kinds of communications.
At high level, middle level, low level, you name it, and most systems
planned for non-stop use. And they work too, many years at a stretch, sometimes
stopped for some hardware maintenance or accident.... When I say it works, it
is not theory but life-proven practice, and on failure I'd probably not
talk here, unless usenet is mainstream in jails. ;-)
 
N

Nick Keighley

A few posts back you wrote

"OTOH, the real way to make correct [non-leaking] code is definitely
not going by [new/delete logging] info but through using consistent
RAII-like handling, and reviews enforcing it."

and

"As test runs will hardly cover all possible paths including errors
and exceptions, so relying on the empty leak list from a random run is
nothing but illusion of being okay. While with trivial style it is
easy to make leaks impossible."

I disputed that RAII cured all memory leak problems (I was wrong to
suggest garbage collection, as that is equally helpless in the face of
foolish application coders). You now appear to be agreeing with me,
but without explicitly saying so.



"Nick Keighley" <[email protected]>


Not sure how you arrived at the everything is FIFO assumption.

what else does RAII enforce apart from a FIFO allocation/deallocation
policy? If RAII is the solution to all memory management problems then
all memory management must be a FIFO based policy.

I was stating to use managers

you never mentioned them before. What is a "manager"? Like the
CallManager class in my example? Or like the CallList class in my
example?

-- and that managers can be non-locals. After
all we don't have so much places, so if you have a manager it can be:

- local in a block in the immediate function
- local in a block in a function somewhere up the call chain
- a global(/singleton)
ok.


you put responsibility of deallocation to the manager, and it will
eventually happen for all of them. There is a chance for pseudo-leaks,

what you call a pseudo-leak *I* call a leak.

which we probably have not yet discussed. Like when you have a list with a long
life, keep adding nodes and never remove any. Stuff can get stuck in managers
that are global or live very much up (say right in main()). Getting the
resources released only at program exit may not be good enough, and acts like a
regular leak in all practical terms.

right so you agree that RAII doesn't solve all resource management
problems? Goody.

No silver bullets.  Think that is hardly news
to anyone reading this group.

so why did you say "with trivial style it is easy to make leaks
impossible"?

And pseudo-leaks are damn hard to discuss in the abstract, in a forum: how do
you tell whether some such node is yet needed in a system or not? Except by the
particular design? ;-)

The point is that the managers in the latter category are not flooding the
app -- you have only a few of them hopefully, and can pay close attention to
the content. And normally they are helped out with more local RAII
sub-managers.

ok. Not what you originally said though. I'll stop saying this.

Most of the power of C++ comes from the fact that it gives superior tools to
create a quasi-language in which you can most directly and expressively state
the real aim of the program. To me the client or application code is what is
written in that latter language. That definitely doesn't have a place for the
keyword delete.

ah, now this is clearer. Ban new/delete from application code. Or
certainly delete.

Like in these very cases. Client code shall in no way attempt to implement
containers, strings, etc. -- but use them for good. :) It asks for dynamic
stuff in an indirect way, like resize, or invoking factory calls.

Eliminating dynamic memory use is rare -- probably extinct outside embedded.

yes, but you don't have to have it in application code

[simple example] (too simple if you ask me)
think a program that processes text, manipulating strings all
the time. A C++ solution would use std::string (or any of
the much better string classes), doing all the
passing-around, cutting, concatenating, etc.
consider a text editor that allows the user to delete text.
Who deletes the string that holds the deleted stuff and when[?]
That's probably not a good example. The text buffer holds the
text before it's deleted, and the deleted text itself is never a
separate object, unless...
sorry, "who deletes the text buffer?"

In a single-doc editor the text buffer can be permanent.

(I guess text editor is not a good forum example,

I thought it was a good example...
:)

the questions you raised
can be easily answered with a multitude of alternative designs, all with
pros & cons, yet trivially ensuring resource control -- but it would take a
book's worth of words. If really needed, pick some sub-part, or narrow
it down.)



Undo also can be made with several approaches, like keeping snapshots of the
current state, or recording requests to play forward or back.

What do you mean by "RAII alone"? Possibly we speak on distinct tracks because
you fill that term with different content...

well, to me RAII enforces a stack discipline of resource allocation
(you yourself earlier said something about the resource being released
when the stack frame is left).

The meaning tied to the original mosaic is actually rarely meant literally --
the most common use is RRID (resource release is destruction), leaving
allocation freeform -- still routinely referred to as RAII. :-o

As I use it, RAII stands for an even more relaxed form, one that allows the full
interface auto_ptr has, including reset() and release() -- it just makes sure
dedicated ownership is maintained, and destruction of a manager does mean
release of anything owned.

Maybe we could call it Controlled Resource Management, or Controller-based
RM, or something like that, but it would end up unused like RRID, and I did not
see in practice RAII's tendency to force restrictions.

ok. But these more sophisticated resource management strategies can no
longer be termed "trivial"

IMO discounting UB and other blacklisted cases they are.


If you mean the "is-initialization" part, sure, it covers only a subset of
cases.

oh, by RAII I include RRID; I regard it as a single term for the entire
concept. I don't see any point in chopping it up.

CallManager::process_event (std::auto_ptr<Event> event)
{
    std::auto_ptr<Call> call(0);
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event(event);
}

<<

This does not make much sense to me; if call_list is the manager/owner of
calls, you don't use auto_ptr like that, and you especially don't place the
result of lookup in an auto_ptr.

I probably should have used a normal pointer.
 There must be exactly one owner.  Maybe you
meant shared_ptr, but it looks like overkill too.

And this is a good example what client code shall NOT do by any means --  
leave call management with the call manager. So the whole thing becomes a
one-liner:

CallList::get_call(event).process_event(event);

and if it's a new call?

get_call within the manager can see whether it is an existing or a new call,
create as necessary, and return one by reference.

ok... That looks like a mere re-arrangement to me.
(This example implies no NULL
possibility; if there is one, make it

I was assuming lookup_call threw if it failed
if(Call * pCall = CallList::get_call(event))
        pCall->process_event(event);

I'll not dispute my example should have more thought put into it
 
J

James Kanze

A new way I've been thinking about RAII is the following: All
"free resource calls", like delete, release mutex, close
connection, etc., should be made only inside destructors.

There are two major problems with this: it doesn't always apply,
and it only concerns resources. What you're concerned about is
program coherence: freeing a mutex lock (in a destructor) when
you've left the locked data in an inconsistent state, for
example, is not going to help. You must ensure that the
destructor also leaves the object in a coherent state. (This is
what is often meant by exception safety, but it really concerns
anything which might affect program flow.)
Specifically, all resources at all times should have a clearly
identified owner who frees the resource in its destructor, or
the resource is a stack object.

This is fine to a point, but I am supposing that the "identified
owner" is an object, an instance of a class. If that object is
dynamically allocated, and you're considering memory as a
resource, then you have a vicious circle.
It's easy to extend this idea to shared ownership. Optionally,
you can free resources early as long as the resource still has
an owner which would still free the resource if you
"accidentally" commented out the early release.
The idea is that maintaining the invariant of "ownership"
using destructors produces easier to follow code, and less
leaky code. RAII is all about ownership responsibilities. I
haven't taken enough time to look through all examples, so
please treat this as a tentative idea from myself. Obviously
there will be exceptions to this rule, I think.

The problem is that in many cases, the "owner" of an object is
the object itself. The whole basis of OO is that objects have
behavior; that behavior often determines their lifetime. (It's
not for nothing that most deletes in well written code are
"delete this", at least in some application domains.)
 
J

James Kanze

As I'm not a C++ expert I was wondering if this post (and
others in this thread) was going to earn me a Larting. I do
however have opinions (opinions are like...) on software development
and I have developed and maintained C++ systems.
I just get bugged by the "RAII solves all memory management
problems" line and the implicit assumption that all objects
follow a simple FIFO allocation/deallocation protocol. Don't
these people ever tackle real world problems!

It's the latest silver bullet. It's quite possible that it
actually works for some real world problems. I've just never
seen any where it does: it doesn't work for anything which looks
like a server, for example, or handles external events (which
includes GUI's). (It does work when the "dynamic objects" are
variable length arrays, used in numeric
calculations---std::vector is a good example. Except, of
course, that an implementation of std::vector that only deletes
in the destructor doesn't work.)
OTOH, the real way to make correct code is definitely
not going by that info but through using consistent
RAII-like handling, and reviews enforcing it.
I keep on seeing things like this. How exactly does
RAII deal with allocations that don't follow a simple
[stack based] model?
Insert some example of the model you think problematic.
anything that doesn't follow a stack model. A graphical
editor. A mobile phone system. You are at the mercy of the
end user as to when objects are created and destroyed.
With the phone system people can drive into tunnels or
JCBs (back-hoes) can dig up comms links (that really
happened once).
Certainly. RAII doesn't apply to entity objects (which
means most dynamically allocated memory). On the other
hand, it's very useful, for enforcing transactional
semantics within the transaction which handles the events:
it probably applies to 95% or more semaphore locks, for
example.
ah, thanks. Entity objects... is that what you call 'em.

I forget where I got the word from; I know I looked for a word
for it for a long time. Anyway, it seems as good as anything
else, and I've not heard any other suggestions, so that's what I
call them.
And yes RAII fits very nicely with transactions. If the
transaction can't complete then all the things it allocated
are safely freed.

Yes. And once you know the transaction has succeeded, you
release the permanent objects from the RAII holders.
that's why it stuck in my head. Even if its an ideal not entirely
reachable ("never make mistakes"!) its worth keeping at the back of
your mind. Avoid "oh it doesn't matter I can always clean it up
later".

In a well run organization, it's not "I never make mistakes",
but rather, "Even the best programmers make mistakes, so we set
up various controls (code review, unit tests) to catch them
before they actually get into production code."
oh, goody so I wasn't so far out in the out-field

No. That doesn't mean never use std::auto_ptr---I use it a lot.
But most of the time, if the transaction succeeds, the pointer
will be recovered by calling the release function. At the
simplest, auto_ptr holds the pointer until the object is
successfully enrolled in all of the places necessary for it to
receive the appropriate incoming events.
if things are of variable size? Oh yes use std::vector (or
other suitable container).

Exactly. Let the standard library worry about dynamic
allocation. (There are obviously exceptions to this. For
example, if you're implementing std::vector:).)
I'm beginning to form the rule of thumb that all uses of new[]
are errors. Or at least need looking at very hard.

That's been my rule since at least 1992 (when I implemented my
first array classes). Within the implementation, you almost
always want to separate allocation from initialization, so it's
::operator new() and a placement new per element. And outside
the implementation of such classes, you use such classes, rather
than new[].
yes. There is no doubt when a call finishes. Well /almost/ no
doubt, some group calls get a little hairy.

And what happens if someone goes off, and forgets to hang up?
There are also time-outs, and such. But it's application level
logic: not something you define as a coding rule, but rather
something the domain specialists define in the requirements
specifications.
[simple example] (too simple if you ask me)
think a program that processes text, manipulating strings
all the time. A C++ solution would use std::string (or
any of the much better string classes), doing all the
passing-around, cutting, concatenating, etc.
consider a text editor that allows the user to delete
text. Who deletes the string that holds the deleted stuff
and when[?]
That's probably not a good example. The text buffer holds
the text before it's deleted, and the deleted text itself is
never a separate object, unless...
sorry, "who deletes the text buffer?"

The user? When the text buffer disappears is generally part of
the requirements specifications, see e.g. the commands bdelete
and bwipeout in vim. My point was just that you don't generally
delete an std::string or whatever when you delete text in an
editor.
but it's no longer solved by RAII alone

Certainly not. (My comments were meant to expand on yours, not
to disagree.)
exactly, the client code has to cause the object to be created
somehow

Yes. I was responding to you AND the people you responded to,
at the same time.
sorry, the points at which delete is called are well defined
with RAII; it's just that the point at which an "entity object" is
released is not specified by a FIFO protocol, and hence RAII is
not a complete solution.

The quoting above is deep enough that I'm no longer sure who's
saying what, but some smart pointers are not that deterministic.
Saying that an object will be deleted when the last pointer to
it is destructed is saying that unless you know the exact
lifetimes of all of the pointers, you don't know when it will be
deleted. You're basically using smart pointers as a slow and
buggy implementation of garbage collection. Except that the
indeterminisms of garbage collection are well known.

[...]
stupidly I went to The Hungarian Side of the programming
nature. I meant of course std::auto_ptr<Call>.

But you don't want std::auto_ptr here.
The example was cobbled together rather quickly and may be
buggy. Second attempt:
CallManager::process_event (std::auto_ptr<Event> event)
{
    std::auto_ptr<Call> call(0);
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event(event);
}
I was assuming that happened inside
call->process_event(event);

Which is a member function of call.

I think we basically agree. (I learned the pattern when I
worked on telephone systems, but I've since learned that it
applies in just about all servers and all event driven systems.)
 
F

Francesco S. Carta

About RAII / RRID, shouldn't the order of construction / destruction
be FILO instead of FIFO?

As I always understood it, this is how C++ automatic objects and
hierarchies work.
 
A

Alf P. Steinbach

* io_x:
are you sure all your "const" are right?

You can add some if you want.

are you sure that the memory pointed to by p is not a memory leak?

p does leak.

It's not an example of memory management.

Is there another way to call
"void* operator new( size_t const size, char const* const blah )"
other than "new( "Get it?" ) S( 42 )"?
Yes.



even if i write
int main()
{
S* const p = new( "Get it?" ) S( 42 ); delete p;
}

does the "delete p" statement call the [default] destructor of S?
Yes.



------------------------
#include <stdio.h>
#include <stdlib.h>
#define P printf
#define i8 signed char
#define V void

struct S{
public:
void* (operator new)(size_t size, char* blah)
{printf( "allocation function called, arg = \"%s\"\n", blah );
return (::operator new)(size);
}
S(int whatever)
{Sarr=0;
printf( "constructor called, arg = %d\n", whatever );
if(whatever<=0) return;
Sarr= (i8*) malloc(whatever);
P("S(int)");
}
S(){
{printf( "void constructor\n");
Sarr= (i8*) malloc(1024);
P("S()");
}
~S(){
printf("destructor\n");
free(Sarr);
}
i8* Sarr;
};


int main(void)
{// S* p=new("Get it?") S( 42 );
// S* p=new("Get it?") S;
// S* p=new("Get it?") S();
S* p=new("Get it?")S;
delete p;
P("\nEND\n");
return 0;
}

here, is the memory pointed to by p [which came from (::operator new)(size)]
really freed?

Don't know if you have introduced any undefined behavior or the like, but if
not, then yes it's freed.


Cheers & hth.,

- Alf
 
B

Balog Pal

James Kanze said:
There are two major problems with this: it doesn't always apply,

Possibly it is just a wrong selection of wording.

I wrote many manager classes and read the implementation of stock ones -- IME it
is rare to put the release code in the dtor. And too often I put it there
first just to move it away very soon, to a private function like Dispose (I
would call it release, but that got used for a different thing), and call
that from the dtor.

And it is needed for other functions like reset() or operator =.

There is some beauty in the simplest RAII form -- the manager is NOCOPY and
has absolutely nothing but ctor and dtor. It really leaves no chance at all
to mess up. But I used those only for mutex-lock/flag/counter guards that
are so absolutely tied to a scope block and similar simple use cases.

For many other situations I just used a policy-based smart pointer
(interface functions like that of auto_ptr, have a nocopy and a deepcopy
variant, and the payload of Dispose mentioned above is injected as policy).
Normally I'm against having functions in the interface that are not known to
be needed up front, but this looks like an exception. Possibly because the
mentioned functions get special attention in review anyway.
and it only concerns resources. What you're concerned about is
program coherence: freeing a mutex lock (in a destructor) when
you've left the locked data in an inconsistent state, for
example, is not going to help. You must ensure that the
destructor also leaves the object in a coherent state. (This is
what is often meant by exception safety, but it really concerns
anything which might affect program flow.)

Yeah, this is a way tougher problem than maintaining symmetry of alloc
functions. In general at least. There are ways to shove off some of the
burden by sticking to just basic guarantees, and compensate by discarding a
major context entirely on exception. So stronger guarantees are needed only
at handling the few objects that are beyond that and mutable by the thing --
ideally nothing or just a few.
This is fine to a point, but I am supposing that the "identified
owner" is an object, an instance of a class. If that object is
dynamically allocated, and you're considering memory as a
resource, then you have a vicious circle.

Precisely: you have the circle only if your "if" is true indefinitely in the
recursion. In practice there will be a roof. If you stick to 'every
allocation function can happen only in an ownership-taking parameter place',
you will naturally discover it should you miss a real circle.
The problem is that in many cases, the "owner" of an object is
the object itself. The whole basis of OO is that objects have
behavior; that behavior often determines their lifetime. (It's
not for nothing that most deletes in well written code are
"delete this", at least in some application domains.)

Actually I have heard this statistic (though many times) only from a single
person: you, James. While everywhere I looked, delete this had zero instances
or very few. Including the well-written programs. So I have
reservations that it is a technique for indeed "some" domains, and not the rest.

Say, if a program processes text (and for whatever reason chose C++ instead of
....) does it realistically need delete this anywhere?

Now as I think of it, MFC has plenty of classes with self-deleting, and it
makes sense indeed, as part of the framework. And I used those classes
extensively too (stock or as parent classes), but hardly ever needed to do
the same in mine directly. So I tend to not count them as part of *my*
application; maybe that creates the difference. ;-)
 
J

James Kanze

Not sure how you arrived at the everything is FIFO assumption.
I was stating to use managers -- and that managers can be
non-locals. After all we don't have so much places, so if you
have a manager it can be:
- local in a block in the immediate function
- local in a block in a function somewhere up the call chain
- a global(/singleton)

The destructors of those all occur at very specific points in
time. Specified points which don't necessarily (or even often,
in a lot of application domains) correspond with where the program's
requirements specification says that the object's lifetime ends.
you put responsibility of deallocation to the manager, and it
will eventually happen for all of them.

Not necessarily. The typical server runs in an endless loop.
Variables in the same scope as the loop and global variables are
never destructed. Even if you reboot the server from time to
time, you can't wait until then to delete things like call
records (supposing a telephone switch or transmission
equipment).
There is a chance for pseudo-leaks, which we probably have not yet
discussed.

I'm not sure what you mean by "leak" and "pseudo-leak". If we
take the earlier example of a call handling system, if any
resources associated with the call are not released when the
call is terminated, it is a leak.
Like when you have a list with long life, keep adding nodes
and never remove any.

If you continue to need to access those nodes, you don't have a
leak, pseudo- or not (although you may have a requirements
specification that is impossible to implement on a machine with
finite memory). If you don't continue to access those nodes,
then you have a leak---there's nothing pseudo- about it, since
it will bring the system down (no pseudo-core-dump, but a real
one) sooner or later.
Stuff can get stuck in managers that are global or live very
much up (say right in main()). Getting the resources released
only at program exit may not be good enough, and act like a
regular leak in all practical terms. No silver bullets.
Think that is hardly news to anyone reading this group.

Getting resources released only at program exit is *not* good
enough. His example was call handling: program exit probably
won't be for a couple of years or more, and the system might be
handling 10's of thousands of calls a day.
And pseudo-leaks are damn hard to discuss in the abstract, in a
forum: how do you tell whether some such node is yet needed in a
system or not? Except by the particular design? ;-)

That's true for most objects.
The point is that the managers in the latter category are not
flooding the app -- you have only a few of them hopefully, and
can pay close attention to the content. And normally they are
helped out with more local RAII sub-managers.
Many real systems work by processing 'requests', serving events
from outside. Those have a well defined point of entry, which
is fortunately a function/stack frame too. And you can use
that to manage all the stuff that is local to that request.
When it ends, you can be sure the app's state is as before.
Or changed *only* as expected by the req itself.

Yes. And the expected change, in some cases, is that a new
object was created or that an existing object was destructed.
The more problematic RL cases are when requests create a "session"
to work with and must carry the state of that further. So you
will have a session manager upwards.

That was basically his example. A call manager is really a sort
of a session manager. (Unlike most "sessions", a call typically
involves several parties, and the session only ends when the
last party leaves it.)
Normally some of the further events will bring the session to
its final state where it can be disposed -- then certainly
releasing all associated resources. If driven by outside
events, certainly only a proper design can make sure it
happens consistently.

Certainly. And in the absence of such cases, when do you use
dynamic memory?
Still it is not at all stupid, and it works perfectly fine in
my practice. Possibly my separation of things into "client" and
"library" (aka framework, support, etc.) is not clear and
misleads judgement.
Most of the power of C++ comes from the fact that it gives superior
tools to create a quasi-language in which you can most
directly and expressively state the real aim of the program.
To me the client or application code is what is written in
that latter language. That definitely doesn't have a place for the
keyword delete.

The keyword "delete" is a red-herring. Suppose a call object is
processing a "hang-up" event, and it determines that this event
corresponds to the last party in the call. Whether it does
"delete this" or "manager.please_please_delete_me(this)" really
doesn't change anything. (Actually, it does. When you see
"delete this" in the code, you know that the only thing you can
do is return. It's more tempting to continue processing after
something like "manager.deenrol(this)".)
Like in these very cases. Client code shall in no way attempt to
implement containers, strings, etc. -- but use them for good.
:) It asks for dynamic stuff in an indirect way, like resize, or
invoking factory calls.

I think we all agree here. There are two major reasons to use
dynamic memory: the object's lifetime doesn't correspond to any
of the predefined lifetimes (local or global), or the object's
size or actual type aren't known at compile time. For the
second, I don't think there's any real disagreement: you use
some sort of "container" (preferably the standard containers)
to manage the memory. For the first, the issues are more
complex: a transaction typically does correspond to (or end at)
an established scope, but a connection only does if the server
dedicates a thread to each connection (which for various
reasons isn't really an option in call handling). In such
cases, you might want to use something like auto_ptr to hold
any objects until you commit, but once you've committed, there's
no real point, and even a few drawbacks, to using some special
manager object to avoid an explicit "delete".
Eliminating dynamic memory use is rare -- probably extinct
outside embedded.

Come now, you don't use "new int" when what you want is an int
with the same lifetime as the function. Practically, because
int is a value object which supports copy, you don't use "new
int" ever. (About the only time you might dynamically allocate
a value object is when it is big enough to make copying too
expensive. Otherwise, you copy.)
I'm beginning to form the rule of thumb that all uses of
new[] are errors. Or at least need looking at very hard.
Absolutely. delete[] was the first thing I blacklisted, and it
certainly dragged new[] along. And I haven't met a single case
where it was needed in the last 15 years.

Beat you there :). I've never used a new[] or a delete[], and
I've been programming in C++ for 17 or 18 years now.
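The usual replacement that makes avoiding new[]/delete[] painless is std::vector, which provides the same contiguous storage and releases it automatically. A minimal sketch (function name invented):

```cpp
#include <vector>

// Instead of:
//     int* buf = new int[n];  /* ... use ... */  delete[] buf;
// write:
std::vector<int> make_buffer(std::size_t n) {
    return std::vector<int>(n, 0);  // n zero-initialized ints, freed automatically
}
```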
[simple example] (too simple if you ask me)
Think of a program that processes text, manipulating
strings all the time. A C++ solution would use
std::string (or any of the much better string
classes), doing all the passing around, cutting,
concatenating, etc.
Consider a text editor that allows the user to delete
text. Who deletes the string that holds the deleted
stuff, and when[?]
That's probably not a good example. The text buffer holds
the text before it's deleted, and the deleted text itself
is never a separate object, unless...
sorry, "who deletes the text buffer?"
In a single-doc editor the text buffer can be permanent.
(I guess a text editor is not a good forum example; the
questions you raised can be easily answered with a multitude of
alternative designs, all with pros & cons, yet trivially
ensuring resource control -- but it would take a book-worth
amount of words. If really needed, pick some sub-part, or
narrow it down.)
Undo can also be done with several approaches, like keeping
snapshots of the current state, or recording requests to play
forward or back.
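A minimal sketch of the snapshot approach to undo mentioned above (the alternative being recorded requests replayed forward or back). The Editor class is invented for illustration.

```cpp
#include <string>
#include <vector>

// Snapshot-style undo: store a copy of the state before each change.
class Editor {
public:
    void insert(const std::string& s) {
        history_.push_back(text_);  // snapshot before the change
        text_ += s;
    }
    void undo() {
        if (!history_.empty()) {
            text_ = history_.back();
            history_.pop_back();
        }
    }
    const std::string& text() const { return text_; }
private:
    std::string text_;
    std::vector<std::string> history_;  // snapshots of prior states
};
```

Note that neither variant decides when to drop an undo record in a destructor, which is the point made just below about RAII's limits here.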
What do you mean by "RAII alone"? Possibly we speak on distinct
tracks because you fill that term with different content...
The meaning tied to the original acronym is actually rarely
meant literally -- the most common use is RRID (resource
release is destruction), leaving allocation freeform -- still
routinely referred to as RAII. :-o

Yes. The name is historical, and actually slightly confusing,
because what characterizes RAII is the release in a destructor.
But the name is ubiquitous, so we're more or less stuck with it.
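An illustration of that point: what makes a class "RAII" in practice is the release in the destructor; acquisition may happen anywhere (RRID). Channel and ChannelGuard are invented stand-ins for a real resource.

```cpp
// A guard whose only essential feature is releasing in the destructor.
struct Channel { bool open; };

class ChannelGuard {
public:
    explicit ChannelGuard(Channel& c) : c_(c) { c_.open = true; }
    ~ChannelGuard() { c_.open = false; }  // the release is the essential part
private:
    Channel& c_;
    ChannelGuard(const ChannelGuard&);             // non-copyable
    ChannelGuard& operator=(const ChannelGuard&);
};
```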

But that doesn't change his comment: the decision as to when to
"drop" a redo record (regardless of how the information is
stored) will almost certainly not be made in a destructor.
IMO, discounting UB and other blacklisted cases, they are.

I was thinking about things like shared_ptr, which are
definitely nondeterministic. More importantly, of course, the
points at which a destructor is called are very limited (except
for explicit deletes), and often don't occur in places related
to the required lifetime of the object.
If you mean the "is-initialization" part, sure, it covers only
a subset of cases.

He explicitly said "is released". He never mentioned creation.
CallManager::process_event (std::auto_ptr<Event> event)
{
    std::auto_ptr<Call> call(0);
    if (is_new_call (event))
    {
        call = CallFactory::create_call (event);
        CallList::add_call (call);
    }
    else
    {
        call = CallList::lookup_call (event);
    }
    call->process_event (event);
}

This does not make much sense to me: if CallList is the
manager/owner of calls, you don't use auto_ptr like that, and
especially don't place the result of lookup in an auto_ptr.
There must be exactly one owner. Maybe you meant shared_ptr,
but it looks like overkill too.

Yep. For this, a raw pointer is the most appropriate solution.
(Long term. I'd use an auto_ptr between the
CallFactory::create_call and the successful addition of the call
to the list.)
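A hedged rework of the snippet along the lines suggested here: a smart pointer guards only the window between creation and a successful add, and lookup hands out a raw, non-owning pointer. std::unique_ptr stands in for the auto_ptr of the original discussion; all names are invented.

```cpp
#include <map>
#include <memory>

// Invented minimal call type.
struct Call { int id; };

class CallList {
public:
    Call* add(std::unique_ptr<Call> c) {   // takes ownership
        Call* raw = c.get();
        calls_[raw->id] = std::move(c);
        return raw;                        // non-owning handle for the caller
    }
    Call* lookup(int id) {                 // non-owning observer
        std::map<int, std::unique_ptr<Call> >::iterator it = calls_.find(id);
        return it == calls_.end() ? 0 : it->second.get();
    }
private:
    std::map<int, std::unique_ptr<Call> > calls_;  // the single owner
};
```

The exactly-one-owner rule from the critique is satisfied: only the map owns, and everyone else works through raw pointers.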

In the case of a call, the "owner" is the call itself. No other
owner makes sense.
And this is a good example of what client code shall NOT do by
any means -- leave call management to the call manager. So
the whole thing becomes a one-liner:

get_call within the manager can see whether it is an existing
or a new call, create as necessary, and return it by reference.
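One possible shape of that one-liner; every name here is invented. get_call hides the create-or-find decision inside the manager and returns the call by reference.

```cpp
#include <map>

// Invented event and call types for illustration.
struct Event { int call_id; };

class Call {
public:
    Call() : events_(0) {}
    void process_event(const Event&) { ++events_; }
    int events() const { return events_; }
private:
    int events_;
};

class CallManager {
public:
    // operator[] default-constructs on first sight of the id, so
    // creation stays inside the manager.
    Call& get_call(const Event& e) { return calls_[e.call_id]; }

    // The whole client-side logic becomes one line.
    void process_event(const Event& e) { get_call(e).process_event(e); }
private:
    std::map<int, Call> calls_;
};
```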

Maybe. There are different levels, and at the lowest level, you
don't want to rewrite std::map so that it will handle creation of
new objects. Note that Call is probably a virtual base class,
and CallFactory will create different derived classes according
to information in the event. And of course, what makes it a new
call depends on data in the event as well, and possibly the
state of other objects.
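A sketch of the factory idea above: Call as an abstract base class, with the factory choosing a derived class from data in the event. Every name here is invented for illustration, and std::unique_ptr stands in for the era's auto_ptr.

```cpp
#include <memory>
#include <string>

// Invented event carrying the data the factory decides on.
struct Event { bool internal; };

// Abstract base: the rest of the system works through Call*.
struct Call {
    virtual ~Call() {}
    virtual std::string kind() const = 0;
};

struct InternalCall : Call {
    std::string kind() const { return "internal"; }
};

struct ExternalCall : Call {
    std::string kind() const { return "external"; }
};

// The factory inspects the event and picks the derived class.
std::unique_ptr<Call> create_call(const Event& e) {
    if (e.internal)
        return std::unique_ptr<Call>(new InternalCall());
    return std::unique_ptr<Call>(new ExternalCall());
}
```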

The example is considerably simplified. Consider the off-hook
event, which occurs when you lift the receiver off the hook: if
the current status of the terminal is "ringing", then you
process the event with the call object which generated the
"ringing" status; if the current status is inactive, you create
a new call object. In at least one possible design, you would
in fact first obtain an equipment object from the event, and ask
it for the call object. But once created, the call object
manages itself.
 
