Never ever use a raw pointer when a smart pointer can do the same job


Alf P. Steinbach

* Noah Roberts:
Well, for one, what if you want to change what's returned from the
constructor?

You never do.

However, as I mentioned and you snipped, there are valid reasons for using
factory functions, in other contexts.

You mention one of those below (plus one that might be valid, depending on the
concrete details, plus some invalid ones ;-) ).

There's the one case where you want to ensure that the
object is always created with a shared_ptr (as in you're using
shared_from_this),

That's more complicated to do from scratch with the non-factory approach, yes.

It's one case where a class specific set of factory functions might be a good
solution.

At least if you're lacking some infrastructure.
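
For concreteness, a minimal sketch of the shared_from_this case under discussion; the class name Session and its members are hypothetical, and std::shared_ptr is used here although the thread's boost::shared_ptr works the same way:

#include <memory>

// Hypothetical class meant to be owned only by shared_ptr, because its
// members hand out owning pointers via shared_from_this().
class Session : public std::enable_shared_from_this<Session>
{
public:
    // Factory function: the only way to obtain a Session, so every
    // instance is guaranteed to be managed by a shared_ptr.
    static std::shared_ptr<Session> create()
    {
        return std::shared_ptr<Session>( new Session );
    }

    std::shared_ptr<Session> self()
    {
        return shared_from_this();   // safe only because create() is the sole entry point
    }

private:
    Session() {}                     // no "Session s;" and no plain "new Session" outside
};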

and there's the other case when you don't care but
sometime down the road someone decides that a subclass should be
returned rather than the one you're constructing.

You don't. That's totally evil. You're *changing the behavior* without changing
the type, leaving client code stranded with mysterious behavior change.

Clients don't
actually need to know this

They do.

but if you didn't wrap the constructor in a
factory method or object then they are stuck changing...every single one
that created the object.

You don't.

The guys at Net Objectives actually recommend doing it all the time but
they're working with languages like Java. You just can't in C++ and
even if you did (because you could I suppose return the base as a value)
it would just lose any extra information gained by subclassing. But if
you're positive that nobody should be creating some class on the stack
then it can be a very positive step to simply wrap the constructor(s) in
factory functions.

That's one of the cases where a set of factory functions absolutely isn't the
answer.

There is no advantage.

But there is far more code.

It's like delving into assembly code in order to do multiplication, when C++
offers a perfectly good multiplication operator.

It's just completely brain-dead.

They don't have to be the AbstractFactory object,
but a basic FactoryMethod can provide this service.

The abstract factory works well when you're creating a conglomeration of
objects based upon the parameters passed in. It can be the one, sole
object in your design that has to know about the whole tree of classes,
pick the best one(s), and build what clients need. That's really the
only thing I ever use it for. If I find myself implementing an NxM set
of methods in this factory, as you indicate, I ask myself if it's really
the best thing for the job.

Yes, there are some valid uses for factory functions. And the above might be
one, depending on the concrete details. Ensuring dynamic allocation is however
not a valid reason; for that it's just plain silly to code up a set of factory
functions when there is a very simple almost no-cost way to achieve it.


Cheers & hth.,

- Alf
 

Alf P. Steinbach

* Noah Roberts:
Which is of course exactly what I said was important to remember when
you did a "delete this" (I protected the constructor instead). You'd
better make sure nobody can create that object as a local stack item if
you're going to do that.

You'd better also make sure of that whenever you 'delete'. ;-)

That issue is simply not different for 'delete this' versus 'delete that'.


Cheers & hth.,

- Alf
 

Noah Roberts

Alf said:
* Noah Roberts:

You don't. That's totally evil. You're *changing the behavior* without
changing the type, leaving client code stranded with mysterious behavior
change.

That's only true if one of two things is true:

* Your client code is depending on the private behavior of the class
instead of the advertised pre/post conditions.

* Your new sub object violates the advertised pre/post conditions of the
class it inherits from.

Either way, the evil is located elsewhere as both of these are clear
violations of LSP and OCP. Your code was evil to begin with and ANY
change is going to be problematic.

No, they don't. They need to know the interface they are using and
anything pretending to provide that interface had better do so.

You don't.

Great argument there.
 

Noah Roberts

Alf said:
* Noah Roberts:

You'd better also make sure of that whenever you 'delete'. ;-)

Since when? Any object you'd call delete upon can also be safely
created on the stack. I know that you know this difference so I have no
idea why you'd argue otherwise.

Really...since when had you better make sure no client can create
something on the stack if you've called delete on some instance of it
somewhere??? An object that deletes itself is vastly different in this
regard.

That issue is simply not different for 'delete this' versus 'delete that'.

It's way different. If your object does a "delete this" then any client
that wants to use that object on the stack is not allowed to. If you
don't document this, but instead advertise an interface that allows
it...you've just screwed yourself and any future person working on the
product.

A "delete that" object behaves like any other object and can be safely
created on the stack. A "delete this" object does not and had better
not be.

This is actually just getting into areas too stupid to talk about. I'm
sorry, normally I have a lot of respect for your opinions but not in
this case.
 

Alf P. Steinbach

* Noah Roberts:
That's only true if one of two things is true:

* Your client code is depending on the private behavior of the class
instead of the advertised pre/post conditions.

* Your new sub object violates the advertised pre/post conditions of the
class it inherits from.

If the new class changes nothing wrt. client code, then there is absolutely no
reason to foist a replacement on the client code.

I thought you'd understand that from my comments, but I wasn't clear enough.

So let me put it this way: *either* the new class is different in some way that
affects behavior, in which case you're changing the behavior, or else it's not,
in which case it's not very smart to do a lot of work to replace the original.



Cheers & hth.,

- Alf
 

Alf P. Steinbach

* Noah Roberts:
Since when?

Always.

Don't ever 'delete' an object that hasn't been allocated by 'new'.

Any object you'd call delete upon can also be safely
created on the stack. I know that you know this difference so I have no
idea why you'd argue otherwise.

Really...since when had you better make sure no client can create
something on the stack if you've called delete on some instance of it
somewhere???

That's a different question. Here you're not talking about "that object" that
you're deleting, but other instances. It sounds confused to me.

An object that deletes itself is vastly different in this
regard.

It's way different. If your object does a "delete this" then any client
that wants to use that object on the stack is not allowed to. If you
don't document this, but instead advertise an interface that allows
it...you've just screwed yourself and any future person working on the
product.

Yes, it's a good idea to limit how self-deleting objects can be allocated, and
that's why I showed how to do that, upthread.

However, it's not the case that using the self-deleting object then requires
more care in that respect.

On the contrary, with the self-deleting object class that aspect has been dealt
with centrally, whereas with the non-self-deleting object class the client code
generally has to deal with it, separately, for each and every instance, hence
*more* care is needed for the general non-self-deleting object class.

A "delete that" object behaves like any other object and can be safely
created on the stack. A "delete this" object does not and had better
not be.

This is actually just getting into areas too stupid to talk about. I'm
sorry, normally I have a lot of respect for your opinions but not in
this case.

Yes, it's pretty simple, and I don't understand why you didn't just agree as
soon as James and I joined the discussion; after all, we're always right. ;-)


Cheers,

- Alf (tongue in cheek)
 

Noah Roberts

Alf said:
* Noah Roberts:

If the new class changes nothing wrt. client code, then there is
absolutely no reason to foist a replacement on the client code.

I thought you'd understand that from my comments, but I wasn't clear
enough.

So let me put it this way: *either* the new class is different in some
way that affects behavior, in which case you're changing the behavior,
or else it's not, in which case it's not very smart to do a lot of work
to replace the original.

Sounds to me like maybe your classes depend too much on HOW things do
something rather than that they do. A stack drawing client doesn't care
how the items in the stack draw, it only cares that they do. The drawn
objects don't care how the stack drawing client decides to draw them,
and in fact probably don't even know they're in a stack at all. A hit
check algorithm doesn't care how the objects it's checking give it a
boundary polygon, only that they do and that this polygon represents the
boundaries of the object in question; the objects returning the polygon
do not care or know how the hit check algorithm works, only that it
needs a polygon.

All clients should be thus.

This is the essence of OOP: you can change the behavior of your
application by extending the objects that implement it without having to
make sweeping changes across the whole codebase. It's also the core
reason behind the most important principle of OOP: the open/closed.
This is done by making sure objects are not irresponsibly coupled to
each other, INCLUDING (most importantly probably) that client code is
not coupled to the implementation details behind an interface.

This is the core of encapsulation.

Once again, I'm quite surprised at your argument. This is back to the
basics stuff.
 

Alf P. Steinbach

* Noah Roberts:
Sounds to me like maybe your classes depend too much on HOW things do
something rather than that they do. A stack drawing client doesn't care
how the items in the stack draw, it only cares that they do. The drawn
objects don't care how the stack drawing client decides to draw them,
and in fact probably don't even know they're in a stack at all. A hit
check algorithm doesn't care how the objects it's checking give it a
boundary polygon, only that they do and that this polygon represents the
boundaries of the object in question; the objects returning the polygon
do not care or know how the hit check algorithm works, only that it
needs a polygon.

All clients should be thus.

This is the essence of OOP: you can change the behavior of your
application by extending the objects that implement it without having to
make sweeping changes across the whole codebase. It's also the core
reason behind the most important principle of OOP: the open/closed. This
is done by making sure objects are not irresponsibly coupled to each
other, INCLUDING (most importantly probably) that client code is not
coupled to the implementation details behind an interface.

This is the core of encapsulation.

Once again, I'm quite surprised at your argument. This is back to the
basics stuff.

It doesn't seem that you understood it.

But let's take your first example, the stack.

You're saying a factory function is nice because it allows you to change the
stack representation without updating the clients.

If (A) the new stack class doesn't change anything wrt. the client code, then
it's just so much waste.

If it does change something, e.g. allowing *new* client code to use some extra
features or whatever, then either (B) it conforms to the old interface, in which
case the original class should just be replaced with the new one (and no need
for a factory function to do that), or else (C) it doesn't conform to the old
interface, in which case it absolutely shouldn't be foisted on old client code.

I'd be surprised if you can think of a case I haven't enumerated here.

But perhaps you can, I always try to be open-minded. ;-)


Cheers & hth.,

- Alf
 

Noah Roberts

Alf said:
* Noah Roberts:

It doesn't seem that you understood it.

But let's take your first example, the stack.

You're saying a factory function is nice because it allows you to change
the stack representation without updating the clients.

If (A) the new stack class doesn't change anything wrt. the client code,
then it's just so much waste.

If it does change something, e.g. allowing *new* client code to use some
extra features or whatever, then either (B) it conforms to the old
interface, in which case the original class should just be replaced with
the new one (and no need for a factory function to do that),

or else (C)
it doesn't conform to the old interface, in which case it absolutely
shouldn't be foisted on old client code.

I'd be surprised if you can think of a case I haven't enumerated here.

(D) case (B) but you want to replace the object based upon state of the
application that nobody else involved cares about or knows. We might
hypothesize a case (using the example given) when based on some user
selection criteria the stack draws in one order versus another. With
this simple example this may not be the best way of implementing that
choice (you might choose instead to implement a State pattern in the draw
stack - still vastly changing internals though), but it would certainly
be one way and with more complex systems it could very well be the best
or only one.

This is a perfect use of factory method. It is, in fact, the very
reason to use one.

We might further hypothesize that this is a new requirement in an old
product. Had you chosen to use "new DrawStack()" rather than
"DrawStack::Create()" you have a lot more editing to do to provide this
change.
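
For concreteness, a minimal sketch of what case (D) might look like; DrawStack::Create() is from the discussion, while ReverseDrawStack and userWantsReverseOrder are hypothetical illustrations, and std::unique_ptr is used here although the thread predates C++11:

#include <memory>

class DrawStack
{
public:
    virtual ~DrawStack() {}
    virtual void draw() { /* draw the items front to back */ }

    // Factory method: clients write DrawStack::Create() instead of
    // "new DrawStack()", so the concrete type returned can later depend
    // on application state the clients neither know nor care about.
    static std::unique_ptr<DrawStack> Create();
};

// Hypothetical subclass added later.
class ReverseDrawStack : public DrawStack
{
public:
    virtual void draw() { /* draw the items back to front */ }
};

// Hypothetical user setting that only the factory knows about.
bool userWantsReverseOrder = false;

std::unique_ptr<DrawStack> DrawStack::Create()
{
    if ( userWantsReverseOrder )
        return std::unique_ptr<DrawStack>( new ReverseDrawStack );
    return std::unique_ptr<DrawStack>( new DrawStack );
}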

(E) you've decided that the interface in question needs to be changed to
an abstract interface entirely, the creation parameterized, and a
default provided for "legacy" clients. Again, case (B) mostly with the
additional issue that other clients are providing further information
they know about and you don't want to couple anyone to the hierarchy you
are encapsulating through the abstract interface.

When you need to force someone to use heap allocation (again the factory
method won't work with stack allocation) then usually you're well on
your way toward these cases anyway for you're probably doing something
with a polymorphic interface. It is well warranted to decouple clients
from the static return type of your creation method then.

In languages like Java it can be a good general step that costs nothing
extra to implement for any class you author...in C++ it's a bit
different and reserved for special cases.
 

Noah Roberts

Alf said:
* Noah Roberts:

Always.

Don't ever 'delete' an object that hasn't been allocated by 'new'.



That's a different question. Here you're not talking about "that object"
that you're deleting, but other instances. It sounds confused to me.



Yes, it's a good idea to limit how self-deleting objects can be
allocated, and that's why I showed how to do that, upthread.

My argument is for the requirement of that care.

However, it's not the case that using the self-deleting object then
requires more care in that respect.

You'd better protect against stack based creation or you're going to
have problems, period.

On the contrary, with the self-deleting object class that aspect has
been dealt with centrally, whereas with the non-self-deleting object
class the client code generally has to deal with it, separately, for
each and every instance, hence *more* care is needed for the general
non-self-deleting object class.

My argument is that any object that deletes itself must be protected
against stack allocation. This can clearly be seen by, as you say,
moving upthread. If you want to argue about something else then maybe
you should do so with *someone* else.
 

Alf P. Steinbach

* Noah Roberts:
or else (C)

(D) case (B) but you want to replace the object based upon state of the
application that nobody else involved cares about or knows. We might
hypothesize a case (using the example given) when based on some user
selection criteria the stack draws in one order versus another. With
this simple example this may not be the best way of implementing that
choice (you might choose instead to implement a State pattern in the draw
stack - still vastly changing internals though), but it would certainly
be one way and with more complex systems it could very well be the best
or only one.

This is not a case of "sometime down the road" someone replacing the compile
time class.

It's a case of knowing up front that different classes may be involved at run time.

So it's not an additional case for what we discussed, where it's still A, B or C.

This is a perfect use of factory method.

It may be, depending on the concrete details.

But it has nothing to do with your claim.

Also, it has nothing to do with ensuring dynamic allocation.

It is, in fact, the very reason to use one.

Uh.


We might further hypothesize that this is a new requirement in an old
product. Had you chosen to use "new DrawStack()" rather than
"DrawStack::Create()" you have a lot more editing to do to provide this
change.

I don't buy that argument. For it can be used to add any additional dead code
that supports some limited functionality. Hey, if something needing this
functionality should become a requirement then we're already some steps in that
direction (oh dang, now we're there and it's not quite like imagined, or, oh
dang, now we're way past the point where that could happen). It's not even
necessarily a good idea when you /know/ that something like that will become a
requirement. For you're then in the position of those who paid extra for
"prepared for stereo" radios that, the manufacturer claimed, could easily be
upgraded to stereo: at the time stereo broadcasts were finally made those who
bought them had to buy new radios anyway, the old "prepared" radios scrapped.


(E) you've decided that the interface in question needs to be changed to
an abstract interface entirely, the creation parameterized, and a
default provided for "legacy" clients.

This is an example of the case I labeled (B), already discussed.

Again, case (B) mostly with the
additional issue that other clients are providing further information
they know about and you don't want to couple anyone to the hierarchy you
are encapsulating through the abstract interface.

When you need to force someone to use heap allocation (again the factory
method won't work with stack allocation) then usually you're well on
your way toward these cases anyway for you're probably doing something
with a polymorphic interface.

No.

First, polymorphism doesn't generally force heap allocation (mostly heap
allocation is forced, when it is, by the need for self destruction, or as a way
to avoid slicing, or e.g. by the size of the object, or other concerns not
directly related to polymorphism).

Second, any polymorphic treatment is usually at sites later referencing the
created object: at the creation site one usually knows exactly what kind of
object one needs created.

It is well warranted to decouple clients
from the static return type of your creation method then.

The premise for that doesn't hold: it's not usual, but rather unusual, that the
code creating an object needs to be held unaware of the exact type created.

In languages like Java it can be a good general step that costs nothing
extra to implement for any class you author...in C++ it's a bit
different and reserved for special cases.

Huh?


Cheers & hth.,

- Alf
 

Alf P. Steinbach

* Noah Roberts:
My argument is for the requirement of that care.

Yes, it's a good idea to limit how self-deleting objects can be allocated, and
that's why I showed how to do that, upthread.

And now you're agreeing with that, for the second time.

That's nice, but baffling.

You'd better protect against stack based creation or you're going to
have problems, period.

Here you're violently agreeing with what I wrote in the previous paragraph, for
the third time.

That's, well, even more baffling.

My argument is that any object that deletes itself must be protected
against stack allocation. This can clearly be seen by, as you say,
moving upthread.

"Must" is too strong, IMHO, but otherwise the article where I showed how to do
that, upthread, showed how to do that, so to speak.

And as that article showed, for a self-deleting object the requirement of
dynamic allocation can easily be enforced centrally, whereas this is not so for
objects that are deleted by client code.

And so your "oh really" in response to James' "delete this doesn't require more
care than any other delete" is shown to be inappropriate in this regard: for a
properly designed 'delete this' class the usage requires less care, not more.

If you want to argue about something else then maybe
you should do so with *someone* else.

It seems you've lost the original context, your original claim. So I
reiterated it above. :)


Cheers & hth.,

- Alf
 

Noah Roberts

Alf said:
* Noah Roberts:

This is not a case of "sometime down the road" someone replacing the
compile time class.

It's a case of knowing up front that different classes may be involved
at run time.

So it's not an additional case for what we discussed, where it's still
A, B or C.

This is central to what I'm discussing. If it's not related to your
point then you're talking about something else.

It may be, depending on the concrete details.

But it has nothing to do with your claim.

Also, it has nothing to do with ensuring dynamic allocation.



I don't buy that argument. For it can be used to add any additional dead
code that supports some limited functionality. Hey, if something needing
this functionality should become a requirement then we're already some
steps in that direction (oh dang, now we're there and it's not quite
like imagined, or, oh dang, now we're way past the point where that
could happen).

Normally you have a point, but when one simple step can prepare you for
a variety of possible future changes, with no added expense (and in fact
benefits you NOW by enforcing creation policy that you should already be
enforcing) then a basic approach like this is entirely warranted.

When working on classes do you completely ignore exception safety if it
isn't directly related to the feature you are currently developing?
Probably not, because you know that a little work now will very often
save you piles of it in the future. We don't hack out what just works
when a little care in what we're doing results in something inherently
more reusable.

It's not even necessarily a good idea when you /know/
that something like that will become a requirement. For you're then in
the position of those who paid extra for "prepared for stereo" radios
that, the manufacturer claimed, could easily be upgraded to stereo: at
the time stereo broadcasts were finally made those who bought them had
to buy new radios anyway, the old "prepared" radios scrapped.




This is an example of the case I labeled (B), already discussed.

If you say so. If that is as you say then you already know that this is
impossible without either A) having already coded a creation factory or
B) changing every client that requests creation....which ends up being
exactly my point.

No.

First, polymorphism doesn't generally force heap allocation

You do realize that you completely inverted what I said, or do you?

(mostly heap
allocation is forced, when it is, by the need for self destruction, or
as a way to avoid slicing, or e.g. by the size of the object, or other
concerns not directly related to polymorphism).

Second, any polymorphic treatment is usually at sites later referencing
the created object: at the creation site one usually knows exactly what
kind of object one needs created.

And this is very often unnecessarily so.

The premise for that doesn't hold: it's not usual, but rather unusual,
that the code creating an object needs to be held unaware of the exact
type created.

I completely disagree....or rather it should be held as circular. I'm
arguing for a factory method to be the creator and you're arguing that
the part creating needs to be aware of what it's creating. When
creation is encapsulated it can be changed per the needs of the
requirements. When it is not then it becomes static per the needs of
the client classes and algorithms that have requested creation.

Take, for a basic example, a GUI controller governing the drawing of
some object. It interprets user events and eventually tells a document
somewhere that it needs to change in some way. One very good way to do
this is to encapsulate those changes in a different object and pass that
to the document for activation (command). A naive approach to this is
to have the tool simply create the command it wants, say "create object".

In truth, this is a coupling that is not necessary and in effect binds
the tool to a particular command. The tool isn't bound to the document
because you've implemented a generic interface (right?) so why should it
be bound to the command that manipulates that document. It doesn't need
or care how the command "creates the object" only that it is the right
command to do so. A more robust way of doing this is to remove the
creation from that point and put it in a factory of sorts. The tool is
then parameterized essentially and is able to *request* the creation of
the appropriate command and pass that along to observers.

How does this help you? For one thing you can test the tool without
having to link with the command. You can give the tool a fake factory
that generates a mocked command. This allows you to test the tool as a
unit on its own. If, on the other hand, your tool creates the command
(since it seems from a naive approach to NEED to know what command it's
creating) then you're going to have to use various linker tricks to get
the same behavior that is much easier to gain.
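
A small sketch of the decoupling described here; every name (Command, CommandFactory, Tool, MockCommand, MockCommandFactory) is a hypothetical illustration rather than code from the thread, and std::unique_ptr stands in for whatever smart pointer the real code would use:

#include <memory>

// The command interface through which the document is modified.
struct Command
{
    virtual ~Command() {}
    virtual void execute() = 0;
};

// Abstract factory the tool is parameterized with: the tool *requests*
// the appropriate command instead of naming a concrete class with "new".
struct CommandFactory
{
    virtual ~CommandFactory() {}
    virtual std::unique_ptr<Command> makeCreateObjectCommand() = 0;
};

// The GUI tool depends only on the two interfaces above.
class Tool
{
public:
    explicit Tool( CommandFactory& factory ) : factory_( factory ) {}

    void onUserGesture()
    {
        std::unique_ptr<Command> cmd = factory_.makeCreateObjectCommand();
        cmd->execute();              // in real code: handed to the document/observers
    }

private:
    CommandFactory& factory_;
};

// Test doubles: the tool can be unit-tested without linking any real
// command or document code.
struct MockCommand : Command
{
    void execute() {}                // does nothing
};

struct MockCommandFactory : CommandFactory
{
    int commandsRequested;
    MockCommandFactory() : commandsRequested( 0 ) {}

    std::unique_ptr<Command> makeCreateObjectCommand()
    {
        ++commandsRequested;
        return std::unique_ptr<Command>( new MockCommand );
    }
};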

How does this help you later? Maybe, just maybe, you'll actually want
to use this tool in another program that also needs to "create object"
but has a completely different object to create and works with a
completely different kind of document. No reason you can't simply shove
this sucker in a library and reuse it...the core purpose of OOP.

So I do agree that it IS more usual for people to have the tool
simply create the command: it needs to know to create a "create object"
command, so obviously it needs to be directly coupled with that creation,
right? That is what most do, but it actually turns out that this is NOT
the case and you gain quite a bit by encapsulating the creation away
from those who request it.

Simply put, whenever something calls "new XXX" you couple to XXX by
name. Although this is quite common practice it is also quite often an
unnecessary coupling that is very easy to remove. Any time you can
remove unnecessary coupling...well, what do you do?

Yeah, I've come to realize that.
 

Noah Roberts

Alf said:
* Noah Roberts:

Yes, it's a good idea to limit how self-deleting objects can be
allocated, and that's why I showed how to do that, upthread.

And now you're agreeing with that, for the second time.

That's nice, but baffling.

Only to someone who can't read. I said exactly this in my original
context...as you say, placed above. You want to argue that this is
exactly the same thing as saying that objects deleting themselves are NO
DIFFERENT than objects which don't...which is just stupid. It is to
that I disagree, not when you repeat what I originally stated: that you
need to protect such objects from stack allocation.

I'm done.
 

Noah Roberts

Noah said:
Only to someone who can't read. I said exactly this in my original
context...as you say, placed above. You want to argue that this is
exactly the same thing as saying that objects deleting themselves are NO
DIFFERENT than objects which don't...which is just stupid. It is to
that I disagree, not when you repeat what I originally stated: that you
need to protect such objects from stack allocation.

I'm done.

Actually, I'm going to expand this...not for your benefit, but for
anyone confused by your disagreement with something I never stated.

A self deleting object is one that will respond to some function calls
by calling "delete this". Example:

struct self_deleter { void do_something() { delete this; }};

The fact that it will do this requires that this object always be
created by a call to new. Note that there is no way for the object to
know how it was created unless it enforces that method so it can't
conditionally delete based on how it was created.

The above class is badly designed because it can be used incorrectly.
Your class advertises an interface that is inappropriate for its use:
namely it can be created on the stack as a local variable. Any such use
though will evoke serious derangement in the program which will most
likely exhibit itself by a hard crash...as in the program just
disappears on the user.

Thus, when WRITING this class you'd best serve your clients (objects
using your class) by disallowing this erroneous use. Encapsulate the
creation so that it cannot possibly be used to create a local variable
on the stack.
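
A minimal sketch of that encapsulation applied to the self_deleter example, hiding the constructor (as mentioned earlier in the thread) behind a factory function; the create name is only illustrative:

// Hide the constructor (and destructor) so that stack allocation, and
// an external "delete", no longer compile.
class self_deleter
{
public:
    static self_deleter* create() { return new self_deleter; }   // sole way to make one

    void do_something() { delete this; }                         // self-deletion as before

private:
    self_deleter() {}                     // "self_deleter s;" is now a compile error
    self_deleter( const self_deleter& );  // not copyable (declared, never defined)
    ~self_deleter() {}                    // "delete p;" outside the class is also an error
};

// Usage:
//     self_deleter* p = self_deleter::create();
//     p->do_something();                 // object deletes itself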

Steinbach is actually addressing something completely different and
calling them the same. It is of course true that you never delete an
object twice but this is a completely different issue than the semantic
difference between an object that can be created on the stack, and one
that will cause serious error if ever done. While you never will call
delete on a stack created *instance* of some object, you MUST never
create the stack *instance* in the first place in the case of an object
that will call delete on its own.

On the one hand you have allowed two different methods of creation and
need to make sure you never interchange them with any given instance; on
the other hand you have an object that is inherently only able to use
one method of creation so you'd be best served enforcing that, at the
static level, by making sure any attempt to use the other results in
compilation error.

Why wouldn't you leave this up to documentation and developer know-how?
You could...if you could make sure everyone A) read all the
documentation, B) could remember everything they've read, and C) never
made a mistake and assumed that because a function is there to BE called
that it can be called. The steps required to protect the user (often
yourself) are minimal so really the question is, why WOULD you leave
this up to documentation and developer know-how?

If you're stuck wondering why anyone would argue these things are the
same, you're in the same boat I am.
 

James Kanze

* Noah Roberts:
If the new class changes nothing wrt. client code, then there
is absolutely no reason to foist a replacement on the client
code.

Huh? The behavior can easily depend on various elements in the
external environment. Under X, for example, it is frequent to
support several look and feels, determined from an environment
variable, a configuration file or whatever. The immediate
client of the code doesn't know or care about the look and feel,
it wants a button. So your factory function returns a button
with the correct look and feel.
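
A rough sketch of that kind of factory; all names here (Button, MotifButton, WindowsButton, makeButton, LOOK_AND_FEEL) are hypothetical stand-ins rather than James's actual code, and std::unique_ptr is used for brevity:

#include <cstdlib>
#include <cstring>
#include <memory>

struct Button
{
    virtual ~Button() {}
    virtual void draw() = 0;
};

struct MotifButton   : Button { void draw() { /* Motif look */ } };
struct WindowsButton : Button { void draw() { /* Windows look */ } };

// The immediate client just wants "a button"; which derived class it gets
// is decided from the environment, not at the call site.
std::unique_ptr<Button> makeButton()
{
    const char* lf = std::getenv( "LOOK_AND_FEEL" );
    if ( lf != 0 && std::strcmp( lf, "motif" ) == 0 )
        return std::unique_ptr<Button>( new MotifButton );
    return std::unique_ptr<Button>( new WindowsButton );
}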

For that matter, I use a similar strategy for handling
differences between Windows and Unix: for reading directories,
for example, if I'm under Unix, the necessary data is a DIR*,
under Windows, a WIN32_FIND_DATA, but neither appears in the
class definition the client sees.

I thought you'd understand that from my comments, but I wasn't
clear enough.
So let me put it this way: *either* the new class is different
in some way that affects behavior, in which case you're
changing the behavior, or else it's not, in which case it's
not very smart to do a lot of work to replace the original.

You seem to be missing the difference between contractual
behavior, and implementation behavior. Roughly speaking: what
(the class does) and how (it does it). The contractual behavior
(the "interface") is just that, contractual. But you often have
need for different implementation behavior.
 

Thomas J. Gritzan

[...about enforcing heap allocated objects...]
Actually, I'm going to expand this...not for your benefit, but for
anyone confused by your disagreement with something I never stated.

A self deleting object is one that will respond to some function calls
by calling "delete this". Example:

struct self_deleter { void do_something() { delete this; }};

The fact that it will do this requires that this object always be
created by a call to new. Note that there is no way for the object to
know how it was created unless it enforces that method so it can't
conditionally delete based on how it was created.

The above class is badly designed because it can be used incorrectly. Your
class advertises an interface that is inappropriate for its use: namely
it can be created on the stack as a local variable. Any such use though
will evoke serious derangement in the program which will most likely
exhibit itself by a hard crash...as in the program just disappears on
the user.

It seems you didn't understand why Alf made the destructor protected. It
serves two purposes:

1) You cannot call 'delete that' on a pointer to this object, so you
enforce the 'delete this' policy.
2) You cannot allocate that object on the stack, because it would
require a call to the destructor, which is protected.

So Alf's solution does everything you want, but avoids factory
functions. This way, you can use all the constructors of the class
normally, and you don't need to provide one factory function for every
constructor of that class.
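
For readers who haven't seen the upthread article, a minimal sketch of the protected-destructor idiom being described (not Alf's actual code):

class SelfDestroying
{
public:
    SelfDestroying() {}          // constructors stay public and usable as normal

    void finish()
    {
        // ...final work...
        delete this;             // the only sanctioned way to destroy the object
    }

protected:
    ~SelfDestroying() {}         // 1) "delete p;" in client code does not compile
                                 // 2) "SelfDestroying s;" does not compile either,
                                 //    since an automatic variable needs an accessible destructor
};

int main()
{
    SelfDestroying* p = new SelfDestroying;   // plain 'new' still works, no factory needed
    p->finish();                              // object deletes itself
    // SelfDestroying s;     // error: ~SelfDestroying() is protected
    // delete p;             // error: same reason (and p is already gone anyway)
}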
 

Alf P. Steinbach

* James Kanze:
Huh? The behavior can easily depend on various elements in the
external environment. Under X, for example, it is frequent to
support several look and feels, determined from an environment
variable, a configuration file or whatever. The immediate
client of the code doesn't know or care about the look and feel,
it wants a button. So your factory function returns a button
with the correct look and feel.

The button example is a bit misleading since there are other concerns that can
force a factory function. Still, there's nothing that prevents direct 'new' of a
class 'Button', where the look and feel can depend on configuration, say. In
fact I have a class 'PushButton' that does this, and it required no special
support from me to get it to do that: a factory function would just be more
code, no-purpose code, and anyway, is clearly not required.

For that matter, I use a similar strategy for handling
differences between Windows and Unix: for reading directories,
for example, if I'm under Unix, the necessary data is a DIR*,
under Windows, a WIN32_FIND_DATA, but neither appears in the
class definition the client sees.

Consider normal code, this from a Boost documentation example:


bool find_file( const path & dir_path,          // in this directory,
                const std::string & file_name,  // search for this name,
                path & path_found )             // placing path here if found
{
    if ( !exists( dir_path ) ) return false;
    directory_iterator end_itr; // default construction yields past-the-end
    for ( directory_iterator itr( dir_path );
          itr != end_itr;
          ++itr )
    {
        if ( is_directory(itr->status()) )
        {
            if ( find_file( itr->path(), file_name, path_found ) ) return true;
        }
        else if ( itr->leaf() == file_name ) // see below
        {
            path_found = itr->path();
            return true;
        }
    }
    return false;
}


Would you really have the 'directory_iterator' allocated by a factory function,
with the client code dealing with a pointer to the iterator, as Noah, and now (I
think inadvertently) you, argue that it should be, or that it's natural to do?


bool find_file( const path & dir_path,          // in this directory,
                const std::string & file_name,  // search for this name,
                path & path_found,              // placing path here if found
                const iter_factory& factory )   // platform-specific iterators
{
    if ( !exists( dir_path ) ) return false;
    directory_iterator::auto_ptr end_itr = factory.new_iter(); // past-the-end
    for ( directory_iterator::auto_ptr itr = factory.new_iter( dir_path );
          *itr != *end_itr;
          ++*itr )
    {
        if ( is_directory((*itr)->status()) )
        {
            if ( find_file( (*itr)->path(), file_name, path_found ) ) return true;
        }
        else if ( (*itr)->leaf() == file_name ) // see below
        {
            path_found = (*itr)->path();
            return true;
        }
    }
    return false;
}


I think not. ;-)

You seem to be missing the difference between contractual
behavior, and implementation behavior.

I don't think that *I* have given that impression.


Cheers,

- Alf
 

James Kanze

* James Kanze:
The button example is a bit misleading since there are other
concerns that can force a factory function.

And? It's obviously not the only example, and there are just as
obviously other possible solutions. It is, however, a
reasonable case where one would provide a factory function which
returns an object of a derived type unknown to the client code.
Still, there's nothing that prevents direct 'new' of a class
'Button', where the look and feel can depend on configuration,
say. In fact I have a class 'PushButton' that does this, and
it required no special support from me to get it to do that: a
factory function would just be more code, no-purpose code, and
anyway, is clearly not required.

Not required, perhaps, but it is certainly the simplest
solution. In particular, in the cases I've used it, the factory
function is in a dynamically loaded object; you don't encumber
the application with all of the look and feels, only with the
one it used.
Consider normal code, this from a Boost documentation example:
bool find_file( const path & dir_path,          // in this directory,
                const std::string & file_name,  // search for this name,
                path & path_found )             // placing path here if found
{
    if ( !exists( dir_path ) ) return false;
    directory_iterator end_itr; // default construction yields past-the-end
    for ( directory_iterator itr( dir_path );
          itr != end_itr;
          ++itr )
    {
        if ( is_directory(itr->status()) )
        {
            if ( find_file( itr->path(), file_name, path_found ) ) return true;
        }
        else if ( itr->leaf() == file_name ) // see below
        {
            path_found = itr->path();
            return true;
        }
    }
    return false;
}
Would you really have the 'directory_iterator' allocated by a
factory function, with the client code dealing with a pointer
to the iterator, as Noah, and now (I think inadvertently) you,
argue that it should be, or that it's natural to do?

No. I use a different model---the directory is a container, and
provides the iterators (which are portable, since they are
declared as friend, and refer to the container to do the work).
But I think you're right with regards to your point---the
factory function certainly isn't visible to the client. (The
Directory container-like object basically uses the compilation
firewall idiom, with the factory function used to provide the
implementation class. And now that I think about it... The real
reason why a factory function is used has more to do with the
way errors are handled than with hiding the implementation
details.)

Thinking about it, I think that most of my uses of factory
functions are more or less hidden from the client code. Those
that aren't generally take an argument which will determine the
actual type returned, and are concerned with loading dynamic
objects---the client code can't just do "new
ISO88598_1_Translator" because 1) it might be necessary to load
the code first, and 2) it might be that there is no such class,
that the call to the factory function getCodeTranslator( "ISO885901" )
actually returns "new SingleByteCodeTranslator(
translationTableISO8859_1 )" (given that every single byte code
translates more or less in the same manner).

(This is an actual example. The code translators are all
immutable objects, and there is a map which contains pointers to
the ones that exist. So the factory function first looks in the
map, to see if an entry is already present, and if not, it
loads the appropriate dynamic object, calls its factory
function, and inserts the object into the map before returning
it.)
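
A loose sketch of the caching factory described in this example; CodeTranslator and loadTranslatorObject are hypothetical stand-ins, getCodeTranslator is the name used above, and the dynamic-object loading is reduced to a comment:

#include <map>
#include <string>
#include <utility>

class CodeTranslator { /* immutable translation tables, etc. */ };

CodeTranslator* loadTranslatorObject( const std::string& name )
{
    // Real code: load the dynamic object for `name` (or fall back to a
    // generic single-byte translator with the right table) and call the
    // factory function it exports.
    return new CodeTranslator;
}

CodeTranslator* getCodeTranslator( const std::string& name )
{
    static std::map<std::string, CodeTranslator*> cache;
    std::map<std::string, CodeTranslator*>::iterator it = cache.find( name );
    if ( it == cache.end() )
        it = cache.insert( std::make_pair( name, loadTranslatorObject( name ) ) ).first;
    return it->second;   // translators are immutable, shared, and never deleted
}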
 

James Kanze

A self deleting object is one that will respond to some
function calls by calling "delete this". Example:
struct self_deleter { void do_something() { delete this; }};
The fact that it will do this requires that this object always
be created by a call to new. Note that there is no way for
the object to know how it was created unless it enforces that
method so it can't conditionally delete based on how it was
created.
The above class is badly designed because it can be used
incorrectly. Your class advertises an interface that is
inappropriate for its use: namely it can be created on the
stack as a local variable. Any such use though will evoke
serious derangement in the program which will most likely
exhibit itself by a hard crash...as in the program just
disappears on the user.

The class that Alf proposed can't be created on the stack or as
a static variable, except in class members or derived class
members, since its destructor is protected. When declaring a
variable, the destructor must be accessible at the point of
declaration.

But perhaps more to the point---you can misuse classes which do
delete this. You can misuse any class, if you really want to.
The question is rather "what is the risk that a reasonable
programmer would do so." I often derive (publicly) from
std::iterator, for example, when I define my own iterator.
std::iterator doesn't have a virtual destructor, nor is the
destructor protected, so client code can easily delete through a
pointer to the std::iterator base, causing undefined behavior.
Do I worry about this? No, because I consider the probability
of that happening in code written by a reasonable programmer to
be almost 0. There's just no reason for a reasonable programmer
to ever have a pointer to std::iterator.

Similarly, most application level classes tend to fall into two
categories: those which are always allocated dynamically, and
have explicitly managed lifetimes, and those which are never
allocated dynamically. There are obviously exceptions, but they
aren't that frequent, and the "delete this" idiom applies
exclusively to the first category: classes which, based on their
overall semantics, make no sense allocated other than
dynamically. By definition---if a class has a managed lifetime,
and should be destructed in response to an external event, then
it can't be used in contexts where the destruction is
"pre-programmed" independently of that event. The delete this
issue is, in fact, completely irrelevant.
Why wouldn't you leave this up to documentation and developer
know-how?
You could...if you could make sure everyone A) read all the
documentation, B) could remember everything they've read, and
C) never made a mistake and assumed that because a function is
there to BE called that it can be called. The steps required
to protect the user (often yourself) are minimal so really the
question is, why WOULD you leave this up to documentation and
developer know-how?

Because, like so many other things, you have to. Again, whether
the class uses delete this, or the delete is managed by a
transaction, or whatever, really isn't relevant. A programmer
cannot successfully use a class unless he knows at least the
basics of what the class does, and what it's purpose is. If the
role of the class requires a managed lifetime, then the
programmer can't use the class successfully without knowing
this. The name of the class will often give a strong hint, but
in the end, some documentation will probably be necessary. And a
programmer who uses a class without reading the documentation,
on the odd chance that it might do something useful in his case,
is a programmer who doesn't last long.
 
