std::string name4 = name4;


James Kanze

How is "p == 0" what I mean? "p == NULL" or "p == nullptr" would be more
like what I mean.

And in future C++, "p == nullptr" will be the best way to write
it. In the meantime, "p == NULL" expresses the intent quite
well.
Sure, a pointer IS an integer,

No, a pointer is NOT an integer.
but semantically, there is a big difference for me. In my opinion,
"!p" is much more intuitive and
closer to what I mean: A pointer that doesn't have a value, so it is NOT
really a (valid) pointer.

Well, one could simply define a global function:
template<typename T>
bool
isValid(T* p)
{
    return p != 0; // or however else you wanted to write it.
}
This actually has some concrete advantages---you can overload it
to work with any smart pointers you might use, and use the same
syntax to compare all pointers, smart or raw. In practice,
however, it doesn't seem to be accepted practice---at least,
I've never seen any place which used it. (I actually rather
like it---it really says what I mean. But I seem to be in
a minority of one here.)
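For example, the overload for a smart pointer might look something like
this (just a sketch, with std::shared_ptr standing in for whichever
smart pointer you actually use, alongside the raw-pointer version
above):

#include <memory>

template<typename T>
bool
isValid(std::shared_ptr<T> const& p)
{
    return p.get() != 0;   // same test, same calling syntax as for raw pointers
}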
 

Felix Palmen

* James Kanze said:
No, a pointer is NOT an integer.

In the way it is compared, it is. In terms of /some/ mathematical
properties, it is, too. In terms of implementation, it is, at least on
the platforms I know about. But that's what I meant: Semantically, it's
something completely different.
Well, one could simply define a global function:
template<typename T>
bool
isValid(T* p)
{
    return p != 0; // or however else you wanted to write it.
}
This actually has some concrete advantages---you can overload it
to work with any smart pointers you might use
[...]

I think that's a nice idea. But I'd call it something different -- maybe
"isAlive"? I normally expect isValid() to be a member function doing
some plausibility checks on the actual value(s) of the instance.

Regards, Felix
 

James Kanze

In the way it is compared, it is.

Do you mean the fact that we use == to compare it? If so,
a double is also an integer.
In terms of /some/ mathematical properties, it is, too.

Which? It doesn't have any of the properties of integers that
I know of: you can't add two pointers, and when you subtract
them, you don't get a value of the same type.
In terms of implementation, it is, at least on the platforms
I know about.

And on some of the platforms I know of, the generated code for
comparing pointers is significantly different from that used to
compare integers. (There's a reason why the standard doesn't
require comparison for inequality to work universally on
pointers.)
 

Öö Tiib

The real question is whether ownership even has a meaning.  An
object is an autonomous entity, with its own behavior.  It
doesn't (necessarily, at least) "belong" to any other object.

That other, owning object need not be a particularly real "owner" but
rather some whole for a part. If there is none, then it feels that i
lack a step of hierarchy and some things belong nowhere. It may be for
example a "collective" or "population" if nothing else.
Just the opposite.  If you forget about ownership, and consider
that an object has responsibilities, and that one of those
responsibilities is to delete itself when it no longer has any
reason to continue to exist, it simplifies a lot of things.

Presence of ownership (more properly a whole/part hierarchy) makes
things like dumping the full state of everything, and saving and
restoring situations, simpler. Serialization is often a headache or
nightmare that is good to simplify. Other things it helps with are
achieving full exception safety or lockless synchronization. These
topics also take a master's hand to get right.

Of course there are tons of situations that i have not been in. Maybe
the application does not need the advantages that such a hierarchy
gives. Maybe hierarchy is too "feudal" and "inflexible" for some other
thing? What are these "lot of things"?
 (Of
course, a lot depends on the application.  If you have complex
transactions, which have to be atomic, it's very difficult to
roll back a deleted object.  So regardless of who makes the
decision to delete---the object itself or some other
object---the "delete" will take the form of a request to the
transaction manager, who will do the actual delete in the commit
phase.)

Probably i see now. You do not have whole/part relations; instead some
finalizing is done by the transaction manager. Sounds interesting, but
i suspect that there is complex magic in there, and i really cannot see
how there could not be.
I've yet to find a good reason why an autonomous object should
belong to anything.  This whole concept of ownership has been
artificially introduced to C++ in order to pretend to manage
memory issues, but it really doesn't make any sense at a larger
level.  (Most of the time, of course.  There are exceptions.)

For me it somehow makes perfect symmetrical sense at the largest level
... "ashes to ashes, dust to dust". Something that controls creation
also controls destruction. Initially it may be a stub without much
functionality, but further maintenance may always face a need for some
better book-keeping, and when there is a hierarchy present by design it
is simpler to extend its functionality. For example, when the owning
whole is a (polymorphic?) collective, it is simple to add various
factory / recycling / graveyard / necromancy / reuse / statistics /
debugging facilities to it. You may say of course that it is all for
managing "memory issues", but i do not feel that way.

At the top of the hierarchy are singletons. For singletons i avoid
having classes; that makes their inner design most flexible. There may
be objects behind the curtains, but that is an implementation detail
then, and i have no need for "delete this" there either.
This is independent of who does the delete---in my parenthetical
example above, the transaction manager certainly doesn't own the
object, even if it does do the actual delete in its commit
phase.  (And the object lifetime can obviously be more than
a single transaction.)

Yes, sounds so. Possibly i do not fully understand your example about
transactions. It helps to minimize undo/redo problems?
 

Timothy Madden

Lynn said:
I expect the C4700 behavior for all types, not just the hardware types. I
wonder if this is possible to turn on in the MS compilers.

The standard calls these /fundamental/ types, or even basic types, and
not hardware types. :)

I think the lack of a warning for std::string s = s comes from the
fact that the std::string copy constructor involved here only takes a
const /reference/ to s when invoked, and not the /value/ of s, and the
compiler finds it acceptable for some reason to take a reference to a
yet-to-be-initialized variable.

Anyway, being able to use s in order to construct s is in general a
good thing, and the language should allow it, as it does now. However,
string s = s is obviously a run-time error, and a compiler should issue
a warning.
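For example (a small sketch of the behaviour being discussed; C4700 is
the MSVC warning Lynn mentions):

#include <string>

int main()
{
    int i = i;          // uses the indeterminate value of i: this is what C4700 catches
    std::string s = s;  // the copy constructor only binds a const reference to the
                        // still-uninitialized s; undefined behaviour at run time,
                        // but no warning is issued
    (void)i;
    (void)s;
}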

Thank you,
Timothy Madden
 

James Kanze

That other, owning object need not be a particularly real "owner" but
rather some whole for a part. If there is none, then it feels that i
lack a step of hierarchy and some things belong nowhere. It may be for
example a "collective" or "population" if nothing else.

But why do you need to introduce additional, owning "wholes", if
the application logic doesn't require them? That's just added
complexity.
Presence of ownership (more properly a whole/part hierarchy) makes
things like dumping the full state of everything, and saving and
restoring situations, simpler.

For some definition of "everything"?

What you're talking about doesn't sound like ownership to me,
but rather navigation. If you need to navigate over
"everything" (for whatever reasons), then you need to provide
the means of doing so. It has nothing to do with ownership or
object lifetimes.
Serialization is often a headache or nightmare that is good to
simplify. Other things it helps with are achieving full exception
safety or lockless synchronization. These topics also take a master's
hand to get right.

But ownership in an absolute sense doesn't really play a role
with either. Exception safety is just a form of transactional
integrity, which doesn't involve ownership (except insofar as
transactions can be said to "own" the objects in the
transaction). And the "ownership" relevant to thread safety is
which thread "owns" an object (at a particular time).
Of course there are tons of situations that i have not been in. Maybe
the application does not need the advantages that such a hierarchy
gives. Maybe hierarchy is too "feudal" and "inflexible" for some other
thing? What are these "lot of things"?
Probably i see now. You do not have whole/part relations; instead some
finalizing is done by the transaction manager.

It depends on the application. If the application handles
external requests which may modify any number of objects
(including creating new objects and deleting existing objects),
and such requests must be atomic, then you need some sort of
transaction management.
Sounds interesting, but i suspect that there is complex magic in
there, and i really cannot see how there could not be.

No magic, but if you need transactional integrity, you need some
way of rolling back any changes that might have been made before
failure occurs. (If at all possible, of course, it is
preferable to ensure that everything will work, and report any
errors and abort the transaction before changing anything. But
this isn't always possible.)

[...]
Yes, sounds so. Possibly i do not fully understand your
example about transactions. It helps to minimize undo/redo
problems?

It ensures the basic integrity and internal coherence of the
system. Depending on the atomicity required, it may be
trivial, but I've usually worked on larger systems, where it
required some additional logic.
 

Öö Tiib

But why do you need to introduce additional, owning "wholes", if
the application logic doesn't require them? That's just added
complexity.

In my experience the requirements just forget to mention some level of
abstraction there. Requirements always imply there is something. The
application logic later needs it there. "Collective" or "population"
are just examples; usually something with more finesse is needed, like
"collective in location".
For some definition of "everything"?

Yes, it is everything that is relevant for something. How can it
be that they do not need to be saved/dumped/serialized? And in the
context of what are they serialized, when they are?
What you're talking about doesn't sound like ownership to me,
but rather navigation.  If you need to navigate over
"everything" (for whatever reasons), then you need to provide
the means of doing so.  It has nothing to do with ownership or
object lifetimes.

Somehow it feels that these TransactionManagers and NavigationManagers
and so on are there to avoid a clear tree-like hierarchy of data.
However, they do not contradict clear tree-like data. They feel like
some optimizations/shortcuts, and so it is easier to test and verify
that the "managers" work correctly.
But ownership in an absolute sense doesn't really play a role
with either. Exception safety is just a form of transactional
integrity, which doesn't involve ownership (except insofar as
transactions can be said to "own" the objects in the
transaction). And the "ownership" relevant to thread safety is
which thread "owns" an object (at a particular time).

A position in the object hierarchy does not have to be static.
Considering how unpopular std::auto_ptr is ... people seemingly expect
a thing to rot at the place where it was born. However, ownership may
be transferred when needed.
It depends on the application.  If the application handles
external requests which may modify any number of objects
(including creating new objects and deleting existing objects),
and such requests must be atomic, then you need some sort of
transaction management.


No magic, but if you need transactional integrity, you need some
way of rolling back any changes that might have been made before
failure occurs. (If at all possible, of course, it is
preferable to ensure that everything will work, and report any
errors and abort the transaction before changing anything. But
this isn't always possible.)

Note that if the object under transaction has a hierarchical layout of
data, then that makes achieving such integrity simpler. Make a copy of
the object before the transaction, and swap in the copy and discard the
spoiled one when the transaction fails. It is not always the best
performing solution, but it is the simplest to implement.
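Roughly like this (just a sketch; Whole and Request are placeholder
names, and Whole is assumed to be copyable with a non-throwing swap):

struct Request { /* what the transaction should do (placeholder) */ };

struct Whole   // placeholder for the hierarchical object under transaction
{
    void apply(Request const&) { /* modify members; may throw half-way */ }
    void swap(Whole&) throw()  { /* swap members; never throws */ }
};

void runTransaction(Whole& target, Request const& request)
{
    Whole backup(target);         // copy of the whole before the transaction
    try {
        target.apply(request);    // may fail part-way through
    }
    catch (...) {
        backup.swap(target);      // swap the untouched copy back in ...
        throw;                    // ... and the spoiled state dies with backup
    }
}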
    [...]
Yes, sounds so. Possibly i do not fully understand your
example about transactions. It helps to minimize undo/redo
problems?

It ensures the basic integrity and internal coherence of the
system. Depending on the atomicity required, it may be
trivial, but I've usually worked on larger systems, where it
required some additional logic.
 

James Kanze

In my experience the requirements just forget to mention some
level of abstraction there.

Or you're inventing some additional layers (and additional
complexity) that just aren't necessary.
Requirements always imply there is something.

Obviously. Starting at the universe. But modeling things that
aren't necessary to the application introduces additional
complexity.
The application logic later needs it there. "Collective" or
"population" are just examples; usually something with more
finesse is needed, like "collective in location".

I've never found this to be the case. Unless such
a "collective" solves a real problem, its introduction is
artificial and adds unnecessary complexity.
Yes, it is everything that is relevant for something. How can it
be that they do not need to be saved/dumped/serialized? And in the
context of what are they serialized, when they are?

When do you ever serialize the entire application? Depending on
the case, there may be "collections", but most of the time,
they're outside the application, sitting in a data base
somewhere. There's never a case where you'd want to serialize
all of the ClientOrders, or IP addresses, for example; you update
the entries for each in the database, when you modify them.
Somehow it feels that these TransactionManagers and
NavigationManagers and so on are there to avoid a clear tree-like
hierarchy of data.

There is no "NavigationManager". Navigation is handled by the
objects themselves. And the TransactionManager doesn't avoid
anything---it simply deals with a concrete requirement of the
application. There is no tree-like hierarchy of the data in
most applications; typically, the relationships are far more
complex, and usually, they can and should be considered peer
relationships.
However, they do not contradict clear tree-like data.
They feel like some optimizations/shortcuts, and so it is
easier to test and verify that the "managers" work correctly.

A TransactionManager is part of the essential program logic.
The requirements specify the rules concerning transactional
integrity.
A position in the object hierarchy does not have to be static.

There is no need for a hierarchy to begin with. Unless it's
part of the requirements, it's something artificially imposed on
the design, limiting and adding complexity.
Considering how unpopular std::auto_ptr is ...

What does that have to do with the problem? The reason auto_ptr
is unpopular is that its specification never stopped changing,
so people were never sure about what it might do. (That was
then---we use it a lot now.)

[...]
Note that if the object under transaction has a hierarchical
layout of data, then that makes achieving such integrity simpler.

The objects managed by the transaction might have hierarchical
data, although in many cases, it is flat. The objects managed
by the transaction, however, have many and varied relationships
between themselves, most of which are peer relationships, and
not hierarchical. You can impose an additional, artificial
hierarchical relationship on them if you want, but it's just
additional, artificial complexity.
Make a copy of the object before the transaction, and swap in the
copy and discard the spoiled one when the transaction fails. It is
not always the best performing solution, but it is the simplest to
implement.

It's also one of the most frequently used. For objects which
are modified. The issues are more complex when creation and
deletion are taken into account.
 

Öö Tiib

Or you're inventing some additional layers (and additional
complexity) that just aren't necessary.


Obviously.  Starting at the universe.  But modeling things that
aren't necessary to the application introduces additional
complexity.

Technically you are correct. There is a theoretical possibility that i
add unnecessary complexity.

I stick to it since it has never really hurt nor made anything more
complex. It is like how i stick to the concepts of object-oriented
design (like information hiding, composition, inheritance and
polymorphism). It is clear that (and easy to demonstrate when) it adds
a little overhead of complexity. It never adds too much overhead, and
so i stick to it.
I've never found this to be the case.  Unless such
a "collective" solves a real problem, its introduction is
artificial and adds unnecessary complexity.

Usually it solves a real problem. To extend the example of "collective
in location": various effects are location-local (like thread-local?).
You do not need to build a separate messaging, signaling, observing
system into each of the objects to tell the whole population in a
location about such effects, for example. Instead you can tell it to
the "collective in location".
When do you ever serialize the entire application?  

Almost never. That is the point. A tree-like hierarchy helps me to
simplify and to sort out what the minimal "whole" set is that has to be
serialized.

[ ... ]
There is no "NavigationManager".  Navigation is handled by the
objects themselves.  And the TransactionManager doesn't avoid
anything---it simply deals with a concrete requirement of the
application.  There is no tree-like hierarchy of the data in
most applications; typically, the relationships are far more
complex, and usually, they can and should be considered peer
relationships.

Ok. Let's say i need a complex multidirectional graph. I implement it
on the basis of a vertex list and a node list. For me such an
underlying simple structure helps to implement and verify and serialize
and test that complex graph structure and the algorithms on it. Now ...
how does such an underlying tree-like structure ("graph" -> "node list"
-> "nodes" and "graph" -> "vertex list" -> "vertex") make something
more complex? On the contrary, the node and vertex lists are maybe not
naturally there, but for me they make everything simpler and more
robust.
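A rough sketch of what i mean (Node, Vertex and Graph are just
placeholder names here; serialization would similarly just walk the two
owned lists):

#include <cstddef>
#include <vector>

struct Node   { /* e.g. a station (placeholder) */ };
struct Vertex { Node* from; Node* to; /* e.g. a connection (placeholder) */ };

class Graph                            // the owning "whole"
{
    std::vector<Node*>   nodes_;       // "graph" -> "node list"   -> "nodes"
    std::vector<Vertex*> vertices_;    // "graph" -> "vertex list" -> "vertex"
public:
    Graph() {}
    ~Graph()                           // cleanup only has to walk the two owned lists,
    {                                  // however complex the relations in the vertices are
        for (std::size_t i = 0; i != vertices_.size(); ++i) delete vertices_[i];
        for (std::size_t i = 0; i != nodes_.size(); ++i)    delete nodes_[i];
    }
private:
    Graph(Graph const&);               // owns its parts: not copyable
    Graph& operator=(Graph const&);
};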
A TransactionManager is part of the essential program logic.
The requirements specify the rules concerning transactional
integrity.

Hmm. Are you claiming that the presence of such a TransactionManager
somehow contradicts, or is limited by, a tree-like data hierarchy? A
data hierarchy likely simplifies developing such TransactionManagers.
There is no need for a hierarchy to begin with.  Unless it's
part of the requirements, it's something artificially imposed on
the design, limiting and adding complexity.

What does it really limit? It adds one view onto the data (the data
hierarchy), so the presence of that view has to be taken into account,
yes. But that comes automatically with experience. For one, you may not
"delete this" something just like that, but i have not felt the urge to
do it anyway. Such a limit does not feel too intrusive for me. However,
it has removed a number of "what the hell do we do now" situations.
The objects managed by the transaction might have hierarchical
data, although in many cases, it is flat. The objects managed
by the transaction, however, have many and varied relationships
between themselves, most of which are peer relationships, and
not hierarchical. You can impose an additional, artificial
hierarchical relationship on them if you want, but it's just
additional, artificial complexity.

The data hierarchy may be flat. If i need to manage complex relations
too heavily, then i may create objects that are, well ... called
"relation". Then i put the relations into the hierarchy as well (so a
relation does not belong to the related objects) to make them simpler
to manage. It may be artificial and unnecessary over-engineering for
one task (then i avoid it) and a good design decision for another task.
When, however, i do not have such a hierarchy present at all, then i
may face difficulty (no place) to add such an extended design.
It's also one of the most frequently used.  For objects which
are modified.  The issues are more complex when creation and
deletion are taken into account.

When there is a great likelihood that a number of the objects
participating in an atomic operation will not be modified, created nor
deleted during the transaction, but it is hard to predict, then lazy
copying (copy-on-write) may boost performance considerably. The
algorithm always has to discard one copy after the transaction (either
the original set or the results of the failed transaction), and it is
cheaper to discard a copy that was never really made in the first
place. Again ... without a hierarchy such lazy copies may be somehow
hanging in the air and may make things more error-prone (at least for
the average maintainer).
 

James Kanze

Usually it solves a real problem. To extend the example of "collective
in location": various effects are location-local (like thread-local?).
You do not need to build a separate messaging, signaling, observing
system into each of the objects to tell the whole population in a
location about such effects, for example. Instead you can tell it to
the "collective in location".

And then? This artificial "collective in location" doesn't
really need to know; other objects may or may not need to know.

Or does the "collective in location" notify all of the other
objects in it? This seems a bit strange to me: perhaps some of
the interested objects are in a different "collective in
location", and most of the objects in the "collective in
location" are probably not interested.
Almost never. That is the point. A tree-like hierarchy helps me to
simplify and to sort out what the minimal "whole" set is that has to be
serialized.

The "whole" set which needs to be serialized is the set of
objects modified in the transaction (if the serialization is for
persistency), or the objects which should be returned from the
request (if the serialization is for data transfer)---in the
latter case, it's entirely possible that some of the objects are
on the stack.
[ ... ]
There is no "NavigationManager". Navigation is handled by the
objects themselves. And the TransactionManager doesn't avoid
anything---it simply deals with a concrete requirement of the
application. There is no tree-like hierarchy of the data in
most applications; typically, the relationships are far more
complex, and usually, they can and should be considered peer
relationships.
Ok. Let's say i need a complex multidirectional graph. I implement it
on the basis of a vertex list and a node list. For me such an
underlying simple structure helps to implement and verify and serialize
and test that complex graph structure and the algorithms on it. Now ...
how does such an underlying tree-like structure ("graph" -> "node list"
-> "nodes" and "graph" -> "vertex list" -> "vertex") make something
more complex? On the contrary, the node and vertex lists are maybe not
naturally there, but for me they make everything simpler and more
robust.

What role do the node list and vertex list play in the
application? I've not done a lot of work with graphs, but in
general, the graph itself contains the nodes.
Hmm. Are you claiming that the presence of such a TransactionManager
somehow contradicts, or is limited by, a tree-like data
hierarchy?

No. It's orthogonal to the data hierarchy.
A data hierarchy likely simplifies developing such
TransactionManagers.

Just the opposite. It's one more relationship you have to worry
about.
What does it really limit? It adds one view onto the data (the data
hierarchy), so the presence of that view has to be taken into account,
yes. But that comes automatically with experience. For one, you may not
"delete this" something just like that, but i have not felt the urge to
do it anyway. Such a limit does not feel too intrusive for me. However,
it has removed a number of "what the hell do we do now" situations.

So you have to have some unnatural logic when an object
determines that its correct response to a stimulus is to die?

The point is that you're doing extra work for nothing.

[...]
When there is a great likelihood that a number of the objects
participating in an atomic operation will not be modified, created nor
deleted during the transaction, but it is hard to predict, then lazy
copying (copy-on-write) may boost performance considerably. The
algorithm always has to discard one copy after the transaction (either
the original set or the results of the failed transaction), and it is
cheaper to discard a copy that was never really made in the first
place. Again ... without a hierarchy such lazy copies may be somehow
hanging in the air and may make things more error-prone (at least for
the average maintainer).

That's simply not true. I've never seen an application where
copy on write was appropriate for transaction management; but
that's not really a problem if that's what you want to do. What
is important is that you notify the TransactionManager in all
such cases, since in the end, it is the TransactionManager (and
only the TransactionManager) who can determine which set of
objects to keep. If you're modifying the original object (not
always a good idea if you're multithreaded), then you could
argue that the TransactionManager is the owner of all of the
backup copies.
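Concretely, something along these lines (only a sketch; Object stands
in for whatever copyable entity participates in the transaction, and
the member names are invented):

#include <map>
#include <utility>

struct Object { /* copyable entity data (placeholder) */ };

class TransactionManager
{
    std::map<Object*, Object> backups_;   // the manager holds the backup copies
public:
    void willModify(Object& obj)          // objects notify the manager before changing themselves
    {
        if (backups_.find(&obj) == backups_.end())
            backups_.insert(std::make_pair(&obj, obj));
    }
    void commit()                         // keep the modified objects, drop the backups
    {
        backups_.clear();
    }
    void rollback()                       // restore every saved state
    {
        for (std::map<Object*, Object>::iterator it = backups_.begin();
                it != backups_.end(); ++it)
            *it->first = it->second;
        backups_.clear();
    }
};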
 

Öö Tiib

And then?  This artificial "collective in location" doesn't
really need to know; other objects may or may not need to know.

Or does the "collective in location" notify all of the other
objects in it?  This seems a bit strange to me: perhaps some of
the interested objects are in a different "collective in
location", and most of the objects in the "collective in
location" are probably not interested.

Bad design is always possible. Let's just imagine that i put effort
into avoiding making a hierarchy that supports telling all idle bakers
to fix more shoes? I believe that there always exists at least one good
hierarchy that supports the goals of the software. The same
functionality is possible to achieve in multiple ways, so ... i can not
argue here.
The "whole" set which needs to be serialized is the set of
objects modified in the transaction (if the serialization is for
persistency), or the objects which should be returned from the
request (if the serialization is for data transfer)---in the
latter case, it's entirely possible that some of the objects are
on the stack.

Yes. Static, dynamic and automatic storage are there in C++ to allow
optimizing memory management. Why did you bring objects on the stack
into a discussion about "delete this"? That is another (subtle)
problem; an average novice may fail to guard such suicidal objects from
being on the stack. It is somewhat easier to keep track of the storage
type underneath instances (of the same class) with some external
book-keeper.
What role do the node list and vertex list play in the
application?  I've not done a lot of work with graphs, but in
general, the graph itself contains the nodes.

Graph, nodes and vertexes are just abstract words. You said that a
tree is too trivial, so i brought a graph as an example of how a graph
is technically implementable as a tree with two branches. As an example
for an imaginary application, let's take "Station" as the node and
"Railway" as the vertex. The node list is the "list of railroad
stations" and the vertex list is the "list of railways".

The situation where the hierarchy is missing: no artificial "Railway
System" graph is present, since it was not explicitly required. Just
Stations, the Objects. If a Station discovers that all railways coming
to and going from it have been removed, then ... "delete this"? Maybe
you can give a better example; based on such an example, the lack of
owning objects is outright egregious.

[...]
So you have to have some unnatural logic when an object
determines that its correct response to a stimulus is to die?

The point is that you're doing extra work for nothing.

My "unnatural" logic is that object can itself decide to turn into
corpse of object as maximum self-violence. Ok now you say that it is
unneeded two phase destruction ... or that second phase of mine is
done by wrong book-keeper. Lets just agree that we disagree. ;)
That's simply not true.  I've never seen an application where
copy on write was appropriate for transaction management; but
that's not really a problem if that's what you want to do.  What
is important is that you notify the TransactionManager in all
such cases, since in the end, it is the TransactionManager (and
only the TransactionManager) who can determine which set of
objects to keep.  If you're modifying the original object (not
always a good idea if you're multithreaded), then you could
argue that the TransactionManager is the owner of all of the
backup copies.

I did not say that copy-on-write replaces transaction management. It
is one possible optimization (unpopular for some strange reasons). You
can never modify the original with copy-on-write unless you are the
single owner of the copies. The original is immutable for all owners of
a copy. Since the original is immutable for everybody, it is quite good
for multithreading: you can read it without locks. If one of the owners
is a TransactionManager, then that is fine, since the
TransactionManager never writes into his "backup copy", and so there
will never be the worst-case writing races either (that is when CoW
makes one copy too many). I am not saying that it is the best solution,
but the extra level of indirection of CoW is often cheaper than read
locks or always really copying.
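To make it concrete, the core of it is something like this (only a
sketch, ignoring the threading subtleties; Data and CowHandle are
placeholder names, and std::shared_ptr plays the role of whatever
shared handle is actually used):

#include <memory>

struct Data { /* whatever the shared object holds (placeholder) */ };

class CowHandle
{
    std::shared_ptr<Data> data_;               // copies of CowHandle share the same Data
public:
    explicit CowHandle(Data const& d) : data_(new Data(d)) {}

    Data const& read() const { return *data_; }    // lock-free read of the shared data

    Data& writable()                               // call this before modifying
    {
        if (!data_.unique())                       // others still see the original ...
            data_.reset(new Data(*data_));         // ... so the real copy is made only now
        return *data_;                             // sole owner: safe to modify in place
    }
};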
 

James Kanze

Bad design is always possible. Let's just imagine that i put effort
into avoiding making a hierarchy that supports telling all idle bakers
to fix more shoes? I believe that there always exists at least one good
hierarchy that supports the goals of the software. The same
functionality is possible to achieve in multiple ways, so ... i can not
argue here.

I don't doubt that one *can* always find or create some sort of
ownership relationship which englobes, directly or indirectly,
all objects. The question is: why? What does it buy you,
except extra complexity and additional effort?
Yes. Static, dynamic and automatic storage are there in C++
to allow optimizing memory management.

No. At a higher level, it has nothing to do with memory
management. It's a question of object lifetime, and the
distinction between value semantics and entity semantics.
Why did you bring objects on the stack into a discussion about
"delete this"?

Your efforts to give everything an owner.
That is another (subtle) problem; an average novice may fail to
guard such suicidal objects from being on the stack.

That is, quite frankly, ridiculous. The type of an object
pretty much determines its lifetime requirements, and I've yet
to encounter any type which would be used as a local variable
sometimes, and other times be allocated dynamically.
It is somewhat easier to keep track of the storage type underneath
instances (of the same class) with some external book-keeper.

It is even easier to design the application so that every type
has a distinct role and responsibilities, so you don't need to
keep track of storage type.

[...]
Graph, nodes and vertexes are just abstract words. You said that a
tree is too trivial, so i brought a graph as an example of how a graph
is technically implementable as a tree with two branches. As an example
for an imaginary application, let's take "Station" as the node and
"Railway" as the vertex. The node list is the "list of railroad
stations" and the vertex list is the "list of railways".
The situation where the hierarchy is missing: no artificial "Railway
System" graph is present, since it was not explicitly required. Just
Stations, the Objects. If a Station discovers that all railways coming
to and going from it have been removed, then ... "delete this"? Maybe
you can give a better example; based on such an example, the lack of
owning objects is outright egregious.

It's a perfectly good (albeit simple) example. A railway
station knows which lines serve it, and a line knows which
stations it serves. (That's navigation, not ownership.) And
each has to be an observer of the other, so that they can update
the various relationships. Some external events (commands, or
whatever) may affect either the stations or the lines---if the
external event causes one or the other to disappear, it must
notify its observers.

Note that this is independent of the "delete this" issue.
Depending on how the code is organized, the delete can come from
the object itself ("delete this"), as part of its behavior, or
from the ephemeral event (which because it is ephemeral, can't
really be considered an owner).

If there is a need to find a particular station or line from
some external identifier (name, etc.), then all of the objects
will also be in a map of some sort. But that isn't an owner,
either; it's just a service entity. Typically, it is the
constructor and destructor of the object (station, line) which
inserts and removes the object from such a map.
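In code, that registration is no more than this (just a sketch; the
class and member names are invented for the example):

#include <map>
#include <string>

class Station
{
public:
    explicit Station(std::string const& name)
        : name_(name)
    {
        registry()[name_] = this;               // the constructor inserts the object ...
    }
    ~Station()
    {
        registry().erase(name_);                // ... and the destructor removes it
    }
    static Station* find(std::string const& name)   // lookup by external identifier
    {
        std::map<std::string, Station*>::const_iterator it = registry().find(name);
        return it == registry().end() ? 0 : it->second;
    }
private:
    static std::map<std::string, Station*>& registry()  // a service entity, not an owner
    {
        static std::map<std::string, Station*> theMap;
        return theMap;
    }
    std::string name_;
    Station(Station const&);                    // entity semantics: not copyable
    Station& operator=(Station const&);
};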
 

Öö Tiib

I don't doubt that one *can* always find or create some sort of
ownership relationship which englobes, directly or indirectly,
all objects.  The question is: why?  What does it buy you,
except extra complexity and additional effort?

It is all the same benefits as why we have centralized book-keeping in
real-life organizations and why everybody always reports (at least on
paper) to someone. Normally everybody does their everyday (quick) work
using horizontal relations between themselves and their surrounding
colleagues, bosses and subordinates. In exceptional cases, like when
there is a need for something far away or for organization-wide
communication, the information will flow following the (slow) lines of
authority. It is to lower the noise-to-signal ratio and the need to
manage too numerous relations, if nothing else.

There will always be some late requirements that are complex to
implement without such a supporting structure, and it is also hard to
explain (to the authors of the requirements) what is so tricky about
it. The other case is that it is not so much tricky as it needs a new
dependency (and horizontal relation) to be formed between entities that
are far from each other and mostly unrelated. The people with the
visions and requirements usually somehow expect such a structure behind
things, even if they do not explicitly say it. Some design patterns
(like "visitor", which i do not really like) expect something like a
hierarchy and may face problems when it is missing.
No.  At a higher level, it has nothing to do with memory
management.  It's a question of object lifetime, and the
distinction between value semantics and entity semantics.


Your efforts to give everything an owner.

Oh. Objects on the stack have an owner. Technically the object on the
stack is owned by the member function, and the member function is owned
by whoever is "this" in it. I do not usually put such temporaries into
the same lists with stationary objects. I suspect it is some
conflicting idiom about objects adding themselves into the related
navigational and servicing collectives when constructed? With a clear
hierarchy i do not need such idioms; the owners are decided externally.
That is, quite frankly, ridiculous. The type of an object
pretty much determines its lifetime requirements, and I've yet
to encounter any type which would be used as a local variable
sometimes, and other times be allocated dynamically.

This does not feel like an entirely natural concept. At least some
sort of background information is missing. For example, this typical
exception-safety thing is often fine enough (or if not, it depends a
lot on the background):

// this is a dynamically allocated T
void T::doSomething()
{
    T tmp( *this );  // tmp is a T on the stack
    // ...
    // manipulate tmp; failures throw
    // ...
    swap( tmp );     // does not throw
}

So "this" there sort of owns its temporary copy that does dangerous
things.
It is even easier to design the application so that every type
has a distinct role and responsibilities, so you don't need to
keep track of storage type.

Sometimes there is a difference in roles and responsibilities per
object. For example, in Java you have to make different classes for
"Type" and "ImmutableType". In C++ we sometimes do something similar
(iterator and const_iterator); in the most typical situations we have
"Type*" and "Type const*". But that perhaps goes far from "delete this"
and "data hierarchies".

    [...]
It's a perfectly good (albeit simple) example.  

A graph is usually an ok example for a complex set of relations; i
meant that the railway system is maybe too simple a subtype of graph,
and a self-destroying railway line feels like too unbelievable a
concept.
A railway
station knows which lines serve it, and a line knows which
stations it serves.  (That's navigation, not ownership.)  And
each has to be an observer of the other, so that they can update
the various relationship.  Some external events (commands, or
whatever) may affect either the stations or the lines---if the
external event causes one or the other to disappear, it must
notify its observers.

Yes, a line should know what it is between. A station may also have
such direct knowledge about lines. There is also the possibility that
the list of lines to a station does not need to be cached in the
station. If the lines to a station are rarely needed, then the station
may ask its owner, the "railway system", each time what lines service
it. In that case it is also OK if the station is not observing the
lines.
Note that this is independent of the "delete this" issue.
Depending on how the code is organized, the delete can come from
the object itself ("delete this"), as part of its behavior, or
from the ephemeral event (which because it is ephemeral, can't
really be considered an owner).

For me such "ephemeral event" causing destruction sounds like a
natural disaster. :D If t is such it can tell to railroad that it is
now "broken" and to station that it is now "ruins". If the ruins of
stations and broken lines will be removed or repaired can not be
decided by neither event nor object. It may be organized other way but
it feels less natural for me ... so it is like philosophy issue? On
any case i do not feel like doing extra work or adding extra artifical
complexity by maintaining the lists.
If there is a need to find a particular station or line from
some external identifier (name, etc.), then all of the objects
will also be in a map of some sort.  But that isn't an owner,
either; it's just a service entity.  Typically, it is the
constructor and destructor of the object (station, line) which
inserts and removes the object from such a map.

Yes, for me these "list of stations" and "list of lines" are such
service entities (not necessarily lists in the C++ sense) that allow
the owner graph, the "railway system", to get statistics and to find,
extract, add, remove and manage. The railway system therefore takes
responsibility there. It is often hard to squeeze out every such need
from the clients of the software early, and adding something like that
later sometimes results in duplicate constructs fulfilling overlapping
responsibilities.

A typical example is when someone needs to add some sort of statistics
and then starts to form such a list in the wrong place. It is simpler
to realize that it is wrong when there is already something else in the
picture with related, more universal responsibilities.
 
