Sune said:
<Is action table not just another name for vtable, i.e. an array of
function pointers? If so, the performance ought to be the same, but C++
will have better abstraction, since you will not have to manage the
tables manually.>
Manually, why? Regarding performance, see below.
Hmm... If action tables can be handled automatically by the compiler,
and provide no call overhead, maybe I should be using them!? Could you
provide me with a hint as to what they are? I have to admit I had never
heard the term prior to this thread.
<I would
happily sacrifice the extra cycles to get more encapsulation and
abstraction - I value my maintenance time more than the CPU's execution
time. >
Virtual functions negatively affect performance in 3 main ways:
1) The constructor of an object containing virtual functions must
initialize the vptr table, which is the table of pointers to its member
functions.
Are you sure? In the implementations I have seen (Renesas C++ and GCC),
the vtables are created as constant data. The only thing initialized in
the constructor is a pointer to the current data type's vtable.
2) Virtual functions are called using pointer indirection, which
results in a few extra instructions per method invocation as compared
to a non-virtual method invocation.
On the Renesas H8S processor, with GCC and optimization on, the extra
indirection consists of two assembler instructions. This is not much,
to me. My default strategy would be to allow virtual functions
anywhere, and then later on check the performance. Are there two
classes in my system that suffer heavily from the extra indirection?
OK, then I rewrite those two. As someone else said: "Premature
optimization is the root of all evil."
BTW, how will the function table approach work here? Will it really
have zero overhead, compared to a direct function call?
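My guess is it will not be zero-overhead. A hand-managed action table is just an array of function pointers, and calling through it is still an indirect call: load the slot, then call through the pointer, much like a vtable dispatch. A sketch with invented names:

```cpp
// A hand-managed "action table": an array of function pointers indexed
// by event. All names are made up for illustration.
enum Event { EV_START = 0, EV_STOP = 1, EV_COUNT };

int on_start(int x) { return x + 1; }
int on_stop(int x)  { return x - 1; }

// The table itself can live in ROM, just like a compiler-generated
// vtable...
int (*const action_table[EV_COUNT])(int) = { on_start, on_stop };

int dispatch(Event e, int x) {
    // ...but the call is still indirect: index the table, then call
    // through the pointer -- the same extra steps as a virtual call.
    return action_table[e](x);
}
```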
1, 2) This becomes an even larger performance problem if the base class
is virtual. Some, such as EventHelix (a strong C++ supporter), even
tell people not to use it at all for this reason.
That is true, here the compiler will start putting out thunks. As
others have pointed out: the feature is there, and we choose when to
use it. In each individual instance, the designer will have to weigh
the pros and cons. Is the additional elegance worth the performance
penalty?
3) Virtual functions whose resolution is only known at run-time cannot
be inlined. (For more on inlining, see the Inlining section.)
Again true, but would that not also be true if you are calling
functions through an action table?
Measurements in C/C++ Users Journal (June 2004) show that a call to a
virtual function takes 8 times the time of an inlined function call.
Would that not depend a bit on the size of the function body? Were they
measuring simple "get/set" functions?
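To illustrate both points: any call whose target is only known at run time, whether through a vptr or through an action-table function pointer, is equally opaque to the inliner, while a direct call to a trivial function can fold away entirely, which is exactly why such ratios look dramatic for tiny bodies. A small sketch (names invented):

```cpp
struct Base {
    // Through a Base reference of unknown dynamic type, this cannot be
    // inlined: the target is resolved at run time.
    virtual int get() const { return 42; }
};

inline int trivial_get() { return 42; }

// Indirect through the vtable -- not inlinable in general.
int via_virtual(const Base& b) { return b.get(); }

// Action-table style: indirect through a function pointer -- the
// inliner is equally blind here.
int via_pointer(int (*f)()) { return f(); }

// Direct call to a trivial inline function: the call typically
// disappears entirely, so comparing against it exaggerates the ratio.
int via_direct() { return trivial_get(); }
```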
<I would
happily sacrifice the extra cycles to get more encapsulation and
abstraction...>
- Abstraction is not a feat of OO but of your mind. It's about
conceptual categorization. Babbage used abstractions...
- The necessary encapsulation can be achieved in procedural C++: an
anonymous namespace means private, the rest is public. Even Bjarne
Stroustrup considers protected to be a language flaw.
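For concreteness, that point can be sketched like this: in procedural C++, everything in an unnamed namespace has internal linkage, so it is invisible to other translation units, i.e. effectively private; the remaining file-scope functions form the public interface. The counter example below is invented for illustration.

```cpp
// counter.cpp -- procedural C++ with an anonymous namespace.
// Everything inside the unnamed namespace has internal linkage:
// no other translation unit can see it. That is the "private" part.
namespace {
    int count = 0;                          // private state
    bool valid(int n) { return n >= 0; }    // private helper
}

// The file-scope functions below are the "public" interface.
void reset()    { count = 0; }
bool add(int n) { if (!valid(n)) return false; count += n; return true; }
int  current()  { return count; }
```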
Yes, I quite agree that OO neither invented encapsulation and
abstraction nor is the only answer to them. I have done my best to use
these things
in C, assembler, SDL, etc. However, I think C++ offers a few neat
features that help, and that inheritance and polymorphism are among
those features.
Aargh - it hurts being hit by a Stroustrup quote. Do you have a reference
for it, so I can look up the context? I must say I am a bit surprised
he dismisses protected; in his book "The C++ programming language" he
describes how protected is used to differentiate between derived
classes and "the general public" without providing any indication he is
the least displeased with it. A direct quote: "protected is a fine way
of specifying operations for use in derived classes" (section
15.3.1.1.)
Ok, I have to admit, when I wrote the most radical things about
inheritance I was upset because that guy called me a troll without
understanding the least bit what I was talking about. Being a
condescending ass and all...
Inheritance is NICE if performance is of less importance AND you are
coding to meet requirements that are not a moving target (a system/CPU
architecture or GUI may be examples of this). I and many others who
work close to a customer who needs new features every 6 months (which
many times contradicts the requirements handed to us 6 months before)
move away from inheritance and move towards delegation. Why? Basically
because reality forces us to constantly re-evaluate our abstractions,
and many times, due to time constraints, to violate some of them...
Delegation is more 'elastic' in these cases, I think.
I am getting this warm, fuzzy feeling of agreement here - soon we will
be holding hands and singing by the camp fire...
That last sentence made it interesting again... do you have any
examples of how delegation provides that extra flexibility?
When you prefer delegation over inheritance - is it for flexibility or
performance reasons? If the latter, would not delegation also incur a
performance hit compared to a direct function call, as we traverse a
delegation chain?