Steven T. Hatton
I stumbled upon this blog while googling for something. I have to say, I
really don't understand what Lippman is trying to tell me here. I included
the first paragraph for context, but the second paragraph is the one that
has me most confused.
<quote url="http://blogs.msdn.com/slippman/archive/2004/01/27/63473.aspx">
In C++, a programmer can suppress a virtual call in two ways: directly use
an object of the class, in which case the polymorphism of the object is
eliminated except in the trivial case in which the subtype hierarchy is the
same size as the class of the object being directly manipulated. [The
analogy under .NET, although it is not supported, would be toggling a
reference type into a value type for some small program extent, eliminating
the overhead of the managed heap and the virtual mechanism of the
interface.] Obviously, this is a very special use of the polymorphic
object, and is as likely to be an error on the programmer's part as to be
his intention. However, the ability to design first class value types --
think of them as Abstract Data Types -- and value type inheritance is
something that I sorely miss under .NET, where complex value types are in
my experience somewhat gimped. The second and more prevalent mechanism to
suppress a virtual call is to invoke a class method through the fully
qualified class scope operator. For example,
WidgetExtension::display() {
    Widget::display();
    /* now our specialized display */
}
This pattern of localization within a call chain of a type-dependent method
relies on the ability of the user to limit the number of methods invoked to
the initial virtual instance, which can occur anywhere within the
inheritance chain. The subsequent chain of base class calls are then inline
expanded. Without explicit language support, the habit of programmers
concerned with performance [I don't have any hard data, so this is
anecdotal] is to duplicate the base class code within the derived instance
to achieve the same result. This of course tightly couples the
implementation of the method with that of the base hierarchy and a single
change in the state members can cause the whole thing to derail. [The state
of OO optimization is not currently far enough along to guarantee the
elimination of these calls although that is, of course, feasible in
theory.]
</quote>
Does anybody understand what he's trying to say? Can a relatively simple
example be created to show both the kind of class hierarchy he is talking
about, and what is meant by "suppress a virtual call is to invoke a class
method through the fully qualified class scope operator" (which I think I
vaguely understand), and "The subsequent chain of base class
calls are then inline expanded"?
As I understand things, a virtual function invocation is a lookup in a vtbl
followed by an access to the actual function being executed. I guess he
could mean that a derived class would have its own vtbl pointing to its
baseclass, etc., and that the "chain of base class calls" is the process of
climbing up that vtbl stack.