Does virtualizing all methods slow C++ down?

Goran

  "Obviously" could just as well mean "significantly" (in other words, the
slowdown is "obvious", i.e. very noticeable).

  From my own tests there's no practical difference between a virtual
function call and a regular function call.

We agree, don't we? In my first post, right after the "offending"
"obviously", I said (this time, with relevant emphasis):

As others have said, you probably CAN'T NOTICE the slowdown. Not
unless something REALLY DEGENERATE is happening in the code (like,
you're spending most of the total program time virtually calling
functions that are very short and aren't virtual "in practice").

To be quite honest I posted here more to bash generalized use of
"virtual". I care about that more than CPU cycles ;-).

Goran.
 
Noah Roberts

 >> [...]

If you virtualize one, there is no cost for virtualizing the others.
There is a cost associated with virtualizing one method. See the FAQ.

That's not true.

It is true that the size of an object increases for the first virtual
function you add, but not for further virtual functions (for usual
vpointer implementations).
However, _every_ function may be slower when called virtually, because
the compiler has less scope for optimization (function inlining, etc.)
than with non-virtual functions.

Not really. Most cases where inlining applies are still quite
possible when the functions are virtual.
 
Richard Damon

Not really. Most cases where inlining applies are still quite
possible when the functions are virtual.

Inlining of virtual functions can only occur if the static type of the
object is known (which in my experience tends to be rare for this type
of object) or the virtual function has been declared "final". But for a
function that never really needed to be virtual, the "final" would have
been added at the first declaration that made it virtual, which implies
it was already known that the function didn't need to be virtual in the
first place.
 
Joshua Maurice

(for example,
some compilers provide extensions to convert a virtual function pointer
into a non-virtual one which can then be used inside the loop, providing
better optimization possibilities).

I don't know what that means. You can already get a member function
pointer. Perhaps this extension allows you to say "take this member
function pointer and this pointer-to-object, and resolve the member
function pointer down into a new function pointer like thingy which
you can use on that pointer-to-object, but not other objects of
different dynamic type"? Still, that's changing effectively "one
indirection and one function pointer call" to just "one function
pointer call". It's still a function call through a function pointer.
I don't see how a compiler C++ extension could change a virtual
function call into a vanilla function call - that is, I don't see how
it could be done to allow for compile time inline code expansion, and
that's what really matters.

Now, compilers can do inline code expansion of virtual function calls
in some cases if they're really good, but you don't need C++
extensions for that, just a better optimizer. I don't see what a C++
extension would do.
 
Nobody

I don't know what that means.

I think that he is referring to automatic specialisation, i.e. (in
pseudocode):

    if (&obj->method == &Base::method)
        // generated code using inlined Base::method
    else if (&obj->method == &Derived::method)
        // generated code using inlined Derived::method
    ...
    else
        // generated code calling obj->method() via vtable

This gets pretty hairy if there are multiple objects and many known
subclasses. And it only works if the method definitions are available.
 
ld

Does virtualizing all methods slow C++ down ?  We
virtualize all methods as a matter of course since
we use groups of objects all over the place.  To
illustrate this I have included the top portion of
our header files:

class GenGroup : public DataGroup
{
public:
    int transferingDataNow;
public:
       //  constructor
    GenGroup ();
    GenGroup (const GenGroup & rhs);
    GenGroup & operator = (const GenGroup & rhs);
       //  destructor
    virtual ~GenGroup ();
    virtual GenGroup * clone () { return new GenGroup ( * this); }
    virtual void makeUnitsInput (InputCollection * anInpCol);
    virtual void reinitStreamBox (FmFile * FMFile);
    virtual DataDescriptor * descriptor (int aSymbol, int version);
...

Thanks
Lynn

To make things short:

- A polymorphic class in C++ is a class with at least one virtual
member function or one virtual base class.

- The extra memory cost in your polymorphic instance will be about one
pointer per branch of your inheritance DAG that has a polymorphic base
class.

- The extra speed cost of a virtual call will in general be less than
25% compared to a monomorphic call (as soon as you really make the
call through an abstract class), and can be as low as zero. But there
are situations where it slows down significantly, e.g. if the
overridden member function uses a non-trivial contravariant argument
(only "this" can be contravariant in C++) or a covariant return type.
Then the compiler has to apply thunks to adjust the object offset back
and forth so that the inputs and output match what the virtual member
function expects.

Since you were talking about Smalltalk in this thread, it is also
possible to write a message dispatcher that is as fast as late binding
(a virtual call). My experience is that direct call, virtual call or
message dispatch don't make the difference (except in expressivity and
design). What matters is that polymorphic types (if used as such) lose
their value semantics (compared to concrete types) and thus need to be
created dynamically (allocated, cloned, etc.). This is where
polymorphic code slows the runtime down significantly if it is badly
designed.

Regards,

Laurent.
 
zindorsky

It's still a function call through a function pointer.
I don't see how a compiler C++ extension could change a virtual
function call into a vanilla function call - that is, I don't see how
it could be done to allow for compile time inline code expansion, and
that's what really matters.

In theory at least, a compiler could create self-modifying code that
would convert virtual function calls to direct function calls. That
is, at runtime after the code gets the function pointer from the
vtable, it could also rewrite the machine code inside the tight loop
to directly call that function. Of course, as a practical matter it
would be pretty hard for a compiler to do that, but it is
theoretically possible.
 
Kevin P. Fleming

In theory at least, a compiler could create self-modifying code that
would convert virtual function calls to direct function calls. That
is, at runtime after the code gets the function pointer from the
vtable, it could also rewrite the machine code inside the tight loop
to directly call that function. Of course, as a practical matter it
would be pretty hard for a compiler to do that, but it is
theoretically possible.

Assuming a single-threaded environment :)
 
Nobody

Assuming a single-threaded environment :)

It would work in a multi-threaded environment provided that it can update
function pointers and/or jump instructions atomically.

A bigger problem nowadays would be W^X (write or execute but not both)
memory protection.
 
Balog Pal

This looks more than awful as C++ code.

Huh? Yes, you gain a VMT after the first virtual function, and it will not
multiply. But for every call, late binding is an extra cost over early
binding, so there is a definite cost.

Indeed. virtual is not a hodgepodge thing. You use a virtual function to
let the base class communicate with a subclass.
When you make something virtual you must document a ton of things,
especially what is required from the overrider: semantics, restrictions,
whether the base version is intended to be called or not, etc.; and
ideally, provide cases where overriding is useful and intended. Without
that you have a sure way to disaster, and speed will hardly be a factor.
We have so many methods in our 600 classes and 700K lines
of code that we are not sure what needs to be virtual and
does not. So we do all to keep from missing 1, 2 or 20.

gcc has a fine warning (-Woverloaded-virtual) for when you declare a
function that hides one in a base class rather than overriding a virtual.
Just turn it on and stop crippling your codebase even further.

C++11 even makes the detection a language feature, with the "override" and
"final" contextual keywords.
 
Lynn McGuire

That explains a lot about that "virtual everywhere" design. Go look in
"The C++ Programming Language"; Stroustrup has a few things to say
about using C++ as if it was Smalltalk.


It may not be highest priority, but I still think there's something
seriously wrong with having the code this way -- especially if you
have design rules to forbid normal C++ in new or rewritten code.

My guess (which is only a guess) is that the worst penalty is that it
discourages the use of small classes, wrappers and helper functions
which would make the code more readable. Inlining makes such things
cost-free, but if you don't have inlining you probably think twice
before doing such simplifications (at least I do).

/Jorgen

Well, we have over 600 classes in our software which dates
back to 1988 or so.

BTW, the reason that we use virtual all over the place is
that we use vectors of base class pointers. Having the
virtual keyword everywhere is just safer than chasing down each
method (and there are MANY) to see if it is overridden
by one of the descendant classes.

Thanks for the comments !

Lynn
 
