[Q] Virtual destructors and linking

Oleksii

Hello,

I'm rather new to these advanced topics, so I cannot explain the
following myself. Could anyone give me a hint on this one?

I'm trying to avoid link-time dependencies on (a test version of)
certain classes. In other words, I don't want to link stuff that I
don't use. One thing that worked for me was replacing instance members
(MyClass myClass) within a class with an auto_ptr
(std::auto_ptr<MyClass> myClass) and initializing it with a new
instance of the class in the production code and with NULL in the test
code.
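Schematically, the change looks like this (Widget, BigLegacyClass and
the UNIT_TEST macro are made-up names for illustration, not the real
ones):

#include <cstddef>    // NULL
#include <memory>
#include "Widget.h"   // class definition; the implementation lives in a DLL

class BigLegacyClass
{
public:
#ifdef UNIT_TEST
    BigLegacyClass() : widget(NULL) {}        // test build: no Widget created
#else
    BigLegacyClass() : widget(new Widget) {}  // production build
#endif
private:
    std::auto_ptr<Widget> widget;             // was: Widget widget;
};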

This works... unless the class held by the auto_ptr has a non-virtual
destructor. With a virtual destructor the linker happily eats the
input; otherwise it complains that the implementation of the destructor
is missing.

The whole thing was originally tested with the MS Visual Studio
compiler, then with gcc (cygwin/3.4.4).

Is this the compiler(s) being too smart (even when all optimizations
are disabled), or is it expected behavior? What exactly is happening
here? If this is compiler-specific, I assume it is not very portable
and I'd better drop it. Any other suggestions in that case?

Here is some sample code:

#include <memory>

class MyClass2
{
public:
    MyClass2() {}
    virtual ~MyClass2();
};

int main()
{
    // Uncommenting the following line leads to a complaint from the
    // linker that the implementation of ~MyClass2() is missing.
    // std::auto_ptr<MyClass2> myClass2r(new MyClass2);

    std::auto_ptr<MyClass2> myClass2n;
}

p.s. I'm working on a rather big legacy code base and am therefore a
bit limited in redesign. Link-time stubbing for test purposes seems to
be the easiest approach.

Thanks in advance, Oleksii.
 
Victor Bazarov

Oleksii said:
I'm rather new to these advanced topics, so I cannot explain the
following myself. Could anyone give me a hint on this one?

I'm trying to avoid link-time dependencies on (a test version of)
certain classes. In other words, I don't want to link stuff that I
don't use. One thing that worked for me was replacing instance members
(MyClass myClass) within a class with an auto_ptr
(std::auto_ptr<MyClass> myClass) and initializing it with a new
instance of the class in the production code and with NULL in the test
code.

But that's definitely not the same... An instance of a class requires
different code to call members than an auto_ptr...
This works... unless the class held by the auto_ptr has a non-virtual
destructor. With a virtual destructor the linker happily eats the
input; otherwise it complains that the implementation of the destructor
is missing.

The whole thing was originally tested with the MS Visual Studio
compiler, then with gcc (cygwin/3.4.4).

Is this the compiler(s) being too smart (even when all optimizations
are disabled), or is it expected behavior? What exactly is happening
here? If this is compiler-specific, I assume it is not very portable
and I'd better drop it. Any other suggestions in that case?

Here is some sample code:

#include <memory>

class MyClass2
{
public:
    MyClass2() {}
    virtual ~MyClass2();

So, why don't you just write

virtual ~MyClass2() {}

The same empty braces don't seem to cause you any trouble in the
[unneeded] default constructor...
};

int main()
{
    // Uncommenting the following line leads to a complaint from the
    // linker that the implementation of ~MyClass2() is missing.
    // std::auto_ptr<MyClass2> myClass2r(new MyClass2);

Of course it requires the implementation of ~MyClass2() -- it uses it!
The destructor of 'auto_ptr' actually destroys the object by calling
'delete' on the pointer it "owns".
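Schematically, the relevant part of 'auto_ptr' looks like this (a
simplified model, not the real implementation -- real ones differ in
detail):

template <class T>
class auto_ptr_model
{
public:
    explicit auto_ptr_model(T* p = 0) : ptr_(p) {}
    ~auto_ptr_model() { delete ptr_; }   // this delete is what needs ~T()
private:
    T* ptr_;
};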
    std::auto_ptr<MyClass2> myClass2n;

Here you never actually create an instance of 'MyClass2', so it never
needs to destroy one. I am guessing that the clever implementation of
std::auto_ptr somehow knows whether the object has or hasn't been
constructed and produces a special destructor in the case where only
the default constructor was used. I am not sure how the compiler could
be that smart, but an optimizing compiler might be able to do something
like that...
}

p.s. I'm working on a rather big legacy code base and am therefore a
bit limited in redesign. Link-time stubbing for test purposes seems to
be the easiest approach.

What exactly are your limitations? Can't you give your classes empty
(do-nothing) destructors?

V
 
Ron Natalie

Oleksii said:
Hello,

I'm rather new to these advanced topics, so I cannot explain the
following myself. Could anyone give me a hint on this one?

You're lucky it works even without a virtual destructor.
You effectively use the destructor in either case.
 
Oleksii

First of all, you're right: in the implementation I have to change all
myClass.DoSomething() into myClass->DoSomething(), but I consider that
a minor change. I assume that was the difference you mentioned. I don't
have to add anything else, though -- no additional work for
destructors, since smart pointers do everything under the hood.

Regarding my limitations: I cannot change the design of the classes,
but I can tweak them a bit (e.g. replacing instances with pointers).
The problem with changing the design is that the code is not testable
standalone -- it is a huge monolithic piece of software.
(Ideally) I don't want to link/target all the libraries I use in my
production code, only the necessary ones (at the moment I have to copy
18(!) DLLs to the test folder just to be able to instantiate my class
in the test environment...). With auto_ptrs I only have to include the
definitions of the classes; I don't have to link/target the
implementations (DLLs) along with them!

That is also the reason why I don't provide empty implementations for
the destructors -- they have their full-blown implementations in a
separate DLL, but I don't want to link/stub that, because in principle
it is never used.

The reason I came up with the question is that I don't understand how
it is possible that this works :) and how portable it is (or whether it
is compiler-specific).

I hope this makes it clearer.

oleksii
 
Oleksii

Ron said:
You're lucky it works even without a virtual destructor.
You effectively use the destructor in either case.

Yep, I also feel that. And therefore, before using this in production
code, I want to understand why it works and doesn't throw any
exceptions (e.g. when auto_ptr is trying to determine which destructor
has to be called). What I think now is that there might be a smart
implementation of the delete-expression (not operator delete, if I'm
correct) that does not bind to the class implementation, since a NULL
pointer is provided. Just a thought.
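(For concreteness, a conceptual sketch of the distinction, using the
MyClass2 from the sample code -- not any particular compiler's actual
output. The delete-expression is the whole two-step construct; operator
delete is only the second step.)

void dispose(MyClass2* p)   // p may be NULL
{
    delete p;   // the delete-expression; conceptually:
                //   if (p != 0) {
                //       p->~MyClass2();      // 1. run the destructor
                //       operator delete(p);  // 2. release the memory
                //   }
                // The NULL check happens at run time, so the object
                // code references the destructor even if p is always
                // NULL.
}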

I've looked through the auto_ptr implementation and I don't see
anything very special about this case.

Nevertheless the question remains: why does this damn thing work with
virtual destructors and complain when there is no virtual
destructor...

oleksii
 
Victor Bazarov

Oleksii said:
[..]
The reason I came up with the question is that I don't understand how
it is possible that this works :) and how portable it is (or whether it
is compiler-specific).

I am not sure, but I have a gut feeling that it's compiler-specific.
You should take a look at the implementation of the 'auto_ptr' class
template. Perhaps the compiler/library employs some kind of indirect
construction, and since the other (underlying) template is not used
(and therefore not instantiated) when you only use the default c-tor
of 'auto_ptr', the code to call the destructor is not there (not
instantiated), and the linker doesn't need it. Now, why things are
different between virtual and non-virtual destructors, I don't know.
It's hard to imagine how that would be implemented. Then again, I am
not a compiler expert; perhaps there is something that your compiler
does behind the scenes which it cannot do if the destructor is
non-virtual...

V
 
Swampmonster

Oleksii said:
Is this the compiler(s) being too smart (even when all optimizations
are disabled), or is it expected behavior? What exactly is happening
here? If this is compiler-specific, I assume it is not very portable
and I'd better drop it. Any other suggestions in that case?

Here is some sample code:

#include <memory>

class MyClass2
{
public:
    MyClass2() {}
    virtual ~MyClass2();
};

int main()
{
    // Uncommenting the following line leads to a complaint from the
    // linker that the implementation of ~MyClass2() is missing.
    // std::auto_ptr<MyClass2> myClass2r(new MyClass2);

    std::auto_ptr<MyClass2> myClass2n;
}

p.s. I'm working on a rather big legacy code base and am therefore a
bit limited in redesign. Link-time stubbing for test purposes seems to
be the easiest approach.

Hi! First I want to say that I'm 99% sure that the standard does not
in any way require an implementation to link this without crying about
the missing "virtual ~MyClass2". But I can tell you why it works, since
it's really very basic.

Let's say we have a pointer "p" of type "MyClass2*". If "MyClass2" has
a non-virtual dtor, then "delete p;" will directly invoke
"MyClass2::~MyClass2()". Whether or not "p" is NULL is only known at
run time, so the linker will cry about "MyClass2::~MyClass2()" if it
can't be found.

If "MyClass2" has a virtual dtor, then "delete p;" will call the dtor
of "MyClass2" or the dtor of a class derived from "MyClass2" if "p"
happens to point to such a derived class. Now, that could be done by
many different means of reflection, but the way it is usually done is
by using a virtual-function-table (short: vtable). For this to work
the compiler places a pointer to that vtable inside every instance of
"MyClass2". Now to call a virtual function on "MyClass2" the compiler
looks up the vtable by using that vtable pointer stored somewhere
"inside MyClass2", and then look up the function-address in the vtable.
Then it just calls the function at that address, whatever it might be.
So in short, the "delete p" thingy does no longer have to know the
address of "MyClass2::~MyClass2()". If "p" points to a "MyClass2"
the dtor slot in the vtable will point to "MyClass2::~MyClass2()",
but if "p" points to some derived class that slot will point to
dtor of that derived class.
Now, the compiler only has to know the address of the vtable when
initializing a new instance of "MyClass2", since that is the point
where it stores the vtable's address "inside" the "MyClass2" instance.
So, to sum it up: the ctor code of "MyClass2" takes the address of
"MyClass2"'s vtable, and the vtable references the dtor. Since the
ctor of "MyClass2" is never called by your program, the linker does
not link it in. And since the ctor is the only part that directly
references the vtable of "MyClass2", the vtable is not linked in
either; and since the vtable of "MyClass2" is the only part that
directly references the dtor of "MyClass2", the dtor is not linked in
either.

So it's not about the compiler being too smart, but about the linker
doing its job exactly as it's supposed to.

Look at this for example:

#include <new>

class MyClass2
{
public:
    MyClass2() {}

    // With ~MyClass2 being virtual, the constructor call in main()
    // will cause a linker error, since the compiler has to create
    // the vtable for MyClass2, which contains a reference to
    // ~MyClass2. If you try the same with a non-virtual ~MyClass2,
    // the program will probably link fine (at least with MSVC 7.1
    // it definitely does), since the dtor is never called, and
    // since it's not virtual anymore it's also not referenced.
    virtual ~MyClass2();
};

int main()
{
    int space[100];
    // Placement new: runs the constructor in raw storage; the object
    // is never destroyed, so only the ctor (and through it the
    // vtable) is referenced.
    new (&space) MyClass2;
}
 
