I recently had an interview for a C++ programmer position, and the
interviewer asked me questions like:
Where does the virtual table reside? (My answer: in memory.)
This question assumes the compiler implements virtual functions as an
array of function pointers. The implementations listed in Lippman's
book (from memory; it's been a long while since I last read it) turn
the following:
class foo {
    virtual void fn();
    data d;
};
into one of (ignoring std::type_info):
// table inlined at the beginning
struct foo {
    void (*fn)(foo*);
    data d;
};

// table inlined at the end
struct foo {
    data d;
    void (*fn)(foo*);
};

struct foo_vtbl {
    void (*fn)(foo*);
};

// vtable at beginning
struct foo {
    const foo_vtbl *vtbl;
    data d;
};

// vtable at end
struct foo {
    data d;
    const foo_vtbl *vtbl;
};
There are tons of other possible choices. One possibility is to keep
track of all RTTI objects in a giant global table and do a lookup based
on the address and then call the desired function based on a switch
statement. This has the advantage of requiring no storage inside the
class at all.
If we add a virtual function to a class, will we see an increase in
class size? (I said yes.)
Will the size of the class grow at a constant rate with the addition
of each virtual function? (I said not sure.)
These two questions support my assumption about those 4 implementations
above. You can see that if we inline the table, the class size
increases with each virtual function, but if we use a vtable, the
class size stays constant.
For multiple inheritance and virtual inheritance (virtual base
classes) this gets much more complicated, and it really depends on how
the implementation ties together everything required by the C++
standard whether adding a virtual function in some base class
increases the size.
He then paused and gave me some advice on which books to read, saying
that I must read "Inside the C++ Object Model". He also said that I
can't be a good C++ programmer without being a good C programmer.
This guy sounds like one of those people who get really familiar with
how their compiler implements things and then make all sorts of
non-portable, non-standard-conforming assumptions instead of relying
on what the standard actually guarantees. (I'd still recommend getting
the book, though; I used it to learn C++, but it definitely shouldn't
be used as a substitute for the C++ standard.)
It is disappointing because I answered each question to the best of my
ability. There were some algorithm problems as well, but he criticized
my C++ knowledge the most. So essentially there is no set standard for
how objects should be represented internally?
A C++ implementation may be required to follow some ABI for
backwards-compatibility/interoperability reasons, but as far as the
C++ standard is concerned, the answer is no. The examples given in the
book are pretty much cfront-centric and designed to be easily ported.
Non-cfront compilers have the advantage of knowing the machine model
and can do all sorts of evil tricks, like using thunks.