Douglas Peterson
Take a look at this code; it looks funny because it's written to be as short
as possible:
-- code --
struct Base
{
    ~Base() { *((char*)0) = 0; }
};

struct Derived : public Base
{
    Derived() { throw 1; }
};

int main(int, char**)
{
    try { new Derived; }
    catch (int) { }
    return 0;
}
-- code --
Now don't get excited about the *((char*)0) = 0; I'm just using that as a
kind of compiler-independent, hard-coded breakpoint. I want to verify that
the destructor is in fact getting called without relying on a debugger-set
breakpoint, which can sometimes fail to trigger, or the code be optimized
out so that there is no point to break on.
So what are we looking at?
We are creating an instance of Derived inside a try block. During
construction Derived throws an exception, and the stack-unwinding
mechanism calls the destructor for the base class.
My question is: Why?
Where is the logic in partially destroying an aggregate object that is
signaling, by way of an exception, that it cannot be constructed? If you
move the ...ahem... breakpoint to Derived's destructor, that never gets
called. It makes sense not to destroy an object that was never fully
constructed, so again, why partially destroy it?
The other thing that doesn't make sense to me is that this object is not
being created on the stack, so why would *any* destructor be called by the
unwinding mechanism? There is no call to delete anywhere in the code, so
what makes the compiler think it should be deconstructed at all?
You may have guessed that I'm having chicken-and-egg problems, and you'd be
correct. Now I *think* I can fix the problem by rearranging the order of
operations in my constructors (although I have not at this point worked out
the details, and of course that adds a degree of complexity to the code I'd
rather avoid), but I'd really like to understand the reasoning behind this.
Perhaps I'm doing something fundamentally 'wrong' and need to rethink my
design (I sure hope not!).