Isn't the ability to do just that what we strive to do? To write
reliable software even in the face of the unexpected?
There may be some confusion over the meaning of "expected" here. I was
responding to Roland Pibinger's phrase "expected runtime scenario",
believing he meant that a contract violation by a third party
component was a normal ("expected") event and that your program is
inherently flawed if correct behaviour relies on third party
components not violating their contracts.
By definition, the only way [*] it is possible for a component to
violate its interface specification is if it has a bug. And you can't
build a reliable product if one of the components has a bug. If the
bug is in a component you produce, you need to fix the bug. If the bug
is in a third party component, you need to change supplier (or get the
current supplier to fix their product).
[*] Assuming the documentation fully describes the interface. But
then, if it doesn't, you don't have the information needed to
integrate that component into your product in the first place.
I always looked at everything that I've done to try to engineer better
software as being able to cope with ever more error cases and faulty
systems (and users) with the minimum of problems.
Users are different. You have to be prepared for anything from them.
I was talking about faulty systems that *are part of* the product, as
opposed to systems the product interacts with.
An example, I hope not too contrived: suppose I am producing DVD
recorders. The DVD drive that reads and writes discs is part of the
product. If it violates its interface spec (which in this case will
have electrical and mechanical aspects as well as software, but the
principle is the same) I can't build a product around it and that's
that. On the other hand, the individual discs the users put in the
machine are something my system interacts with. I must be able to
detect incorrect format or faulty discs and inform the user while
continuing to run.
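The distinction might be sketched like this (all names are hypothetical, purely for illustration): the drive is part of the product, so its interface spec is asserted and a violation is a bug; the user's disc is something the product interacts with, so faults are detected and reported while the machine keeps running.

```cpp
#include <cassert>
#include <string>

// Status of a disc the *user* inserted: part of the environment,
// so bad discs are expected and must be handled gracefully.
enum class DiscStatus { Ok, BadFormat, Unreadable };

// Hypothetical result from the DVD drive, a component that is
// *part of* the product. Suppose its interface spec promises
// sector_count > 0 whenever the status is not Unreadable.
struct DriveReadResult {
    DiscStatus status;
    long sector_count;
};

std::string describe(const DriveReadResult& r)
{
    // Contract check on our own component: a violation here means
    // the drive (or our integration of it) has a bug, so we assert.
    assert(r.status == DiscStatus::Unreadable || r.sector_count > 0);

    // Interacting with the user's disc: detect, inform, keep running.
    switch (r.status) {
    case DiscStatus::Ok:         return "disc ready";
    case DiscStatus::BadFormat:  return "incorrect format: please use a supported disc";
    case DiscStatus::Unreadable: return "disc unreadable: please check the disc";
    }
    return "unknown status";
}
```

The assert covers the component I cannot ship around if it is broken; the switch covers the input I must survive.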
Gavin Deane