I know this Dijkstra quote gets trotted out a lot, but is it actually
true? Computers don't actually deal in infinite sets. So in principle
I can enumerate all cases.

On Usenet, it's even the case that non-testing can prove the absence of
functionality. E.g. ``I know nothing about that <editor/os/programming
language> but I know it's broken and it sucks.''
Dijkstra might say to that: if you are able to test a hypothesis for all
possible combinations of all of its inputs, then you are no longer
testing, but proving.
If a particular space is small enough to succumb to this approach, then in fact
that approach falls into the domain of formal verification.
By analogy, enumerating all values of the boolean variables in a simple logical
argument, to build an exhaustive truth table, is one of the formal ways of
proving or disproving that it's valid; it's different from software
testing.
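That exhaustive check is easy to sketch. Here is a minimal illustration
(the helper name `is_tautology` is mine, not standard), which proves
De Morgan's law by enumerating all four rows of its truth table:

```python
from itertools import product

def is_tautology(f, nvars):
    # Evaluate f over every assignment of its boolean variables;
    # exhausting the space turns a test into a proof.
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

# De Morgan's law: not (p and q)  <=>  (not p) or (not q)
de_morgan = lambda p, q: (not (p and q)) == ((not p) or (not q))
print(is_tautology(de_morgan, 2))  # checks all 4 rows of the truth table
```

Four rows is the entire space, so a passing run really is a proof of
validity, not just a test.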
This may be practically impossible for large or complex programs, but it
can be done in some cases.
But it's practically impossible even for small programs! The input
space explodes combinatorially.
If there are four independent inputs that are 32 bits wide, you have 128
bits, i.e. 2^128 possible combinations.
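A quick back-of-the-envelope sketch shows why 128 bits of input is
already hopeless (the billion-tests-per-second rate is an assumed
figure, and a generous one):

```python
# Four independent 32-bit inputs: 2**128 distinct cases to enumerate.
total_cases = 2 ** (4 * 32)
tests_per_second = 10 ** 9          # assumed: a billion tests per second
seconds_per_year = 60 * 60 * 24 * 365
years = total_cases / tests_per_second / seconds_per_year
print(f"{total_cases:e} cases, about {years:.1e} years to enumerate")
```

Even at that rate the enumeration takes on the order of 10^22 years,
which is why exhaustive testing stops being an option almost
immediately.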
Also, you really have to be sure that you are exhaustively testing the space.
Certain bugs can increase the space. If the program invokes undefined
behavior, how do you know it doesn't explode the space?
For instance, accessing some uninitialized memory of 32 bits might not appear
as an obvious input to your program, but it could potentially blow up your
state space by a factor of 2^32.
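One way to make that blow-up concrete is a hypothetical sketch (the
function and all names here are illustrative, not from any real
program): model the uninitialized word as an extra, hidden input and
count how the test space multiplies.

```python
# Hypothetical sketch: a function that appears to take one 8-bit input,
# plus a "hidden" 32-bit value standing in for an uninitialized read.
def classify(x, hidden):
    # The hidden value only matters on one path -- like uninitialized
    # memory reached behind a rarely taken branch.
    if x == 0:
        return hidden & 1
    return x % 3

visible_cases = 2 ** 8    # what the tester believes must be covered
hidden_cases = 2 ** 32    # what the uninitialized read silently adds
print(visible_cases * hidden_cases)  # 2^40 combinations, not 2^8
```

A test suite that exhaustively covers the 256 visible inputs would still
miss almost all of the 2^40 real states.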
And then of course there are programs with real-time behavior, where no
test case is 100% repeatable, even if you build a detailed simulation of
the system (which will necessarily be a discrete simulation).