Jacob said:
You describe an ideal world where the unit test writer thinks
of every possible scenario beforehand. In such a regime you don't
need unit testing in the first place.
My experience is that you tend to "forget" certain scenarios when you write the code, and then "forget" the exact same cases in the test. The result is a test that works fine in normal cases but fails to reveal the flaw in the code for the not-so-normal cases. This is a useless and costly exercise. Random inputs may cover some of the cases that were forgotten in this process.
This is where TDD comes in.
If we:
1) write one test at a time,
2) write Just Enough Code to make the test pass, and
3) refactor to improve the current state of the design,
then we are only ever writing code for tests we already have. The next test is needed only when we have to code something new or to strengthen the corner-case tests of the code we have just written. This way there is no forgetting.
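A minimal sketch of one such cycle, using a hypothetical Counter class and JUnit 3-style tests (the names here are illustrative, not from the thread):

import junit.framework.TestCase;

// 1) Write one failing test first; it won't even compile until Counter exists.
public class CounterTest extends TestCase {
    public void testStartsAtZero() {
        assertEquals(0, new Counter().value());
    }
}

// 2) Write just enough code to make that one test pass.
class Counter {
    public int value() { return 0; }
}

// 3) Refactor if the design needs it, then write the next test (say,
//    testIncrementsByOne()) only when new behaviour is required.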
To make this achievable, each test case (method) should:
1) test only one aspect of the code;
2) have as few asserts as possible (one being best);
3) be small (like any method), around 10 lines of code (or whatever your favourite number is);
4) be fast - the faster tests run, the more continuously we run them, and the sooner we find problems;
5) not use or touch files, networks, or databases - these are slow compared to in-memory fake data/objects (see the sketch after this list).
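As a sketch of these rules together (all the names here are hypothetical, not from the thread): the "database" is hidden behind an interface and replaced by an in-memory fake, so the test is small, has one assert, and runs entirely in memory.

import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;

// Hypothetical interface that production code would back with a database.
interface UserStore {
    String findName(int id);
}

// In-memory fake: no files, network, or database, so the test stays fast.
class FakeUserStore implements UserStore {
    private final Map<Integer, String> names = new HashMap<Integer, String>();
    public void add(int id, String name) { names.put(id, name); }
    public String findName(int id) { return names.get(id); }
}

// Small class under test, included only to keep the sketch self-contained.
class Greeter {
    private final UserStore store;
    Greeter(UserStore store) { this.store = store; }
    String greet(int id) { return "Hello, " + store.findName(id); }
}

public class GreeterTest extends TestCase {
    // One aspect of the code, one assert, a handful of lines, all in memory.
    public void testGreetsKnownUserByName() {
        FakeUserStore store = new FakeUserStore();
        store.add(1, "Jacob");
        assertEquals("Hello, Jacob", new Greeter(store).greet(1));
    }
}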
If I have a flaw in my code, I'd be happier with a test that indicates this *sometime* rather than *never*. Of course *always* is even better, but then we're back to Utopia.
BTW: You can achieve repeatability by specifying the random seed in the test setup. My personal approach is of course to seed with a maximum of randomness (using current time millis).
You might want to google 'seeding with time' to see why it's not a great idea... especially where unit tests are concerned.
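For example, a hypothetical JUnit 3 test that fixes the seed in setUp() using java.util.Random (the constant 42 is arbitrary):

import java.util.Random;
import junit.framework.TestCase;

public class RandomInputTest extends TestCase {
    private Random rng;

    protected void setUp() {
        // A fixed seed gives the same "random" inputs on every run, so a
        // failure is always reproducible. Seeding with
        // System.currentTimeMillis() makes failures appear and vanish
        // between runs.
        rng = new Random(42L);
    }

    public void testWithRepeatableRandomInputs() {
        int a = rng.nextInt(1000);
        int b = rng.nextInt(1000);
        // Stand-in assertion; a real test would feed a and b to the
        // code under test.
        assertEquals(a + b, b + a);
    }
}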
Again you add a definition to unit testing without further reference. Unit testing is *in practice* white-box testing, since the tests are normally written by the developer of the target code, but it is actually beneficial to treat it as black-box testing: look at the class from the public API, consider the requirements, and then try to tear it apart without thinking too much about the code internals. This is at least my personal approach when writing unit tests for my own code.
White box / black box... all the same, really, from a testing PoV. The only difference is how tolerant the test case is of the code design changing. White box: not terribly tolerant. Black box: tolerant.
With TDD, it's better to consider the unit tests to be 'Behavior Specification Tests'. They validate that the specified behavior exists within the code under test, but each specification test specifies only a small part of that code, since we have many small test cases rather than a few large ones.
For example, say we have a Calculator class that can Add, Subtract, Multiply & Divide Integers.
So we'd have the following tests...
testAddingZeros()
testAddingPositiveNumbers()
testAddingNegativeNumbers()
testAddingNegativeWithPositiveNumbers()
testAddingPositiveWithNegativeNumbers()
testDividingByZero()
testDividingPositiveNumberByNegative()
.....
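As a sketch, the first few of these might look like this, assuming a JUnit 3-style TestCase and a Calculator with an int add(int, int) method (the thread doesn't show the actual API):

import junit.framework.TestCase;

public class CalculatorTest extends TestCase {
    public void testAddingZeros() {
        assertEquals(0, new Calculator().add(0, 0));
    }

    public void testAddingPositiveNumbers() {
        assertEquals(30, new Calculator().add(10, 20));
    }

    public void testAddingNegativeNumbers() {
        assertEquals(-30, new Calculator().add(-10, -20));
    }
}

// Minimal implementation, included only so the sketch compiles.
class Calculator {
    public int add(int a, int b) { return a + b; }
}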
I don't need tests for different values within the Integer range within each test case, as I have separate test cases for the different boundaries. One benefit of having separately named test cases, rather than lumping them all into a single testAdd() method, is that I can write Just Enough Code to make each test pass. However, the biggest benefit comes later, when I or someone else modifies the code and one or two named test cases fail rather than a single test case. Immediately, without having to debug, I can see what has broken.
"typing.... run all tests ... bang!
...
testAddingNegativeWithPositiveNumbers() failed - expected -10, got -30)
"
I know I've broken the negative-with-positive code somehow, but I also know I have NOT broken any other conditions (test cases).
If all of those asserts were in one testAdd() method, then any asserts after the one testing -10 + 20 would NOT be run, so I would NOT know whether I'd broken anything else.
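To see why, here is a hypothetical lumped-together version (values illustrative, reusing the Calculator sketch above): JUnit stops a test method at its first failing assert, so everything after it is skipped.

// One big test method: if the second assert fails, JUnit abandons
// the method there, and the remaining asserts never run.
public void testAdd() {
    Calculator calc = new Calculator();
    assertEquals(0, calc.add(0, 0));
    assertEquals(10, calc.add(-10, 20));   // fails? execution stops here
    assertEquals(30, calc.add(10, 20));    // never reached
    assertEquals(-30, calc.add(-10, -20)); // never reached
}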
This might seem like a small thing, but when your application has 1700 unit tests, it's so much easier to see what's happening quickly with this approach.
Now, each of these test cases may end up being the same apart from the values passed to the Calculator object and the expected output. In that case I'd do one of two things:
1) refactor the tests to use a private helper method (sketched below):
private void testWith(Integer num1, Integer num2, Integer expected)
2) apply the 'Parameterised Testcase' pattern.
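A sketch of option 1, reusing the hypothetical Calculator from above (the testWith signature comes from the thread; everything else is illustrative):

import junit.framework.TestCase;

public class CalculatorAddTest extends TestCase {
    public void testAddingZeros()           { testWith(0, 0, 0); }
    public void testAddingPositiveNumbers() { testWith(10, 20, 30); }
    public void testAddingNegativeNumbers() { testWith(-10, -20, -30); }

    // Shared body: only the inputs and expected result vary per named test.
    private void testWith(Integer num1, Integer num2, Integer expected) {
        assertEquals(expected.intValue(),
                     new Calculator().add(num1.intValue(), num2.intValue()));
    }
}

Each named test stays one line long, so a failure still points straight at the broken boundary, while the shared mechanics live in one place.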
Andrew