This is completely broken. You can't test an implementation of 'square'
with an identical implementation. You need a separate representation
for your expected result. Otherwise, you are not testing anything.
I've already answered this in a different posting: The unit test
reflects the requirements. The requirement for square() is to
return the square of the input: v*v. From a black-box perspective
I don't know the implementation of square(). It can be anything.
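To make the point concrete, here is a minimal sketch in Python. The expected values in the test are precomputed literals taken from the requirement ("return the square of the input"), so the test does not re-run the implementation to produce its own expected results:

```python
def square(v):
    # Implementation under test.  From the test's black-box
    # perspective this body could be anything.
    return v * v

def test_square():
    # Expected values are literals derived from the requirement,
    # not computed by calling the code under test.
    assert square(0) == 0
    assert square(1) == 1
    assert square(-3) == 9
    assert square(12) == 144

test_square()
```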
This is also wrong. The boundaries of the input are stated in the
function's contract. They are not determined by the user's level
of experience. Your test cases must cover the boundary conditions
stipulated by the function's documented contract *as* *well* *as*
boundary conditions based on white-box knowledge of the function's
implementation. If you cover these cases, plus a small assortment of
well-chosen "sanity" values, you don't need to waste time with large
amounts of random data.
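As an illustration of contract boundaries plus white-box boundaries plus a few sanity values, here is a sketch against a hypothetical days_in_month() function (the function and its contract are my own example, not from the thread):

```python
def days_in_month(month, leap=False):
    # Hypothetical function under test.  Documented contract:
    # month must be 1..12; out-of-range input raises ValueError.
    if not 1 <= month <= 12:
        raise ValueError("month out of range")
    if month == 2:
        return 29 if leap else 28
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

def test_days_in_month():
    # Contract boundaries: first and last valid month, and the
    # first invalid value on each side.
    assert days_in_month(1) == 31
    assert days_in_month(12) == 31
    for bad in (0, 13):
        try:
            days_in_month(bad)
            assert False, "expected ValueError"
        except ValueError:
            pass
    # White-box boundary: February is special-cased in the code.
    assert days_in_month(2) == 28
    assert days_in_month(2, leap=True) == 29
    # A small assortment of well-chosen sanity values.
    assert days_in_month(4) == 30
    assert days_in_month(9) == 30

test_days_in_month()
```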
This is all correct, given that you are able to identify the boundary
cases up front. In some cases you can, but for more complex ones
you easily forget some, in the same way you forget to handle these
cases in the original code (that's why there are bugs, after all).
Imagine implementing a tree container. In order to test correct
removal of nodes, some of the boundary cases might be:
remove root
remove intermediate node
remove leaf node
remove root when this is the only node
remove root with exactly one leaf
remove root with exactly one intermediate node
remove intermediate node with one child
remove intermediate node with many children
remove leaf node without siblings
remove leaf node with siblings
remove intermediate node with root parent
remove intermediate node with only leaf nodes
remove intermediate node with leaf nodes and other intermediate nodes
remove intermediate node with only other intermediate node children
remove non-existing node
remove null
remove node with unique name
remove node with non-unique name
etc.
The above might or might not be boundary cases, that actually depends
on the implementation: A good implementation has few! From experience
you "know" which cases are more likely to contain bugs, even
without knowing the implementation.
I don't say you shouldn't cover the boundary cases explicitly,
of course you should (see #13 in the guidelines).
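A sketch of what a few of those explicit boundary-case tests could look like. The tree container in the thread isn't shown, so this minimal binary search tree is an assumption standing in for it:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

class BST:
    # Deliberately small binary search tree, just enough to write
    # explicit boundary-case tests for node removal against.
    def __init__(self):
        self.root = None

    def add(self, key):
        def _add(n, key):
            if n is None:
                return Node(key)
            if key < n.key:
                n.left = _add(n.left, key)
            elif key > n.key:
                n.right = _add(n.right, key)
            return n
        self.root = _add(self.root, key)

    def remove(self, key):
        def _remove(n, key):
            if n is None:
                return None          # removing a non-existing key is a no-op
            if key < n.key:
                n.left = _remove(n.left, key)
            elif key > n.key:
                n.right = _remove(n.right, key)
            elif n.left is None:
                return n.right
            elif n.right is None:
                return n.left
            else:                    # two children: splice in the successor
                s = n.right
                while s.left:
                    s = s.left
                n.key = s.key
                n.right = _remove(n.right, s.key)
            return n
        self.root = _remove(self.root, key)

    def keys(self):
        out = []
        def walk(n):
            if n:
                walk(n.left); out.append(n.key); walk(n.right)
        walk(self.root)
        return out

# A few of the boundary cases from the list above, written explicitly.
t = BST()
t.add(5)
t.remove(5)                      # remove root when it is the only node
assert t.keys() == []

for k in (5, 3, 8, 1, 4):
    t.add(k)
t.remove(1)                      # remove a leaf node
assert t.keys() == [3, 4, 5, 8]
t.remove(3)                      # remove an intermediate node with one child
assert t.keys() == [4, 5, 8]
t.remove(5)                      # remove the root with two children
assert t.keys() == [4, 8]
t.remove(99)                     # remove a non-existing node
assert t.keys() == [4, 8]
```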
But when that is in place I would have built a tree at random, containing
a random number of nodes (0-1,000,000 perhaps), and then picked nodes at
random and performed a random operation (add, remove, move, copy, whatever)
on those, a random number of times (0-10,000 perhaps), and verified that each
operation behaves as expected and that the tree is always in a consistent
state afterwards. This would leave me with the confidence that if there are
cases I've forgotten (or that appear during code refactoring) they might
be trapped by this additional test.
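A scaled-down sketch of that randomized test, checking every operation against a trusted reference model. The minimal binary search tree here is an assumption standing in for the container under test, and a Python set plays the role of the model (the posting's suggested sizes of 1,000,000 nodes and 10,000 operations are reduced so the sketch runs quickly):

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

class BST:
    # Minimal binary search tree: the "container under test".
    def __init__(self):
        self.root = None

    def add(self, key):
        def _add(n, key):
            if n is None:
                return Node(key)
            if key < n.key:
                n.left = _add(n.left, key)
            elif key > n.key:
                n.right = _add(n.right, key)
            return n
        self.root = _add(self.root, key)

    def remove(self, key):
        def _remove(n, key):
            if n is None:
                return None
            if key < n.key:
                n.left = _remove(n.left, key)
            elif key > n.key:
                n.right = _remove(n.right, key)
            elif n.left is None:
                return n.right
            elif n.right is None:
                return n.left
            else:
                s = n.right
                while s.left:
                    s = s.left
                n.key = s.key
                n.right = _remove(n.right, s.key)
            return n
        self.root = _remove(self.root, key)

    def keys(self):
        out = []
        def walk(n):
            if n:
                walk(n.left); out.append(n.key); walk(n.right)
        walk(self.root)
        return out

random.seed(1234)                # deterministic, so failures reproduce
model = set()                    # trusted reference model
t = BST()

# Build a tree with a random number of random nodes.
for key in random.sample(range(10000), 500):
    t.add(key)
    model.add(key)

# Perform random operations a random number of times, verifying
# consistency after every single operation.
for _ in range(2000):
    key = random.randrange(10000)
    if random.random() < 0.5:
        t.add(key); model.add(key)
    else:
        t.remove(key); model.discard(key)
    # The tree must hold exactly the model's keys, and in-order
    # traversal must be sorted (the BST invariant) -- comparing
    # against sorted(model) checks both at once.
    assert t.keys() == sorted(model)
```

A failing assertion here points at an operation sequence the explicit boundary tests missed, which can then be minimized and promoted to a named test case.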