Unit testing guidelines

Hendrik Maryns


Jacob wrote the following on 03/18/2006 12:03 AM:
I have compiled a set of unit testing
recommendations based on my own experience
with the concept.

Feedback and suggestions for improvements
are appreciated:

http://geosoft.no/development/unittesting.html

Nice work.

I don't totally agree with point 16: a throws statement means an
exception *might* be thrown, and the circumstances under which this can
happen should be documented. It is seldom that an exception must be thrown.

You might want to give some explanation about what your assertX methods do.

H.
--
Hendrik Maryns

==================
www.lieverleven.be
http://aouw.org
 
Daniel T.

Hendrik Maryns said:

Jacob wrote the following on 03/18/2006 12:03 AM:

Nice work.

I don't totally agree with point 16: a throws statement means an
exception *might* be thrown, and the circumstances under which this can
happen should be documented. It is seldom that an exception must be thrown.

I agree. Only test what you actually want the client code to rely on.
Now if you want the client code to rely on the method throwing an
exception...
 
Ian Collins

Jacob said:
I have compiled a set of unit testing
recommendations based on my own experience
with the concept.

Feedback and suggestions for improvements
are appreciated:

http://geosoft.no/development/unittesting.html
I'd add point 0 - write the tests first.

8 - names should be more expressive; rather than testSaveAs(), how about
a series of tests, testSaveAsCreatesANewFile(),
testSaveAsSavesCurrentDataInNewFile() etc.? Often tests with a broad name
attempt to test too much and don't express their intent.
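As an illustration, narrow, behaviour-named tests like the above might look like this in plain Java (the saveAs method and in-memory file store are invented stand-ins for the sketch, not from the guidelines):

```java
import java.util.HashMap;
import java.util.Map;

public class SaveAsTest {
    // Tiny in-memory stand-in for the code under test (illustrative only).
    static Map<String, String> files = new HashMap<>();
    static void saveAs(String name, String data) { files.put(name, data); }

    // Each test checks one behaviour, and its name says which one.
    static void testSaveAsCreatesANewFile() {
        saveAs("report.txt", "hello");
        if (!files.containsKey("report.txt"))
            throw new AssertionError("file not created");
    }

    static void testSaveAsSavesCurrentDataInNewFile() {
        saveAs("report.txt", "hello");
        if (!"hello".equals(files.get("report.txt")))
            throw new AssertionError("wrong data saved");
    }

    public static void main(String[] args) {
        testSaveAsCreatesANewFile();
        testSaveAsSavesCurrentDataInNewFile();
        System.out.println("2 tests passed");
    }
}
```

When one of these fails, the test name alone tells you which behaviour broke.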

Point 0 covers point 11.

13 - take care with random numbers, they can lead to failures that are
hard to reproduce. I'd use a pseudo-random sequence that is repeatable
with a given seed.
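A minimal sketch of that idea, without a test framework (the property being checked is a trivial stand-in):

```java
import java.util.Random;

public class SeededInputTest {
    // A fixed seed makes the "random" input sequence reproducible,
    // so any failure can be replayed exactly.
    static final long SEED = 42L;

    public static void main(String[] args) {
        Random random = new Random(SEED);
        for (int i = 0; i < 1000; i++) {
            int input = random.nextInt(10000);
            // Stand-in for the method under test and its expected property.
            if (Math.abs(input) < 0) {
                throw new AssertionError(
                    "failed for input " + input + " (seed " + SEED + ")");
            }
        }
        System.out.println("1000 cases passed");
    }
}
```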

0 and 8 cover 14.

0 covers 17.

0 covers 20.
 
Timo Stamm

Jacob said:
I have compiled a set of unit testing
recommendations based on my own experience
with the concept.

Feedback and suggestions for improvements
are appreciated:


| 7. Keep tests close to the class being tested
|
| If the class to test is Foo the test class should be called FooTest
| and kept in the same package (directory) as Foo. The build environment
| must be configured so that the test classes don't make their way into
| production code.

It is necessary to have test classes in the same package as the tested
class in order to test package private methods.

But you don't have to put the classes in the same directory. Most IDEs
support several source folders. You can setup two source folders. For
example: "src" for your application source, "test" for your test source.
If you use the same package structure in the test source folder, you can
test package private methods and it is very easy to deploy only
application code.
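For example (a sketch; in a real project Counter would live under src/ and CounterTest under test/, both with the same package declaration, omitted here so the snippet compiles standalone):

```java
// src/.../Counter.java — package private, invisible outside its package.
class Counter {
    private int value;
    int increment() { return ++value; }
}

// test/.../CounterTest.java — same package, different source folder,
// so it can call the package private method yet is trivially left
// out when only the application source folder is deployed.
public class CounterTest {
    public static void main(String[] args) {
        Counter c = new Counter();
        if (c.increment() != 1) throw new AssertionError("expected 1");
        if (c.increment() != 2) throw new AssertionError("expected 2");
        System.out.println("ok");
    }
}
```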


Timo
 
Ian Collins

Timo said:
| 7. Keep tests close to the class being tested
|
| If the class to test is Foo the test class should be called FooTest
| and kept in the same package (directory) as Foo. The build environment
| must be configured so that the test classes don't make their way into
| production code.

It is necessary to have test classes in the same package as the tested
class in order to test package private methods.
Another view is that tests that require access to private methods are a
design smell. Often these can be refactored into objects that can be
tested in isolation.

In C++, it's very tempting to make the test class a friend of the class
under test. I've found that I end up with a better design by resisting
this temptation.
 
Hendrik Maryns


Ian Collins wrote the following on 03/19/2006 12:13 AM:
Another view is that tests that require access to private methods are a
design smell. Often these can be refactored into objects that can be
tested in isolation.

I was about to answer the same: shouldn't problems in package private
methods spill through to public methods? Then why test them separately?
Find an error in a public method and retrace it with your favorite
debugger to the package private method, I'd say (without much
experience, so correct me if I'm wrong).

H.

--
Hendrik Maryns

==================
www.lieverleven.be
http://aouw.org
 
Bent C Dalager

I was about to answer the same: shouldn't problems in package private
methods spill through to public methods? Then why test them separately?

It makes it more time-consuming to find out where the error is.
Find an error in a public method and retrace it with your favorite
debugger to the package private method, I'd say (without much
experience, so correct me if I'm wrong).

I prefer my unit tests to have obvious failure modes so that I can
basically tell from which test failed, exactly where in my source the
bug is. This means I don't have to muck around with a debugger, I can
just fix it and get on with things.

For this to be the case, however, the methods that I test need to be
reasonably small and not do a whole lot. These are my private helper
methods that I invoke from my more involved algorithm methods. Many
are one or two liners and they generally don't make sense to have
publicly accessible since they're really just internal building blocks
for constructing other more interesting methods.

Cheers
Bent D
 
Timo Stamm

Ian said:
Another view is that tests that require access to private methods are a
design smell. Often these can be refactored into objects that can be
tested in isolation.

Not "private", but "package private".

Package private classes are only visible within the same package (same
directory). They are useful in large APIs where you have a lot of
functionality, but only want to expose a small interface.


Timo
 
Timo Stamm

Timo said:
| 7. Keep tests close to the class being tested
|
| If the class to test is Foo the test class should be called FooTest
| and kept in the same package (directory) as Foo. The build environment
| must be configured so that the test classes don't make their way into
| production code.

It is necessary to have test classes in the same package as the tested
class in order to test package private methods.

But you don't have to put the classes in the same directory. Most IDEs
support several source folders. You can setup two source folders. For
example: "src" for your application source, "test" for your test source.
If you use the same package structure in the test source folder, you can
test package private methods and it is very easy to deploy only
application code.


Oops, I didn't realize that the guidelines aren't Java-specific and that
this thread is on c.l.java.p as well as c.l.c++.

My objection is specific to Java. I doubt that the same applies to C++.
 
Ian Collins

Timo said:
Not "private", but "package private".

Package private classes are only visible within the same package (same
directory). They are useful in large APIs where you have a lot of
functionality, but only want to expose a small interface.
I see, a concept not shared with C++.
 
Ian Collins

Hendrik said:
I was about to answer the same: shouldn't problems in package private
methods spill through to public methods? Then why test them separately?
Find an error in a public method and retrace it with your favorite
debugger to the package private method, I'd say (without much
experience, so correct me if I'm wrong).
As Bent said, you are testing too much with your tests. A golden rule
is not to rely on indirect tests.

I've recently come to the conclusion (while working with PHP which
doesn't have a handy debugger) that resorting to the debugger is a
strong indicator that your tests aren't fine grained enough. Try working
without one for a while and see your tests improve!
 
Adam Maass

Jacob said:
I have compiled a set of unit testing
recommendations based on my own experience
with the concept.

Feedback and suggestions for improvements
are appreciated:

http://geosoft.no/development/unittesting.html

Thanks.

I strongly object to number 13. Unit-tests, especially in an automated
framework, should be repeatable. (When a test fails, you need to know on
what inputs it failed. Once you fix the failure, you should hard-code the
inputs it failed on so that subsequent changes do not cause a regression of
the error.)

I don't necessarily object to looping over large numbers of inputs and
testing each one for expected outputs. But a unit test should contain no
randomness at all. (Or at least should have a way of specifying the seed for
the randomness generator(s).)


-- Adam Maass
 
Jacob

Hendrik said:
I don't totally agree with point 16: a throws statement means an
exception *might* be thrown, and the circumstances under which this can
happen should be documented. It is seldom that an exception must be thrown.

I assume the conditions under which an exception is thrown
are deterministic and well documented (though this depends to a
large extent on *documentation* rather than language syntax,
which is a problem, as documentation is inherently inaccurate).

The simple example is the java List.get(int index) method that
is documented to throw an exception if index < 0. This is the
contract, and this is one of the things I want to test in a
unit test.

Recommendation 16 just indicates how this is done in practice.
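In plain Java (no test framework) testing that contract might look roughly like this:

```java
import java.util.ArrayList;
import java.util.List;

public class GetContractTest {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        try {
            list.get(-1);
            // Reaching this line means the documented contract was broken.
            throw new AssertionError("expected IndexOutOfBoundsException");
        } catch (IndexOutOfBoundsException expected) {
            // The documented exception was thrown: contract honored.
        }
        System.out.println("contract honored");
    }
}
```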
 
Jacob

Ian said:
I'd add point 0 - write the tests first.

Personally, I find the XP approach to unit testing a bit too restrictive
and therefore left the issue intentionally open. I would really like
more feedback on it though; although I have practiced unit testing
for years, I never adopted this practice myself.
8 - names should be more expressive; rather than testSaveAs(), how about
a series of tests, testSaveAsCreatesANewFile(),
testSaveAsSavesCurrentDataInNewFile() etc.? Often tests with a broad name
attempt to test too much and don't express their intent.

Agree. I think this is basically what's in #8 without being too
verbose.
Point 0 covers point 11.

I am not sure it does, and I wanted to define the two concepts
"execution coverage" and "test coverage" anyway. There is a blurred
distinction between the two in the literature as far as I have been
able to dig up.
13 - take care with random numbers, they can lead to failures that are
hard to reproduce. I'd use a pseudo-random sequence that is repeatable
with a given seed.

0 and 8 cover 14.

To some degree, but I'd include them even if #0 was there. I don't
see "testing first" as a silver bullet, but more as a different
process approach.
0 covers 17.

Not necessarily. #0 states when to write the tests. #17 states that
the *code* should be written so that the workload of the unit testing
is minimized.
0 covers 20.

Yes, assuming everything is tested always. But in that case
it is covered without #0 as well. What I see in the industry today
is a major shift in adding unit testing to legacy code. I added
#20 as a suggestion to start this work at the bottom level.
 
Jacob

Ian said:
Another view is that tests that require access to private methods are a
design smell. Often these can be refactored into objects that can be
tested in isolation.

In C++, it's very tempting to make the test class a friend of the class
under test. I've found that I end up with a better design by resisting
this temptation.

This is my experience as well, and the reason why I added
recommendation #9 "Test public API".

That something is technically feasible (private method testing through
reflection or by other means) doesn't necessarily mean it is a good idea.

You need to draw the line somewhere, and the public API seems quite
natural in this case. This is also more robust against changes, in that it
will be more stable and require less test maintenance during code
refactoring.
 
Jacob

Timo said:
Not "private", but "package private".

Package private classes are only visible within the same package (same
directory). They are useful in large APIs where you have a lot of
functionality, but only want to expose a small interface.

I regard this as "private" in this context. An error in the
inner logic between classes of the same package (or *friends*
in C++ syntax) will eventually reveal itself through the public
API.

I want to keep test classes close to the class being tested for
practical reasons rather than technical reasons.

I understand the objection of "testing too large chunks of code"
(Ian C.), but test code adds complexity and workload to your system
after all, and I really want to keep it to a minimum. That's why I
reduce the public API of classes as much as possible (by heavy
use of package private methods for instance) and insist on testing
public API only.

But I don't claim that this is the only way, and it might well
depend on the nature of the project being tested.
 
Jacob

Adam said:
I strongly object to number 13. Unit-tests, especially in an automated
framework, should be repeatable. (When a test fails, you need to know on
what inputs it failed. Once you fix the failure, you should hard-code the
inputs it failed on so that subsequent changes do not cause a regression of
the error.)

I understand your objection, but this is actually one of the
mechanisms that has helped me find some of the hardest-to-trace
and most subtle errors in the code. It has proven to be
extremely helpful. Also, it gives me lots of confidence
knowing that my test suite of several thousand tests
is executed every hour with different input each time.
It is like adding another dimension to unit testing.

But I agree that tests must be reproducible, so I added #15 to ensure
that when a test fails, the test report will include the exact input
parameters it failed with. Then you can add a test with this explicit
input and debug it from there.
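A sketch of what #15 amounts to in practice (the add method is an invented stand-in for the code under test):

```java
import java.util.Random;

public class ReportedInputTest {
    static int add(int a, int b) { return a + b; }  // stand-in method under test

    public static void main(String[] args) {
        Random random = new Random();  // fresh input every run
        int a = random.nextInt(1000);
        int b = random.nextInt(1000);
        // On failure the message carries the exact inputs, so the case
        // can later be hard-coded as an explicit regression test.
        if (add(a, b) != a + b) {
            throw new AssertionError("add failed for a=" + a + ", b=" + b);
        }
        System.out.println("test passed");
    }
}
```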
 
Ian Collins

Jacob said:
Personally, I find the XP approach to unit testing a bit too restrictive
and therefore left the issue intentionally open. I would really like
more feedback on it though; although I have practiced unit testing
for years, I never adopted this practice myself.
TDD is more than an approach to unit testing, it is an approach to the
full design-test-code cycle.
Agree. I think this is basically what's in #8 without being too
verbose.



I am not sure it does, and I wanted to define the two concepts
"execution coverage" and "test coverage" anyway. There is a blurred
distinction between the two in the literature as far as I have been
able to dig up.
TDD done well will give you 100% execution coverage for free. How good
your test coverage is depends on how good you are at thinking up edge
cases to test.
To some degree, but I'd include them even if #0 was there. I don't
see "testing first" as a silver bullet, but more as a different
process approach.
Simple, incremental tests are the essence of good TDD.
Not necessarily. #0 states when to write the tests. #17 states that
the *code* should be written so that the workload of the unit testing
is minimized.
If you start with the tests, the code will have to be written that way.
Yes, assuming everything is tested always. But in that case
it is covered without #0 as well. What I see in the industry today
is a major shift in adding unit testing to legacy code. I added
#20 as a suggestion to start this work at the bottom level.

Very true.
 
Andrew McDonagh

Ian said:
TDD is more than an approach to unit testing, it is an approach to the
full design-test-code cycle.


More fundamentally, TDD is a design methodology, not a testing methodology.

It just happens to use Unit tests as its means of describing the design,
much like RUP uses UML.

Indeed, some TDD practitioners are starting to call it BDD - as in

http://www.google.co.uk/search?hl=en&q=behaviour+driven+development&btnG=Google+Search&meta=
TDD done well will give you 100% execution coverage for free.

I'd clarify that with 'TDD done *correctly* will give you 100% execution
coverage'.

*Correctly = write one failing test case,
write only enough code to make the test pass,
refactor to remove duplication,
repeat.

More commonly referred to as Red, Green, Refactor.
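A toy end state of one such cycle (the fizz example is invented for illustration; in real TDD the failing test exists before the code does):

```java
public class FizzTest {
    // Green: just enough code to make the checks below pass.
    static String fizz(int n) {
        return n % 3 == 0 ? "Fizz" : String.valueOf(n);
    }

    public static void main(String[] args) {
        // Red came first: these checks were written before fizz() existed.
        if (!"Fizz".equals(fizz(3))) throw new AssertionError("fizz(3)");
        if (!"4".equals(fizz(4))) throw new AssertionError("fizz(4)");
        System.out.println("green");
    }
}
```

The refactor step would then remove any duplication before the next red test is written.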
How good your test coverage is depends on how good you are at thinking up edge
cases to test.

Always starting with the test first allows for nothing less than 100%.
Simple, incremental tests are the essence of good TDD.

These kinds of tests aren't unit tests in the TDD sense - they are stress
tests that happen to be written in the same framework as the TDD unit
tests.

However, looping over a random set of numbers isn't the best approach to
this style of testing. If the OP wants to do this style, then using one
of the various Agitating frameworks/products will give a better result.

These tools tend to use byte code manipulation to randomly change various
values, which aren't just numbers but anything: int, long, float,
Integer, Double, String, Boolean, boolean, introducing nulls, etc.

See http://www.agitar.com/
 
