JUnit et al approach - criticisms

Mike Schilling

Chris Uppal said:
Hmm. I would definitely call that an odd use of the term. At least in an
environment where one has genuine paying customers, the question that
"acceptance testing" addresses is not "does the code work?", but "are we
going to be paid?". And, for all the importance of the first question,
the second has importance of a totally different order. I think few
people would want to confuse the two ;-)

Group A located in X that's using subsystem B written in Y back in 200Z are
customers for the group from Y, as are A's end-users. I wish I had a full
acceptance test suite for, say, Xerces, so I'd know whether it's safe to
upgrade it. If there were more hours in a week, I would have one.
 
stevengarcia

It depends on how you write your tests and execute them. If you use the
JUnit task in Ant, you can run all of your tests and see which ones
failed at the end.
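
For reference, a minimal sketch of such an Ant target (the `<junit>` task is one of Ant's optional tasks; the target, property, and path names below are invented):

```xml
<!-- Runs every test class and reports all failures at the end,
     rather than stopping at the first failing test. -->
<target name="test" depends="compile-tests">
  <junit printsummary="yes" haltonfailure="no" failureproperty="tests.failed">
    <classpath refid="test.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="${reports.dir}">
      <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
    </batchtest>
  </junit>
  <fail if="tests.failed" message="One or more tests failed - see reports."/>
</target>
```

Setting `haltonfailure="no"` is what lets the whole suite run to completion; the `failureproperty` plus trailing `<fail>` still makes the build fail afterwards.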
 
stevengarcia

1. It is hard to get an entire project to adopt JUnit if not everyone
believes in automated unit tests.

Definitely true!
2. It is sometimes (often, in our case) difficult to write a *unit* test for
each class in isolation.

This might be true for 2% of all classes, but if you write your test
first and then your production class so it passes the test, you will
always avoid this problem. This is not always the easiest thing to do
though.
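
As a toy illustration of that test-first point (all names are invented, and it is written JUnit-free so it runs standalone): because the test exists before the class does, the class is forced to be constructible and checkable in isolation.

```java
// Toy test-first illustration: the test in main() was conceptually written
// before PriceCalculator existed, which forces the class to have no hidden
// dependencies. All names here are invented for the sketch.
public class PriceCalculatorTest {

    // The production class, written only after the test dictated its shape.
    static class PriceCalculator {
        private final double taxRate;
        PriceCalculator(double taxRate) { this.taxRate = taxRate; }
        double total(double net) { return net * (1.0 + taxRate); }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator(0.20);
        // Hand-rolled assertion so the sketch runs without JUnit on the path.
        if (Math.abs(calc.total(100.0) - 120.0) > 1e-9) {
            throw new AssertionError("expected 120.0, got " + calc.total(100.0));
        }
        System.out.println("test passed");
    }
}
```

In practice the check would be a JUnit `assertEquals`, but the shape of the exercise is the same.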
3. Doing the work to get adequate test coverage from a unit test suite is
generally a thankless job -- unless management buys in to the notion that it
is worthwhile and makes the time to get it done.

If you are beginning work on a legacy system this can be EXTREMELY
difficult and something I generally don't recommend. But all new code,
going forward, should have tests alongside it.
4. Maintaining the test suite along with the code it tests is sometimes
painful. (I just changed some code and I have a failing test. Is the test
failing because it's a real failure of the code or because the test case is
now incorrect?)

Yea, I agree with that. When you have 100 test classes and 100
production classes you are maintaining a big codebase. One of the
master skills of agile development is refactoring, not only production
code but also test classes. I was not good at it a year ago, now I'm
pretty good at this aspect of programming.
5. You have to train the people who write the tests what constitutes a good
test. (IE, random or pseudorandom inputs are a bad idea; fixed paths on a
filesystem are a bad idea; etc.)

Kind of like #1. BTW, random inputs are in some cases the proper way
to write unit tests. It depends on what test you are writing.
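
A property-style test is one case where random inputs are arguably appropriate. A sketch (the reverse-twice invariant is just an example): instead of fixed cases, an invariant is checked over many random inputs, and fixing the seed keeps any failure reproducible, which answers the usual objection to randomness in tests.

```java
import java.util.Random;

// Sketch of a property-style test: check that reversing a string twice
// yields the original, over many random strings. The fixed seed makes
// failing runs repeatable.
public class ReverseProperty {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed: failures are repeatable
        for (int i = 0; i < 1000; i++) {
            int len = rnd.nextInt(20);
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < len; j++) {
                sb.append((char) ('a' + rnd.nextInt(26)));
            }
            String s = sb.toString();
            if (!reverse(reverse(s)).equals(s)) {
                throw new AssertionError("property failed for: " + s);
            }
        }
        System.out.println("1000 random cases passed");
    }
}
```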
 
slippymississippi

The only negative thing I can come up with is that writing tests is
really, really boring.

I think that's where test-driven-development really proves its mettle.
You write your methods and tests as stubs. Then you flesh out a test
to satisfy your requirement, flesh out a method that satisfies the
test, and then run the JUnit test. How it reacts will give you insight
into what problems you might have with your design. Refactor, then
test again, until your method is tight. Boom, thirty minutes have gone
by, and you move on to the next method. In one day, you can have a
complex class completely fleshed out and fully tested, with a suite of
JUnit tests that will let you know if anything gets broken by future
code changes. Because of this, you jump confidently into massive
refactoring exercises without a blink, where before you would
vacillate in the face of such an exercise for hours or even days.
 
Timbo

Chris said:
Only two points I'd add. One is that "unit testing" neither implies nor is
implied by the use of an xUnit framework. (I doubt if you confuse the two, but
I think some people do). The other is that given the fractal nature of
software designs, almost any test is simultaneously a unit test at one level
and an "integration test" at a lower level.

(I put "integration test" in scare quotes because that is not quite the usual
meaning of the term, but I don't know of anything closer in common usage.
Normally, in my experience, it is only used when a complete /system/ is being
tested -- full end-to-end functionality.)

I think you are right to use 'integration test' here. To me,
testing two units together is integration testing, and testing
full end-to-end functionality is just 'system testing'.
 
Andrew McDonagh

Timbo said:
I think you are right to use 'integration test' here. To me, testing two
units together is integration testing, and testing full end-to-end
functionality is just 'system testing'.

but what is a unit?

To TDDers, it's a class or a group of classes, depending upon how the
code finally looks - nothing to do with how the code starts out.

This is because TDD is an evolutionary design technique.

We:

1) Write a failing test case.
It doesn't compile because we haven't written the production code
yet. Write just enough to make it compile - stub the methods only.
Watch the test fail - because we haven't filled the method bodies out yet.

2) Now write just enough of the method bodies to make the test pass.

3) Refactor the code to remove duplication - this is the big design
improving part - easily done because we have a passing test to tell us
if we made a mistake in our refactoring.

Repeat until finished.
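
The three steps above, compressed into one runnable sketch (class and method names are invented, and a real session would use JUnit rather than a hand-rolled check):

```java
// TDD cycle sketch. Step 1 ended with a stub such as
//   int size() { return -1; }   // deliberate non-answer, test fails
// Step 2 is the "just enough" body below; step 3 (refactor) would then
// clean up duplication with the test kept green.
public class TddCycleSketch {
    static class Pile {
        private int count = 0;
        void push() { count++; }
        int size() { return count; }
    }

    public static void main(String[] args) {
        Pile pile = new Pile();
        pile.push();
        pile.push();
        if (pile.size() != 2) throw new AssertionError("expected 2");
        System.out.println("green");
    }
}
```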


Whilst it's true that xUnit frameworks can be and are used by teams who
don't do TDD, their authors wrote them to facilitate TDD. Currently the
majority of JUnit users are TDDers.

Andrew
 
Chris Uppal

Mike said:
Group A located in X that's using subsystem B written in Y back in 200Z
are customers for the group from Y, as are A's end-users. I wish I had a
full acceptance test suite for, say, Xerces, so I'd know whether it's
safe to upgrade it.

Sounds as if we can at least agree that "acceptance testing" is done by the
consumer of the s/w rather than by the producer?

-- chris
 
Timbo

Andrew said:
but what is a unit?

To TDDers, its a Class or group of classes depending upon how the code
finally looks, nothing to do with how the code starts out.

Using OO terminology, I define a unit as a class. How to test that
unit in isolation depends on its dependencies. If it depends on
other classes from within the application, those should be stubbed
during unit testing; otherwise it is integration testing. If you
feel the CUT's dependencies are simple enough, you may not feel the
need to unit test it, and will probably just use the production
classes instead of stubs, but that, in my mind, is not unit testing.
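
One way to sketch that distinction (all names here are invented): the class under test depends on a Clock abstraction, and the unit test substitutes a fixed stub; wiring in the real system clock instead would, by the definition above, make it an integration test.

```java
// Stubbing sketch: ReportService is the class under test; its Clock
// dependency is replaced with a deterministic stub so the test exercises
// ReportService in isolation. All names are invented for illustration.
public class StubbingSketch {
    interface Clock { long now(); }

    static class ReportService {
        private final Clock clock;
        ReportService(Clock clock) { this.clock = clock; }
        String stamp() { return "report@" + clock.now(); }
    }

    public static void main(String[] args) {
        Clock fixed = () -> 1000L; // stub: deterministic, no real time source
        ReportService service = new ReportService(fixed);
        if (!service.stamp().equals("report@1000")) {
            throw new AssertionError("unexpected: " + service.stamp());
        }
        System.out.println("unit test passed with stubbed Clock");
    }
}
```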

Tim
 
Mike Schilling

Chris Uppal said:
Sounds as if we can at least agree that "acceptance testing" is done by
the consumer of the s/w rather than by the producer?

Sure; I never meant to suggest otherwise.
 
Mike Schilling

Timbo said:
Using OO terminology, I define a unit as a class. How to test that unit in
isolation depends on its dependencies. If it depends on other classes from
within the application, those should be stubbed during unit testing,
otherwise it is integration testing. If you feel the CUT's dependencies are
simple enough, you may not feel the need to unit test it, and will
probably just use the production classes instead of stubs, but that, in my
mind, is not unit testing.

To take this to an extreme, are you saying that if I create a class that
represents XML QNames (i.e. that contains a namespace and a local name), and
I want to test another class that uses them, I need to stub out my QName
class to produce a true unit test? What if I have a class that contains
only static constants?
 
Andrew McDonagh

Timbo said:
Using OO terminology, I define a unit as a class. How to test that unit
in isolation depends on its dependencies. If it depends on other classes
from within the application, those should be stubbed during unit
testing, otherwise it is integration testing. If you feel the CUT's
dependencies are simple enough, you may not feel the need to unit test it,
and will probably just use the production classes instead of stubs, but
that, in my mind, is not unit testing.

Tim

It depends upon how that class came into being, what it does and your
mood at the time.

In TDD we write a test, write the code, refactor.

Now we normally start out by creating a class and the method we are
exercising. But after several test, code, refactor cycles we see that our
single class needs further simple refactorings because it's really
several classes in one (e.g. a dispatcher-type class which creates
instances of different command classes). But we started out with one
class and so far have one test class for it.

At this point we have a choice:

1) Leave the single test class as is - it's testing our original class
and n other command classes.

2) Change the test class to test the original class, using fake/mock
Command objects. Then move the test case methods from the original
test class into new test classes, one for each Command class.
Here we end up with lots of small test classes, each testing an
individual class.

Both approaches are fine - both are unit tests - that's how they started
out.
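
A rough sketch of choice 2 (Dispatcher and Command are invented names): the dispatcher's test registers a fake Command, so the dispatcher stays tested in isolation while each real Command class gets its own small test class.

```java
// Choice 2 sketch: the dispatcher is unit-tested against a fake Command,
// independent of any real command implementation. Names are invented.
import java.util.HashMap;
import java.util.Map;

public class DispatcherSketch {
    interface Command { String run(); }

    static class Dispatcher {
        private final Map<String, Command> commands = new HashMap<>();
        void register(String name, Command c) { commands.put(name, c); }
        String dispatch(String name) {
            Command c = commands.get(name);
            return c == null ? "unknown" : c.run();
        }
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        // Fake command: does nothing real, just proves the routing works.
        d.register("ping", () -> "pong");
        if (!d.dispatch("ping").equals("pong")) throw new AssertionError();
        if (!d.dispatch("nope").equals("unknown")) throw new AssertionError();
        System.out.println("dispatcher unit test passed with fake Command");
    }
}
```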

For me, the point at which a unit test stops being a unit test is
when it's difficult to set up. At that point the test code is
telling us: 'I'm a functional test'.

YMMV

Andrew
 
Andrew McDonagh

Chris said:
Sounds as if we can at least agree that "acceptance testing" is done by the
consumer of the s/w rather than by the producer?

-- chris

Who said differently?
 
Timbo

Mike said:
To take this to an extreme, are you saying that if I create a class that
represents XML QNames (i.e. that contains a namespace and a local name), and
I want to test another class that uses them, I need to stib out my QName
class to produce a true unit test? What if I have a class that contains
only static constants?

If it's only static constants, then it's really a data structure,
so no. But for your QName example, if it has getter/setter methods
for example, then unit testing would require it to have its own
tests. I'm not saying that you should necessarily test it, but
it's not pure unit testing otherwise.
 
Timbo

Andrew said:
It depends upon how that class came into being, what it does and your
mood at the time.

In TDD we write a test, write the code, refactor.

now we normally start out by creating a class and its method we are
exercising. But after several test,code, refactor cycles we see that our
single class needs further simple refactorings because its really
several classes in one (e.g. a dispatcher type class which creates
instances of different command classes). but we started out with one
class and so far have one test class for it.

At this point we have a choice:

1) Leave the single test class as is - its testing our original class
and n other command classes.

2) Change the test class to test the original class, using fake/mock
Command objects. Then move the testcase methods from the original
testclass into new test classes, one for each Command class.
Here we end up with lots of small test classes, each testing an
individual class.

Both approaches are fine - both are unit tests - that's how they started
out.

I agree that both approaches are fine, but I disagree that the
first choice constitutes unit testing. Firstly, it's not testing
in isolation; and secondly, either the command classes are not
being specifically tested (and so cannot be said to be unit
tested), or they are being tested with the top-level module unit
tests, which means the tests are testing outside of the module's
boundary, so it's not unit testing.

(The word 'test' came up far too many times in that last sentence!)

Tim
 
Mike Schilling

Timbo said:
If it's only static constants, then it's really a data structure, so no.
But for your QName example, if it has getter/setter methods for example,
then unit testing would require it to have its own tests. I'm not saying
that you should necessarily test it, but it's not pure unit testing
otherwise.

Of course QName needs its own tests. My question is whether unit testing
one of its clients requires stubbing it out.
 
Timbo

Mike said:
Of course QName needs its own tests. My question is whether unit testing
one of its clients requires stubbing it out.

To be a unit test, it does. If you include it, you are not testing
the unit in isolation. However, with such a simple class,
personally I wouldn't bother stubbing it, because the stub is
likely to be just as complex as the QName class itself. I'm
not saying there is anything wrong with this approach (I do it
quite often), but I would call that integration testing.
 
Mike Schilling

Timbo said:
To be a unit test, it does. If you include it, you are not testing the
unit in isolation. However, with such a simple class, personally I
wouldn't bother stubbing it, because the stub is likely to be just as
complex as the QName class itself.

Very true :)
I'm not saying there is anything wrong with this approach (I do it quite
often), but I would call that integration testing.

Thanks for the response. I think it's an odd distinction, myself, to say
"this is an integration test, because it depends on a 20-line class I wrote,
while this is a pure unit test because it depends only on the JDK classes,
the Apache Commons, and Xerces", but we can disagree about that.
 
Andrew McDonagh

Timbo said:
I agree that both approaches are fine, but I disagree that the first
choice constitutes unit testing.

It's OK to disagree.
Firstly, it's not testing in isolation;

It's impractical to test any class in isolation, as we'd have to wrap all
Java & 3rd-party libraries in order to fake/mock them to do so (never
mind about Object.class). If your class uses a String object - you
don't stub that, do you? There's no difference between those classes and
your own.
and secondly, either the command classes are not being specifically
tested (and so cannot be said to be unit tested), or they are being
tested with the top-level module unit tests, which means the tests are
testing outside of the module's boundary, so it's not unit testing.

Maybe you have missed something I said... those command classes were
*extracted* from the single class. Every line of code within the
original single class was covered by a unit test. Every scenario for
that single class was captured as a unit test.

It's just that part of TDD is 'refactoring', where we look at the current
design and improve things. In this case, we saw that the single class
was not conforming to the Single Responsibility Principle, so we
extracted those hidden classes from within it.
The result is several small classes, where every line of code for all of
them is still covered by the tests for the original class - in fact we
don't touch those tests - they create a safety net for our refactorings.

So, the choice we made was to separate the logic into discrete classes,
but we could have left them as one class - do you really think this
means the previous unit test is now an integration test?
(The word 'test' came up far too many times in that last sentence!)

Always does ;)
 
iamfractal

A very minor and more-or-less OT-point.

"In TDD we write a test, write the code, refactor. "

I'm a morning person. So I'm most productive in the morning. By the
time 4pm swings around, I reach a creative low-ebb, and I just want to
do something mechanical and unchallenging.

So I tend to write all my code before mid-afternoon, and only then
start writing the test classes for the morning's output.

So I'm not a TDDer by definition, but pragmatism doesn't stop me aiming
for its ideals.

As you were ...

..ed
 
Andrew McDonagh

iamfractal said:
A very minor and more-or-less OT-point.

"In TDD we write a test, write the code, refactor. "

I'm a morning person. So I'm most productive in the morning. By the
time 4pm swings around, I reach a creative low-ebb, and I just want to
do something mechanical and unchallenging.

So I tend to write all my code before mid-afternoon, and only then
start writing the test classes for the morning's output.

So I'm not a TDDer by definition, but pragmatism doesn't stop me aiming
for its ideals.

As you were ...

..ed

:) That's most definitely NOT TDD; it's more akin to normal unit testing.

The times within the TDD cycle are around....

Writing the test - around 2-3 minutes.
Write the code to make the test pass - around 1-2 minutes.
Refactoring - around 2-3 minutes.

So in total the cycle takes somewhere around 5-8 minutes.

These times are examples, but quite normal. If you find you are
spending more than 10 minutes within a step of the cycle, then it's an
indication of trying to test, implement, or refactor too big a chunk,
or that the code base hasn't been refactored enough.

HTH


Andrew
 
