JUnit et al approach - criticisms


VisionSet

Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?
 

Thomas Hawtin

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

That's a difficult one. Issues with JUnit's implementation of the concept,
certainly. I actually think unit testing has a very localised effect on
the running of a project, and needn't impact on other aspects of the
life-cycle. Kind of like having a common set of code formatting conventions.

The only negative thing I can come up with is that writing tests is
really, really boring. Pissing about with debuggers, traces, printfs and
bug reports is much more interesting (true).

Tom Hawtin
 

Roedy Green

Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

Frank Zappa named his daughter Moon Unit. This is a blatant rip off of
the Zappa name.
 

VisionSet

I actually think unit testing has a very localised effect on
the running of a project, and needn't impact on other aspects of the
life-cycle.

Mmm, depends when you write the tests, I suppose.
 

VisionSet

Thomas Hawtin said:
Together with the code is traditional.

Okay, it depends when you write the code!
Or what weight of RUP you are following - something I'm a bit at odds with,
since the lighter-weight approaches seem to emphasise unit testing more.
 

Andrew McDonagh

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

Curious, why would testing in any form or with any framework be a
negative thing?

Keep in mind, JUnit (and its other-language variants dUnit, cppUnit,
sUnit, NUnit, etc.) are just unit testing frameworks and as such can be
used in many ways.

The reason I mention this is that whilst there is value in unit
testing our code, these particular frameworks came about to support
TestDrivenDevelopment (TDD).

TDD uses unit tests as a means of capturing the design that we create,
much like UML diagrams capture design.

TDD is a design methodology not a testing one.

The TDD cycle being:

1. Make a failing test
2. Make the test pass
3. Refactor to remove duplication
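
To make that concrete, here's a minimal sketch of one pass through the
cycle, JUnit 3 style (the Stack class is hypothetical):

    // Step 1: a failing test, written before the code it tests.
    public class StackTest extends junit.framework.TestCase {
        public void testPushThenPopReturnsSameElement() {
            Stack stack = new Stack();
            stack.push("x");
            assertEquals("x", stack.pop()); // red: Stack doesn't exist yet
        }
    }

    // Step 2: write just enough of Stack to make the test pass (green).
    // Step 3: refactor Stack and StackTest to remove duplication,
    //         re-running the test after each change.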

However, every time this cycle has been completed, the existing tests
also start to serve as a regression test suite for the code at the unit level.

TDD's unit tests (or, more accurately, programmer tests) do not replace
acceptance/integration tests - they support them.

As for JUnit - I have only good things to say about it - I've learnt
more about real OO, design, development & testing in the 4 years I've
been using it than in the previous 6 years developing.

Andrew
 

Mike Schilling

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

The JUnit approach (that the first failure ends the test) seems unproductive
to me. At times it's apt, but there are other times where I'd like to
continue the test and generate a report of all the failures rather than only
be told about the first one.
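
When I do want a full report, the best I've managed within JUnit as it
stands is a crude accumulate-then-fail idiom (a sketch only; the check
helper and the code under test are my own invention):

    // Sketch: record failures instead of throwing at the first one.
    public class ReportAllTest extends junit.framework.TestCase {
        private final StringBuffer failures = new StringBuffer();

        private void check(boolean condition, String message) {
            if (!condition) {
                failures.append(message).append('\n'); // note it, carry on
            }
        }

        public void testSeveralIndependentThings() {
            check(parse("1").intValue() == 1, "parsing '1' broke");
            check(parse("-1").intValue() == -1, "parsing '-1' broke");
            check(parse("0").intValue() == 0, "parsing '0' broke");
            if (failures.length() > 0) {
                fail("Collected failures:\n" + failures); // report them all
            }
        }

        private Integer parse(String s) { // stand-in for real code under test
            return Integer.valueOf(s);
        }
    }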
 

Andrew McDonagh

Mike said:
The JUnit approach (that the first failure ends the test) seems unproductive
to me. At times it's apt, but there are other times where I'd like to
continue the test and generate a report of all the failures rather than only
be told about the first one.

Why?

Normally any subsequent failures are just by-products of the initial
failure. So fixing the first failure usually fixes all of the others.

The later failures in these cases are just noise.

Andrew
 

Mike Schilling

Andrew McDonagh said:
Why?

Normally any subsequent failures are just by-products of the initial
failure. So fixing the first failure usually fixes all of the others.

Often, that's true. Sometimes not. JUnit has no provision for the latter
case.
 

Adam Maass

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

I *like* JUnit; I have a great deal of confidence that the code for which I
have JUnit tests works as expected.

Drawbacks to JUnit:

1. It is hard to get an entire project to adopt JUnit if not everyone
believes in automated unit tests.

2. It is sometimes (often, in our case) difficult to write a *unit* test for
each class in isolation.

3. Doing the work to get adequate test coverage from a unit test suite is
generally a thankless job -- unless management buys in to the notion that it
is worthwhile and makes the time to get it done.

4. Maintaining the test suite along with the code it tests is sometimes
painful. (I just changed some code and I have a failing test. Is the test
failing because it's a real failure of the code or because the test case is
now incorrect?)

5. You have to train the people who write the tests in what constitutes a good
test. (E.g., random or pseudorandom inputs are a bad idea; fixed paths on a
filesystem are a bad idea; etc.)
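
On point 5, the difference is easy to show with a sketch (the Config
class is hypothetical): prefer deterministic inputs, and scratch files
that the test creates itself.

    // Bad: machine-specific path, non-repeatable input.
    //   config.load("/home/adam/test-data/app.properties");
    //   config.setSize(new java.util.Random().nextInt());

    // Better: a temp file the test owns, with fixed contents.
    public void testLoadReadsSizeProperty() throws java.io.IOException {
        java.io.File file =
            java.io.File.createTempFile("config", ".properties");
        file.deleteOnExit();
        java.io.FileWriter out = new java.io.FileWriter(file);
        out.write("size=42\n");
        out.close();

        Config config = new Config(); // hypothetical class under test
        config.load(file.getPath());
        assertEquals(42, config.getSize());
    }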
 

Chris Uppal

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

I wouldn't want to put anyone off the idea of testing, but I do want to raise a
couple of small warnings about automated tests.

Don't get me wrong, I /like/ automated tests -- what I want to say is that they
can have a downside too, and that you should be aware of it and compensate
accordingly.

[BTW, nothing in the following is Java-specific; even though I mention some
Java-based tools, they are only named as examples, and I may not even have used
them myself.]

Testing means many different things to different people and in different
contexts. There are several axes of variation; the two I'm interested in here
are granularity and attitude.

A wise man once told me that "testing is a deliberate attempt to break the
system". I think that's a very good description of the attitude that should be
present when testing -- or rather, should be applied during /some/ testing.
And that's what I find is missing in almost any automated testing -- whether
the wildly faddish TDD or classical overnight test suites -- the purpose of the
test is to confirm that the system (or module) works. There is no active
intent to make the system break, there is no intelligent exploration of
corner-cases, there is no inspired guessing at possibly-unanticipated
combinations of inputs. Above all, there is no exploration of the problem
space -- the suite tests the same combinations each time. (The big exception
to this -- which I have very rarely seen used in practice -- is when automation
is used for a brute-force, exhaustive exploration of a significant sub-set of
the problem space). I like to see some real testing (in the above sense) as
part of the development effort. To me (and assuming that exhaustive testing is
infeasible) that means interactive testing. Write a test harness or use
something like BeanShell. Try things out: did they work? Did they work
/exactly/ how you expected? Push things a bit. Did the disk light flash when
it shouldn't? Did the screen flicker more than you expected during your GUI's
repaint? Try wildly implausible combinations. You are trying to break your
own code -- which requires imagination and attention to detail. Some people
like to step through code under the debugger looking for stuff that "works",
but doesn't take quite the expected path through the code; that's another
example of the same kind of thinking.
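
To give a flavour of it, a BeanShell session poking at some
(hypothetical) Parser might go:

    // BeanShell: loosely-typed, Java-like, evaluated a line at a time.
    p = new Parser();
    p.parse("hello");         // the expected case -- fine
    p.parse("");              // empty input -- does it throw? should it?
    p.parse(null);            // undocumented; what actually happens?
    sb = new StringBuffer();
    for (i = 0; i < 1000000; i++) sb.append('x');
    p.parse(sb.toString());   // a megabyte of input -- watch the heap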

Anyway, the warning I wanted to give about automated testing isn't just that
it's not an adequate substitute for aggressive testing (in the above sense),
but also that there's a risk that it will /displace/ aggressive testing. Once
the automated tests are written (whether before, along-side, or after the code
they apply to), it takes a great deal of self-discipline[*] not just to rely on
those tests. Press the button, everything comes up green, "Good, it works!".
That's not /testing/ -- it has a great deal of value, but it's confirming that
the system works on some inputs, not trying to find the inputs that break it.

([*] /I/ don't have that much self-discipline, so -- although I've always put a
great deal of effort into testing -- I've settled into a working pattern where
I don't write automated tests until /after/ I'm satisfied that my stuff works.
So my tests are actually regression tests from the very first, since a newly
written test that fails is, according to my rules, a development failure
(either the test itself is buggy -- happens frequently -- or I'm writing tests
too soon). Actually, I quite often write brute-force/exhaustive tests at this
point too, and failures there /are/ permitted.)

Now, obviously, it's not a black-and-white situation. Automated test suites
can attempt to explore large chunks of the problem space -- if enough time is
devoted to it by sufficiently skilled and determined programmers. (I take it
as obvious that /comprehensive/ test suites are a mere wish-fulfilment
fantasy -- the combinatorial explosion kills the idea stone dead.) But
elaborate test suites have costs too. Which brings me to the second point --
granularity.

Code changes. Right from the start, it changes a lot. (I'm not talking about
changes in response to changing requirements -- that's a different issue). IMO
that change is a necessary part of getting a properly working and maintainable
system. Tests, quite obviously, add "weight" to that, in the sense that any
test for a piece of functionality that is changed will have to change too. If
you refactor stuff (and you've got any tests written) then you'll have to
refactor them too. Tests, of any sort, inhibit the natural process of change
as a software design grows -- to take a silly example, there's not much point
in testing the property-file handling you are writing this morning if this
afternoon you are going to rip it out and replace it with Windows registry
lookup... (Presumably a true TDDer would put that the other way around --
there's not much point in writing code to be tested by your property-file
handling tests if you are just going to rip them out and replace them with
Windows registry handling tests this afternoon ;-) You have to balance the
cost of testing sub-components against the benefits, taking into account the
near certainty of change. You can all-but eliminate the costs of testing stuff
that will later be scrapped by concentrating on end-to-end testing, testing a
single business function, rather than the components that interact to perform
it, but then (A) you are only exploring an extremely impoverished slice of the
problem space (not aggressive testing at all) and (B) you don't get the very
real benefits of unit tests. So you'll probably decide that it's better to
have at least /some/ unit testing, and try to optimise the amount so that it
doesn't add too much weight to your development process. Clearly, the easier
and simpler it is to invent, implement, and run each test, the more testing
you can do before you pass the break-even point. And that's where my second
warning comes in -- in my experience, writing scripted tests (JUnit and
similar) takes a lot of work, about an order of magnitude more work than doing
the same tests in the kind of interactive environment I was talking about
earlier. So, in my experience, I can do about an order of magnitude /more/
testing of code-still-under-development (without compromising my development
fluidity) by avoiding the formal tools. The downside is that I do end up
writing scripted tests /as well/ -- but only for the final code, and (as I've
already said) I prefer to use two different styles of testing anyway.

-- chris
 

VisionSet

Chris Uppal said:
Code changes. Right from the start, it changes a lot. (I'm not talking about
changes in response to changing requirements -- that's a different issue). IMO
that change is a necessary part of getting a properly working and maintainable
system. Tests, quite obviously, add "weight" to that, in the sense that any
test for a piece of functionality that is changed will have to change too. If
you refactor stuff (and you've got any tests written) then you'll have to
refactor them too. Tests, of any sort, inhibit the natural process of change
as a software design grows

Yes, absolutely, and that is my main bugbear.
From what I've seen, and that is very little, those enthusiastic about JUnit
and writing tests early have some really bad OO designs. And I can understand
why: it's one thing mercilessly refactoring, but when that essentially
requires rewrites of the tests, it is demoralising beyond belief.
eXtreme programming extols the virtues of unit testing and stipulates
writing the tests first. Oh, it also does away with much of the design stage
too and promotes evolving designs from the code - much in the way I work
presently. Doesn't sound like the two go hand in hand to me.
 

Thomas Hawtin

Adam said:
1. It is hard to get an entire project to adopt JUnit if not everyone
believes in automated unit tests.

And get them to believe that what they are doing isn't some kind of
magical special case.
2. It is sometimes (often, in our case) difficult to write a *unit* test for
each class in isolation.

It often requires that code is written in a "purer" style.
3. Doing the work to get adequate test coverage from a unit test suite is
generally a thankless job -- unless management buys in to the notion that it
is worthwhile and makes the time to get it done.

If you've got management who consider unit testing gold plating... then
you've probably got worse problems.

Tom Hawtin
 

timjowers

Even with Rational Robot and other test tools, problem #4 hits you -
especially when the test team is separate from the development team and
swings their own hammer for the project management.

JUnit makes for great regression tests - well, provided the test suite is
put together properly. Testing tools like JMeter and others should be used.
Automation tools like Automate, LoadRunner, etc. should be used.
Banging out JUnits when an automated test can be recorded much more
easily is asinine, unless the JUnits strive towards full code and input
coverage. What are the best JUnit-oriented tools? E.g. I used to use
BlackIce and related software, which did memory profiling as well as
generating drivers for C++ classes for input-handling testing, so I would
like a tool that, given a Java class, would generate the JUnit test and
exercise it with standard inputs (e.g. Integer.MIN_VALUE, Integer.MAX_VALUE,
-1, 0, 1, etc.).
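
Failing such a tool, those boundary inputs are at least cheap to
hand-roll (a sketch; the invariant under test is deliberately naive):

    public void testAbsIsNonNegativeAtBoundaries() {
        int[] inputs = { Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE };
        for (int i = 0; i < inputs.length; i++) {
            assertTrue("abs(" + inputs[i] + ") should be >= 0",
                       Math.abs(inputs[i]) >= 0);
        }
        // This genuinely fails for Integer.MIN_VALUE: Math.abs overflows
        // and returns a negative number -- exactly the kind of bug that
        // boundary inputs flush out.
    }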

Integration of JUnit into a J2EE environment is tough.

Testing and writing JUnit tests are not one and the same. Test Driven
Design ("TDD" traditionally stood for Technical Design Document, so the
acronym is heavily overloaded) is really another way of saying "gather
the requirements". The "tests" of Test Driven Design are really design
validation tests, and mostly not breakage tests, input-range tests, bad
input handling tests, performance tests, or integration tests.

Test Driven Design is another way of saying "Fail to plan, plan to
fail" or any other old adage about how running in without a strategy is
fatal. Likewise, saying "JUnit" is not a magic arrow to kill bugs.

Happy coding,
TimJowers
 

Ian Pilcher

VisionSet said:
Does anyone have anything negative to say about using JUnit or a similar
approach in the overall lifecycle of their project?

I only write code in my spare time, so take this accordingly. My main
objection to the JUnit "philosophy" is that it does nothing to make
testing non-observable behavior easier. Per the JUnit FAQs, they think
that only observable behavior should be unit tested most of the time. I
simply don't agree with this.
 

Andrew McDonagh

Ian said:
I only write code in my spare time, so take this accordingly. My main
objection to the JUnit "philosophy" is that it does nothing to make
testing non-observable behavior easier. Per the JUnit FAQs, they think
that only observable behavior should be unit tested most of the time. I
simply don't agree with this.

What do you mean by 'non observable behavior'?

All code should have some effect upon the system, else by definition that
code is not doing anything and should be deleted.

Confused.

Andrew
 

Thomas Hawtin

Andrew said:
What do you mean by 'non observable behavior'?

All code should have some effect upon the system, else by definition that
code is not doing anything and should be deleted.

The behaviour may be difficult to observe. For instance, if there is a
cache of some intermediate result, you'd have to do some performance
testing to check.
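
One way round the timing problem (a sketch, not a recommendation) is to
make the cache's effect directly observable, e.g. via a hit counter, so a
plain unit test needn't resort to performance measurement:

    // Sketch: expose the cache's behaviour so an assertion can see it.
    public class CachingLookup {
        private final java.util.Map cache = new java.util.HashMap();
        private int hits = 0;

        public Object lookup(Object key) {
            Object value = cache.get(key);
            if (value != null) {
                hits++;                     // the observable side channel
                return value;
            }
            value = expensiveCompute(key);  // hypothetical slow operation
            cache.put(key, value);
            return value;
        }

        public int getCacheHits() { return hits; }

        private Object expensiveCompute(Object key) {
            return key.toString();          // stand-in for the real work
        }
    }

    // In a test: look the same key up twice, then
    // assertEquals(1, lookup.getCacheHits());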

Tom Hawtin
 

Andrew McDonagh

Thomas said:
The behaviour may be difficult to observe. For instance, if there is a
cache of some intermediate result, you'd have to do some performance
testing to check.

Tom Hawtin

Sure, it may be difficult, but it's still observable, and therefore it has
some effect upon the system, which makes it testable.

Ian was saying that non-observable code isn't catered for - to which I
and others say it is not needed, as non-observable means no effect, and no
effect means what's the point of the code existing?
 

Mike Schilling

Thomas Hawtin said:
If you've got management who consider unit testing gold plating... then
you've probably got worse problems.

Have you never had managers who said things like "We're in crunch mode, so
we can't afford to do as much testing as we'd like to."? If so, you're a
very lucky man.
 
