Noah Roberts
Pete said: Today, "regression test" seems to mean "run the tests you've run before
and see if anything got worse." I.e., run the test suite. Formally,
though, a regression test is a test you add to your test suite in
response to a user-reported defect, reproducing the user's conditions.
I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":
1. the act of going back to a previous place or state; return or reversion.
http://dictionary.reference.com/browse/regression
I don't think this is a change either. Wikipedia quotes Fred Brooks:
"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)
That book is a couple decades old at least...
This is an important step to take even if expensive. A fix for a new bug
can easily cause old bugs to reappear: for instance, sometimes a fix
introduces a new bug, which is found and reported...and then "fixed" in a
way that brings back the old bug whose fix introduced the new one in the
first place.
What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step 1, write a new acceptance test for the bug...step 2, find the
cause...step 3, write a unit test to expose the cause...step 4, fix...step
5, run the new tests...step 6, run the regression suite.
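A minimal sketch of steps 3-6 in Python's unittest, assuming a
hypothetical function `word_count` and a made-up bug report number; the
point is only that the defect-reproducing test joins the suite
permanently, so every later full run (the regression run) re-checks it:

```python
import unittest

# Hypothetical function under test. The pretend bug report: it crashed
# on empty/None input. The guard below is the "step 4" fix.
def word_count(text):
    if not text:
        return 0
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # Step 3: a unit test reproducing the reported defect, written to
    # fail before the fix and pass after it. It stays in the suite.
    def test_empty_input_bug_report_1234(self):
        self.assertEqual(word_count(""), 0)
        self.assertEqual(word_count(None), 0)

    # Pre-existing test. Step 6 reruns everything, old and new, to
    # confirm the fix didn't quietly break prior behavior.
    def test_basic_counts(self):
        self.assertEqual(word_count("one two three"), 3)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running the whole file after each fix is exactly the Brooks-style
regression run; the bug-specific test is what keeps that particular
defect from regressing unnoticed.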
So I don't see that anything has become meaningless here. Both new tests
for reported bugs and regression runs to be sure the program still passes
the acceptance tests from previous versions are important steps in robust
project management.