Testing in C++

Noah Roberts

Pete said:
Today, "regression test" seems to mean "run the tests you've run before
and see if anything got worse." I.e., run the test suite. Formally,
though, a regression test is a test you add to your test suite in
response to a user-reported defect, reproducing the user's conditions.

I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or reversion.
http://dictionary.reference.com/browse/regression

I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...

This is an important step to take even if it is expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed",
bringing back the old one whose fix introduced this new bug.

What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.

So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.
 
Phlip

dave_mikes said:
HeyPhlip- I checked out UnitTest++ and it's really cool.

You need to stop using it immediately; my arguments here are
fallacious.

Report back as soon as you stop... ;-)
One thing
I tried to do, though, that wasn't successful...I defined member
functions in my fixture to do certain checks that are performed across
several tests, but the CHECK* macros wouldn't compile (errors below).
When I moved the CHECKs to TEST_FIXTURE definitions they work fine,
but now I have redundant macro calls across several tests. Is this a
feature?

Yes; the macros hit private variables in the fixtures, so they are de
facto members.

The point of the fixtures is for test cases to share common code, so
put the CHECKs into shared methods and share these between test
fixtures. Roughly...

class fixture
{
public:
    void methodOne() { CHECK(x); }
    void methodTwo() { CHECK(y); }
};

TEST_FIXTURE(fixture, case_a)
{
    methodOne();
    methodTwo();
}

TEST_FIXTURE(fixture, case_b)
{
    methodOne();
}

(I will now await Noah triumphantly declaring "that's not well-formed
C++!")
 
Phlip

Noah said:
I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

I once worked a game project where the QA department kept saying "we
regressed levels 2 thru 7 today". They meant "we checked levels 2 thru
7 against regressions". Showing them the dictionary definition didn't
really help the situation. It was a high-stress shop, and if you
pushed too hard they would insist on using the wrong definition just to
prove they could, as a territoriality thing.
What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.

Capture bugs with tests.
So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.

Speaking for Pete: he's discussing customers sending in failing code. Such
code wasn't (in TDD terms) "grown with" your test rig, so calling into
it will be slow. Pete uses "regression" to indirectly mean tests that
run slowly in huge batches. So you TDD some new features, run the
entire regression suite, and deploy. Your customers, in theory, will
remain happy with their projects.
 
bnonaj

nw said:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Sorry if this is off-topic in this group.

Many Thanks.
Testing should be part of the design process, not an afterthought;
hence I think you should consider a design, how to implement it, and
the various tests required for each class, for the whole design, and for
parts thereof. It would be useful if you provided design errors in C++
code that should be revealed by testing but are not immediately obvious.

JB
 
Ian Collins

nw said:
I've taken a look at the CppUnit documentation. It seems there is
quite a significant overhead in CppUnit in terms of creating classes/
methods to get a simple test running (as compared to other testing
frameworks) what advantages does this give you?
The overhead is very low: all you require is one overall TestRunner
object and a TestFixture. I use one TestFixture per class under test.
 
dave_mikesell

Yes; the macros hit private variables in the fixtures, so they are de
facto members.

The point of the fixtures is for test cases to share common code, so
put the CHECKs into shared methods and share these between test
fixtures. Roughly...
<snip>

That's what I'm trying, to no avail. The following trivial example
fails compilation with the aforementioned errors in VC++ 6.0. If I
move the CHECK to the TEST_FIXTURE block, it works fine.

#include <UnitTest++.h>

using namespace UnitTest;

class Fixture {
public:
    void method_one() { CHECK(1 == 1); }
};

TEST_FIXTURE(Fixture, test_one)
{
    method_one();
}

int main()
{
    return RunAllTests();
}

Unfortunately that's the only compiler I have access to at the client
site. I'll try with MinGW when I get home tonight.

Thanks for your help.
 
Ian Collins

Noah said:
I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression

I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...
It shows its age: before we started using automated unit and acceptance
test frameworks, running all the tests on a product or application was an
expensive process. The arrival of well-designed testing frameworks has
mitigated a great deal of that cost by removing the labour cost and
elapsed-time costs from the process.
This is an important step to take even if it is expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed",
bringing back the old one whose fix introduced this new bug.
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.
Steps 5 and 6 are one and the same; the new tests become part of the test
suite.
So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.

The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.
 
Phlip

dave_mikes said:
class Fixture {

Make class Fixture inherit whatever TEST_FIXTURE is wrapping. Sorry I
forgot to mention that!

(And the correct verbiage has always been Suite and _SUITE here, but
that's beside the point.)

If that doesn't work, post to the UnitTest++ mailing list. I know they
take these architectural concerns quite seriously.
 
Pete Becker

Noah said:
I believe you're wrong on this.

I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.

All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression

That's not a definition of "regression test."
I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...

And that's not a book about testing. For the true definition, see "The
Art of Software Testing," by Glenford Myers.
This is an important step to take even if it is expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed",
bringing back the old one whose fix introduced this new bug.

What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.

Its formal name is "regression testing." Or was, until the "regression"
became vacuous.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.

So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.

As I said, "regression test" has come to mean "test." Too bad.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Pete Becker

Ian said:
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

No, the risk is not eliminated. That only ensures that whatever test you
wrote in response to the defect report keeps passing. It doesn't mean
that the customer's code will work right, because the previous fix might
not have gotten at the entire problem, and some other untested thing
might now cause the customer's same code to fail. There's no substitute
for real-world code.

 
Pete Becker

bnonaj said:
Testing should be part of the design process, not an afterthought;
hence I think you should consider a design, how to implement it, and
the various tests required for each class, for the whole design, and for
parts thereof. It would be useful if you provided design errors in C++
code that should be revealed by testing but are not immediately obvious.

Yes, exactly. Tools come later.

 
Phlip

Ian said:
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

Tip: Suppose you have wall-to-wall tests, and a bug report. Test cases
make excellent platforms for debugging.

Now suppose the bug report comes with the high-level inputs that cause
the bug. You could create a new high-level test, pass the inputs in,
and indeed reproduce the bug.

However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.
The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.

If the "regression" suite is very slow, and if you run it less often
than the TDD tests, then you should treat any failure in the
regression suite as an escape from the TDD suite. You should capture
_this_ bug with a test, before killing it. This tip increases the
value and power of the faster test suite. You can then TDD with more
confidence.
 
Pete Becker

Phlip said:
However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.

Exactly. The low-level test case captures what the developer thought the
problem was. The regression test captures what the customer saw. They
aren't necessarily the same.

 
Ian Collins

Phlip said:
Tip: Suppose you have wall-to-wall tests, and a bug report. Test cases
make excellent platforms for debugging.

Now suppose the bug report comes with the high-level inputs that cause
the bug. You could create a new high-level test, pass the inputs in,
and indeed reproduce the bug.

However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.
True, that's what I do. I should have stressed propagating the tests
down to the unit level. The failing acceptance test can be a good
pointer as to where the problem lies and where to add the failing unit tests.
 
Ian Collins

Pete said:
As I said, "regression test" has come to mean "test." Too bad.
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test". One role of all tests is to prevent regressions.
 
Pete Becker

Ian said:
True, it may fail somewhere else, but the conditions described in the
original defect report will not cause problems.

That's not true. The added test tests what the developer or the tester
thinks caused the problem, after distilling the original defect report.
You still need to run the original code now and then, in case the
problem was more complex than the developer realized.
A test suite is seldom,
if ever, perfect, but adding tests for field failure cases is an
excellent means of improving it. Not adding tests for field failure
cases is nothing short of negligent.

I don't know where "There's no substitute for real-world code" came
from; tests are always run against real code.

Real code, yes. Real-world code, no. Most tests are fairly short,
intended to isolate a particular part of a specification. That's
important, but it's also important to incorporate real users in the
testing cycle, for example, through a beta test. They typically don't
find anywhere near as many problems as the internal testers do (back
when I managed Borland's compiler testing, over 90% of the defects were
found by internal testers), but the ones they find are often killers.

 
Pete Becker

Ian said:
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test".

Well, back in the day, we had unit tests, integration tests, acceptance
tests, and regression tests.
One role of all tests is to prevent regressions.

Yes, and that is apparently taken to mean that every test is a
regression test.

 
Noah Roberts

Pete said:
I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.



That's not a definition of "regression test."

No, it isn't. But it is a definition of "regression". If you add "test" to
the end, you can easily derive what the correct meaning should be from
the definitions of the component words. Since regression means to move
back, a regression test must mean testing what was previously tested.

Now, this is the common use; it fits the definition of the component
words as they are used in the English language. If you want to change
that definition to mean something else and then claim that the other use
is somehow wrong, be my guest; the fact that you found one book that
coincides with your use is of little import. Your claim of authority
likewise does not mean you are correct; I know full well who I am arguing
with, I own your book on TR1, and I still say your definition is flawed.

I think I will stick with common use as based on the English language
myself. It's how most people use the words:

"[Regression testing] is a quality control measure to ensure that the
newly modified code still complies with its specified requirements and
that unmodified code has not been affected by the maintenance activity."

http://www.webopedia.com/TERM/R/regression_testing.html

"Any time you modify an implementation within a program, you should also
do regression testing. You can do so by rerunning existing tests against
the modified code to determine whether the changes break anything that
worked prior to the change and by writing new tests where necessary."

http://msdn2.microsoft.com/en-us/library/aa292167(VS.71).aspx

"In traditional terms, this is called regression testing. We
periodically run tests that check for known good behavior to find out
whether our software still works the way that it did in the past."

Feathers - Working Effectively with Legacy Code pg10

Only Beck seems to disagree with this more common usage.
As I said, "regression test" has come to mean "test." Too bad.

I don't see how you can claim that. Regression testing is simply one
aspect of testing. It means you don't just test the new stuff; you also
test what worked before. "Regression" simply spells out this necessity.
Yes, regression testing is often done at the same time as the new
tests, but that is beside the point: it needs to be a part of your
testing process regardless. It is also an important distinction because
regression testing might not be part of your immediate feedback cycle;
it may be done less frequently because it can require many hours to run.
 
Noah Roberts

Ian said:
It shows its age: before we started using automated unit and acceptance
test frameworks, running all the tests on a product or application was an
expensive process. The arrival of well-designed testing frameworks has
mitigated a great deal of that cost by removing the labour cost and
elapsed-time costs from the process.

True, it is faster, but running your regression suite can still take
hours. Ours is parallel processed on many different PCs and still
takes 3+ hours. So it's more expensive than running a set of tests for what
you're working on.
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

I didn't say it isn't an important step to add a new test for the bug.
What I'm saying is that "regression testing" means something different
and is equally important.
Steps 5 and 6 are one and the same; the new tests become part of the test
suite.

Eventually for sure. Immediately, not necessarily. It might be put in
the stack of immediate tests that run in every integration since it is
now something of current importance.
The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.

But there may be a division between what you're currently working on and
the vast supply of tests in your regression suite. The real point is that a
truly complete test run includes the regression suite and not just the new stuff.
 
