A simple unit test framework

nw

Hi,

I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received, I decided that the best way to
proceed would be to teach the students how they might write their own
unit test framework, and then in a lab session see if I can get them
to write one themselves. To give them an example I've created the
following UTF class (with a simple test program following). I would
welcome any suggestions on how anybody here feels this could be
improved:

Thanks for your time!

#include <iostream>
#include <string>

class UnitTest {
private:
    int tests_failed;
    int tests_passed;
    int total_tests_failed;
    int total_tests_passed;
    std::string test_set_name;
    std::string current_file;
    std::string current_description;

public:
    // The initialiser list below matches the declaration order above,
    // since members are initialised in declaration order anyway.
    UnitTest(std::string test_set_name_in)
        : tests_failed(0),
          tests_passed(0),
          total_tests_failed(0),
          total_tests_passed(0),
          test_set_name(test_set_name_in),
          current_file(),
          current_description() {
        std::cout << "*** Test set : " << test_set_name << std::endl;
    }

    void begin_test_set(std::string description, const char *filename) {
        current_description = description;
        current_file = filename;
        tests_failed = 0;
        tests_passed = 0;
        std::cout << "****** Testing: " << current_description << std::endl;
    }

    void end_test_set() {
        std::cout << "****** Test : " << current_description
                  << " complete, ";
        std::cout << "passed " << tests_passed << ", failed "
                  << tests_failed << "." << std::endl;
    }

    // TestType rather than _TestType: names beginning with an underscore
    // followed by an upper-case letter are reserved for the implementation.
    template<class TestType>
    bool test(TestType t1, TestType t2, int linenumber) {
        bool test_result = (t1 == t2);

        if (!test_result) {
            std::cout << "****** FAILED : " << current_file << ","
                      << linenumber;
            std::cout << ": " << t1 << " is not equal to " << t2
                      << std::endl;
            total_tests_failed++;
            tests_failed++;
        } else {
            tests_passed++;
            total_tests_passed++;
        }
        return test_result;  // was missing; test() is declared bool
    }

    void test_report() {
        std::cout << "*** Test set : " << test_set_name << " complete, ";
        std::cout << "passed " << total_tests_passed;
        std::cout << " failed " << total_tests_failed << "." << std::endl;
        if (total_tests_failed != 0)
            std::cout << "*** TEST FAILED!" << std::endl;
    }
};

int main(void) {
    UnitTest ut("Test Shapes");

    // Test class Rectangle (Rectangle and Circle are the classes under
    // test; their definitions are not shown here).
    ut.begin_test_set("Rectangle", __FILE__);

    // a rectangle at position 0,0 with sides of length 10
    Rectangle r(0, 0, 10, 10);
    ut.test(r.is_square(), true, __LINE__);
    ut.test(r.area(), 100.0, __LINE__);

    Rectangle r2(0, 0, 1, 5);
    ut.test(r2.is_square(), false, __LINE__); // a 1x5 rectangle is not square
    ut.test(r2.area(), 5.0, __LINE__);
    ut.end_test_set();

    // Test class Circle
    ut.begin_test_set("Circle", __FILE__);
    Circle c(0, 0, 10);
    ut.test(c.area(), 314.1592654, __LINE__);
    ut.test(c.circumference(), 62.831853080, __LINE__);
    ut.end_test_set();

    ut.test_report();

    return 0;
}
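
One caveat worth flagging in the example above: test() compares with
==, so the Circle checks (c.area() against 314.1592654) can report
failures for a perfectly correct Circle whenever the doubles differ in
their last bits. A tolerance-based companion is a minimal fix; the
test_approx member below is a sketch of one, not part of the original
class:

    // Possible addition to UnitTest: approximate equality for doubles.
    // Needs #include <cmath> for std::fabs.
    bool test_approx(double t1, double t2, int linenumber,
                     double tolerance = 1e-6) {
        bool test_result = (std::fabs(t1 - t2) <= tolerance);

        if (!test_result) {
            std::cout << "****** FAILED : " << current_file << ","
                      << linenumber;
            std::cout << ": " << t1 << " differs from " << t2
                      << " by more than " << tolerance << std::endl;
            total_tests_failed++;
            tests_failed++;
        } else {
            tests_passed++;
            total_tests_passed++;
        }
        return test_result;
    }

    // usage: ut.test_approx(c.area(), 314.1592654, __LINE__);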
 
anon

nw said:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received, I decided that the best way to
proceed would be to teach the students how they might write their own
unit test framework, and then in a lab session see if I can get them
to write one themselves. To give them an example I've created the
following UTF class (with a simple test program following). I would
welcome any suggestions on how anybody here feels this could be
improved:

Here is a link to the C++ unit test framework I have been using.
Take a look - you might get some ideas for improving your unit test
framework...

http://cxxtest.sourceforge.net/
 
Pete Becker

nw said:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received, I decided that the best way to
proceed would be to teach the students how they might write their own
unit test framework, and then in a lab session see if I can get them
to write one themselves. To give them an example I've created the
following UTF class (with a simple test program following). I would
welcome any suggestions on how anybody here feels this could be
improved:

A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run. A good tester
focuses on getting the code to fail.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
nw

Pete said:
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run. A good tester
focuses on getting the code to fail.

Agreed. That was my motivation in providing a relatively small,
simple class, which is really just a comparison function that on
failure prints out the file and line where the test failed. So I was
going to spend about half an hour talking about the features of C++
they'll need (__LINE__, __FILE__, etc.) and introducing a simple
framework, then another half hour talking about designing tests to
try to make their code fail.
 
Gianni Mariani

nw said:
Agreed. That was my motivation in providing a relatively small,
simple class, which is really just a comparison function that on
failure prints out the file and line where the test failed. So I was
going to spend about half an hour talking about the features of C++
they'll need (__LINE__, __FILE__, etc.) and introducing a simple
framework, then another half hour talking about designing tests to
try to make their code fail.

I would only suggest that you also try to add a test registry and some
macros so that __FILE__ and __LINE__ are not used in test cases.
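
For example, a thin pair of wrapper macros over the posted UnitTest
class could capture __FILE__ and __LINE__ at the call site (UT_BEGIN
and UT_CHECK are invented names, sketched here for illustration):

    #define UT_BEGIN(ut, description) \
        (ut).begin_test_set((description), __FILE__)
    #define UT_CHECK(ut, actual, expected) \
        (ut).test((actual), (expected), __LINE__)

    // usage: UT_BEGIN(ut, "Rectangle");
    //        UT_CHECK(ut, r.is_square(), true);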

In the Austria C++ unit test system, I use exceptions to indicate
failure. It's usually silly to continue with a test if part of it has
failed.

In Austria C++ there is also an assert macro "AT_TCAssert" for "Test
Case Assert" which is somewhat similar to:

ut.test(r.is_square(),true,__LINE__);

AT_TCAssert throws an "at::TestCase_Exception" when the assert fails
and provides a string describing the error.

Here is an example:

AT_TCAssert( m_value == A_enum, "Failed to get correct type" )

#define AT_TCAssert( x_expr, x_description ) \
    if ( !( x_expr ) ) { \
        throw TestCase_Exception( \
            AT_String( x_description ), \
            __FILE__, \
            __LINE__ \
        ); \
    } \
// end of macro

.... now that I think about it, that should be a while() rather than
an if(), or an if wrapped in a do {}.
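
That do {} while(0) form would look roughly like this (a sketch, not
actual Austria C++ source); it turns the macro body into a single
statement, so it composes safely with a trailing else:

    #define AT_TCAssert( x_expr, x_description ) \
        do { \
            /* throw on failure, capturing the call site */ \
            if ( !( x_expr ) ) { \
                throw TestCase_Exception( \
                    AT_String( x_description ), \
                    __FILE__, \
                    __LINE__ ); \
            } \
        } while (0)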

TestCase_Exception also grabs a stack trace and can print out a trace of
the place it is thrown.
 
Ian Collins

Pete said:
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!
 
Pete Becker

Ian said:
Unless the tests are written first!

You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
anon

Pete said:
You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

The latest trend is to write tests first that demonstrate the
requirements, then the code (classes + methods). In this case you
will not have to do coverage analysis, though it is a plus. This way,
the code you write will be minimal and easier to understand and
maintain. I agree this way looks harder (and I am not using it), but
I am sure that once you get used to it your programming skills will
improve drastically.
 
Gianni Mariani

anon wrote:
....
The latest trend is to write tests first that demonstrate the
requirements, then the code (classes + methods). In this case you
will not have to do coverage analysis, though it is a plus. This way,
the code you write will be minimal and easier to understand and
maintain. I agree this way looks harder (and I am not using it), but
I am sure that once you get used to it your programming skills will
improve drastically.

*I* term this Test Driven Design, or TDD. TDD is used to mean different
things.

The minimal deliverable of a design is a Doxygen-documented,
compilable header file plus compilable (not yet linkable) unit test
cases that demonstrate the use of the API (not coverage - that comes
as part of the development).

I personally employ the TDD part of development almost exclusively.
I think I've been using some form of this technique since around
1983. It results in code that works and is easy (relatively speaking)
to use.
 
Ian Collins

Pete said:
You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written.

If you write tests first (practice Test Driven Design/Development),
you don't have to do coverage analysis because your code has been
written to pass the tests. If code isn't required to pass a test, it
simply doesn't get written. Done correctly, TDD will give you full
test coverage for free.
There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.
 
Ian Collins

anon said:
The latest trend is to write tests first that demonstrate the
requirements, then the code (classes + methods). In this case you
will not have to do coverage analysis, though it is a plus. This way,
the code you write will be minimal and easier to understand and
maintain. I agree this way looks harder (and I am not using it), but
I am sure that once you get used to it your programming skills will
improve drastically.

I find it makes the design and code process easier and way more fun. No
more debugging sessions!
 
Pete Becker

anon said:
The latest trend is to write tests first that demonstrate the
requirements, then the code (classes + methods). In this case you
will not have to do coverage analysis, though it is a plus.

That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
This way, the code you write will be minimal and easier to
understand and maintain. I agree this way looks harder (and I am not
using it), but I am sure that once you get used to it your
programming skills will improve drastically.

When you write tests before writing code you're only doing black box
testing. Black box testing has some strengths, and it has some
weaknesses. White box testing (which includes coverage analysis)
complements black box testing. Excluding it because of some dogma about
only writing tests before writing code limits the kinds of things you
can discover through testing.

The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak spots.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Pete Becker

Ian said:
If you write tests first (practice Test Driven Design/Development),
you don't have to do coverage analysis because your code has been
written to pass the tests.

No, that's not what coverage analysis means. See my other message.
If code isn't required to pass a test, it simply
doesn't get written. Done correctly, TDD will give you full test
coverage for free.

Nope. Test driven design cannot account for the possibility that a
function will use an internal buffer that holds N bytes, and has to
handle the edges of that buffer correctly. The specification says
nothing about N, just what output has to result from what input. What
are the chances that input data chosen from the specification will just
happen to be the right length to hit that off-by-one error? If you know
what N is, you can test N-1, N, and N+1 input bytes, with a much higher
chance of hitting bad code.
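
A sketch of what such probes look like with the UnitTest class from
the start of the thread (process, make_input, and the value of N are
all hypothetical):

    // White-box boundary probes around an internal buffer of size N,
    // a size discovered by reading the implementation, not the spec:
    const std::size_t N = 4096;
    ut.test(process(make_input(N - 1)), true, __LINE__); // just under
    ut.test(process(make_input(N)),     true, __LINE__); // exactly at
    ut.test(process(make_input(N + 1)), true, __LINE__); // just past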
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.

Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Ian Collins

Pete said:
That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.
When you write tests before writing code you're only doing black box
testing. Black box testing has some strengths, and it has some
weaknesses. White box testing (which includes coverage analysis)
complements black box testing. Excluding it because of some dogma about
only writing tests before writing code limits the kinds of things you
can discover through testing.
TDD advocates will (at least they should) always be acceptance test
advocates. These provide your white box testing.
The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak spots.
Possibly, but doing so may not increase the code coverage. It may well
flush out failure modes. Once found, these should be fixed by adding a
unit test to reproduce the error.
 
Ian Collins

Pete said:
Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.
I never underestimate the ingenuity of good testers. My testers had
complete freedom to test the product how they wanted. They ended up
producing an extremely sophisticated test environment which, being
fully automated, they didn't have to run!
 
Pete Becker

Ian said:
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.

Suppose you're writing test cases for the function log, which calculates
the logarithm of its argument. Internally, it will use different
techniques for various ranges of argument values. But the specification
for log, of course, doesn't tell you this, so your test cases aren't
likely to hit each of those ranges, and certainly won't make careful
probes near their boundaries. It's only by looking at the code that you
can write these tests.
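
Concretely, such probes might look like this (my_log is a hypothetical
implementation under test, the switch-over point x = 2.0 is a
placeholder, and test_approx is the tolerance-based sketch from
earlier in the thread):

    // Probe just below, at, and just above an internal algorithm
    // boundary found by reading the source, with std::log (<cmath>)
    // as the reference:
    ut.test_approx(my_log(2.0 - 1e-12), std::log(2.0 - 1e-12), __LINE__);
    ut.test_approx(my_log(2.0),         std::log(2.0),         __LINE__);
    ut.test_approx(my_log(2.0 + 1e-12), std::log(2.0 + 1e-12), __LINE__);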
TDD advocates will (at least they should) always be acceptance test
advocates. These provide your white box testing.

Acceptance tests can be white box tests, but they can also be black box
tests.
Possibly, but doing so may not increase the code coverage. It may well
flush out failure modes. Once found, these should be fixed by adding a
unit test to reproduce the error.

Well, yes, but the point is that focused testing based on knowledge of
the internals of the code (i.e. white box testing) is more likely to
find some kinds of bugs than tests written without looking into the
code, which is the only kind of test you can write before you've written
the code.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Ian Collins

Pete said:
Suppose you're writing test cases for the function log, which calculates
the logarithm of its argument. Internally, it will use different
techniques for various ranges of argument values. But the specification
for log, of course, doesn't tell you this, so your test cases aren't
likely to hit each of those ranges, and certainly won't make careful
probes near their boundaries. It's only by looking at the code that you
can write these tests.
Pete, I think you are missing the point of TDD.

It's easy for those unfamiliar with the process to focus on the "T"
and ignore the "DD". TDD is a tool for delivering better code: the
tests drive the design, they are not driven by it. So if I were
tasked with writing the function log, I'd start with a simple test,
say log(10), and then add more tests to cover the full range of
inputs. These tests would specify the behavior and drive the
internals of the function.

Remember, if code isn't required to pass a test, it doesn't get written.
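
With the UnitTest class from the start of the thread, that first test
might look like this (my_log is the hypothetical function being
developed; test_approx is the tolerance-based sketch from earlier):

    ut.begin_test_set("log", __FILE__);
    ut.test_approx(my_log(10.0), 2.302585092994046, __LINE__); // ln(10)
    ut.end_test_set();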
Acceptance tests can be white box tests, but they can also be black box
tests.


Well, yes, but the point is that focused testing based on knowledge of
the internals of the code (i.e. white box testing) is more likely to
find some kinds of bugs than tests written without looking into the
code, which is the only kind of test you can write before you've written
the code.
That's where we disagree. I view the testers as the customer's
advocates. They should be designing acceptance tests with and for the
customer that validate the product to the customer's satisfaction. In
situations where there is an identifiable customer, there are some
excellent automated acceptance test frameworks available to enable
the customer to design their own tests (FIT, for example).
 
