C unit testing and regression testing


Bart van Ingen Schenau

I have a mock C function generator I use to replace external (to the
code under test) functions. The ability to provide generic mock
functions is where C++ really makes a difference.

I have seen you write about this mock generator before and I am
interested in learning more about it.
Is this generator available somewhere?

Bart v Ingen Schenau
 

Jorgen Grahn

When such an assert_eq discovers a mismatch how informative is it about the
cause? I see it has no parameter for descriptive text.

In my case, not very informative. int n=3; assert_eq(3.14, n) will generate
something like

"test Foo: FAIL because the assert_eq(3.14, 3) doesn't hold"

I can do something like this if I want:

if(3.14 != n) throw Failure("some descriptive text");

but in practice I rarely do, unless I'm writing a custom assertion,
assert_sorted(sequence) or something ...

Then I also have an option (at runtime) to trigger a core dump
instead, so I can check it in a debugger. I rarely do that.

I know unit test frameworks like CppUnit typically have assert-like
macros similar to my assert_eq(), but which also provide file and line
number information. I *do* miss that sometimes, but in the end it's not
vital enough for me to implement it.
That gives me an idea. I wonder if C's macros could be a boon here in that
they could be supplied with any needed parameters to generate good testing
code. Possibly something like the following

EXPECT(function(parms), return_type, expected_result, "text describing the test")
EXPECT(function(parms), return_type, expected_result, "text describing the test")

Then a whole series of such EXPECT calls could carry out the simpler types
of test. For any that fail, the EXPECT call could state what was expected,
what was received, and produce a relevant message identifying the test which
failed, such as

Expected 4, got 5: checking the number of zero elements in the array

where the text at the end comes from the last macro argument.
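
To make that concrete, here is one possible shape for such a macro. It is
only a sketch: count_zeroes() below is a made-up function under test, and
the cast to long restricts this version to integer-like results.

#include <stdio.h>

static int expect_passes, expect_failures;

/* Evaluate the call once, compare against the expected value, and print
   a one-line report with file/line information on mismatch. */
#define EXPECT(fcall, return_type, expected, text) \
do { \
    return_type expect_tmp = (fcall); \
    if (expect_tmp != (expected)) { \
        fprintf(stderr, "%s:%d: Expected %ld, got %ld: %s\n", \
                __FILE__, __LINE__, (long)(expected), \
                (long)expect_tmp, text); \
        expect_failures++; \
    } else { \
        expect_passes++; \
    } \
} while (0)

A call would then look like

EXPECT(count_zeroes(array, 8), int, 4, "checking zero elements");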

That would work. Although I'm personally not so fond of extra
descriptions: I want as little as possible to distract from writing
and reading the test cases. Readable messages when a test fails are
less important to me.
Of course, the test program could write the number of successes and failures
at the end.



It's a good idea but I'm not sure that longjmp would be possible without
modifying the code under test.

You're probably right. There's always the convention "a function
which returns 0 is a failing test case" of course ...
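
Driving that convention takes very little code; a minimal sketch, with
placeholder test functions:

#include <stdio.h>

/* Convention: a test function returning 0 is a failing test case. */
static int test_addition(void)    { return 1 + 1 == 2; }
static int test_subtraction(void) { return 5 - 3 == 2; }

int main(void)
{
    static int (*const tests[])(void) = { test_addition, test_subtraction };
    static const char *const names[] = { "test_addition", "test_subtraction" };
    int i, failed = 0;
    for (i = 0; i < (int)(sizeof tests / sizeof tests[0]); i++) {
        if (!tests[i]()) {
            printf("FAIL: %s\n", names[i]);
            failed++;
        }
    }
    printf("%d test(s) failed\n", failed);
    return failed != 0;
}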

/Jorgen
 

Les Cargill

James said:
For a number of reasons. Ad-hoc code is bespoke, unfamiliar, irregular.

That's simply a specialization of the principle "all code is bad." Well,
Other People's Code is *worse*.

I am using cost as the factor of optimization here... not purchase
cost, I mean uncertainty cost.
More structured approaches, on the other hand, are easier to understand,
modify and develop.

Unless they aren't. Unless it is decided that test is on a different
stovepipe from development.
Also, most activities which have a common thread can be made
more regular and the common parts abstracted out to make further similar
work shorter and easier.

I understand the theory, and have significant quantities of evidence to
be skeptical of it. Building your own framework doesn't cost much, and
you know the internals of it that way.

Bad software isn't a technology problem - it's a culture problem.
 

Les Cargill

James said:
It's an interesting approach to test whole programs. Some time ago I wrote a
tester that could be controlled by a grid where each row was a test and the
columns represented 1) the command, 2) what would be passed to the program
under test via its stdin, 3) what to look for coming out of the program on
its stdout and stderr, and 4) what return code to expect.

That approach did have some advantages. It could test command line programs
written in any language. The language did not matter as all it cared about
was their inputs and outputs. Further, because the tests could be seen and
edited in grid form (e.g. via a spreadsheet), it was very easy to see which
tests were present and which were missing. There was also a script interface
to the same tester.

However, that's got to be an unusual approach, hasn't it? I imagine it's more
normal to compile test code with the program being tested and interact with
it at a lower level. That would certainly be more flexible.

James


Just IMO - but the Big Gain here is that it is command line to begin
with. I can generate permutations and limited range executions
reasonably easily with any scripting language (Tcl seems better at this
because it has lists as a first-class object).

You're then creating an "engine" that can be run in a pipe by Tcl.
Perhaps you then wish to make it have multiple "channels", so you run
it over sockets: one for commands/responses, perhaps another for
asynchronous events.

If a GUI is needed, then make the GUI run the command line "engine".

This, of course, runs counter to how development is taught, but I
have much less trouble with this approach than others.
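
To sketch the grid idea driven from C itself: the rows below and the
run_case() helper are hypothetical, the stdin and stderr columns are
omitted because popen() only goes one way, and popen()/pclose() assume a
POSIX system.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

/* One row of the grid: command, expected stdout, expected exit code. */
struct row {
    const char *cmd;
    const char *expect_out;
    int expect_rc;
};

/* Run one row; return 0 on pass, 1 on fail. */
static int run_case(const struct row *r)
{
    char out[4096];
    size_t got;
    int status;
    FILE *p = popen(r->cmd, "r");
    if (p == NULL)
        return 1;
    got = fread(out, 1, sizeof out - 1, p);
    out[got] = '\0';
    status = pclose(p);
    if (strcmp(out, r->expect_out) != 0
        || !WIFEXITED(status) || WEXITSTATUS(status) != r->expect_rc) {
        fprintf(stderr, "FAIL %s: expected \"%s\" rc %d, got \"%s\"\n",
                r->cmd, r->expect_out, r->expect_rc, out);
        return 1;
    }
    return 0;
}

int main(void)
{
    static const struct row grid[] = {
        { "echo hello", "hello\n", 0 },
        { "false",      "",        1 },
    };
    int i, failures = 0;
    for (i = 0; i < (int)(sizeof grid / sizeof grid[0]); i++)
        failures += run_case(&grid[i]);
    printf("%d failure(s)\n", failures);
    return failures != 0;
}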
 

Ian Collins

Bart said:
I have seen you write about this mock generator before and I am
interested in learning more about it.
Is this generator available somewhere?

Most places I have worked...

I probably should open source it, but so far time pressures have got in
the way. I'll have another look at moving it to somewhere like GitHub
or BitBucket.
 

Roberto Waltman

James said:
What do you guys use for testing C code? From web searches there seem to be
common ways for some languages but for C there doesn't seem to be any
particular consensus.

I am looking for an approach in preference to a package or a framework.
Simple and adequate is better than comprehensive and complex.

If I have to use a package (which is not cross-platform) this is to run on
Linux. But as I say an approach would be best - something I can write myself
without too much hassle. At the moment I am using ad-hoc code but there has
to be a better way.

You may (or may not) find some answers in this book:
"Test Driven Development for Embedded C"

http://pragprog.com/book/jgade/test-driven-development-for-embedded-c

I cannot comment in detail; I have it, but have not read it yet. From
leafing through its pages it seems to have good material, once you get
past the ideology and dogmas of test-driven development. (Write test
cases first, write code only to pass a failing test, etc.)
 

James Harris

....
....


If it's anything like common unit test frameworks, it would output
something like "failure in test whatever at line bla, got this, expected
that" which is all you really need.


There is a common tool called "expect" (http://expect.sourceforge.net)
which is frequently used for acceptance testing.

I used that Expect years ago to automate some terminal operations. I must
say I hated it! I can't remember the details but suspect it was because of
the influence of Tcl. (I subsequently used the telnetlib expect method in
Python. That, by contrast, was very easy to use and was well thought out.)

I was thinking of a check-expect clause. Maybe instead of EXPECT I should
call the macro CHECK or similar as the former term does sound like a
terminal control mechanism.

James
 

James Harris

....
That would work. Although I'm personally not so fond of extra
descriptions: I want as little as possible to distract from writing
and reading the test cases. Readable messages when a test fails are
less important to me.

I find such things much easier to read if they can be expressed on one line.
That's particularly true where there is a series of tests one after the
other without any code in between. That helps to see differences between
tests and to spot if any are missing.

BTW, what I have tried so far is OK for testing scalars but sometimes we
want to test composite objects. For example, if generating an array of
values the test should be that the array (or part thereof) has the values we
expect.

Do you guys have any good way of testing such composites? Or do you just
check each item individually? Say you had an array which, at some point in
the testing, should have two or three elements with specific values. Would
you just write a check for each element or do you have a more clever way of
testing such things?

James
 

Jorgen Grahn

I used that Expect years ago to automate some terminal operations. I must
say I hated it! I can't remember the details but suspect it was because of
the influence of Tcl.

Not /influence/ -- expect is an extension to the Tcl language.
(I subsequently used the telnetlib expect method in
Python. That, by contrast, was very easy to use and was well thought out.)

I'm no friend of expect either, but I doubt I'd like this telnetlib
thing more. It's an inherently fragile approach, unless you have a
full spec for the interactive program you're automating.

Still, when you have to use it you're happy to have it. Kind of like a
heavy revolver when you're in the Arctic -- you may run into a hungry
polar bear ...

/Jorgen
 

Jorgen Grahn


[Attributions lost, but this is probably the "what I have tried so
far" below:]
....
I find such things much easier to read if they can be expressed on one line.
That's particularly true where there is a series of tests one after the
other without any code in between. That helps to see differences between
tests and to spot if any are missing.

BTW, what I have tried so far is OK for testing scalars but sometimes we
want to test composite objects. For example, if generating an array of
values the test should be that the array (or part thereof) has the values we
expect.

Do you guys have any good way of testing such composites? Or do you just
check each item individually? Say you had an array which, at some point in
the testing, should have two or three elements with specific values. Would
you just write a check for each element or do you have a more clever way of
testing such things?

As far as I'm concerned, it's normal programming. Do it in whatever
way seems maintainable. Use helper functions if it seems that
would help. That's why I'm suspicious of the EXPECT macro above --
you cannot generally squeeze normal code into such a template, and you
can't squeeze unit tests into it either.

If in a bunch of tests it's often useful to assert that a struct Foo
has some specific properties, I'd write a helper

void assert_Foo_has_properties(const struct Foo*, properties ...);
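
A body for such a helper might look like this (struct Foo and its fields
are of course hypothetical):

#include <stdio.h>

struct Foo { int size; int capacity; };

/* Check in one call the properties that many tests care about. */
static void assert_Foo_has_properties(const struct Foo *f,
                                      int size, int capacity)
{
    if (f->size != size || f->capacity != capacity)
        fprintf(stderr, "Foo mismatch: expected size %d cap %d, "
                        "got size %d cap %d\n",
                size, capacity, f->size, f->capacity);
}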

/Jorgen
 

Ian Collins

James said:
....


I find such things much easier to read if they can be expressed on one line.
That's particularly true where there is a series of tests one after the
other without any code in between. That helps to see differences between
tests and to spot if any are missing.

Having multiple test cases in one test function is generally not a good
idea. It makes spotting the actual failure harder unless you are very
careful with your failure reports.
BTW, what I have tried so far is OK for testing scalars but sometimes we
want to test composite objects. For example, if generating an array of
values the test should be that the array (or part thereof) has the values we
expect.

Do you guys have any good way of testing such composites? Or do you just
check each item individually? Say you had an array which, at some point in
the testing, should have two or three elements with specific values. Would
you just write a check for each element or do you have a more clever way of
testing such things?

With CppUnit (and I assume other test frameworks) you can define your
own equivalence operator for a given type.
 

James Harris

Jorgen Grahn said:

[Attributions lost, but this is probably the "what I have tried so
far" below:]
...
I find such things much easier to read if they can be expressed on one
line. That's particularly true where there is a series of tests one after
the other without any code in between. That helps to see differences
between tests and to spot if any are missing.

BTW, what I have tried so far is OK for testing scalars but sometimes we
want to test composite objects. For example, if generating an array of
values the test should be that the array (or part thereof) has the values
we expect.

Do you guys have any good way of testing such composites? Or do you just
check each item individually? Say you had an array which, at some point in
the testing, should have two or three elements with specific values. Would
you just write a check for each element or do you have a more clever way
of testing such things?

As far as I'm concerned, it's normal programming. Do it in whatever
way seems maintainable. Use helper functions if it seems that
would help. That's why I'm suspicious of the EXPECT macro above --
you cannot generally squeeze normal code into such a template, and you
can't squeeze unit tests into it either.

I'm not convinced by the idea of such a macro either. It was just put in as
a possibility, and the implementation has issues with different types. That
said, there may still be some mileage in it. I'm not sure yet.
If in a bunch of tests it's often useful to assert that a struct Foo
has some specific properties, I'd write a helper

void assert_Foo_has_properties(const struct Foo*, properties ...);

For sure that would be good if you had one such function per struct type and
you wanted to test all the fields each time.

Here's a case to illustrate why I asked about testing composites. The unit
under test is a module which manipulates a list. It supports operations such
as get, set, insert and delete. The tests might be

initialise the list
check that list has length 0
insert 10
check that list is just the value 10
insert 11 at the end
check that list is 10,11
insert 9 at the beginning
check that list is 9,10,11
etc.

The last test shown above (to check that the list is 9,10,11) could be

check that list has length 3
check that list[0] is 9
check that list[1] is 10
check that list[2] is 11

but that is quite long-winded and the resulting set of tests could become
hard to maintain as we carry out further operations on the list such as
deletions and sets. Perhaps more importantly, having to test each item
individually somewhat obscures what we really want to test for in the "list
is" clause. It would be helpful to have a way to match the whole list in a
single test.
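
One way might be a helper that compares the whole list against an
expected array in a single call. A sketch, with a stand-in list type and
assumed list_length()/list_get() accessors:

#include <stdio.h>

/* Stand-in list type so the sketch compiles; the real module's
   accessors are assumed. */
struct list { int len; int item[16]; };
static int list_length(const struct list *l) { return l->len; }
static int list_get(const struct list *l, int i) { return l->item[i]; }

/* Check the whole list in one call, e.g.
     int want[] = { 9, 10, 11 };
     check_list_is(list, want, 3, "after inserting 9 at the front"); */
static void check_list_is(const struct list *l, const int *want,
                          int n, const char *note)
{
    int i;
    if (list_length(l) != n) {
        printf("FAIL (%s): expected length %d, got %d\n",
               note, n, list_length(l));
        return;
    }
    for (i = 0; i < n; i++)
        if (list_get(l, i) != want[i])
            printf("FAIL (%s): element %d is %d, expected %d\n",
                   note, i, list_get(l, i), want[i]);
}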

James
 

James Harris

Ian Collins said:
Having multiple test cases in one test function is generally not a good
idea. It makes spotting the actual failure harder unless you are very
careful with your failure reports.

I can't quite see this. If we wanted to test the response of a function to
certain inputs wouldn't it be best to group those test cases together as in

check that func(1,2,3) returns 4;
check that func(9,10,11) returns 8;
check that func(7,0,7) returns -1;
etc

With CppUnit (and I assume other test frameworks) you can define your own
equivalence operator for a given type.

Maybe I should look at CUnit (though I don't think anyone has recommended
it). Or perhaps I should learn some C++ and try CppUnit. From comments on
this thread it looks like it could be used to test C code. However, I am
somewhat reluctant to start getting into C++ at the moment.

James
 

Ian Collins

James said:
I can't quite see this. If we wanted to test the response of a function to
certain inputs wouldn't it be best to group those test cases together as in

check that func(1,2,3) returns 4;
check that func(9,10,11) returns 8;
check that func(7,0,7) returns -1;
etc

It sometimes is (especially if you are retrofitting tests) and it isn't
too bad if you can easily tell which of the conditions failed in the
test output. One drawback is that, with the frameworks I've used, if the
first condition fails the whole test function fails, and you end up with
more test/fix cycles than you would with smaller tests. When writing
tests first, you tend to build up small incremental tests.
Maybe I should look at CUnit (though I don't think anyone has recommended
it). Or perhaps I should learn some C++ and try CppUnit. From comments on
this thread it looks like it could be used to test C code. However, I am
somewhat reluctant to start getting into C++ at the moment.

You don't have to learn that much C++ to get going. I think most C
programmers would be happy with it. I have used writing unit tests for
C as a good way of introducing C++ to C programmers. Way back, testing C
was how I first learned C++.

The IDE I use these days (NetBeans) has good support for working with
both C and CppUnit tests. So the programmer just writes test cases and
the IDE does the boilerplate. I haven't tried CUnit.
 

Jorgen Grahn

Here's a case to illustrate why I asked about testing composites.

Good that you give an example, because I think you may be making this
harder by looking at it from too general a perspective. I don't think you
need a general pattern for writing assertions any more than you need one
for, say, text output.
The unit
under test is a module which manipulates a list. It supports operations such
as get, set, insert and delete. The tests might be

initialise the list
check that list has length 0
insert 10
check that list is just the value 10
insert 11 at the end
check that list is 10,11
insert 9 at the beginning
check that list is 9,10,11
etc.

I'd skip the intermediate checks. If the list is empty at the start
and 9,10,11 when you're done, you can be pretty damn sure it was OK
in between, too. (And presumably you have another test case for
initialising empty lists, so you only need the last check.)
The last test shown above (to check that the list is 9,10,11) could be

check that list has length 3
check that list[0] is 9
check that list[1] is 10
check that list[2] is 11

but that is quite long-winded and the resulting set of tests could become
hard to maintain as we carry out further operations on the list such as
deletions and sets.

Why not simply define an equality test for lists? That leaves you
with the problem of creating reference lists and not confusing them
with each other, but perhaps you can fix your tests so that when the
manipulations are done, the expected contents are 9,10,11. Then you
only need a 'static struct list ref_9_10_11'.

Or you could fix your tests so all expected outcomes are lists of the
form N, N+1, N+2, ... and have an assertion

void assert_list_is_sequence(const struct List*, int m, int n);

That would cover all interesting cases except those where you're
interested in having duplicates (e.g. you're testing a sort function).
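
The body would be short. A sketch, assuming list_length() and list_get()
accessors which are not part of the posted API:

#include <stdio.h>

struct List;                            /* the module's list type */
int list_length(const struct List *);   /* assumed accessors */
int list_get(const struct List *, int);

/* Assert the list holds exactly m, m+1, ..., n. */
void assert_list_is_sequence(const struct List *l, int m, int n)
{
    int i, len = n - m + 1;
    if (list_length(l) != len) {
        fprintf(stderr, "expected length %d, got %d\n",
                len, list_length(l));
        return;
    }
    for (i = 0; i < len; i++)
        if (list_get(l, i) != m + i)
            fprintf(stderr, "element %d is %d, expected %d\n",
                    i, list_get(l, i), m + i);
}
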
Perhaps more importantly, having to test each item
individually somewhat obscures what we really want to test for in the "list
is" clause.

Yes. Test cases need to be as readable as possible.

/Jorgen
 

James Harris

Jorgen Grahn said:
Good that you give an example, because I think you may be making this
harder by looking at it from too general a perspective. I don't think you
need a general pattern for writing assertions any more than you need one
for, say, text output.

Maybe. I have wondered if I was making a rod for my own back. My thought is
that if I can work out a general approach now it will make future testing
easier.

In fact I have got to a point I am happy enough with to ask for comments on
(if that's not too much a mix of trailing prepositions). It combines a
number of the suggestions people have made in this thread. Comments and
criticisms would be appreciated. Bear in mind that I'm not as familiar with C
as I would like so I would welcome suggestions even on small things like
choice of names, use of the elements of C, style etc. The purpose of the
approach is to use some fairly simple code (no large libraries etc) while
making it easy to write clear tests for arbitrary types of data.

The approach is intended to be generally usable but the main example of an
awkward case is to test manipulation of a list. As mentioned before, for a
list of, say, five elements, rather than having to write five tests, one for
each element and probably another one or two to confirm the limits of the
list, I felt it would be much clearer to test the list as a whole on a
single line.

First, the list module has an additional function which converts the list to
a string. It is named, naturally enough, list_to_string(). The numbers that
make up the list are written in decimal with one space between them. At the
moment that converter is included only if debugging. I have phrased that
test as #ifndef NDEBUG so it is present by default. I don't know if that's
considered a good way to do it or not.

In practice, list_to_string() is quite short so it may be worth keeping in
the load module even when not debugging. It is limited but it might be handy
in other circumstances.
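
For reference, a sketch of what list_to_string() might look like; the
list accessors are assumed, and the caller must free() the result (as the
macro below does):

#include <stdio.h>
#include <stdlib.h>

struct list;                            /* the module's list type */
int list_length(const struct list *);   /* assumed accessors */
int list_get(const struct list *, int);

#ifndef NDEBUG
/* Render the list as decimal numbers separated by single spaces.
   Returns a malloc'd string which the caller frees. */
char *list_to_string(const struct list *l)
{
    int i, n = list_length(l);
    char *buf = malloc((size_t)n * 12 + 1); /* 11 digits + separator each */
    char *p = buf;
    if (buf == NULL)
        return NULL;
    *p = '\0';
    for (i = 0; i < n; i++)
        p += sprintf(p, i ? " %d" : "%d", list_get(l, i));
    return buf;
}
#endif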

Second, there is a macro which compares two strings and produces a suitable
one-line report if they are not identical. The macro is called CHECK_SEQ for
check-string-equal (there is currently another macro to compare integers).
The string comparison macro is intended to be called as

CHECK_SEQ(function_call(args), "expected result", "descriptive text")

Then if the string returned from the function call does not match the
expected result a one-line report is printed saying what was expected and
what was received. The function call has to return a pointer to some
allocated memory.

The macro is as follows.

#define CHECK_SEQ(fcall, expected, note) \
do { \
    /* fcall must return a pointer to allocated memory; freed below */ \
    check_buf = (fcall); \
    if (strcmp(check_buf, expected) != 0) { \
        CHECK_PRINT_LINE_INFO \
        fprintf(CHECK_STREAM, " expected %s", expected); \
        fprintf(CHECK_STREAM, ", got %s", check_buf); \
        CHECK_PRINT_NOTE(note) \
        fprintf(CHECK_STREAM, ".\n"); \
        check_errors++; \
    } else { \
        check_passes++; \
    } \
    free(check_buf); \
} while (0)

The two helper macros it uses are

#define CHECK_PRINT_LINE_INFO \
    /* fprintf(CHECK_STREAM, " file %s", __FILE__); */ \
    fprintf(CHECK_STREAM, " line %i", __LINE__);

#define CHECK_PRINT_NOTE(note) \
    if (strcmp(note, "") != 0) { \
        fprintf(CHECK_STREAM, ", %s", note); \
    }
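
The macros also lean on some shared bookkeeping that is not shown above;
something along these lines is assumed:

#include <stdio.h>
#include <stdlib.h>   /* free() */
#include <string.h>   /* strcmp() */

/* Shared state used by the CHECK_* macros. */
#define CHECK_STREAM stderr
static char *check_buf;                 /* holds the string under test */
static int check_errors, check_passes;  /* totals reported at the end */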

Third, and finally, the above makes it easy to write tests (which is the
whole point of the exercise) such as the following.

Here are some tests to check the creation of and the effects of changes to a
list.

list_create(list, space, ELEMENTS);
CHECK_SEQ(list_to_string(list), "", "new list");

list_insert(list, 0, 0);
CHECK_SEQ(list_to_string(list), "0", "single-element list");

list_insert(list, 1, 1);
CHECK_SEQ(list_to_string(list), "0 1", "two elements");

list_insert(list, 2, 2);
list_insert(list, 1, 11);
list_insert(list, 0, 10);
CHECK_SEQ(list_to_string(list), "10 0 11 1 2", "many elements");

To me, the above is a bit wordy because the list needs to be manipulated
between tests. To try to illustrate why I think the macro part of the
approach is useful here are some tests which check the results of a
function. They can be one per line. IMO the brevity makes the tests easier
to write and easier to read. For the moment, at least, these compare
integers rather than strings.

CHECK_IEQ(list_scan_ae(list, 0, 4), 0, "scan from 0 for 4");
CHECK_IEQ(list_scan_ae(list, 3, 12), 4, "scan from 3 for 12");
CHECK_IEQ(list_scan_ae(list, 0, 55), 5, "scan from 0 for 55");
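
The integer macro was not shown; a sketch of what CHECK_IEQ might look
like, mirroring CHECK_SEQ (this is an assumption, not the original):

#define CHECK_IEQ(iexpr, expected, note) \
do { \
    long check_ival = (long)(iexpr); /* evaluate the call once */ \
    if (check_ival != (long)(expected)) { \
        CHECK_PRINT_LINE_INFO \
        fprintf(CHECK_STREAM, " expected %ld, got %ld", \
                (long)(expected), check_ival); \
        CHECK_PRINT_NOTE(note) \
        fprintf(CHECK_STREAM, ".\n"); \
        check_errors++; \
    } else { \
        check_passes++; \
    } \
} while (0)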

....
Why not simply define an equality test for lists? That leaves you
with the problem of creating reference lists and not confusing them
with each other, but perhaps you can fix your tests so that when the
manipulations are done, the expected contents are 9,10,11. Then you
only need a 'static struct list ref_9_10_11'.

I can see that it could be done and there is a possibility that an equality
test will be needed anyway. One big advantage of strings, though, is that
although C doesn't support arbitrary use of manifest constants, string
literals can be written directly anywhere they are needed. For the case in
point I can just compare the results of the call against "9 10 11". There is
no need to separately define a variable with 9,10,11 as content.

Also, strings should be immediately usable in other contexts apart from
lists of numbers so similar comparisons should work for other data types.
I'm aware that they wouldn't work well for comparisons other than equality,
but testing whether the results of a call are equal to what was expected is
probably the most useful.
Or you could fix your tests so all expected outcomes are lists of the
form N, N+1, N+2, ... and have an assertion

void assert_list_is_sequence(const struct List*, int m, int n);

Agreed. I did think about that when I was trying to work out how best to do
this ... before strings came to the rescue. :)

James
 

Jorgen Grahn

....

You may (or may not) find some answers in this book:
"Test Driven Development for Embedded C"

http://pragprog.com/book/jgade/test-driven-development-for-embedded-c

I cannot comment in detail; I have it, but have not read it yet. From
leafing through its pages it seems to have good material, once you get
past the ideology and dogmas of test-driven development. (Write test
cases first, write code only to pass a failing test, etc.)

Speaking of TDD, does anyone here use it together with C?

I meet people once in a while who like the idea, and I've read a
pamphlet by Kent Beck ... but I've never really done it according to
the dogmas. Or seen anyone else do it for that matter, in any
language ... but I tend to work in conservative places.

(I sometimes write tests before the code, but I understand this does
not qualify as TDD.)

/Jorgen
 

Jorgen Grahn

Maybe. I have wondered if I was making a rod for my own back. My thought is
that if I can work out a general approach now it will make future testing
easier.

Maybe. For me, sometimes such things work out. More often I need
concrete experience first and then maybe I can step back and see a
general pattern.
In fact I have got to a point I am happy enough with to ask for comments on
(if that's not too much a mix of trailing prepositions). It combines a
number of the suggestions people have made in this thread. Comments and
criticisms would be appreciated. Bear in mind that I'm not as familiar with C
as I would like so I would welcome suggestions even on small things like
choice of names, use of the elements of C, style etc. The purpose of the
approach is to use some fairly simple code (no large libraries etc) while
making it easy to write clear tests for arbitrary types of data.

The approach is intended to be generally usable but the main example of an
awkward case is to test manipulation of a list. As mentioned before, for a
list of, say, five elements, rather than having to write five tests, one for
each element and probably another one or two to confirm the limits of the
list, I felt it would be much clearer to test the list as a whole on a
single line.

First, the list module has an additional function which converts the list to
a string. It is named, naturally enough, list_to_string(). The numbers that
make up the list are written in decimal with one space between them. At the
moment that converter is included only if debugging. I have phrased that
test as #ifndef NDEBUG so it is present by default. I don't know if that's
considered a good way to do it or not.

In practice, list_to_string() is quite short so it may be worth keeping in
the load module even when not debugging. It is limited but it might be handy
in other circumstances.

Second, there is a macro which compares two strings and produces a suitable
one-line report if they are not identical. The macro is called CHECK_SEQ for
check-string-equal (there is currently another macro to compare integers).
The string comparison macro is intended to be called as

CHECK_SEQ(function_call(args), "expected result", "descriptive text")

I didn't read all of it, but the basic idea seems to be to convert
values to strings, and use that in your checks/assertions.

Doesn't seem too bad. It's an attractive feature that one thing takes
care of both the assertion and the printing of "this is why your test
failed".

Just be prepared to run into cases where this isn't the best approach.

/Jorgen
 

Ian Collins

Jorgen said:
Speaking of TDD, does anyone here use it together with C?

Yes, I do. With C, C++, JavaScript and PHP. The same techniques apply
to any language.
I meet people once in a while who like the idea, and I've read a
pamphlet by Kent Beck ... but I've never really done it according to
the dogmas. Or seen anyone else do it for that matter, in any
language ... but I tend to work in conservative places.

(I sometimes write tests before the code, but I understand this does
not qualify as TDD.)

I find it really helps if I'm not sure of the best way to solve a
problem. I start with the very basics and build a solution. If I'm
doing a demonstration, I often use curve fitting, starting with the
gradient of a straight line, then moving on to a generic solution.

When it's all done, you have a full set of regression tests ready to go.
 
