C unit testing

Collins

Hello List-

I will soon be starting a new project in ANSI C and hope to automate
much of the unit testing. I'm looking for feedback on test
frameworks.
I am looking at the list at

http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#C

Does anyone have concrete experience with any of these? Can you
recommend others? What pitfalls should be avoided in adopting this
approach as one part of testing the software?

Thanks

Collins
 

Ian Collins

Collins said:

I will soon be starting a new project in ANSI C and hope to automate
much of the unit testing. I'm looking for feedback on test
frameworks.
I am looking at the list at

http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#C

Does anyone have concrete experience with any of these? Can you
recommend others? What pitfalls should be avoided in adopting this
approach as one part of testing the software?

I use CppUnit or gtest (code.google.com/p/googletest/) for both C and
C++ projects.

For a new project, I'd stick with gtest. It's by far the easiest unit
test framework to get up and running. Don't be frightened by the C++
stuff: I've introduced it to a couple of C-only projects and the teams
had no problems.
 


Keith Thompson

William Ahern said:
I've been looking forward to trying out KLEE

http://klee.llvm.org/

I generally dislike dependencies, even for unit testing. Usually I just
create ad hoc shell utilities to test my components: each compilation unit
has a main() that converts arguments to parameters for interesting routines,
with test cases specified in a shell script.
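Concretely, the pattern looks roughly like this (the routine and names are made up for illustration, not from my actual projects):

```c
#include <stdio.h>

/* Hypothetical routine under test: count the decimal digits in a
   string. ('0'..'9' are guaranteed contiguous by the C standard.) */
int count_digits(const char *s)
{
    int n = 0;
    for (; *s != '\0'; s++)
        if (*s >= '0' && *s <= '9')
            n++;
    return n;
}

#ifdef TEST_MAIN
/* Compile with -DTEST_MAIN to get a standalone test utility:
   each argument is an input, results go to stdout for a driving
   shell script to check. */
int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++)
        printf("%s %d\n", argv[i], count_digits(argv[i]));
    return 0;
}
#endif
```

Without TEST_MAIN defined, the same file links into the application unchanged.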

But KLEE might be worth the bother. AFAIU, KLEE generates test cases for you
by analyzing the logic of the code and the constraints of the input types.
Take, for example, this case from their tutorial:

...

int my_islower(int x) {
    if (x >= 'a' && x <= 'z')
        return 1;
    else
        return 0;
}

...

There are three paths through our simple function, one where x is
less than 'a', one where x is between 'a' and 'z' (so it's a
lowercase letter), and one where x is greater than 'z'. As expected,
KLEE informs us that it explored three paths in the program and
generated one test case for each path explored.

Pretty neat. I'm anxious to find out whether it works well in practice.

Interesting. Will it tell you that the test doesn't reliably check for
lower case letters on all systems? (In EBCDIC, the codes for the lower
case letters are not contiguous; for example, 'a' < '~' && '~' < 'z'.)

I presume the answer to that is "no", unless it magically infers what
it's supposed to be doing from the name. The function is perfectly
valid; it just doesn't necessarily do what its name implies.

I don't mean to imply that this is a flaw in the tool, just that
there are plenty of things that can't be tested automatically.
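For comparison, a lowercase test that does hold on all systems can enumerate the letters explicitly rather than assume a contiguous code range. A quick sketch:

```c
#include <string.h>

/* Character-set-independent lowercase test: membership in the
   explicit list of lowercase letters rather than a code-point
   range, so it also behaves correctly under EBCDIC.
   (The '\0' guard stops strchr from matching the terminator.) */
int my_islower(int x)
{
    return x != '\0' && strchr("abcdefghijklmnopqrstuvwxyz", x) != NULL;
}
```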
 

Ben Pfaff

Collins said:
I will soon be starting a new project in ANSI C and hope to automate
much of the unit testing. I'm looking for feedback on test
frameworks.

It's not worth worrying about test frameworks, in my opinion.
Just write tests and, if necessary, write a script to run them.
When you have enough tests that the script is getting big and the
tests are taking a long time to run, it might be worth using
something a bit more formal.
 

Ian Collins

It's not worth worrying about test frameworks, in my opinion.
Just write tests and, if necessary, write a script to run them.
When you have enough tests that the script is getting big and the
tests are taking a long time to run, it might be worth using
something a bit more formal.

I strongly disagree with that. Using something like gtest and a decent
mocking framework saves an awful lot of wasted wheel reinventing. With
gtest, all you do is write tests; the framework takes care of the
running and reporting.

I typically generate 10-15 tests an hour, so something ad hoc soon gets
out of hand.
 

Ben Pfaff

Ian Collins said:
I strongly disagree with that. Using something like gtest and a
decent mocking framework saves an awful lot of wasted wheel
reinventing. With gtest, all you do is write tests, the framework
takes care of the running and reporting.

I typically generate 10-15 tests an hour, so something ad hoc soon
gets out of hand.

It sounds like you are very effective writing tests. I'm mostly
reacting to the number of people I've seen spend so much time
screwing around with testing frameworks that they don't end up
writing any tests.
 

Ian Collins

It sounds like you are very effective writing tests.

I have to be; without tests, I can't write the code to pass them!
I'm mostly
reacting to the number of people I've seen spend so much time
screwing around with testing frameworks that they don't end up
writing any tests.

Ah yes, I've come across a few of those in my time.
 

Ian Collins

I would never use something like KLEE to the exclusion of my usual methods.
Unit testing in general, I think, is more about code coverage and catching
egregious errors--buffer and arithmetic overflows and underflows, etc. Unit
test frameworks introduce limitations that can make testing interesting
cases a PITA if you try to write the tests within their framework. The
my_islower() case is a poor example, although maybe I want my_islower() to
produce errant output for EBCDIC in order to stress some other component;
some frameworks might make this difficult depending on how tests are
co-located with the code or with other tests.

Along those lines, my preferred method for writing tests proves extremely
useful during development and debugging. In one of my projects for instance
the core daemon has 41 .c files, 23 of which have a conditionally defined
main() routine. I can develop and debug each component--RTP parser, Flash
transcoder, DNS library, etc.--individually. Each utility can accept input
on stdin in a convenient format for testing (RTP or DNS packets, FLV files,
etc). Even my intermediate components work this way. I can build the MP3 or
H264 components separately, but I can also build the higher level A/V
transcoder separately, too; I can build utilities further on up the stack
which leverage more and more of the components.

You could do the same using a framework; I do. Forty-one files is a
small project, so your approach is manageable, but things get out of hand
very quickly for bigger projects, especially for full regression testing.
No testing framework which facilitates peppering my code with asserts (gtest)
or analyzing code flow (KLEE) could prove half as useful here. I don't
know of any testing framework that could prove useful at all. And to the
extent that I'd want to automate regression tests which utilize the
utilities, I could hardly do better than with shell scripts.

gtest (along with all the other frameworks I know) doesn't require any
instrumentation in your production code. Probably any test framework
could replicate what you have. The crudest approach would be to have
the tests in your conditionally compiled main in individual test
executables. I prefer to keep the two separate to keep the
"documentation" separate from the code. For some projects, I have more
of my IP tied up in the tests than in the deliverable source!
Also, writing code this way enforces good encapsulation and other design
elements at the _source_ level. Testing frameworks, OTOH, tend to bend over
backwards not to impose on the developer. If you can slap a bunch of asserts
all over the place, where's the incentive to design ancillary components with
the care and precision one designs the core components?

I find the complete opposite. Designing for test or using test driven
design leads to better encapsulation to facilitate testing. If there is
too much coupling in a design, testing it is very difficult.
So, yes, KLEE might not catch interesting cases, but I'm not sure its
characteristics are shortcomings given the role of automated testing
_frameworks_. I don't need facilitation in writing the hard and precise
tests; I want it in facilitating the mundane tests, and KLEE excels here
because it automates the whole shebang.

I haven't tried it, but I probably will now.
 

Walter Banks

William said:
But KLEE might be worth the bother. AFAIU, KLEE generates test cases for you
by analyzing the logic of the code and the constraints of the input types.
Take, for example, this case from their tutorial:

...

int my_islower(int x) {
    if (x >= 'a' && x <= 'z')
        return 1;
    else
        return 0;
}

...

There are three paths through our simple function, one where x is
less than 'a', one where x is between 'a' and 'z' (so it's a
lowercase letter), and one where x is greater than 'z'. As expected,
KLEE informs us that it explored three paths in the program and
generated one test case for each path explored.

Pretty neat. I'm anxious to find out whether it works well in practice.

This approach to unit testing has a problem. Any time the tests are
in any way tied to the written code, there is potential for the tests
to pass while the code does not operate as required.

The second point is that translating the source from C to some other
form for verification is unlikely to be useful on anything more than
trivial programs. This is a linguistic issue: most programmers
read program source as a precise language.

A lot of work was done by McCabe in extracting program structure
(which KLEE does) to identify the minimum number of test cases
needed to cover application code. However test cases should be
independent of the source.

Regards,


walter..
 

August Karlstrom

A lot of work was done by McCabe in extracting program structure
(which KLEE does) to identify the minimum number of test cases
needed to cover application code. However test cases should be
independent of the source.

Black-box test cases, that is.


/August
 
