How to use CPPUnit effectively?

D

Dave Steffen

Phlip said:
Fixtures are a unit test Best Practice, so test rigs should enable them.

Yes, good point...
Both UnitTest++ and my TEST_() enable them without requiring them. Boost's
doesn't. But indeed do check it out; it _is_ Boost, after all!

... and using Boost::UnitTest I miss them on occasion. IIRC there
are ways to do _whole test suite_ setups and fixtures, but I haven't
hacked around enough to use them yet.

The thing that makes Boost's unit test framework so near and dear to
my heart is the extreme lack of typing overhead. I credit this for
the large number of unit tests that people in my organization are
writing (on their own, without prodding from anyone) these days; as
opposed to our old JUnit-ish thing, which took _loads_ of typing to
get anywhere, so people just tended to ignore it.
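
For anyone who hasn't seen it, a complete Boost.Test case really is just one macro and a body, and a fixture (when you want one) is a plain struct whose constructor and destructor play the roles of setUp/tearDown. A rough sketch only - the TempFile fixture and its members are invented for illustration:

#define BOOST_TEST_MODULE ExampleTests
#include <boost/test/included/unit_test.hpp>
#include <string>

BOOST_AUTO_TEST_CASE(addition_works)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);   // one line of setup, one assertion
}

// A fixture is just a struct: construction = setup, destruction = teardown.
struct TempFile
{
    TempFile() : name("scratch.txt") { /* create the file here */ }
    ~TempFile()                      { /* remove the file here */ }
    std::string name;
};

BOOST_FIXTURE_TEST_CASE(uses_a_temp_file, TempFile)
{
    BOOST_CHECK(!name.empty());    // fixture members are directly visible
}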
 
J

jolz

You should write tests for each new line (or two) of the code. Never write
new code without a failing test.

Always?
How do I test concurrency?
How do I test that after adding a new button my GUI looks ugly?
How do I test proper exception handling after a disk failure?
How do I test the efficiency of the algorithm used?
There is a lot of code for which writing a unit test is useless.
Unit tests are really nice, especially when code changes - if the tests pass
after the change, that's a hint that maybe nothing was broken. But they
are also dangerous. Many times I've heard that if the tests pass then
everything works. Actually it often means that there are too few
tests. Or maybe every single function works, but together they don't
work (especially true in a multithreaded environment). Or maybe new
functionality that was added introduces several hundred new states
in the application that weren't considered when the existing tests were written.
Also, probably everybody who writes tests has spent a lot of time debugging
working code only to find that the mistake was in the test.
Yes, tests are important, but I think they are often overrated. I've
often heard that TDD's goal is to make it green. Beware of red. Make it
as simple as possible. But I don't recall hearing the word "design".
 
G

Gianni Mariani

jolz said:
Always?
How do I test concurrency?

Easy - run many of them.
How do I test that after adding a new button my GUI looks ugly?
I don't know what you're talking about - it looks good to me :)
How do I test proper exception handling after a disk failure?
Have your app code read/write to an abstraction layer in which you can
simulate those kinds of failures. BTW - this would not be a test that
would be high on my priority list. In the unlikely event the customer
has a disk failure, it's unlikely that my code is going to be the only
thing to blow up.

The idea here is be selective of the failure modes you test for. Make
sure they are going to be a priority to the customer.
How do I test the efficiency of the algorithm used?
Run the test on large data sets and limit the execution time.
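
One way to read that advice, as a sketch: time the routine over a large input and assert a generous budget, so the test only trips on a gross regression. The sort_records() function and the two-second budget below are made up for illustration:

#include <algorithm>
#include <cassert>
#include <ctime>
#include <vector>

// Hypothetical routine under test.
std::vector<int> sort_records(std::vector<int> v)
{
    std::sort(v.begin(), v.end());
    return v;
}

void test_sort_is_fast_enough()
{
    std::vector<int> big(1000000);
    for (std::size_t i = 0; i < big.size(); ++i)
        big[i] = static_cast<int>(big.size() - i);   // roughly worst case: reversed

    std::clock_t start = std::clock();
    sort_records(big);
    double cpu_seconds = double(std::clock() - start) / CLOCKS_PER_SEC;

    // Budget set "comfortably close to what the code currently does";
    // the goal is to catch a bad regression, not to benchmark precisely.
    assert(cpu_seconds < 2.0);
}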
There is a lot of code for which writing a unit test is useless.
Unit tests are really nice, especially when code changes - if the tests pass
after the change, that's a hint that maybe nothing was broken. But they
are also dangerous. Many times I've heard that if the tests pass then
everything works. Actually it often means that there are too few
tests. Or maybe every single function works, but together they don't
work (especially true in a multithreaded environment).

This one is easy - test them in a multithreaded environment. Many of
the classes I create that have to be thread safe are subject to what I
term a "Monte Carlo" test, where I nail them with various random
requests. For example, there is a "timer" module. I have tests where
timers are added, removed, and fail, on and on, over a varying number of threads.
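
For concreteness, such a "Monte Carlo" test can look roughly like the sketch below. The TimerQueue here is a stand-in invented for illustration (the real timer module isn't shown), reduced to a mutex-protected set so the example compiles:

#include <boost/bind.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <set>

class TimerQueue                     // stand-in for the class under test
{
public:
    void add(int id)    { boost::mutex::scoped_lock l(m_); ids_.insert(id); }
    void remove(int id) { boost::mutex::scoped_lock l(m_); ids_.erase(id); }
private:
    boost::mutex  m_;
    std::set<int> ids_;
};

unsigned next_rand(unsigned& s)      // tiny per-thread pseudo-random stream
{
    return s = s * 1664525u + 1013904223u;
}

void hammer(TimerQueue* q, unsigned seed)
{
    for (int i = 0; i < 100000; ++i)           // long stream of random requests
    {
        int id = static_cast<int>(next_rand(seed) % 64);
        if (next_rand(seed) % 2) q->add(id);
        else                     q->remove(id);
    }
}

void monte_carlo_test(int num_threads)
{
    TimerQueue q;
    boost::thread_group group;
    for (int t = 0; t < num_threads; ++t)
        group.create_thread(boost::bind(&hammer, &q, unsigned(t + 1)));
    group.join_all();   // a crash, deadlock or internal assert is the "failure"
}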

In an automated build system, the build runs continuously and
occasionally there is a "failure", so we grab the core dumps and voilà,
sometimes you nail it.

The moral of the story is: write many tests, run them as often as you
can, and have a way to examine the results of a failure - usually a core
dump. Many times I have resorted to having the program suspend itself
on failure, then going into the automated test system and attaching a
debugger to the suspended process (usually on Windows).

Also, by doing an automated continuous build you see which changes to
the software possibly caused the problem.

Unit tests don't have to be small. I remember once where the whole
application was a class I could instantiate (it only ever used const
globals), so I did one of my Monte Carlo tests and - ooh, it failed due
to some very esoteric shut-down-before-fully-initialized bug.
Everyone, including myself, thought this would never happen in real
life. .... Guess again. On the initial release that's exactly one of
the critical bugs that came back, and BTW we knew what it was because
we had done the test.
... Or maybe new
functionality that was added introduces several hundred new states
in the application that weren't considered when the existing tests were written.

Write more tests to try to make those states happen.
Also, probably everybody who writes tests has spent a lot of time debugging
working code only to find that the mistake was in the test.

So? The tests can take some time to write, but it is far less expensive
and much more useful to find the bugs before they get to the customer.
Yes, tests are important, but I think they are often overrated. I've
often heard that TDD's goal is to make it green. Beware of red. Make it
as simple as possible. But I don't recall hearing the word "design".

What is this thing you call "design" ?

What is it's deliverable ? i.e. when you have finishes a design, what
is it that you have ?

If you think TDD, your design is the unit tests + the interface to the
application. BTW - not all the unit tests, just the ones that help
underpin the interface. For instance, I would not do a Monte Carlo
stress test at the design stage, but I would do it for verification.

While you write the code, you're going to come across issues that may
mean changing the interfaces, and hence the unit tests, so having a boat
load of tests to fix when your interfaces change does not help.
 
J

jolz

How do I test concurrency?
Easy - run many of them.

And if 1000 runs succeed, does that mean that everything is OK? I don't
think so. For example:

#include <cassert>
#include <boost/thread/thread.hpp>

int counter = 0;
int c1;
int c2;

void t1() {
    for (int i = 0; i < 1000; i++);   // busy-wait, just to vary the timing
    c1 = counter;                     // unsynchronized read of counter
    ++counter;                        // unsynchronized increment (data race)
}

void t2() {
    for (int i = 0; i < 1000; i++);   // busy-wait, just to vary the timing
    c2 = counter;                     // unsynchronized read of counter
    ++counter;                        // unsynchronized increment (data race)
}

int main() {
    for (int i = 0; i < 100000; i++) {
        boost::thread thrd1(&t1);
        boost::thread thrd2(&t2);
        thrd1.join();
        thrd2.join();
        assert(c1 != c2);             // the "test": the race never shows up here
    }
}

passes, but the code is obviously wrong. And even if the test would have
failed, it wouldn't really mean anything. Real-life examples are much more
subtle. Also, the same test may work on one computer + compiler +
operating system (for example the test machine) and fail on every other.
It runs about a minute on my computer. If I had to test this way
all the functions in a 500,000-line application it would take a lot of time.
And usually one function has more than one test.
Notice that a single look at the code proves that the code is wrong, but
100,000 passes of the test gave the illusion that everything is OK.
Have your app code read/write to an abstraction layer in which you can
simulate those kinds of failures.

But this way I have to change my code. Well, it may lead to a better
design, but the test still tests the abstraction layer and not the actual
problem.
Run the test on large data sets and limit the execution time.

Remember - first write the test, then write the code. So how do you guess what
is the correct time? And what happens if the test is run on a faster
machine? I guess it will pass every time even if something was broken.

Those were only examples. In real life there are many more of those (so
far I think that we agree that GUI is not unit testable, and there are
lots of developers that do only GUI).
So ? The tests can take some time to write but it is far less expensive
and much more useful to find the bugs before it gets to the customer.

But even less expensive is simply looking into the code and thinking about
what to do to make it right, not what to do to make some test pass.
The test will pass anyway.
What is this thing you call "design" ?

TDD says: write as little as possible to make something work. And don't
do anything that is not necessary. Then add something to make something
else work. Almost always change the first code to cooperate with the new
code. But often one knows that, for example, creating a const string will
eventually be necessary. Why not start with this? UML is one way to
design. But it usually doesn't focus on details. Unit tests focus only
on details and nothing else. I think neither of them is the ultimate solution.

I think that unit tests may be dangerous exactly because of everything
you wrote. I get the impression that you think that if the tests pass
it means that the application is bug free. That may be true for a very
limited number of applications (for example a library that does matrix
calculations) but not in many real-life applications.
 
P

Phlip

jolz said:
Remember - first write the test, then write the code.

And don't thread. Write an event-driven architecture that time-slices
things. That is, in turn, easy to switch to threading if you find a real
need. That real need will drive the test concerns. And you can't test-in
thread robustness; you must review and model it.
So how do you guess what
is the correct time?

You ask your business analyst, or you set the time comfortably close to what
the code currently does. Tests don't (generally) validate features. They
generally detect if your code strays from its original activities.
And what happens if the test is run on a faster
machine? I guess it will pass every time even if something was broken.

But it must pass on every developer machine, too.
Those were only examples. In real life there are many more of those (so
far I think that we agree that GUI is not unit testable,

GUIs are always unit testable. They are just a bad place to start learning!

http://www.zeroplayer.com/cgi-bin/wiki?TestFirstUserInterfaces
and there are
lots of developers that do only GUI).

Oh, I only do the variables that start with [L-Z], but I don't put that on
my resume!
But even less expensive is simply looking into the code and thinking about
what to do to make it right, not what to do to make some test pass.
The test will pass anyway.

Short-term, tests make development faster by avoiding debugging, including
all the "program in the debugger" that everyone does with modern editors.

Long-term, tests make development faster by recording decisions about
behavior in a place where they are easy to review, and generally automatic.
They shorten the QA pipeline between programmers and users, and they help
turn the decision to release into strictly a business decision.

Uh, TDD used to stand for "Test Driven Design". It now stands for
"Development", because TDDers try to express every aspect of development -
analysis, design, coding, integration, productization, etc. - as a form of
test.
TDD says: write as little as possible to make something work. And don't
do anything that is not necessary. Then add something to make something
else work. Almost always change the first code to cooperate with the new
code. But often one knows that, for example, creating a const string will
eventually be necessary. Why not start with this? UML is one way to
design. But it usually doesn't focus on details. Unit tests focus only
on details and nothing else. I think neither of them is the ultimate solution.

UML is where you draw a bunch of circles and arrows, each one of which
represents an opportunity for a bug. If we treat UML as a methodology (which
it is not), then we might draw a design, then code the design without any
behavior. So it's all bugs. Then we go through whomping down each bug by adding
the behaviors to each box and arrow.

Software is about behavior. Design is the support for behavior; it's less
important. It should always be easy to change. (I read a nice job listing
recently that said "Refactoring is rare due to good design practices". My
clairvoyance skills are not as accurate as that requires!)

So implement the behavior first, then refactor the design while preserving
that behavior.
I think that unit tests may be dangerous exactly because of everything
you wrote. I get the impression that you think that if the tests pass
it means that the application is bug free. That may be true for a very
limited number of applications (for example a library that does matrix
calculations) but not in many real-life applications.

It shows the program passed tests that it used to pass. Such tests might
have been reviewed. The behaviors they test were certainly reviewed. So
preserving the tests will preserve this investment into reviewing the
behavior. This frees up everyone's time to concentrate on implementing
features and exploratory testing.
 
P

Phlip


If you can't think of the test, how will you think of the line of code?

The test doesn't need to perfectly prove the line will work. In the
degenerate case, the test simply checks the line exists. Another line could
fool the test. Don't do that, and add different tests with different attack
angles, to triangulate the code you need.
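
A tiny sketch of that triangulation idea - plural_of() is a made-up example; the point is that a hard-coded return value could fool either assertion alone, but not both:

#include <cassert>
#include <string>

std::string plural_of(const std::string& word)
{
    return word + "s";               // the real (naive) implementation
}

void test_plural()
{
    assert(plural_of("cat") == "cats");   // a faked `return "cats";` passes this...
    assert(plural_of("dog") == "dogs");   // ...but this second angle catches it
}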
How do I test concurrency?

Developer tests are only important for rapid development. Not for (shocked
gasp) proving a program works. If you simply must go concurrent, write a
good structure for your semaphores, enforce this _structure_ with tests, and
then soak-test your final application, just to check for CPU bugs and such.
How do I test that after adding a new button my GUI looks ugly?

If that's important (it usually _isn't_), you make reviewing the GUI very
easy:

http://www.zeroplayer.com/cgi-bin/wiki?TestFlea
How do I test proper exception handling after a disk failure?

Sometimes you let things like this slide. Yes, you occasionally write a
line of code without a test. The difference is A> you could have written the
test, and B> your code follows community agreements about robustness.

For example, an in-house tool application should simply die and tell the
user to report to the help-desk. A commercial application, by contrast,
shouldn't make those assumptions.

A test on disk failure should use a Mock Object to mock out the file system.
That will return an error on command, so a test can trivially ensure the
error gets handled correctly.
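
Roughly what that looks like, with every name below invented for illustration: the application writes through an abstract file-system interface, and the test substitutes a mock that fails on demand, so the error path runs in microseconds:

#include <cassert>
#include <string>

class FileSystem                               // the abstraction the app uses
{
public:
    virtual ~FileSystem() {}
    virtual bool write(const std::string& path, const std::string& data) = 0;
};

class FailingFileSystem : public FileSystem    // the Mock Object
{
public:
    virtual bool write(const std::string&, const std::string&) { return false; }
};

// Code under test: must report the failure cleanly instead of crashing.
bool save_report(FileSystem& fs, const std::string& text)
{
    return fs.write("report.txt", text);
}

void test_disk_failure_is_handled()
{
    FailingFileSystem broken_disk;
    assert(save_report(broken_disk, "hello") == false);  // error path exercised
}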
How do I test the efficiency of the algorithm used?

You don't need to do that to test-first each line of the algorithm. You need
to know what the algorithm is, so you can implement it under test. If you
don't know the algorithm yet, test-first might help discover it, but you
still need other design techniques.

If the algorithm is important, then it should have tests that catch when any
detail of its internal operation changes. So only implement important things
via test-first.
There is a lot of code for which writing a unit test is useless.

Uh, then why write the code?

If you assume a unit test is expensive, then you will never get over that
curve and make them cheap.

If, instead, you start with unit tests, they accelerate development. For
each new line of code, the unit test is easier to write than debugging that
line would be.

Don't tell my bosses this, but I have programs that I almost never manually
drive. So, for me, there's a lot of code that manual testing is useless!
Unit tests are really nice, especially when code changes - if the tests pass
after the change, that's a hint that maybe nothing was broken. But they
are also dangerous.

So get them reviewed as often as you review the code. Also, tests make an
excellent way to document the code, and you should help your business
analysts review this code.
Many times I've heard that if the tests pass then
everything works.

Nobody should believe that, even when you have 100% statement coverage, and
it appears true!
Actually it often means that there are too few
tests. Or maybe every single function works, but together they don't
work (especially true in a multithreaded environment).

Don't multithread. Do write tests that re-use lots of objects in
combination. (Don't abuse Mock Objects, for example.) Your design should be
so loosely coupled that you can create any object under test without any
other object.
Or maybe new
functionality that was added introduces several hundred new states
in the application that weren't considered when the existing tests were written.

Yep. But the odds you _know_ you did that are very high.
Also, probably everybody who writes tests has spent a lot of time debugging
working code only to find that the mistake was in the test.

Uh, "debugging"?

Under test-first, if the tests fail unexpectedly, you hit Undo until they
pass. That's where most debugging goes.

If you try the feature again, and the tests fail again, you Undo and then
start again, at a smaller scale, with a _smaller_ change. If this passes, do
it again. You might find the error in the tests like this, because it will
relate to the line of code you are trying to change.

This is like debugging, but with one major difference. Each and every time
you get a Green Bar, you could integrate and ship your code.

So a test-rich environment is like having a big button on your editor called
"Kill Bug". Invest in the power of that button.
Yes, tests are important, but I think they are often overrated. I've
often heard that TDD's goal is to make it green. Beware of red. Make it
as simple as possible. But I don't recall hearing the word "design".

If you are reading the blogs and such, they often preach to the choir. They
know you know it's about "design".

Google for "emergent design" to learn the full power of the system.
 
P

Phlip

Dave said:
Yes, good point...


... and using Boost::UnitTest I miss them on occasion. IIRC there
are ways to do _whole test suite_ setups and fixtures, but I haven't
hacked around enough to use them yet.

The thing that makes Boost's unit test framework so near and dear to
my heart is the extreme lack of typing overhead.

Boost made a module that's easy to type??! ;-)

Seriously, all the TEST() macro systems limit typing in some way, better
than raw CppUnit. However...
I credit this for
the large number of unit tests that people in my organization are
writing (on their own, without prodding from anyone) these days; as
opposed to our old JUnit-ish thing, which took _loads_ of typing to
get anywhere, so people just tended to ignore it.

That is /so/ much more important than convenient fixture systems! (And you
could trivially add them, with another TEST_FIXTURE() macro.)

Carry on!
 
P

Phlip

VJ said:
We are using http://cxxtest.sourceforge.net/guide.html and it is good and
easy to understand

I don't understand the brief amount of documentation I read.

Given test<3>, do I guess right that we have to increment that 3 by hand,
each time we add a new test?

I want to smoke crack while coding, and having to remember what number I'm
up to interferes with my lifestyle. I can't even think of a macro that fixes
this (with a compile-time constant).

So does cxxtest really make me increment that number? Or should I, like,
read more of the documentation before passing judgement?
 
J

jolz

And don't thread.

So how do I write a GUI application? Do you really write, for example,
database communication in the GUI thread? If so, how do you allow the user to
stop the operation? Or resize the window? Or basically do anything other than
stare at a gray window? Should application logic be in a network
thread? Or maybe a parallel algorithm should be in a single thread?
or you set the time comfortably close to what
the code currently does.

What is the purpose of a test if I write the test so it always passes?
And what about changing hardware? Rewrite all tests? Change one global
setting and pray that it is changed in the correct proportion? Or maybe
make new settings so all tests pass?
But it must pass on every developer machine, too.

Are tests that run for a few hours/days really run on all the machines in a
company?
GUIs are always unit testable. They are just a bad place to start learning!

I've seen a tester trying to write a GUI test. Heard about others. It was
always a lot of work with minor advantages. It was only testing
behaviour, never how the application looks. And it was an easier version -
a Java GUI. I have no idea how much worse it would be in a language without
a GUI in the standard.
Short-term, tests make development faster by avoiding debugging, including
all the "program in the debugger" that everyone does with modern editors.

A debugger also won't work in any of the situations I presented. The fact
that a test isn't worse than a debugger doesn't mean that it is useful.
UML is where you draw a bunch of circles and arrows, each one of which
represents an opportunity for a bug. If we treat UML as a methodology (which
it is not), then we might draw a design, then code the design without any
behavior. So it's all bugs.

I didn't quite get how UML causes bugs. But let's not start another
off-topic discussion from this one.
It shows the program passed tests that it used to pass.

Yes. And it is the only thing that makes tests useful for me. It means
that I didn't screw up too much. But it certainly doesn't mean that the
application works. It doesn't even mean that it isn't worse than
before.
If you can't think of the test, how will you think of the line of code?

I've already presented code that passes a test and doesn't work. I know
how to correct the code, but I have no idea how to write a test that
validates the code.
Each and every time
you get a Green Bar, you could integrate and ship your code.

Again, the thing that scares me the most: green = good. Don't think
about anything else. If it's green then it must work. Well, it doesn't. I
have nothing against tests. Sometimes they are useful. But they don't
solve all of a developer's problems.
 
G

Gianni Mariani

jolz said:
And if 1000 runs succeed, does that mean that everything is OK? I don't
think so.

I think that would be a bad test.
... For example:
example of bad test removed. I can write bad tests too...
...really mean anything.

Many people beg to differ. Sure, you can write a bogus test; the bet is,
however, that if you don't test, you're likely going to find the problem as
a customer report, which is far more costly.
... Real life examples are much more
subtle.

My experience is the opposite.
... Also, the same test may work on one computer + compiler +
operating system (for example the test machine) and fail on every other.

Exactly, so run it on a bunch of different test machines. I recommend
dual-platform development for exactly this reason. Linux/POSIX + Win32
is a great combination.
It runs about a minute on my computer. If I had to test this way
all the functions in a 500,000-line application it would take a lot of time.

Again - a bad test. You have to run on a dual-CPU machine or better to run
multithreaded tests. Also, your test has too much overhead; you create
new threads all the time. I usually create a parameterized number of
threads, make them all arrive at a barrier, and then unleash them all at
the same time. Then I run the test with those threads throughout the
entire test.
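
A rough sketch of that pattern using boost::barrier; exercise_unit_under_test() is a placeholder for whatever thread-safe code is being stressed:

#include <boost/bind.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/thread/thread.hpp>

void exercise_unit_under_test()
{
    // Placeholder: a real test would hit the thread-safe object here.
}

void worker(boost::barrier* start_line)
{
    start_line->wait();                  // every thread blocks here first...
    for (int i = 0; i < 100000; ++i)     // ...then they all pile in at once
        exercise_unit_under_test();
}

void stress_test(int num_threads)
{
    boost::barrier start_line(num_threads);
    boost::thread_group group;
    for (int t = 0; t < num_threads; ++t)
        group.create_thread(boost::bind(&worker, &start_line));
    group.join_all();
}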
And usually one function has more than one test.
Notice that a single look at the code proves that the code is wrong, but
100,000 passes of the test gave the illusion that everything is OK.

Again, you're clever; write a better test.

This argument sounds like: "Doctor, doctor, it hurts when I point a gun
at my foot and shoot." Well, duh!
But this way I have to change my code. Well, it may lead to a better
design, but the test still tests the abstraction layer and not the actual
problem.

You can't test every situation all the time; nobody argues that. The
question is, what is the cost/benefit of doing some? Clearly if you
have no tests you're running blind. If you have some tests you're
better off, and so on until there is a diminishing return. Maybe you can
use this principle: "If I write a unit test and it finds no bugs, then I
stop and move on to the next project."
Remember - first write the test, then write the code. So how do you guess what
is the correct time? And what happens if the test is run on a faster
machine? I guess it will pass every time even if something was broken.

Use common sense at first and be a little conservative. It depends on
the app.
Those were only examples. In real life there are many more of those (so
far I think that we agree that GUI is not unit testable, and there are
lots of developers that do only GUI).

There are frameworks for testing GUIs. I'm not familiar with the state
of the art. I was hinting at being careful to distinguish between
function and appearance. I agree that you can't test whether something is
aesthetic, but you can test whether something works.
But even less expensive is simply looking into the code and thinking about
what to do to make it right, not what to do to make some test pass.
The test will pass anyway.

I think you're wrong here. I have yet to meet a programmer that can
write code without bugs.
 
P

Phlip

jolz said:
So how do I write a GUI application?

Briefly, sometimes you need threads. Fight them, because they are bad. They
are like 'goto'. Don't thread just because "my function runs too long, and I
want to do something else at the same time". Fix the function so you can
time-slice its algorithm. That overwhelmingly improves your design, anyway.
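
A sketch of what "time-slicing the algorithm" can mean in practice: break the long job into small steps that an ordinary event loop or idle handler calls repeatedly, so the GUI stays responsive without a second thread. ImportJob and its details are invented for illustration:

#include <vector>

class ImportJob
{
public:
    explicit ImportJob(const std::vector<int>& records)
        : records_(records), next_(0) {}

    // Process one small batch; return true while there is more to do.
    bool step()
    {
        std::size_t end = next_ + 100;
        if (end > records_.size()) end = records_.size();
        for (; next_ < end; ++next_)
            process(records_[next_]);
        return next_ < records_.size();
    }

private:
    void process(int /*record*/) { /* the real work goes here */ }

    std::vector<int> records_;
    std::size_t      next_;
};

// The GUI framework's idle or timer callback then just does:
//     if (!job.step()) stop_scheduling_me();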

The tutorials for threads often give lame examples that only need simpler
fixes. The tutorials often don't examine the full rationale for threading.
If you drive with one hand and eat a sandwich with the other, you might need a
thread. Internet Explorer probably uses threads. If our apps are simpler, we
shouldn't need them, and should resist adding them until we explore all
possible alternatives.
Do you really write, for example,
database communication in the GUI thread?

The question here is simple: why and how does communication block your GUI
thread?

Winsock, for example, fixed that (for 16-bit Windows without much thread
support) by turning network input into a window message. If you have input
communication, you can often _research_ to find the right function that
multiplexes its events together with your GUI events.

If you can't find such a function, then you must use a thread, and you
should use PostMessage() or similar to send events to your GUI thread. If
you thread, make sure your code in both threads is sufficiently event-driven
that you don't need to abuse your semaphores.
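
A Win32-flavoured sketch of that idea; the message id, the pretend query, and update_results_view() are invented for illustration. The worker never touches a widget - it only posts a message that the GUI thread handles:

#include <windows.h>

const UINT WM_QUERY_FINISHED = WM_APP + 1;

// Runs in the worker thread: do the slow work, then hand off the result.
DWORD WINAPI run_query(LPVOID param)
{
    HWND main_window = static_cast<HWND>(param);
    int  row_count   = 42;        // pretend this came from the database
    PostMessage(main_window, WM_QUERY_FINISHED,
                static_cast<WPARAM>(row_count), 0);
    return 0;
}

// In the GUI thread's window procedure:
//     case WM_QUERY_FINISHED:
//         update_results_view(static_cast<int>(wParam));  // GUI thread only
//         return 0;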

Threading violates encapsulation when one thread must remain "aware" of
another thread's intimate details, just to get something done.
If so, how do you allow the user to
stop the operation?

I didn't say "lock the GUI up early and often". I just said "don't thread".
There's a lot of middle ground.
What is the purpose of a test if I write the test so it always passes?

A test here and there that can't fail is mostly harmless. Note that, under
TDD, you often write tests and show them failing at least once, before
passing them.

The purpose of my time example is to instantly fail if someone "upgrades"
the tested code and exceeds the time limit. They ought to see the failure,
Undo their change, and try again. Even if the test defends nothing, the
upgrades shouldn't be sloppy.
And what about changing hardware? Rewrite all tests?

Nobody said time-test all tests.
Change one global
setting and pray that it is changed in the correct proportion? Or maybe
make new settings so all tests pass?

To continue your digression here, one might ask what we expect to do without
tests when we change 1 global variable (besides not use global variables).
Should we debug every function in our program? Of course not, we just run
out of time. So we debug a few of them, and _then_ we pray.

That is the most common form of implementation, and it is seriously broken.
Are tests that run for a few hours/days really run on all the machines in a
company?

Yes, because developers are running all the tests before integrating their
code. If any test fails, they don't submit. So the same tests must pass on
every machine, many times a day.

What do you do before integrating? Spot check a few program features,
manually?
I've seen a tester trying to write a GUI test.

That is QA, and test-last. That means it's a professional tester, trying to
write a test that will control quality. And she or he is using the program
from the outside, not changing its source to write the test.

That's all downhill from the kind of testing we describe here. And without a
unit test layer, it's bloody impossible. But they keep trying!
Heard about others. It was
always a lot of work with minor advantages. It was only testing
behaviour, never how the application looks. And it was an easier version -
a Java GUI. I have no idea how much worse it would be in a language without
a GUI in the standard.

And that is probably describing Capture/Playback testing, which is the worst
of the lot.

Now if I can write a test, in the same language as the tested code, that
tests how our program drives the database, or the file system, why can't I
write a test that checks how we drive the GUI? What's so special about GUIs?
A debugger also won't work in any of the situations I presented. The fact
that a test isn't worse than a debugger doesn't mean that it is useful.

That is a different topic. I mean that TDD prevents the need to run the
debugger. Programming teams who switch to TDD routinely report that they
never even turn their debugger on; never set a breakpoint; never inspect a
variable.
I didn't quite get how UML causes bugs. But let's not start another
off-topic discussion from this one.

UML doesn't cause bugs. Drawing a UML diagram, then converting the diagram
to code full of empty classes with no behavior, causes bugs. All the classes
are empty, but they inherit and delegate properly! Then you must debug, over
and over again, to put the behavior in.
Again, the thing that scares me the most: green = good. Don't think
about anything else. If it's green then it must work. Well, it doesn't. I
have nothing against tests. Sometimes they are useful. But they don't
solve all of a developer's problems.

Nobody said that, so don't extrapolate from it.

Under TDD, sometimes you predict the next Bar will be Red. The point of the
exercise is you constantly predict the next Bar color, and you are almost
always right. Predictability = good. It shows that your understanding of the
code matches what the code actually does.
 
I

Ian Collins

Phlip said:
jolz wrote:




Briefly, sometimes you need threads. Fight them, because they are bad. They
are like 'goto'.

Even if your application benefits from concurrency?

I never used goto, but I often use threads and I don't have any issues
with doing TDD with an MT application.
 
P

Phlip

Ian said:
Even if your application benefits from concurrency?

If your app benefits from goto, use it. Various techniques have various
cost-benefit ratios.

Like goto, the cost-benefit ratio for threads is known to be suspicious.
They are hard to test for a reason; that's not the unit tests' fault!!

Fight them, by learning how to avoid them, if at all possible. Such research
will generally lead to a better event-driven design. This design, in turn,
will be easy to thread if you then prove the need.
I never used goto, but I often use threads and I don't have any issues
with doing TDD with an MT application.

That's because TDD is not the same thing as a formal QA effort to determine
your defect rate.

And I suspect there are those who have mocked their kernel and CPU just to
put unit tests on their threads!
 
I

Ian Collins

Phlip said:
Ian Collins wrote:




If your app benefits from goto, use it. Various techniques have various
cost-benefit ratios.
I still think the analogy is too strong; I can't think of any situation
where I'd resort to goto.
Like goto, the cost-benefit ratio for threads is known to be suspicious.
They are hard to test for a reason; that's not the unit tests' fault!!

Fight them, by learning how to avoid them, if at all possible. Such research
will generally lead to a better event-driven design. This design, in turn,
will be easy to thread if you then prove the need.
Most of the work I do involves a lot of processing and I/O. The
applications invariably run on multi-core systems. Keeping all the
cores busy gets the job done quicker.

While they can either be abused or used inappropriately, threads can
often simplify a design.
That's because TDD is not the same thing as a formal QA effort to determine
your defect rate.
I didn't say it was.
And I suspect there are those who have mocked their kernel and CPU just to
put unit tests on their threads!
I wouldn't go that far, but mocking the thread library to run the unit
tests in a single-threaded application is a useful technique. From my
experience, MT unit tests are unnecessary and tend to end in tears.
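
A sketch of how that mocking can be arranged - all names invented for illustration. The production code asks an abstract "thread service" to run a task; the test substitutes a fake that runs the task synchronously, so the logic is exercised deterministically in one thread:

#include <cassert>

struct Task { virtual ~Task() {} virtual void run() = 0; };

class ThreadService
{
public:
    virtual ~ThreadService() {}
    virtual void spawn(Task& t) = 0;      // production version starts a thread
};

class ImmediateThreadService : public ThreadService   // the test double
{
public:
    virtual void spawn(Task& t) { t.run(); }          // no real thread at all
};

// Code under test.
class Downloader : public Task
{
public:
    Downloader() : done(false) {}
    void start(ThreadService& threads) { threads.spawn(*this); }
    virtual void run() { done = true; }               // stands in for real work
    bool done;
};

void test_downloader_completes()
{
    ImmediateThreadService threads;
    Downloader d;
    d.start(threads);
    assert(d.done);       // deterministic: the "spawned" task has already run
}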
 
P

Phlip

Ian said:
I still think the analogy is too strong; I can't think of any situation
where I'd resort to goto.

Not even "goto another stack and CPU context"?

That's what a thread does; goes from the middle of one function to the
middle of another.
Most of the work I do involves a lot of processing and I/O. The
applications invariably run on multi-core systems. Keeping all the
cores busy gets the job done quicker.

My hostility to thread abuse dates from the 1980s, with single-processor
machines. I worked for many years at a shop stuck with a design invented by
a consultant who read the lame tutorials I mention, and used threads where
he should have used good structure. The threads enabled the bad structure,
which should have been fully event-driven. I got to see the drag this
imposed on our product - and all the thread bugs that went with it.

Granted, the bad structure wasn't directly the threads' fault. But a good
structure could have made the threads easier to live with - or replace!

On a modern multi-context CPU, don't thread and then lock everything down.
You will naturally have a single-threaded, multi-CPU application!

Yet if those threads are indeed independent of each other, then you merely
have a multi-process situation. That's very different from the juggling-act
you get if you use threads for trivial reasons, such as GUI concurrency!
I didn't say it was.

You can TDD the functions your threads call. I suspect that writing a
TDD-style test case that juggles threads is very hard, so I would slack off
on that. (It's not strictly required to generate the code.) So TDD's
incidental power of QA will suffer in this situation.
I wouldn't go that far, but mocking the thread library to run the unit
tests in a single-threaded application is a useful technique. From my
experience, MT unit tests are unnecessary and tend to end in tears.

And still not a reason not to write unit tests! ;-)
 
I

Ian Collins

Phlip said:
Ian Collins wrote:




Not even "goto another stack and CPU context"?
Boy some people like their hairs split pretty fine :)
Yet if those threads are indeed independent of each other, then you merely
have a multi-process situation.

With all the benefits and pitfalls of a shared data model.
And still not a reason not to write unit tests! ;-)
Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.
 
G

Gianni Mariani

Ian Collins wrote:
....
Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.

Unless you're designing something that is multithreaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not exercise the MT aspects...
 
I

Ian Collins

Gianni said:
Ian Collins wrote:
....



Unless you're designing something that is multithreaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not exercise the MT aspects...
You can test the logic without running in an MT environment.

It's the same with testing something like a signal handler or interrupt service
routine: you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event; your job is to build the
code that processes it.
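
A sketch of that separation - the Stats struct and the handler names are invented for illustration. The logic lives in an ordinary function the test calls directly; the OS-facing wrapper is thin enough that it never needs a real signal during testing:

#include <cassert>
#include <csignal>

struct Stats { int interrupts; Stats() : interrupts(0) {} };
static Stats g_stats;

// The logic we actually care about, written as plain code...
void handle_interrupt(Stats& s) { ++s.interrupts; }

// ...and the thin wrapper the OS will call; the test never triggers it.
void on_sigint(int) { handle_interrupt(g_stats); }

void test_interrupt_logic()
{
    Stats s;
    handle_interrupt(s);      // call it directly - no signal delivery needed
    assert(s.interrupts == 1);
}

int main()
{
    std::signal(SIGINT, on_sigint);   // wiring, trusted to the OS
    test_interrupt_logic();
    return 0;
}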
 
