A simple unit test framework

Phlip

Pete said:
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>

Nobody said the developers would write soak tests! What part of "write a
thing incorrectly called a 'test' that fails because the next line of code
is not there" are you stuck on?
 
Phlip

Ian said:
If you code test first (practice Test Driven Design/Development), you
don't have to do coverage analysis because your code has been written to
pass the tests.

Each line has coverage. That's still not the ultimate coverage, where tests
cover every path through the entire program. But the effect on code makes
such coverage less important. The tests force the code to be simpler,
because it must pass simple tests.
 
Phlip

James said:
Note that the two steps above work together. The people doing the code
review also review the tests, and verify that your choice of test cases
was appropriate for the code you wrote.

Oh, you must have read Rule #42 in the Official TDD Handbook:

"After writing simple code to pass the test, you must then immediately
release your code, without letting anyone review it or add more tests to
it."
 
Phlip

Ian said:
No. The first embedded product my team developed using TDD has had about
half a dozen defects reported in 5 years and several thousand units of
field use. They were all in bits of the code with poor tests.

Ain't it sad when former Thought Leaders reach the next bend in the road of
progress, and we discover how ossified their geriatric brains have become?
;-)
 
Phlip

The problem with Google:

They use TDD up the wazoo.

It sure shows in their pitiful quality, long time to market, customer
dissatisfaction, and snowed executives, huh?

;-)
 
Ian Collins

Phlip said:
Ain't it sad when former Thought Leaders reach the next bend in the road of
progress, and we discover how ossified their geriatric brains have become?
;-)

I had wondered where you were hiding!
 
Gianni Mariani

James Kanze wrote:
....
There's nothing to disagree with. It's a basic definition. If
an MC test finds the error and a hand-crafted test doesn't, the
hand-crafted test isn't well designed or carefully done.

We disagree on the practicality of your claim.

I have more than anecdotal evidence that there are significant
errors which will slip through MC testing. Admittedly, the
most significant ones also slip through the most carefully
crafted tests as well.

.... ok, so we don't disagree.
... It is, in fact, impossible to write a
test for them which will reliably fail.

This is why no shop serious about quality will rely solely on
testing.

Can we limit the scope of the discussion to unit testing? If you want to go
to the complete product life-cycle, we will be here forever.

Certainly. It's just that often, generating enough random data
is more effort than doing things correctly, and beyond a very
low level, doesn't raise the confidence level very much.

OK -- we'll have to disagree again. I've been able to find bugs using
MC tests that were never found by "well crafted tests".
... If we
take Pete's example of a log() function, testing with a million
random values doesn't really give me much more confidence than
testing with a hundred, and both give significantly less
confidence than a good code review, accompanied by testing with
a few critical values. (Which values are critical, of course,
being determined by the code review.)

Right. Very good. So this is your proof that MC tests are all bad?
 
Ian Collins

James said:
If your goal is to rip off the customer, yes.

So by helping them to get what they really wanted, rather than forcing
them to commit to what they thought they wanted, I'm ripping them off?
The person I'm ripping off is me; I'm doing myself out of all the bug
fixing and rework jobs.

Man, you have a strange view of customer-focused development.
Prototyping and interacting with the customer are nothing new.
They've been part of every software development process for the
last 30 or 40 years.

There's a huge difference between demonstrating a prototype and
delivering production quality code every week or so.
 
Gianni Mariani

James said:
I'm familiar with the theory. Regretfully, it doesn't work out
in practice.

What part?

Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.

Example?

You mean where you confused the mathematical function "log()"
with a function used to log error messages?

Cute!
 
Gianni Mariani

James said:
You must be significantly older than me, then, because they were
the rule when I was learning software development, in the
1970's. (In fact, they were the rule where my father worked,
in the 1960's.)

What does "short" mean to you?
 
Phlip

Ian said:
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.

What you are up against here is that Kanze is one of the most aggressive and
competent reviewers out there, and he leads by reviewing. This explains his
devastating accuracy on C++ questions. So by claiming two dumb people can
get by with pairing and TDD, instead of submitting their code to him for
review, you are taking him out of his Comfort Zone.

About those other companies: yes, many losers call themselves XP. If we could
get in touch with them, we would ... review their process. Pairing? TDD?
Frequent Releases? Common Workspace? etc.

Many companies call themselves XP as an excuse to not write any
documentation...

Oh, and my day job uses pure XP. Our executives recently let my supervisor
know that the programming department was the group they had the _least_
issues with. And we do code (and test) reviews after each feature.
 
Ian Collins

James said:
So what were the before and after measures?

You should at least publish this, since to date, all published
hard figures (as opposed to anecdotal evidence) go in the
opposite sense. (For that matter, the descriptions of eXtreme
Programming that I've seen didn't provide a means of actually
measuring anything.)
I don't have the exact before figures, but there were dozens of bugs in
the system for the previous version of the product and they took a
significant amount of developer and test time. The lack of unit tests
made the code extremely hard to fix without introducing new bugs.
Comprehensive unit tests are the only way to break out of this cycle.

We didn't bother tracking bugs for the replacement product; there were
so few of them, and due to their minor nature they could be fixed within
a day of being reported. We had about 6 in the first year.

Yes, we tried it. It turns out that effective code (and design)
review is the single most important element of producing quality
code.

Sounds like you weren't pairing.

Actually, it's not an opinion. It's a result of concrete
measurement.

So you practiced full-on XP for a few months and measured the results?
 
Ian Collins

James said:
I'm familiar with the theory. Regretfully, it doesn't work out
in practice.
Works well for me. Again, it's clear you have never tried it.

Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.

Now that I'd like to see.
 
Phlip

Gianni said:
What part?

The rules of TDD are "write a simple test and write code to pass the test".

But some people are very smart, and know better than to do "simple" things.
So they insist on "write a QA-quality test case that completely constrains
production code before writing it".

When that doesn't work, they blame TDD.
 
Phlip

Ian said:
Now that I'd like to see.

He will play the complexity card: multi-threading, encryption, security,
etc.

Based, again, on insisting that "test" can only mean "high-level QA soak
test".
 
Pete Becker

Phlip said:
Nobody said to fire all the testers. You made that up, then replied to your
presumption.

Agreed: nowhere has anyone said to fire all the testers. The text you
quote does not in any way address such a (non-existent) assertion.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Phlip

Pete said:
Agreed: nowhere has anyone said to fire all the testers. The text you
quote does not in any way address such a (non-existent) assertion.

Okay. You are saying "developers can't write the best tests, so therefore
they shouldn't write any tests". Am I closer?
 
Pete Becker

Phlip said:
Each line has coverage. That's still not the ultimate coverage, where tests
cover every path through the entire program. But the effect on code makes
such coverage less important. The tests force the code to be simpler,
because it must pass simple tests.

As a practical matter, you can't test every path through the code. You
get killed by combinatorial explosions.

 
