Ian Collins
Ian said:You should either educate them, or be them!
Gianni said:Yes - I've done it. Short release cycles - I invented them. Management still fsck's up all the time, every time.
Gianni said:first and second.
I don't see a practical distinction between tester, developer and
designer. While the overall design of the system needs an "architect",
the job of the architect is to provide a framework that will inevitably
need to be extended.
Ian said:You should either educate them, or be them!
The "architect" is the development team.
Automated acceptance tests (some call them *Customer* acceptance tests)
are a critical part of any product development process and someone has
to design and develop them. That someone can either be the customer, or
testers working as their proxy.
Gianni said:Ian Collins wrote:
....
The customer is always ill-defined.
Ian said:That's why it is essential to have a proxy. For a consumer
product, that should be the internal product owner.
Ian said:There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible. A good test developer has the knack of
thinking like a user, whereas a developer thinks like a developer.
James Kanze wrote:
Common misconception.
1. Testability of code is a primary objective. (i.e. code that can't be
tested is unfit for purpose)
2. Any testing (MT or not) is about a level of confidence, not absoluteness.
I have discovered that MT test cases that push the limits of the code
using random input do provide sufficient coverage to produce a level
of confidence that makes the target "testable".
If you consider what happens when you have multiple processors
interacting randomly in a consistent system, you end up testing more
possibilities than can present themselves in a more systematic system.
However, with threading, it's not really systematic because external
events cause what would normally be systematic to be random. Now
consider what happens in a race condition failure. This normally
happens when two threads enter sections of code that should be mutually
exclusive. Usually there are a few thousand instructions in your test
loop (for a significant test). The regions that can fail are usually
tens of instructions long, sometimes hundreds.
If you are able to push randomness, how many times do you need
to reschedule one thread to hit a potential problem?
Given cache latencies, pre-emption from other threads, program
randomness (like memory allocation variances) you can achieve
pretty close to full coverage of every possible race condition
in about 10 seconds of testing. There are some systematic
start-up effects that may not be found, but you mitigate that
by running automated testing. (In my shop, we run unit tests
on the build machine around the clock - all the time.)
So that leaves us with the level of confidence point. You can't achieve
perfect testing all the time, but you can achieve high-confidence
testing all of the time.
It does require a true multi-processor system to test adequately. I
have found a number of problems that almost always fail on a true MP
system but hardly ever fail on an SP system. Very rarely have I found
problems on systems with four or more processors that were not also
found on a two-processor system, although I would probably spend the
money on a 4-core CPU for developer systems today just to add another
level of confidence.
In practice, I have never seen a failure in the wild that could not be
discovered with a properly crafted MC+MT test.
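A minimal sketch of such an MC+MT test, written in C++11 for brevity
(the names and constants here are invented, not the actual harness
described above): random busy-waits jitter each thread's timing so that
repeated runs explore different interleavings, and an end-of-run
invariant catches lost updates. Remove the lock and a true MP machine
fails the assert within seconds.

#include <atomic>
#include <cassert>
#include <mutex>
#include <random>
#include <thread>
#include <vector>

std::atomic<long> counter{0};   // trustworthy count of increments performed
long unguarded = 0;             // shared state the test hammers
std::mutex m;                   // the lock whose absence the test would expose

void worker(unsigned seed)
{
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> spin(0, 100);
    for (int i = 0; i < 100000; ++i) {
        // Random busy-wait shifts this thread relative to the others,
        // so successive runs explore different interleavings.
        for (volatile int s = spin(gen); s > 0; --s) { }
        std::lock_guard<std::mutex> lock(m);   // remove this line and the
        ++unguarded;                           // assert fails within seconds
        counter.fetch_add(1, std::memory_order_relaxed);
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < 4; ++t)    // needs a true MP machine to bite
        threads.emplace_back(worker, t + 1);
    for (auto& th : threads)
        th.join();
    // Invariant: every increment was observed exactly once.
    assert(unguarded == counter.load());
    return 0;
}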
I have met very few customers who would know what a spec was even if it
smacked them up the side of the head. Sad. Inevitably it leads to a
pissed-off customer.
Gianni Mariani wrote:
Welcome to the club!
Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs.
Short release cycles - I invented them.
James Kanze wrote:
We certainly did - field defect reports and the internal cost of
correcting them.
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
That's your opinion and you are entitled to it.
James Kanze wrote:
Look up TDD.
Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.
See my "log"ging example.
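(That "log"ging example isn't reproduced in this thread; as a generic,
hypothetical stand-in with invented names, a requirement such as
"formatting a message prepends its severity" can be stated as a test
before the code exists:)

#include <cassert>
#include <string>

// Written second, TDD-style, to make the test below pass.
std::string format_message(const std::string& severity,
                           const std::string& text)
{
    return severity + ": " + text;
}

int main()
{
    // The requirement, written first: formatting prepends the severity.
    assert(format_message("ERROR", "disk full") == "ERROR: disk full");
    return 0;
}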
We will have to agree to disagree on this.
I have anecdotal evidence suggesting that no-one is capable of
truly foreseeing the full gamut of issues that can be found in a well
designed MC test.
A pass on an MC test raises the level of confidence which is always a
good thing.
If I read between the lines here, I think you're saying that we need
test developers to conceive every kind of possible failure. I have yet
to meet anyone who could do that consistently and I have been developing
software for a very long time.
I don't think your premise (if I read it correctly) is achievable.
I lean toward making the computer do as much work as possible
because it is much more consistent than a developer.
Ian said:Unless the tests are written first!
If your goal is to rip off the customer, yes.
Prototyping and interacting with the customer are nothing new.
I'm familiar with the theory. Regrettably, it doesn't work out
in practice.
Pete said:Nope. Test driven design cannot account for the possibility that a
function will use an internal buffer that holds N bytes, and has to
handle the edges of that buffer correctly. The specification says
nothing about N, just what output has to result from what input.
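A sketch of that point, with an invented N (nothing in any spec would
hint at it): the spec says only "return the input upper-cased", but the
implementation happens to stream through a fixed 8-byte internal
buffer. A black-box test derived from the spec has no reason to probe
inputs of length 7, 8 and 9, yet that is exactly where an off-by-one in
the chunking would hide.

#include <algorithm>
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>

std::string to_upper(const std::string& input)
{
    const std::size_t N = 8;     // internal detail, invisible in the spec
    char buffer[N];
    std::string out;
    for (std::size_t pos = 0; pos < input.size(); pos += N) {
        // Edge handling: getting this length wrong only shows up for
        // inputs whose size is near a multiple of N.
        const std::size_t len = std::min(input.size() - pos, N);
        for (std::size_t i = 0; i < len; ++i)
            buffer[i] = static_cast<char>(
                std::toupper(static_cast<unsigned char>(input[pos + i])));
        out.append(buffer, len);
    }
    return out;
}

int main()
{
    // Length 9 straddles the buffer edge; a spec-derived test would not
    // know this input is special.
    assert(to_upper("abcdefghi") == "ABCDEFGHI");
    return 0;
}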
Bo said:So Pete will pass your first test with "return 1;".
How many more tests do you expect to write, before you are sure that
Pete's code is always no more than one unit off in the last decimal?
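To make that concrete (the thread doesn't say which function is under
test; an exponential is assumed here): if the first test checks a
single point, a degenerate implementation passes it, and no finite list
of point checks pins the result to within one unit in the last place
across the whole domain.

#include <cassert>
#include <cmath>

double my_exp(double /*x*/)
{
    return 1.0;   // passes the first test below, and proves nothing
}

int main()
{
    // First TDD test: exp(0) == 1. "return 1;" sails through.
    assert(std::fabs(my_exp(0.0) - 1.0) < 1e-12);
    return 0;
}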
Pete said:When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an adjunct
to development. Testing is a profession, with technical challenges that
differ from, and often exceed in complexity, those posed by development.
The skills required to do it well are vastly different from, but no less
sophisticated than, those needed to write the product itself. Most
developers think they can write good tests, but in reality, they simply
don't have the right skills, and test suites written by developers are
usually naive. A development manager who gives testers "complete freedom"
is missing the point: that's not something the manager can give or take,
it's an essential part of effective testing.
Gianni said:The most successful teams I worked with were teams that wrote their own
tests.