A simple unit test framework


Alf P. Steinbach

* Alf P. Steinbach:
* Ian Collins:

James is (for good reasons) posting via Google, which ingeniously strips
off the space at the end of the signature delimiter.

The problem with Google: just like Microsoft there seems to be a
high-level management decision to gather as much feedback as possible,
countered by low-level management decisions to sabotage that so that it
looks like it's in place, but actually impossible for anyone to report
anything (you're redirected to irrelevant places, submit buttons don't
work, nothing is actually received, no relevant category is listed, no
mail address available, and so on ad nauseam).

Hence, Google Earth places Norway in Sweden, Google Groups strips off
significant spaces, and so on and so forth, and even though thousands
and tens of thousands /try/ to report this, Google's as unwise as ever
about its failings. The price of becoming a behemoth company, and what
I'm speculating is probably the reason: lying cheating weasels are
attracted like moths to a flame, and form the lower management echelons.
Oh, sorry, this' off-topic in clc++m, but it sure felt good to get that
off my chest!

(James: no, I didn't follow up on you-know-what.)
 

Ian Collins

James said:
The latest trend where? Certainly not in any company concerned
with good management, or quality software.
Have you ever been in charge of a company's software development? I
have and the best thing I ever did to improve both the productivity of
the teams and quality of the code was to introduce eXtreme Programming,
which includes TDD as a core practice.

Our delivery times and field defect rates more than vindicated the change.
 

Ian Collins

Pete said:
I do, too, because those particular terms suggest a false hierarchy. A
better distinction might be between an application programmer and a test
programmer. The fact remains that developers rarely have either the skills
or the mindset to write good tests.
So we do agree! That's pretty much the job title I gave my test
developers. They were just as much developers as those who developed
the application code. Their key skills were designing good tests and
knowing what tools to use to write, run and report on those tests.
 

Ian Collins

Alf said:
* Ian Collins:

James is (for good reasons) posting via Google, which ingeniously strips
off the space at the end of the signature delimiter.

The problem with Google: just like Microsoft there seems to be a
high-level management decision to gather as much feedback as possible,
countered by low-level management decisions to sabotage that so that it
looks like it's in place, but actually impossible for anyone to report
anything (you're redirected to irrelevant places, submit buttons don't
work, nothing is actually received, no relevant category is listed, no
mail address available, and so on ad nauseam).

Hence, Google Earth places Norway in Sweden, Google Groups strips off
significant spaces, and so on and so forth, and even though thousands
and tens of thousands /try/ to report this, Google's as unwise as ever
about its failings. The price of becoming a behemoth company, and what
I'm speculating is probably the reason: lying cheating weasels are
attracted like moths to a flame, and form the lower management echelons.
Oh, sorry, this' off-topic in clc++m, but it sure felt good to get that
off my chest!
A rant a day keeps the ulcers away!
 

Gianni Mariani

Pete said:
Well, no. log() in this case is the logarithm function that I used as an
example earlier in this thread.

Oh - well that's why you need a spec !!!!
 

Gianni Mariani

Pete Becker wrote:
....
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>

I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.

OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have an MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
As I've said several times, developing and testing involve two distinct
sets of skills. Developers think they're good at testing, but any
professional tester will tell you that they aren't.

I challenge you. I don't think of myself as a tester. I believe you
can't do a better job than I in testing my code. Let's use this "range
map" as an example.

I have attached the header file and the test cases.

Your claim that I "don't understand testing well enough" and so use an
MC test is, I think, short-sighted.

For example, MC tests are the only tests with which I have been able to
truly test multithreaded code. It is nigh impossible for me (or any human)
to truly understand all the interactions in an MT scenario. MC tests will
almost always push every edge of the problem space.

Again, that's not to say there are no systematic errors that random
tests will miss, but those are exactly the kind that a good developer
knows exist and tests for, or even designs around.


//
// The Austria library is copyright (c) Gianni Mariani 2004.
//
// Grant Of License. Grants to LICENSEE the non-exclusive right to use the
// Austria library subject to the terms of the LGPL.
//
// A copy of the license is available in this directory or one may be found
// at this URL: http://www.gnu.org/copyleft/lesser.txt
//
/**
 * at_rangemap.h
 */

#ifndef x_at_rangemap_h_x
#define x_at_rangemap_h_x 1

#include "at_exports.h"
#include "at_os.h"
#include "at_assert.h"

#include <map>

// Austria namespace
namespace at
{


// ======== TypeRange =================================================
/**
 * TypeRange describes the range of a particular type.
 */

template <typename w_RangeType>
class TypeRange
{
  public:

    // range type
    typedef w_RangeType t_RangeType;


    // ======== Adjacent ==============================================
    /**
     * Adjacent returns true if the two parameters are "one apart".
     *
     * @param i_lesser is the lesser of the two values
     * @param i_greater is the greater of the two
     * @return true if no other elements exist between i_lesser and i_greater
     */

    static bool Adjacent(
        const t_RangeType & i_lesser,
        const t_RangeType & i_greater
    ) {
        t_RangeType l_greater_less( i_greater );

        -- l_greater_less; // go to the earlier element

        // deal with wrapping
        if ( i_greater < l_greater_less )
        {
            return false;
        }

        return !( i_lesser < l_greater_less );
    }
};


// ======== RangeMap ==================================================
/**
 * RangeMap is a template that defines ranges.
 */

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class RangeMap
{
  public:

    // range type
    typedef w_RangeType t_RangeType;
    typedef w_RangeTraits t_RangeTraits;

    // index on the end of the range
    typedef std::map< t_RangeType, t_RangeType > t_Map;
    typedef typename t_Map::iterator t_Iterator;


    // ======== AddRange ==============================================
    /**
     * Add a segment to the range.
     *
     * @param i_begin The beginning of the range (inclusive)
     * @param i_end The end of the range (inclusive)
     * @return nothing
     */

    void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        const bool l_less_than( i_end < i_begin );

        const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
        const t_RangeType & l_end = l_less_than ? i_begin : i_end;

        // deal with an empty map here
        if ( m_map.empty() )
        {
            // shorthand adding the first element into the map
            m_map[ l_end ] = l_begin;
            return;
        }

        // see if there is a segment to merge - find the element that
        // precedes l_begin

        t_Iterator l_begin_bound = m_map.lower_bound( l_begin );

        if ( l_begin_bound == m_map.end() )
        {
            // l_begin is after the last element

            -- l_begin_bound;

            if ( t_RangeTraits::Adjacent( l_begin_bound->first, l_begin ) )
            {
                // yes, they are mergeable
                t_RangeType l_temp = l_begin_bound->second;
                m_map.erase( l_begin_bound );
                m_map[ l_end ] = l_temp;

                return;
            }

            // not mergeable - add the segment at the end

            m_map[ l_end ] = l_begin;
            return;
        }

        // if the end of the segment being inserted is not beyond this one
        if ( ( l_end < l_begin_bound->second )
             && ! t_RangeTraits::Adjacent( l_end, l_begin_bound->second ) )
        {
            // NOT mergeable with subsequent segments

            if ( l_begin_bound == m_map.begin() )
            {
                // there is no previous segment

                m_map[ l_end ] = l_begin;
                return;
            }

            // the segment being inserted can't be merged at the end -
            // see if it can be merged with the previous one

            t_Iterator l_previous = l_begin_bound;
            -- l_previous;

            AT_Assert( l_previous->first < l_begin );

            if ( ! t_RangeTraits::Adjacent( l_previous->first, l_begin ) )
            {
                // not overlapping with previous and not mergeable

                m_map[ l_end ] = l_begin;
                return;
            }
            else
            {
                // we are mergeable with the previous element

                t_RangeType l_temp = l_previous->second;
                m_map.erase( l_previous );
                m_map[ l_end ] = l_temp;
                return;
            }
        }

        if ( l_begin_bound == m_map.begin() )
        {
            if ( l_end < l_begin_bound->first )
            {
                if ( l_end < l_begin_bound->second )
                {
                    if ( t_RangeTraits::Adjacent( l_end, l_begin_bound->second ) )
                    {
                        l_begin_bound->second = l_begin;
                        return;
                    }
                    else
                    {
                        m_map[ l_end ] = l_begin;
                        return;
                    }
                }
                else
                {
                    if ( l_begin < l_begin_bound->second )
                    {
                        l_begin_bound->second = l_begin;
                    }
                    return;
                }
            }
            else
            {
                t_RangeType l_new_begin = l_begin;

                if ( l_begin_bound->second < l_begin )
                {
                    l_new_begin = l_begin_bound->second;
                }

                // check to see what segment is close to the end
                t_Iterator l_end_bound = m_map.lower_bound( l_end );

                if ( l_end_bound == m_map.end() )
                {
                    // erase all the segments from l_begin_bound to the end
                    // and replace with one

                    m_map.erase( l_begin_bound, l_end_bound );

                    m_map[ l_end ] = l_new_begin;
                    return;
                }

                if ( l_end < l_end_bound->second
                     && ! t_RangeTraits::Adjacent( l_end, l_end_bound->second ) )
                {
                    m_map.erase( l_begin_bound, l_end_bound );
                    m_map[ l_end ] = l_new_begin;
                    return;
                }

                // merge with the current end

                m_map.erase( l_begin_bound, l_end_bound ); // erase segments in between
                l_end_bound->second = l_new_begin;
                return;
            }
        }

        // find the previous iterator
        t_Iterator l_previous = l_begin_bound;
        -- l_previous;

        t_RangeType l_new_begin = l_begin;

        if ( t_RangeTraits::Adjacent( l_previous->first, l_begin ) )
        {
            l_new_begin = l_previous->second;
        }
        else
        {
            ++ l_previous;

            if ( l_previous->second < l_new_begin )
            {
                l_new_begin = l_previous->second;
            }
        }

        // check to see what segment is close to the end
        t_Iterator l_end_bound = m_map.lower_bound( l_end );

        if ( l_end_bound == m_map.end() )
        {
            // erase all the segments from l_previous to the end and
            // replace with one

            m_map.erase( l_previous, l_end_bound );

            m_map[ l_end ] = l_new_begin;
            return;
        }

        if ( l_end < l_end_bound->second
             && ! t_RangeTraits::Adjacent( l_end, l_end_bound->second ) )
        {
            m_map.erase( l_previous, l_end_bound );
            m_map[ l_end ] = l_new_begin;
            return;
        }

        // merge with the current end

        m_map.erase( l_previous, l_end_bound ); // erase segments in between
        l_end_bound->second = l_new_begin;

        return;
    }


    // ======== SubtractRange =========================================
    /**
     * SubtractRange removes the range (opposite of AddRange).
     *
     * @param i_begin Beginning of range to subtract
     * @param i_end End of range to subtract
     * @return nothing
     */

    void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        const bool l_less_than( i_end < i_begin );

        const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
        const t_RangeType & l_end = l_less_than ? i_begin : i_end;

        // deal with an empty map here
        if ( m_map.empty() )
        {
            // nothing to remove
            return;
        }

        // see if we find a segment containing l_begin

        t_Iterator l_begin_bound = m_map.lower_bound( l_begin );

        if ( l_begin_bound == m_map.end() )
        {
            // this does not cover any segments
            return;
        }

        if ( l_begin_bound->second < l_begin )
        {
            // this segment is broken up

            t_RangeType l_newend = l_begin;

            -- l_newend;

            m_map[ l_newend ] = l_begin_bound->second;

            l_begin_bound->second = l_begin;
        }

        t_Iterator l_end_bound = m_map.lower_bound( l_end );

        if ( l_end_bound == m_map.end() )
        {
            // erase all the segments from the beginning to the end
            m_map.erase( l_begin_bound, l_end_bound );
            return;
        }

        if ( !( l_end < l_end_bound->first ) )
        {
            // the segment end must equal the end of the given range

            ++ l_end_bound;

            m_map.erase( l_begin_bound, l_end_bound );
            return;
        }

        // need to break up the final segment

        m_map.erase( l_begin_bound, l_end_bound );

        if ( !( l_end < l_end_bound->second ) )
        {
            t_RangeType l_newbegin = l_end;

            ++ l_newbegin;

            l_end_bound->second = l_newbegin;
        }

        return;
    }

    // ======== IsSet =================================================
    /**
     * Checks to see if the position is set.
     *
     * @param i_pos position to test
     * @return true if the position is set
     */

    bool IsSet( const t_RangeType & i_pos )
    {
        t_Iterator l_bound = m_map.lower_bound( i_pos );

        if ( l_bound == m_map.end() )
        {
            // this does not cover any segments
            return false;
        }

        return !( i_pos < l_bound->second );
    }

    t_Map m_map;

};


} // namespace at

#endif // x_at_rangemap_h_x





#include "at_rangemap.h"

#include "at_unit_test.h"

#include <iostream>
#include <vector>
#include <cstdlib>

using namespace at;

namespace RangemapTest {


// ======== TestBitMap ================================================
/**
 * TestBitMap is like a range map, but the logic is far simpler.
 */

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class TestBitMap
{
  public:

    // range type
    typedef w_RangeType t_RangeType;
    typedef w_RangeTraits t_RangeTraits;


    // ======== AddRange ==============================================
    /**
     * Add a segment to the range.
     *
     * @param i_begin The beginning of the range (inclusive)
     * @param i_end The end of the range (inclusive)
     * @return nothing
     */

    void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        const bool l_less_than( i_end < i_begin );

        const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
        const t_RangeType & l_end = l_less_than ? i_begin : i_end;

        CheckSize( l_end );

        for ( unsigned i = l_begin; i <= l_end; ++i )
        {
            m_bitmap[ i ] = true;
        }
    }

    // ======== SubtractRange =========================================
    /**
     * SubtractRange removes the range (opposite of AddRange).
     *
     * @param i_begin Beginning of range to subtract
     * @param i_end End of range to subtract
     * @return nothing
     */

    void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        const bool l_less_than( i_end < i_begin );

        const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
        const t_RangeType & l_end = l_less_than ? i_begin : i_end;

        CheckSize( l_end );

        for ( unsigned i = l_begin; i <= l_end; ++i )
        {
            m_bitmap[ i ] = false;
        }
    }


    // ======== IsSet =================================================
    /**
     * Checks to see if the position is set.
     *
     * @param i_pos position to test
     * @return true if the position is set
     */

    bool IsSet( const t_RangeType & i_pos )
    {
        // note: <= so that a position at or past the end of the vector is
        // reported as unset rather than read out of bounds
        if ( m_bitmap.size() <= std::size_t( i_pos ) )
        {
            return false;
        }
        return m_bitmap[ i_pos ];
    }


    void CheckSize( const t_RangeType & i_end )
    {
        if ( m_bitmap.size() < std::size_t( i_end ) + 1 )
        {
            m_bitmap.resize( std::size_t( i_end ) + 1 );
        }
    }

    std::vector<bool> m_bitmap;
};



// ======== TestBoth ==================================================
/**
 * TestBoth applies each operation to both a RangeMap and a TestBitMap
 * and verifies that the two agree afterwards.
 */

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class TestBoth
{
  public:

    // range type
    typedef w_RangeType t_RangeType;
    typedef w_RangeTraits t_RangeTraits;


    // ======== AddRange ==============================================
    /**
     * Add a segment to the range.
     *
     * @param i_begin The beginning of the range (inclusive)
     * @param i_end The end of the range (inclusive)
     * @return nothing
     */

    void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        m_rangemap_pre = m_rangemap;
        m_rangemap.AddRange( i_begin, i_end );
        m_bitmap.AddRange( i_begin, i_end );

        Verify( "Add", i_begin, i_end );
    }

    // ======== SubtractRange =========================================
    /**
     * SubtractRange removes the range (opposite of AddRange).
     *
     * @param i_begin Beginning of range to subtract
     * @param i_end End of range to subtract
     * @return nothing
     */

    void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        m_rangemap_pre = m_rangemap;
        m_rangemap.SubtractRange( i_begin, i_end );
        m_bitmap.SubtractRange( i_begin, i_end );

        Verify( "Sub", i_begin, i_end );
    }

    void Verify( const char * i_op, const t_RangeType & i_begin, const t_RangeType & i_end )
    {
        unsigned l_elems = m_bitmap.m_bitmap.size();

        typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_bound;
        typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_previous;
        bool l_previous_set = false;

        for ( unsigned i = 0; i < l_elems; ++ i )
        {
            l_bound = m_rangemap.m_map.lower_bound( i );

            bool l_rangemap_val = l_bound == m_rangemap.m_map.end()
                ? false
                : !( i < l_bound->second );
            bool l_bitmap_val = m_bitmap.IsSet( i );
            bool l_segment_fail = false;

            // two adjacent set positions must belong to the same segment
            if ( l_previous_set && l_rangemap_val )
            {
                l_segment_fail = l_previous != l_bound;
            }

            l_previous_set = l_rangemap_val;

            if ( l_rangemap_val && ! l_segment_fail )
            {
                l_previous = l_bound;
            }

            if ( ( l_rangemap_val != l_bitmap_val ) || l_segment_fail )
            {
                if ( l_segment_fail )
                {
                    std::cout << "Segments ( " << l_bound->second << ", " << l_bound->first << " ) and\n";
                    std::cout << "         ( " << l_previous->second << ", " << l_previous->first << " )\n";
                }
                std::cout << "Operation = " << i_op << " i_begin = " << i_begin << " i_end = " << i_end << "\n";
                std::cout << "Pre operation ";
                DumpRanges( m_rangemap_pre.m_map );
                std::cout << "Post operation ";
                DumpRanges( m_rangemap.m_map );
                std::cout << "rangemap " << w_RangeType(i) << " - l_rangemap_val = " << l_rangemap_val
                          << ", l_bitmap_val = " << l_bitmap_val << "\n";
            }

            AT_TCAssert( m_rangemap.IsSet(i) == m_bitmap.IsSet(i), "Bitmap differs" );
        }
    }

    void DumpRanges()
    {
        DumpRanges( m_rangemap.m_map );
    }


    void DumpRanges( typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Map & i_map )
    {
        typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_iterator;

        for ( l_iterator = i_map.begin(); l_iterator != i_map.end(); ++ l_iterator )
        {
            std::cout << "( " << l_iterator->second << ", " << l_iterator->first << " )";
        }
        std::cout << "\n";
    }



    at::RangeMap< w_RangeType, w_RangeTraits > m_rangemap;
    at::RangeMap< w_RangeType, w_RangeTraits > m_rangemap_pre;
    TestBitMap< w_RangeType, w_RangeTraits > m_bitmap;
};




AT_TestArea( RangeMap, "Rangemap object tests" );

AT_DefineTest( RangeMap, RangeMap, "Basic RangeMap test" )
{
    void Run()
    {
        {
            TestBoth<unsigned char> l_rm;

            l_rm.AddRange( 'a', 'x' );
            l_rm.AddRange( 'A', 'X' );
            l_rm.AddRange( 'z', 'z' );
            l_rm.AddRange( 'Y', 'Z' );

            l_rm.DumpRanges();
        }
        {
            TestBoth<unsigned char> l_rm;

            l_rm.AddRange( '0', '0' );
            l_rm.AddRange( 'a', 'a' );
            l_rm.AddRange( 'c', 'c' );
            l_rm.AddRange( 'h', 'h' );
            l_rm.AddRange( 'b', 'g' );

            l_rm.DumpRanges();
        }
        {
            TestBoth<unsigned char> l_rm;

            l_rm.AddRange( '0', '0' );
            l_rm.AddRange( 'a', 'a' );
            l_rm.AddRange( 'c', 'c' );
            l_rm.AddRange( 'h', 'h' );
            l_rm.AddRange( 'b', 'i' );

            l_rm.DumpRanges();

            l_rm.AddRange( 'c', 'c' );

            l_rm.DumpRanges();

            l_rm.SubtractRange( 'b', 'b' );
            l_rm.SubtractRange( 'c', 'c' );
            l_rm.SubtractRange( 'd', 'd' );
            l_rm.SubtractRange( 'c', 'c' );

            l_rm.DumpRanges();
        }
        {
            TestBoth<unsigned char> l_rm;

            l_rm.AddRange( 'a', 'a' );
            l_rm.AddRange( 'c', 'c' );
            l_rm.AddRange( 'h', 'h' );
            l_rm.AddRange( 'b', 'i' );
            l_rm.AddRange( '0', '0' );

            l_rm.SubtractRange( '0', '0' );

            l_rm.AddRange( '0', 'i' );

            l_rm.SubtractRange( '0', 'i' );

            l_rm.AddRange( 'A', 'Z' );

            l_rm.SubtractRange( 'M', 'M' );

            l_rm.DumpRanges();
        }
    }

};

AT_RegisterTest( RangeMap, RangeMap );


AT_DefineTest( MonteRangeMap, RangeMap, "Monte Carlo RangeMap test" )
{
    void Run()
    {
        TestBoth<unsigned> l_rm;

        std::srand( 39000 );

        const int l_range = 60;

        for ( int i = 0; i < 10000; ++i )
        {
            unsigned l_begin = std::rand() % l_range;
            unsigned l_end = std::rand() % l_range;

            // use one bit of rand() to pick the operation
            bool l_subtract = ( 2 & std::rand() ) == 0;

            if ( l_subtract )
            {
                l_rm.SubtractRange( l_begin, l_end );
            }
            else
            {
                l_rm.AddRange( l_begin, l_end );
            }
        }
    }

};

AT_RegisterTest( MonteRangeMap, RangeMap );


} // namespace RangemapTest
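For readers skimming the attachment, its differential structure (drive the real container and a trivially correct reference model with the same random inputs, compare after every operation) can be distilled into a self-contained sketch. Everything below is illustrative and not taken from the Austria library: ToyIntervals is a hypothetical stand-in for the unit under test, and all names are invented for this example.

```cpp
#include <algorithm>
#include <cstdlib>
#include <map>
#include <utility>
#include <vector>

// Toy interval set: maps interval start -> inclusive end, coalescing on Add.
// Deliberately non-trivial, so the bitmap model below can catch merge bugs.
class ToyIntervals
{
  public:
    void Add( unsigned i_begin, unsigned i_end )
    {
        // find the first interval that could touch [i_begin, i_end]
        std::map<unsigned, unsigned>::iterator l_it = m_set.lower_bound( i_begin );
        if ( l_it != m_set.begin() )
        {
            --l_it;
            if ( l_it->second + 1 < i_begin )
            {
                ++l_it; // the previous interval does not touch the new one
            }
        }
        // absorb every interval overlapping or adjacent to [i_begin, i_end]
        while ( l_it != m_set.end() && l_it->first <= i_end + 1 )
        {
            i_begin = std::min( i_begin, l_it->first );
            i_end = std::max( i_end, l_it->second );
            m_set.erase( l_it++ );
        }
        m_set[ i_begin ] = i_end;
    }

    bool IsSet( unsigned i_pos ) const
    {
        std::map<unsigned, unsigned>::const_iterator l_it = m_set.upper_bound( i_pos );
        if ( l_it == m_set.begin() )
        {
            return false;
        }
        --l_it;
        return i_pos <= l_it->second;
    }

  private:
    std::map<unsigned, unsigned> m_set;
};

// Differential Monte Carlo loop: random ranges into both the unit under test
// and a trivially correct bitmap, with a full comparison after every step.
bool RunDifferentialMonteCarloTest()
{
    ToyIntervals l_uut;                     // unit under test
    std::vector<bool> l_model( 60, false ); // trivially correct reference

    std::srand( 39000 );

    for ( int i = 0; i < 2000; ++i )
    {
        unsigned l_begin = std::rand() % l_model.size();
        unsigned l_end = std::rand() % l_model.size();
        if ( l_end < l_begin )
        {
            std::swap( l_begin, l_end );
        }

        l_uut.Add( l_begin, l_end );
        for ( unsigned p = l_begin; p <= l_end; ++p )
        {
            l_model[ p ] = true;
        }

        // compare every position after every random operation
        for ( unsigned p = 0; p < l_model.size(); ++p )
        {
            if ( l_uut.IsSet( p ) != bool( l_model[ p ] ) )
            {
                return false;
            }
        }
    }
    return true;
}
```

A driver might simply do `int main() { return RunDifferentialMonteCarloTest() ? 0 : 1; }`. The point of the design is that the model is so simple it is obviously correct, so any disagreement indicts the unit under test.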
 

Ian Collins

Gianni said:
Pete Becker wrote:
....

I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.

OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have an MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible. A good test developer has the knack of
thinking like a user, where a developer thinks like a developer. They
also see the bigger picture, you know your part of a system in detail,
but they know the overall system which enables them to think up more
imaginative usage scenarios.
I challenge you. I don't think of myself as a tester. I believe you
can't do a better job than I in testing my code. Let's use this "range
map" as an example.

I have attached the header file and the test cases.
Not a good idea on Usenet!
Your claim that I "don't understand testing well enough" and so use an
MC test is, I think, short-sighted.

For example, MC tests are the only tests with which I have been able to
truly test multithreaded code. It is nigh impossible for me (or any human)
to truly understand all the interactions in an MT scenario. MC tests will
almost always push every edge of the problem space.
While I generally agree with your comments on MT code, reproducing a
failure induced by random testing can be very difficult.

The tests you posted don't appear to do any MT testing, or am I missing
something?
 

Gianni Mariani

Ian said:
There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible.

ya - we agree. I kind of said that in the first sentence.

.... A good test developer has the knack of
thinking like a user, where a developer thinks like a developer.

My BS meter just pegged. A developer had better think like a user or
they're a crappy developer, IMHO.

.... They
also see the bigger picture, you know your part of a system in detail,
but they know the overall system which enables them to think up more
imaginative usage scenarios.

I guess I don't see any value in a developer taking a myopic view of the
product they work on.
Not a good idea on Usenet!

My newsreader messes with the code otherwise ... :-(
While I generally agree with your comments on MT code, reproducing a
failure induced by random testing can be very difficult.

The tests you posted don't appear to do any MT testing, or am I missing
something?

No, they don't. I was just making the point that sometimes the best test
is the MC test. There is no simple and easy rule as to the test
approach; you need to adapt the test to the problem at hand.
 

Ian Collins

Gianni said:
Ian Collins wrote:

.... A good test developer has the knack of

My BS meter just pegged. A developer had better think like a user or
they're a crappy developer, IMHO.
While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure, everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.
 

James Kanze

Have you ever been in charge of a company's software development? I
have and the best thing I ever did to improve both the productivity of
the teams and quality of the code was to introduce eXtreme Programming,
which includes TDD as a core practice.
Our delivery times and field defect rates more than vindicated the change.

I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.

When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing what so ever previously.
Similarly, pair programming is more cost efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is a magnitude or more
less efficient than a well run code review. Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.
 

Gianni Mariani

James Kanze wrote:
....
Yes, but nobody but an idiot would pay you for such a thing.
Thread safety, to cite but the most obvious example, isn't
testable, so you just ignore it?

Common misconception.

1. Testability of code is a primary objective. (i.e. code that can't be
tested is unfit for purpose)

2. Any testing (MT or not) is about a level of confidence, not absoluteness.

I have discovered that MT test cases that push the limits of the code
using random input does provide sufficient coverage to produce a level
of confidence that makes the target "testable".

If you consider what happens when you have multiple processors
interacting randomly in a consistent system, you end up testing more
possibilities than can present themselves in a more systematic system.
However, with threading, it's not really systematic because external
events cause what would normally be systematic to be random. Now
consider what happens in a race condition failure. This normally
happens when two threads enter sections of code that should be mutually
exclusive. Usually there are a few thousand instructions in your test
loop (for a significant test), while the regions that can fail are usually
tens of instructions, sometimes hundreds. If you are able to push
randomness, how many times do you need to reschedule one thread to hit a
potential problem? Given cache latencies, pre-emption from other
threads, program randomness (like memory allocation variances) you can
achieve pretty close to full coverage of every possible race condition
in about 10 seconds of testing. There are some systematic start-up
effects that may not be found, but you mitigate that by running
automated testing. (In my shop, we run unit tests on the build machine
around the clock - all the time.)

So that leaves us with the level of confidence point. You can't achieve
perfect testing all the time, but you can achieve high level of
confidence testing all of the time.

It does require a true multi processor system to test adequately. I
have found a number of problems that almost always fail on a true MP
system that hardly ever fail on a SP system. Very rarely have I found
problems on 4 processor or more systems that were not also found on a 2
processor system, although, I would probably spend the money on a 4 core
CPU for developer systems today just to add more levels of confidence.

In practice, I have never seen a failure in the wild that could not be
discovered with a properly crafted MC+MT test.

So to truly get the coverage you want, the test needs to inject as much
randomness as possible, which means running more threads than processors
and pushing random inputs. Then run these tests all the time (after every automated
build) and make it so it stops dead when there is a problem discovered
so you can debug the issue at the point of failure. (Which is one of the
reasons I don't like exceptions thrown when a programming error is
found. It helps immensely to see the complete context of the error in
finding the problem.)
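The test shape described above can be sketched in a few dozen lines. This is a minimal, hypothetical illustration (assuming a C++11 compiler; all names are invented here, not taken from the thread): several threads apply random deltas to a shared counter through a mutex, while a lock-free atomic tracks the expected total. If the critical section were broken, the two totals would drift apart within a run.

```cpp
#include <atomic>
#include <mutex>
#include <random>
#include <thread>
#include <vector>

// Randomized multithreaded stress test: hammer one mutex-protected counter
// from several threads with random deltas, and check the invariant at the end.
bool RunMonteCarloMtTest()
{
    std::mutex l_lock;
    long l_shared = 0;               // protected by l_lock
    std::atomic<long> l_expected(0); // running total, maintained lock-free

    const int l_thread_count = 4;    // run more threads than cores if you can
    const int l_iterations = 10000;

    std::vector<std::thread> l_pool;
    for ( int t = 0; t < l_thread_count; ++t )
    {
        l_pool.emplace_back( [&, t]() {
            std::mt19937 l_rng( 39000u + t ); // per-thread seed
            for ( int i = 0; i < l_iterations; ++i )
            {
                long l_delta = static_cast<long>( l_rng() % 7u ) - 3;
                l_expected += l_delta;
                std::lock_guard<std::mutex> l_guard( l_lock );
                l_shared += l_delta; // a race here would corrupt l_shared
            }
        } );
    }
    for ( std::thread & l_t : l_pool )
    {
        l_t.join();
    }

    // the invariant every random schedule must preserve
    return l_shared == l_expected.load();
}
```

A driver might do `int main() { return RunMonteCarloMtTest() ? 0 : 1; }` and, as described above, run it after every automated build; repeated runs on a true multiprocessor machine are what make the schedule randomness count.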
My customers want to know what the code will do, and how much
development will cost, before they allocate the resources to
develop it. Which means that I have a requirements
specification which has to be met.

I have met very few customers that know what a spec is even if it
smacked them up the side of the head. Sad. Inevitably it leads to a
pissed-off customer.
 

Gianni Mariani

James Kanze wrote:
....
The latest trend where? Certainly not in any company concerned
with good management, or quality software.

Look up TDD.
And will not necessarily meet requirements, or even be useful.

Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.

See my "log"ging example.
 

James Kanze

Pete Becker wrote:
I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.
OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have an MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.

Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.

In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.
 

Gianni Mariani

Ian said:
While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure, everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.

This is a very inefficient development model. It is somewhat outdated.
 

Ian Collins

James said:
I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.
We certainly did - field defect reports and the internal cost of
correcting them.
When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing what so ever previously.
Similarly, pair programming is more cost efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is a magnitude or more
less efficient than a well run code review.

Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.
That's your opinion and you are entitled to it. Mine, through direct
experience, is diametrically opposed.
 

Gianni Mariani

James said:
Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.

We will have to agree to disagree on this.

I have anecdotal evidence suggesting that no-one is capable of
truly foreseeing the full gamut of issues that can be found in a well
designed MC test.

A pass on an MC test raises the level of confidence which is always a
good thing.
In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.

If I read between the lines here, I think you're saying that we need
test developers to conceive every kind of possible failure. I have yet
to meet anyone who could do that consistently and I have been developing
software for a very long time.

I don't think your premise (if I read it correctly) is achievable.

I lean toward making the computer do as much work as possible,
because it is much more consistent than a developer (no problems with
headaches). Case in point: if you look at MakeXS, it's as simple as putting
a cpp file in a folder and running "make" - header files are found
automatically for you, the idea being to make the development environment
as easy as possible.

Again, I am not saying that the MC test is the only test you need to
write. I am, however, making the observation that I have yet to meet
anyone that can find all the problems found by a well crafted MC test.

Said another way, there is a large set in the intersection of the issues
found by an MC test and the issues found by a competent test developer.
I'd rather the competent test developer push the envelope on the cases
that a well crafted MC test can't find (i.e. very systematic edge cases)
and let the MC test do the hard work on the rest.

i.e.

+--------------------------------------------+
| MC Test Discoverable Set                   |
|      +-------------------------------------+--------+
|      | Intersection of Test Dev + MC Test  |        |
|      | Discoverable Set                    |        |
+------+-------------------------------------+        |
       | Test Dev Discoverable Set                    |
       +----------------------------------------------+
 

Ian Collins

Gianni said:
I have met very few customers that know what a spec is even if it
smacked them up the side of the head.

Welcome to the club!
Sad. Inevitably it leads to a pissed-off customer.

Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one-week cycle with one particularly indecisive client!
 

Gianni Mariani

Ian said:
Welcome to the club!


Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one week cycle with one particularly indecisive client!

Yes - I've done it. Short release cycles - I invented them. Management
still fsck's up all the time, every time.
 

Gianni Mariani

Ian said:
Which, the first paragraph or the second?

first and second.

I don't see a practical distinction between tester, developer and
designer. While the overall design of the system needs an "architect",
the job of the architect is to provide a framework that inevitably needs
extensibility.
 
