unittests with different parameters

Ulrich Eckhardt

Hi!

I'm writing tests and I'm wondering how to achieve a few things most
elegantly with Python's unittest module.

Let's say I have two flags, invert X and invert Y. Now, for testing these, I
would write one test for each combination. What I have in the test case is
something like this:

  def test_invert_flags(self):
      """test flags to invert coordinates"""
      tests = [((10, 20), INVERT_NONE, (10, 20)),
               ((10, 20), INVERT_X, (-10, 20)),
               ((10, 20), INVERT_Y, (10, -20))]
      for input, flags, expected in tests:
          res = do_invert(input, flags)
          self.assertEqual(res, expected,
                           "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the results. The ugly thing is that this aborts the
whole test if one of the checks in the loop fails. So, my question is: how
do I avoid this?

I know that I could write a common test function instead:

  def _test_invert_flags(self, input, flags, expected):
      res = do_invert(input, flags)
      self.assertEqual(res, expected)

  def test_invert_flags_non(self):
      """test not inverting coordinates"""
      self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

  def test_invert_flags_x(self):
      """test inverting X coordinates"""
      self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

  def test_invert_flags_y(self):
      """test inverting Y coordinates"""
      self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

What I don't like here is that this is unnecessarily verbose and that it
basically repeats information. Also, I'd rather construct the error message
from the data instead of maintaining it in different places, because
manually keeping those in sync is another error-prone burden.


Any suggestions?

Uli
 

Richard Thomas

Ulrich Eckhardt wrote:

[original question snipped]

You could have a parameter to the test method and some custom
TestLoader that knows what to do with it. See http://docs.python.org/library/unittest.html.
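
Something along these lines, perhaps (an untested sketch, not from the
original post; the PARAMS table and the check() helper are assumed names,
not part of unittest):

  import unittest

  class ParamLoader(unittest.TestLoader):
      """Loader that expands a class-level PARAMS table into one test each."""
      def loadTestsFromTestCase(self, testCaseClass):
          for index, args in enumerate(getattr(testCaseClass, 'PARAMS', ())):
              # bind args as a default argument so each test keeps its own row
              def test(self, args=args):
                  self.check(*args)
              test.__doc__ = "parameterized case %d: %r" % (index, args)
              setattr(testCaseClass, 'test_param_%d' % index, test)
          return unittest.TestLoader.loadTestsFromTestCase(self, testCaseClass)

  # usage: unittest.main(testLoader=ParamLoader())
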
I would venture that unit tests are verbose by their very nature; they
are 100% redundant. The usual argument against unnecessary redundancy,
that of ease of maintenance, really doesn't apply to unit tests.
Anyway, good luck with your efforts.

Chard.
 

Roy Smith

Ulrich Eckhardt said:
  def test_invert_flags(self):
      """test flags to invert coordinates"""
      tests = [((10, 20), INVERT_NONE, (10, 20)),
               ((10, 20), INVERT_X, (-10, 20)),
               ((10, 20), INVERT_Y, (10, -20))]
      for input, flags, expected in tests:
          res = do_invert(input, flags)
          self.assertEqual(res, expected,
                           "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the results. The ugly thing is that this aborts the
whole test if one of the checks in the loop fails. So, my question is: how
do I avoid this?

Writing one test method per parameter combination, as you suggested, is
a reasonable approach, especially if the number of combinations is
reasonably small. Another might be to make your loop:

  failCount = 0
  for input, flags, expected in tests:
      res = do_invert(input, flags)
      if res != expected:
          print "%s caused wrong results" % (flags,)
          failCount += 1
  self.assertEqual(failCount, 0, "%d of them failed" % failCount)
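
A variant of the same idea (a sketch, not from the original post) collects
the messages instead of counting, so the final assertion reports every
failing combination at once:

  failures = []
  for input, flags, expected in tests:
      res = do_invert(input, flags)
      if res != expected:
          failures.append("%s: got %s, expected %s" % (flags, res, expected))
  self.assertEqual([], failures, "failing cases: %s" % "; ".join(failures))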

Yet another possibility is to leave it the way you originally wrote it
and not worry about the fact that the loop aborts on the first failure.
Let it fail, fix it, then re-run the test to find the next failure.
Perhaps not as efficient as finding them all at once, but you're going
to fix them one at a time anyway, so what does it matter? It may also
turn out that all the failures are due to a single bug, so fixing one
fixes them all.
 

Ulrich Eckhardt

Roy said:
Writing one test method per parameter combination, as you suggested, is
a reasonable approach, especially if the number of combinations is
reasonably small.

The number of parameters, and thus combinations, is unfortunately rather
large. Also, sometimes the data is not static but computed in a loop. There
are a few optimised computations where I compute the expected result with
the slow but simple version; in those cases I want to check a whole range
of inputs using a loop.
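
For example, something like this (a sketch with made-up names; slow_invert
is the hypothetical straightforward reference, and the INVERT_* flags are
assumed to be bitmasks):

  def slow_invert(point, flags):
      """Slow but obviously correct reference implementation."""
      x, y = point
      if flags & INVERT_X:
          x = -x
      if flags & INVERT_Y:
          y = -y
      return (x, y)

  def test_invert_matches_reference(self):
      """optimised do_invert agrees with the reference on a range of inputs"""
      for x in range(-5, 6):
          for y in range(-5, 6):
              for flags in (INVERT_NONE, INVERT_X, INVERT_Y):
                  self.assertEqual(do_invert((x, y), flags),
                                   slow_invert((x, y), flags))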

I'm wondering, classes aren't as static as I'm still used to from C++, so
creating the test functions dynamically with a loop outside the class
declaration should be another possibility...
Roy also said:

Yet another possibility is to leave it the way you originally wrote it
and not worry about the fact that the loop aborts on the first failure.
Let it fail, fix it, then re-run the test to find the next failure.
Perhaps not as efficient as finding them all at once, but you're going
to fix them one at a time anyway, so what does it matter?

Imagine all tests that use INVERT_X fail, all others pass. What would your
educated guess be where the code is wrong? ;)

Thanks Roy!

Uli
 

Ulrich Eckhardt

Richard Thomas wrote:
[batch-programming different unit tests]
You could have a parameter to the test method and some custom
TestLoader that knows what to do with it.

Interesting, thanks for this suggestion, I'll look into it!

Uli
 

Roy Smith

Ulrich Eckhardt said:

Imagine all tests that use INVERT_X fail, all others pass. What would your
educated guess be where the code is wrong? ;)

Well, let me leave you with one last thought. There are really two kinds
of tests: acceptance tests and diagnostic tests.

I tend to write acceptance tests first. The idea is that if all the
tests pass, I know my code works. When some test fails, that's when I
start digging deeper and writing diagnostic tests, to help me figure out
what went wrong.

The worst test is a test which is never written because it's too hard to
write. If it's easy to write a bunch of tests which verify correct
operation but don't give a lot of clues about what went wrong, it might
be worth doing that first and seeing what happens. If some of the tests
fail, then invest the time to write more detailed tests which give you
more information about each failure.
 

Ian Kelly

Ulrich Eckhardt said:

Let's say I have two flags, invert X and invert Y. Now, for testing these, I
would write one test for each combination. What I have in the test case is
something like this:

  def test_invert_flags(self):
      """test flags to invert coordinates"""
      tests = [((10, 20), INVERT_NONE, (10, 20)),
               ((10, 20), INVERT_X, (-10, 20)),
               ((10, 20), INVERT_Y, (10, -20))]
      for input, flags, expected in tests:
          res = do_invert(input, flags)
          self.assertEqual(res, expected,
                           "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the results. The ugly thing is that this aborts the
whole test if one of the checks in the loop fails. So, my question is: how
do I avoid this?

I know that I could write a common test function instead:

  def _test_invert_flags(self, input, flags, expected):
      res = do_invert(input, flags)
      self.assertEqual(res, expected)

  def test_invert_flags_non(self):
      """test not inverting coordinates"""
      self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

  def test_invert_flags_x(self):
      """test inverting X coordinates"""
      self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

  def test_invert_flags_y(self):
      """test inverting Y coordinates"""
      self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

What I don't like here is that this is unnecessarily verbose and that it
basically repeats information.

The above code looks perfectly fine to me for testing. I think the
question you should ask yourself is whether the different combinations
you are testing represent tests of distinct behaviors, or tests of the
same behavior on a variety of data. In the former case, as in the
sample code you posted, these should probably be separate tests
anyway, so that you can easily see that, say, both INVERT_X and
INVERT_BOTH are failing but INVERT_Y is not, which may be valuable
diagnostic data.

On the other hand, if your test is trying the INVERT_X behavior on nine
different points, you probably don't need or want to see every
individual point that fails. It's enough to know that INVERT_X is
failing and to have a sample point where it fails. In that case I would
say just run them in a loop and don't worry that it might exit early.
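
For that style, something like this is enough (a sketch; the point list is
made up), with the assertion message carrying the sample point that failed:

  def test_invert_x_points(self):
      """test INVERT_X on a range of points"""
      for x, y in [(1, 2), (30, 40), (-5, 6)]:
          self.assertEqual(do_invert((x, y), INVERT_X), (-x, y),
                           "INVERT_X failed for %s" % ((x, y),))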
Ulrich Eckhardt said:

Also, I'd rather construct the error message
from the data instead of maintaining it in different places, because
manually keeping those in sync is another error-prone burden.

I'm not sure I follow the problem you're describing. If the factored
out workhorse function receives the data to test, what prevents it from
constructing an error message from that data?

Cheers,
Ian
 

Ulrich Eckhardt

Ian said:
I'm not sure I follow the problem you're describing. If the factored
out workhorse function receives the data to test, what prevents it from
constructing an error message from that data?

Sorry, that was an imprecise description of what I want. If you define a
test function and run the tests with "-v", the framework prints the first
line of that function's docstring followed by ok/FAIL/ERROR, which is much
friendlier to the reader than the exception dump afterwards. Using multiple
very similar functions requires equally similar docstrings that repeat
themselves. I'd prefer creating these from the input data.
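
A tiny self-contained demo of that behaviour (verbosity=2 is what "-v"
sets; the keyword needs Python 2.7 or later):

  import unittest

  class DocDemo(unittest.TestCase):
      def test_with_doc(self):
          """testing invert flags INVERT_X"""
          self.assertEqual(1 + 1, 2)

  if __name__ == '__main__':
      unittest.main(verbosity=2)
  # output: testing invert flags INVERT_X ... ok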

Thanks for your suggestion, Ian!

Uli
 

Jonathan Hartley

Ulrich Eckhardt wrote:

[original question snipped]


The following is a bit ghastly, I'm not sure I'd recommend it, but if
you are determined, you could try dynamically adding test methods to
the test class. The following is untested - I suspect I have made a
schoolboy error in attempting to make methods out of functions - but
something like it might work:


import unittest

class MyTestClass(unittest.TestCase):
    pass

testdata = [
    (INPUTS, EXPECTED),
    (INPUTS, EXPECTED),
    (INPUTS, EXPECTED),
]

for index, (input, expected) in enumerate(testdata):
    # the following sets an attribute on MyTestClass;
    # the names of the attributes are 'test_0', 'test_1', etc.,
    # and the value of each is a test method that performs the assert.
    # input and expected are bound as default arguments, since a plain
    # closure would only see the values of the last loop iteration
    setattr(
        MyTestClass,
        'test_%d' % (index,),
        lambda s, input=input, expected=expected:
            s.assertEqual(METHOD_UNDER_TEST(*input), expected)
    )
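
(If the binding works as intended, the standard loader should pick the
generated tests up as usual, e.g. via unittest.main() at the bottom of the
module.)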
 

Ulrich Eckhardt

Short update on what I've settled on for generating test functions for
various input data:

# test case with common test function
class MyTest(unittest.TestCase):
    def _test_invert_flags(self, input, flags, expected):
        res = do_invert(input, flags)
        self.assertEqual(res, expected)

# test definitions for the various invert flags
tests = [((10, 20), INVERT_NONE, (10, 20)),
         ((10, 20), INVERT_X, (-10, 20)),
         ((10, 20), INVERT_Y, (10, -20))]

# add a test to the test case class for each entry; the loop values are
# bound as default arguments so each generated test keeps its own data
for input, flags, expected in tests:
    def test(self, input=input, flags=flags, expected=expected):
        self._test_invert_flags(input, flags, expected)
    test.__doc__ = "testing invert flags %s" % flags
    setattr(MyTest, "test_invert_flags_%s" % flags, test)


Yes, the names of the test functions would clash if I tested the same flags
twice; in the real code that doesn't happen (enumerate is my friend!).
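
For completeness, the enumerate variant looks something like this (same
binding trick, with index-based names that cannot clash):

for index, (input, flags, expected) in enumerate(tests):
    def test(self, input=input, flags=flags, expected=expected):
        self._test_invert_flags(input, flags, expected)
    test.__doc__ = "testing invert flags %s (case %d)" % (flags, index)
    setattr(MyTest, "test_invert_flags_%d" % index, test)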

Thanks all!

Uli
 
