unittests with different parameters

Discussion in 'Python' started by Ulrich Eckhardt, Nov 22, 2010.

  1. Hi!

    I'm writing tests and I'm wondering how to achieve a few things most
    elegantly with Python's unittest module.

    Let's say I have two flags invert X and invert Y. Now, for testing these, I
    would write one test for each combination. What I have in the test case is
    something like this:

    def test_invert_flags(self):
        """test flags to invert coordinates"""
        tests = [((10, 20), INVERT_NONE, (10, 20)),
                 ((10, 20), INVERT_X, (-10, 20)),
                 ((10, 20), INVERT_Y, (10, -20))]
        for input, flags, expected in tests:
            res = do_invert(input, flags)
            self.assertEqual(res, expected,
                             "%s caused wrong results" % (flags,))

    So what I do is test the function 'do_invert' for different input
    combinations and verify the result. The ugly thing is that this aborts
    the whole test if one of the checks in the loop fails. So, my question is:
    how do I avoid this?

    I know that I could write a common test function instead:

    def _test_invert_flags(self, input, flags, expected):
        res = do_invert(input, flags)
        self.assertEqual(res, expected)

    def test_invert_flags_non(self):
        """test not inverting coordinates"""
        self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

    def test_invert_flags_x(self):
        """test inverting X coordinates"""
        self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

    def test_invert_flags_y(self):
        """test inverting Y coordinates"""
        self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

    What I don't like here is that this is unnecessarily verbose and that it
    basically repeats information. Also, I'd rather construct the error message
    from the data instead of maintaining it in several places, because
    manually keeping those in sync is another error-prone burden.


    Any suggestions?

    Uli

    --
    Domino Laser GmbH
    Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
    Ulrich Eckhardt, Nov 22, 2010
    #1

  2. On Nov 22, 11:38 am, Ulrich Eckhardt <>
    wrote:
    > Hi!
    >
    > I'm writing tests and I'm wondering how to achieve a few things most
    > elegantly with Python's unittest module.
    >
    > [rest of the original post quoted in full; see #1]


    You could have a parameter to the test method and some custom
    TestLoader that knows what to do with it. See http://docs.python.org/library/unittest.html.
    I would venture that unit tests are verbose by their very nature; they
    are 100% redundant. The usual argument against unnecessary redundancy,
    that of ease of maintenance, really doesn't apply to unit tests.
    Anyway, good luck with your efforts.
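    For instance, here is a minimal sketch of that parameterised idea (the
    names `ParamCase` and `square` are made up for illustration): each
    TestCase instance carries its own parameters, and a hand-built suite
    plays the role of the custom loader.

    ```python
    import unittest

    def square(x):
        # stand-in for the real function under test
        return x * x

    class ParamCase(unittest.TestCase):
        def __init__(self, value, expected):
            # dispatch every instance to the same test method
            unittest.TestCase.__init__(self, 'runTest')
            self.value = value
            self.expected = expected

        def runTest(self):
            self.assertEqual(square(self.value), self.expected)

    # build the suite by hand instead of relying on the default loader;
    # each parameter combination becomes an independent test
    suite = unittest.TestSuite(ParamCase(v, e)
                               for v, e in [(2, 4), (3, 9), (-4, 16)])
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    ```

    Because every combination is its own test instance, one failure no
    longer aborts the rest.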

    Chard.
    Richard Thomas, Nov 22, 2010
    #2


    In article <>,
    Ulrich Eckhardt <> wrote:

    > def test_invert_flags(self):
    >     """test flags to invert coordinates"""
    >     tests = [((10, 20), INVERT_NONE, (10, 20)),
    >              ((10, 20), INVERT_X, (-10, 20)),
    >              ((10, 20), INVERT_Y, (10, -20))]
    >     for input, flags, expected in tests:
    >         res = do_invert(input, flags)
    >         self.assertEqual(res, expected,
    >                          "%s caused wrong results" % (flags,))
    >
    > So what I do is test the function 'do_invert' for different input
    > combinations and verify the result. The ugly thing is that this aborts
    > the whole test if one of the checks in the loop fails. So, my question is:
    > how do I avoid this?


    Writing one test method per parameter combination, as you suggested, is
    a reasonable approach, especially if the number of combinations is
    reasonably small. Another might be to make your loop:

    failCount = 0
    for input, flags, expected in tests:
        res = do_invert(input, flags)
        if res != expected:
            print "%s caused wrong results" % (flags,)
            failCount += 1
    self.assertEqual(failCount, 0, "%d of them failed" % failCount)

    Yet another possibility is to leave it the way you originally wrote it
    and not worry about the fact that the loop aborts on the first failure.
    Let it fail, fix it, then re-run the test to find the next failure.
    Perhaps not as efficient as finding them all at once, but you're going
    to fix them one at a time anyway, so what does it matter? It may also
    turn out that all the failures are due to a single bug, so fixing one
    fixes them all.
    Roy Smith, Nov 22, 2010
    #3
  4. Roy Smith wrote:
    > Writing one test method per parameter combination, as you suggested, is
    > a reasonable approach, especially if the number of combinations is
    > reasonably small.


    The number of parameters, and thus combinations, is unfortunately rather
    large. Also, sometimes the data is not static but computed in a loop.
    There are a few optimised computations where I compute the expected result
    with the slow but simple version; in those cases I want to check a whole
    range of inputs using a loop.

    I'm wondering, classes aren't as static as I'm still used to from C++, so
    creating the test functions dynamically with a loop outside the class
    declaration should be another possibility...
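    Indeed, Python classes stay open after the class statement ends. A toy
    sketch (the `Point` class and `invert_x` method are made up here) of
    attaching a method from outside the class body, which is what makes the
    generate-tests-in-a-loop idea possible:

    ```python
    class Point:
        # a toy class, defined normally
        def __init__(self, x, y):
            self.x, self.y = x, y

    # define a function outside the class body...
    def invert_x(self):
        return Point(-self.x, self.y)

    # ...and attach it afterwards; equivalent to
    # setattr(Point, 'invert_x', invert_x)
    Point.invert_x = invert_x

    p = Point(10, 20).invert_x()
    ```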

    > Yet another possibility is to leave it the way you originally wrote it
    > and not worry about the fact that the loop aborts on the first failure.
    > Let it fail, fix it, then re-run the test to find the next failure.
    > Perhaps not as efficient as finding them all at once, but you're going
    > to fix them one at a time anyway, so what does it matter?


    Imagine all tests that use INVERT_X fail, all others pass. What would your
    educated guess be where the code is wrong? ;)

    Thanks Roy!

    Uli

    --
    Domino Laser GmbH
    Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
    Ulrich Eckhardt, Nov 22, 2010
    #4
  5. Richard Thomas wrote:
    [batch-programming different unit tests]
    > You could have a parameter to the test method and some custom
    > TestLoader that knows what to do with it.


    Interesting, thanks for this suggestion, I'll look into it!

    Uli

    --
    Domino Laser GmbH
    Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
    Ulrich Eckhardt, Nov 22, 2010
    #5

    In article <>,
    Ulrich Eckhardt <> wrote:

    > > Yet another possibility is to leave it the way you originally wrote it
    > > and not worry about the fact that the loop aborts on the first failure.
    > > Let it fail, fix it, then re-run the test to find the next failure.
    > > Perhaps not as efficient as finding them all at once, but you're going
    > > to fix them one at a time anyway, so what does it matter?

    >
    > Imagine all tests that use INVERT_X fail, all others pass. What would your
    > educated guess be where the code is wrong? ;)


    Well, let me leave you with one last thought. There are really two kinds
    of tests -- acceptance tests and diagnostic tests.

    I tend to write acceptance tests first. The idea is that if all the
    tests pass, I know my code works. When some test fails, that's when I
    start digging deeper and writing diagnostic tests, to help me figure out
    what went wrong.

    The worst test is a test which is never written because it's too hard to
    write. If it's easy to write a bunch of tests which verify correct
    operation but don't give a lot of clues about what went wrong, it might
    be worth doing that first and seeing what happens. If some of the tests
    fail, then invest the time to write more detailed tests which give you
    more information about each failure.
    Roy Smith, Nov 22, 2010
    #6

    On 11/22/2010 4:38 AM, Ulrich Eckhardt wrote:
    > Let's say I have two flags invert X and invert Y. Now, for testing these, I
    > would write one test for each combination. What I have in the test case is
    > something like this:
    >
    > def test_invert_flags(self):
    >     """test flags to invert coordinates"""
    >     tests = [((10, 20), INVERT_NONE, (10, 20)),
    >              ((10, 20), INVERT_X, (-10, 20)),
    >              ((10, 20), INVERT_Y, (10, -20))]
    >     for input, flags, expected in tests:
    >         res = do_invert(input, flags)
    >         self.assertEqual(res, expected,
    >                          "%s caused wrong results" % (flags,))
    >
    > So what I do is test the function 'do_invert' for different input
    > combinations and verify the result. The ugly thing is that this aborts
    > the whole test if one of the checks in the loop fails. So, my question is:
    > how do I avoid this?
    >
    > I know that I could write a common test function instead:
    >
    > def _test_invert_flags(self, input, flags, expected):
    >     res = do_invert(input, flags)
    >     self.assertEqual(res, expected)
    >
    > def test_invert_flags_non(self):
    >     """test not inverting coordinates"""
    >     self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))
    >
    > def test_invert_flags_x(self):
    >     """test inverting X coordinates"""
    >     self._test_invert_flags((10, 20), INVERT_X, (-10, 20))
    >
    > def test_invert_flags_y(self):
    >     """test inverting Y coordinates"""
    >     self._test_invert_flags((10, 20), INVERT_Y, (10, -20))
    >
    > What I don't like here is that this is unnecessarily verbose and that it
    > basically repeats information.


    The above code looks perfectly fine to me for testing. I think the
    question you should ask yourself is whether the different combinations
    you are testing represent tests of distinct behaviors, or tests of the
    same behavior on a variety of data. In the former case, as in the
    sample code you posted, these should probably be separate tests
    anyway, so that you can easily see that both INVERT_X and INVERT_BOTH
    are failing but INVERT_Y is not, which may be valuable diagnostic data.

    On the other hand, if your test is trying the INVERT_X behavior on nine
    different points, you probably don't need or want to see every
    individual point that fails. It's enough to know that INVERT_X is
    failing and to have a sample point where it fails. In that case I would
    say just run them in a loop and don't worry that it might exit early.

    > Also, I'd rather construct the error message
    > from the data instead of maintaining it in different places, because
    > manually keeping those in sync is another error-prone burden.


    I'm not sure I follow the problem you're describing. If the factored
    out workhorse function receives the data to test, what prevents it from
    constructing an error message from that data?

    Cheers,
    Ian
    Ian Kelly, Nov 22, 2010
    #7
  8. Ian Kelly wrote:
    > On 11/22/2010 4:38 AM, Ulrich Eckhardt wrote:
    >> Also, I'd rather construct the error message from the data
    >> instead of maintaining it in different places, because
    >> manually keeping those in sync is another error-prone burden.

    >
    > I'm not sure I follow the problem you're describing. If the factored
    > out workhorse function receives the data to test, what prevents it from
    > constructing an error message from that data?


    Sorry, imprecise description of what I want. If you define a test function
    and run the tests with "-v", the framework prints the first line of that
    function's docstring followed by ok/FAIL/ERROR, which is much
    friendlier to the reader than the exception dump afterwards. Using multiple
    very similar functions requires equally similar docstrings that repeat
    themselves. I'd prefer creating these from the input data.

    Thanks for your suggestion, Ian!

    Uli

    --
    Domino Laser GmbH
    Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
    Ulrich Eckhardt, Nov 22, 2010
    #8
  9. On Nov 22, 11:38 am, Ulrich Eckhardt <>
    wrote:
    > Hi!
    >
    > I'm writing tests and I'm wondering how to achieve a few things most
    > elegantly with Python's unittest module.
    >
    > [rest of the original post quoted in full; see #1]



    It's a bit ghastly and I'm not sure I'd recommend it, but if you are
    determined, you could try dynamically adding test methods to the test
    class. The following is untested - I suspect I have made a schoolboy
    error in attempting to make methods out of functions - but something
    like it might work:


    class MyTestClass(unittest.TestCase):
        pass

    testdata = [
        (INPUTS, EXPECTED),
        (INPUTS, EXPECTED),
        (INPUTS, EXPECTED),
        ]

    for index, (input, expected) in enumerate(testdata):
        # the following sets an attribute on MyTestClass;
        # the names of the attributes are 'test_0', 'test_1', etc.,
        # and the value of each attribute is a test method that performs
        # the assert. Binding input/expected as default arguments freezes
        # the current loop values, avoiding the classic late-binding
        # pitfall of closures created in a loop.
        setattr(
            MyTestClass,
            'test_%d' % (index,),
            lambda s, input=input, expected=expected:
                s.assertEqual(METHOD_UNDER_TEST(*input), expected)
        )
    Jonathan Hartley, Nov 23, 2010
    #9
  10. Short update on what I've settled for generating test functions for various
    input data:

    # test case with common test function
    class MyTest(unittest.TestCase):
        def _test_invert_flags(self, input, flags, expected):
            res = do_invert(input, flags)
            self.assertEqual(res, expected)

    # test definitions for the various invert flags
    tests = [((10, 20), INVERT_NONE, (10, 20)),
             ((10, 20), INVERT_X, (-10, 20)),
             ((10, 20), INVERT_Y, (10, -20))]

    # add tests to the test case class; the default arguments bind the
    # current loop values to each generated function, so every test keeps
    # its own data instead of seeing the last iteration's values
    for input, flags, expected in tests:
        def test(self, input=input, flags=flags, expected=expected):
            self._test_invert_flags(input, flags, expected)
        test.__doc__ = "testing invert flags %s" % flags
        setattr(MyTest, "test_invert_flags_%s" % flags, test)


    Yes, the names of the test functions would clash if I tested the same flags
    twice, in the real code that doesn't happen (enumerate is my friend!).
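    For completeness, here is a self-contained, runnable version of this
    pattern. The toy `do_invert` and the string flag values stand in for the
    real implementation, which isn't shown in the thread:

    ```python
    import unittest

    # toy flag values standing in for the real constants
    INVERT_NONE, INVERT_X, INVERT_Y = 'none', 'x', 'y'

    def do_invert(point, flags):
        # stand-in implementation of the function under test
        x, y = point
        if flags == INVERT_X:
            x = -x
        if flags == INVERT_Y:
            y = -y
        return (x, y)

    # test case with common (non-test) check function
    class MyTest(unittest.TestCase):
        def _check_invert(self, point, flags, expected):
            self.assertEqual(do_invert(point, flags), expected)

    tests = [((10, 20), INVERT_NONE, (10, 20)),
             ((10, 20), INVERT_X, (-10, 20)),
             ((10, 20), INVERT_Y, (10, -20))]

    # generate one named, documented test method per data row;
    # default arguments freeze the loop values for each function
    for point, flags, expected in tests:
        def test(self, point=point, flags=flags, expected=expected):
            self._check_invert(point, flags, expected)
        test.__doc__ = "testing invert flags %s" % flags
        setattr(MyTest, "test_invert_flags_%s" % flags, test)

    result = unittest.TextTestRunner(verbosity=0).run(
        unittest.defaultTestLoader.loadTestsFromTestCase(MyTest))
    ```

    Run with "-v", each generated method shows up under its own name with
    its generated docstring, and a failure in one doesn't stop the others.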

    Thanks all!

    Uli

    --
    Domino Laser GmbH
    Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
    Ulrich Eckhardt, Nov 24, 2010
    #10
