Unittest testing assert*() calls rather than methods?

Discussion in 'Python' started by Tim Chase, Sep 28, 2011.

  1. Tim Chase

    Tim Chase Guest

    While I asked this on the Django list as it happened to be with
    some Django testing code, this might be a more generic Python
    question so I'll ask here too.

    When performing unittest tests, I have a number of methods of the
    form

    def test_foo(self):
        data = (
            (item1, result1),
            ... # bunch of tests for fence-post errors
        )
        for test, result in data:
            self.assertEqual(process(test), result)

    When I run my tests, I only get a tick for running the one
    test (test_foo), not for the len(data) checks that were actually
    performed. Is there a way for unittest to report the number
    of passed assertions rather than the number of test methods run?
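    For reference, here is a runnable sketch of the pattern above; the
    process() function and the data values are hypothetical stand-ins,
    not from the original post:

    ```python
    import unittest

    def process(x):
        # Hypothetical function under test: doubles its input.
        return x * 2

    class TestProcess(unittest.TestCase):
        def test_foo(self):
            data = (
                (1, 2),
                (2, 4),
                (3, 6),
            )
            for test, result in data:
                self.assertEqual(process(test), result)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestProcess)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print(result.testsRun)  # 1 -- the runner sees one test, not three checks
    ```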

    -tkc
     
    Tim Chase, Sep 28, 2011
    #1

  2. Steven D'Aprano

    Steven D'Aprano Guest

    Tim Chase wrote:

    > While I asked this on the Django list as it happened to be with
    > some Django testing code, this might be a more generic Python
    > question so I'll ask here too.
    >
    > When performing unittest tests, I have a number of methods of the
    > form
    >
    >     def test_foo(self):
    >         data = (
    >             (item1, result1),
    >             ... # bunch of tests for fence-post errors
    >         )
    >         for test, result in data:
    >             self.assertEqual(process(test), result)
    >
    > When I run my tests, I only get a tick for running the one
    > test (test_foo), not for the len(data) checks that were actually
    > performed. Is there a way for unittest to report the number
    > of passed assertions rather than the number of test methods run?


    I used to ask the same question, but then I decided that if I wanted each
    data point to get its own tick, I should bite the bullet and write an
    individual test for each.

    If you really care, you could subclass unittest.TestCase, and then cause
    each assert* method to count how often it gets called. But really, how much
    detailed info about *passed* tests do you need?
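    A minimal sketch of that subclassing idea (Python 3 syntax; the
    class and test names are hypothetical, and only assertEqual is
    wrapped here, though the same wrapping extends to the other
    assert* methods):

    ```python
    import unittest

    class CountingTestCase(unittest.TestCase):
        # Running total of assertEqual calls across all tests.
        assertions_run = 0

        def assertEqual(self, first, second, msg=None):
            CountingTestCase.assertions_run += 1
            super().assertEqual(first, second, msg=msg)

    class TestExample(CountingTestCase):
        def test_loop(self):
            for value, expected in ((1, 1), (2, 2), (3, 3)):
                self.assertEqual(value, expected)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # One test method ran, but three assertions were counted.
    print(result.testsRun, CountingTestCase.assertions_run)
    ```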

    If you are writing loops inside tests, you might find this anecdote useful:

    http://mail.python.org/pipermail/python-list/2011-April/1270640.html



    --
    Steven
     
    Steven D'Aprano, Sep 29, 2011
    #2

  3. Roy Smith

    Roy Smith Guest

    Steven D'Aprano wrote:

    > If you are writing loops inside tests, you might find this anecdote useful:
    >
    > http://mail.python.org/pipermail/python-list/2011-April/1270640.html


    On the other hand, the best test is one that gets written. I will often
    write tests that I know do not meet the usual standards of purity and
    wholesomeness. Here's a real-life example:

    for artist in artists:
        name = artist['name']
        self.assertIsInstance(name, unicode)
        name = name.lower()
        # Due to fuzzy matching, it's not strictly guaranteed that the
        # following assertion is true, but it works in this case.
        self.assertTrue(name.startswith(term), (name, term))

    Could I have written the test without the loop? Probably. Would it
    have been a better test? I guess, at some level, probably. And, of
    course, the idea of a "not strictly guaranteed" assertion is probably
    enough to make me lose my Unit Tester's Guild Secret Decoder Ring
    forever :)

    But, the test was quick and easy to write, and provides value. I could
    have spent twice as long writing a better test, and it would have
    provided a little more value, but certainly not double. More
    importantly, had I spent the extra time writing the better test, I might
    have not had enough time to write all the other tests I wrote that day.

    Sometimes good enough is good enough.
     
    Roy Smith, Sep 29, 2011
    #3
  4. Devin Jeanpierre

    > I used to ask the same question, but then I decided that if I wanted each
    > data point to get its own tick, I should bite the bullet and write an
    > individual test for each.


    Nearly the entire re module test suite is a list of tuples. If it was
    instead a bunch of TestCase classes, there'd be a lot more boilerplate
    to write. (At a bare minimum, there'd be two times as many lines, and
    all the extra lines would be identical...)

    Why is writing boilerplate for a new test a good thing? It discourages
    the authorship of tests. Make it as easy as possible by e.g. adding a
    new thing to whatever you're iterating over. This is, for example, why
    the nose test library has a decorator for generating a test suite from
    a generator.
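    A sketch of that data-driven approach using plain unittest: generate
    one test method per (input, expected) pair so each data point gets
    its own tick. The process() function and the data are hypothetical
    stand-ins:

    ```python
    import unittest

    DATA = [(1, 2), (2, 4), (3, 6)]

    def process(x):
        # Hypothetical function under test: doubles its input.
        return x * 2

    class TestProcess(unittest.TestCase):
        pass

    def make_test(test_input, expected):
        # Factory closes over one data point, yielding one test method.
        def test(self):
            self.assertEqual(process(test_input), expected)
        return test

    for i, (test_input, expected) in enumerate(DATA):
        setattr(TestProcess, 'test_process_%d' % i,
                make_test(test_input, expected))

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestProcess)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print(result.testsRun)  # 3 -- one test per data point
    ```

    (Later versions of unittest, from Python 3.4 on, added
    TestCase.subTest, which reports each data point's failure
    individually without generating separate methods.)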

    Devin

     
    Devin Jeanpierre, Sep 29, 2011
    #4
  5. Roy Smith

    Roy Smith Guest

    Ben Finney wrote:

    > Worse, if one of the scenarios causes the test to fail, the loop will
    > end and you won't get the results for the remaining scenarios.


    Which, depending on what you're doing, may or may not be important. In
    many cases, there's only two states of interest:

    1) All tests pass

    2) Anything else
     
    Roy Smith, Sep 29, 2011
    #5
  6. Eric Snow

    Eric Snow Guest

    On Wed, Sep 28, 2011 at 6:50 PM, Devin Jeanpierre wrote:
    >> I used to ask the same question, but then I decided that if I wanted each
    >> data point to get its own tick, I should bite the bullet and write an
    >> individual test for each.

    >
    > Nearly the entire re module test suite is a list of tuples. If it was
    > instead a bunch of TestCase classes, there'd be a lot more boilerplate
    > to write. (At a bare minimum, there'd be two times as many lines, and
    > all the extra lines would be identical...)
    >
    > Why is writing boilerplate for a new test a good thing? It discourages
    > the authorship of tests. Make it as easy as possible by e.g. adding a
    > new thing to whatever you're iterating over. This is, for example, why
    > the nose test library has a decorator for generating a test suite from
    > a generator.


    +1

     
    Eric Snow, Sep 29, 2011
    #6
  7. Roy Smith

    Roy Smith Guest

    Ben Finney wrote:

    > Roy Smith writes:
    >
    > > Ben Finney wrote:
    > >
    > > > Worse, if one of the scenarios causes the test to fail, the loop will
    > > > end and you won't get the results for the remaining scenarios.

    > >
    > > Which, depending on what you're doing, may or may not be important. In
    > > many cases, there's only two states of interest:
    > >
    > > 1) All tests pass
    > >
    > > 2) Anything else

    >
    > For the purpose of debugging, it's always useful to more specifically
    > narrow down the factors leading to failure.


    Well, sure, but "need to debug" is just a consequence of being in state
    2. If a test fails and I can't figure out why, I can always go back and
    add additional code to the test case to extract additional information.
     
    Roy Smith, Sep 29, 2011
    #7
  8. Tim Chase

    Tim Chase Guest

    On 09/28/11 19:52, Roy Smith wrote:
    > In many cases, there's only two states of interest:
    >
    > 1) All tests pass
    >
    > 2) Anything else


    Whether for better or worse, at some places (such as a previous
    employer) the number (and accretion) of test-points is a
    marketing bullet-point for upgrades & new releases.

    -tkc
     
    Tim Chase, Sep 29, 2011
    #8
  9. Roy Smith

    Roy Smith Guest

    Tim Chase wrote:

    > On 09/28/11 19:52, Roy Smith wrote:
    > > In many cases, there's only two states of interest:
    > >
    > > 1) All tests pass
    > >
    > > 2) Anything else

    >
    > Whether for better or worse, at some places (such as a previous
    > employer) the number (and accretion) of test-points is a
    > marketing bullet-point for upgrades & new releases.


    Never attribute to malice that which is adequately explained by the
    stupidity of the marketing department.
     
    Roy Smith, Sep 29, 2011
    #9
