Which mock library do you prefer?

Lacrima

Hello!

I am a newbie mastering test-driven development. I can't decide
which mock library to use.
There are a number of them; which one do you prefer?

Two libraries that attracted my attention are:
* minimock
* dingus
To me the latter, dingus, is the easiest (see this screencast:
), but it has very few downloads on PyPI,
so it scares me a little.
Minimock has wider usage and a larger community, but I have some
trouble using it. Maybe I am wrong, but with minimock you always have
to keep track of the order of imports in your test modules. Well,
maybe I just don't fully understand how minimock works.

What are your suggestions?
 
Phlip

Lacrima said:
I am a newbie mastering test-driven development. I can't decide
which mock library to use. There are a number of them; which one do
you prefer?

I have used http://pypi.python.org/pypi/mock/0.6.0. It mocks, and it
has a mode that works one method at a time, and another mode that
mocks a method before its owning object gets constructed.

However, TDD is not about mocking, and on greenfield code you should
only mock to recover from some external problem, such as:

- a random number generator
- the system clock
- anything over "the wire" - over a TCP/IP socket
- hardware, such as your graphics or sound

Never mock to avoid hitting the database. Some TDD verbiage advises
"never hit the database". That is a mind-game to force you to decouple
your code. Your objects should always have the option (via
"construction encapsulation") to run as stubs, with some of their
behaviors turned off. And if you TDD low-level code that hits a
database, a mock would only tell the test what it wants to hear. And
if you TDD high-level code that manages business rules, database
records make perfectly good behavioral "fixtures" to support those
rules.
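A minimal illustration of that "only mock the externals" list, pinning down the random number generator and the system clock with the stdlib's `unittest.mock` (`roll_with_timestamp` is an invented function for the example):

```python
import random
import time
from unittest import mock

def roll_with_timestamp():
    # Production code touching two nondeterministic externals:
    # the random number generator and the system clock.
    return random.randint(1, 6), time.time()

# In a test, replace both externals so the result is deterministic.
with mock.patch("random.randint", return_value=4), \
     mock.patch("time.time", return_value=1000.0):
    roll, stamp = roll_with_timestamp()

assert (roll, stamp) == (4, 1000.0)
```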
 
Steve Howell

Lacrima said:
To me the latter, dingus, is the easiest (see this screencast:
), but it has very few downloads on PyPI,
so it scares me a little.

I've used dingus with success. I wouldn't let the lack of downloads
be a concern; the funny name is probably scaring some people away, and
of course there are other alternatives too.
 
Lacrima


Hi, Phlip!

Thanks for your reply! Isn't what you are talking about integration
testing? Shouldn't unit tests be fully isolated? So even to test
method 'some_method()' of class A, should I mock the instance of
class A (i.e. mock 'self')?

Please could you explain your thoughts in more detail:
Your objects should always have the option (via
"construction encapsulation") to run as stubs, with some of their
behaviors turned off. And if you TDD low-level code that hits a
database, a mock would only tell the test what it wants to hear. And
if you TDD high-level code that manages business rules, database
records make perfectly good behavioral "fixtures" to support those
rules.

And could you give an example?
For me it's really hard to develop test-first. Often I don't know what
tests to write to replace hardcoded return values with objects that
perform actual work.
I have read several books on TDD and explored http://c2.com/cgi/wiki?TestDrivenDevelopment
and related wikis, but often it seems I don't have enough
understanding to write even a simple application.
And sorry for my English.

with regards,
Max.
 
Phlip

Lacrima said:
Thanks for your reply! Isn't what you are talking about integration
testing? Shouldn't unit tests be fully isolated? So even to test
method 'some_method()' of class A, should I mock the instance of
class A (i.e. mock 'self')?

"Unit test" is a high-end QA concept. Developers can get the best
return on "developer tests". They don't bother with aerospace-quality
isolation between units.

If a TDD test needs to pull in a bunch of modules to pass, that's
generally a good thing, because they all get indirect testing. If they
catch a bug, their local tests might not catch it, but the higher
level tests still have a chance.

(And if your product still needs unit tests, TDD will make them very
easy for a formal QA team to add.)

However, expensive setup is a design smell. That means if a test case
requires too many lines of code for its Assemble phase (before its
Activate and Assert phases), then maybe those lines of code support
objects that are too coupled, and they need a better design.

Throwing mocks at these objects, instead of decoupling them, will
"perfume" the design smell, instead of curing it.
And could you give an example?

  def test_frob(self):
      frob = Frob()
      frob.knob = Mock()
      frob.knob.value = Mock(return_value=42)
      assert 42 == frob.method_using_knob()

We need the mock because we can't control how Frob's constructor built
its knob. So instead, give Frob the option to construct with a Knob:

  def test_frob(self):
      knob = Knob(42)
      frob = Frob(knob)
      assert 42 == frob.method_using_knob()

Note that in production the Frob constructor never takes a Knob. Maybe
we should upgrade the production code too (!), or maybe Frob's
constructor should only create a Knob if it didn't get passed one.
Either technique is acceptable, because the resulting code decouples
Frobs and Knobs just a little bit more.
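One common way to give Frob that option is a defaulted constructor argument. This is only a sketch of the pattern Phlip describes; the `Knob(7)` production default and the `value` attribute are invented for the example:

```python
class Knob:
    def __init__(self, value):
        self.value = value

class Frob:
    def __init__(self, knob=None):
        # Build the real collaborator only when the caller didn't
        # inject one -- a test passes a canned Knob instead.
        self.knob = knob if knob is not None else Knob(7)

    def method_using_knob(self):
        return self.knob.value

assert Frob().method_using_knob() == 7            # production path
assert Frob(Knob(42)).method_using_knob() == 42   # test path, no mock needed
```

The test now controls the collaborator without reaching inside the constructed object, which is exactly the decoupling the mock was papering over.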
For me it's really hard to develop test-first. Often I don't know what
tests to write to replace hardcoded return values with objects that
perform actual work.

You have read too many books on TDD. C-:

Alternate between writing lines of test and lines of code. Run the
tests after the fewest possible edits, and always correctly predict if
the tests will pass, or will fail, and with what diagnostic. (And
configure your editor to run the stankin tests, no matter how hard it
fights you!) The high-end tricks will get easier after you get the
basic cycle down.
 
Lacrima

I'm not sure why you think you need to keep track of the order of
imports.

Simply set up the mocks as you want them, in your fixtures; then, when
tearing down your fixtures, use ‘minimock.restore()’ to restore the
affected namespaces to their initial state.
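For anyone without minimock at hand, the same mock-in-the-fixture, restore-in-the-teardown discipline looks like this with the stdlib's `unittest.mock` (the clock example is invented; minimock's own `restore()` plays the role of `patcher.stop()` here):

```python
import time
import unittest
from unittest import mock

class ClockTest(unittest.TestCase):
    def setUp(self):
        # Set up the mock in the fixture...
        self.patcher = mock.patch("time.time", return_value=42.0)
        self.patcher.start()

    def tearDown(self):
        # ...and restore the namespace afterwards, whatever the result.
        self.patcher.stop()

    def test_clock_is_mocked(self):
        self.assertEqual(time.time(), 42.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClockTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
assert time.time() != 42.0  # teardown put the real clock back
```

Because the restore happens in tearDown, no import-order bookkeeping is needed: every test starts from a clean namespace.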

--
 \       “… one of the main causes of the fall of the Roman Empire was |
  `\        that, lacking zero, they had no way to indicate successful |
_o__)                  termination of their C programs.” —Robert Firth |
Ben Finney

Hi Ben!

See these two topics:
http://groups.google.com/group/minimock-dev/browse_thread/thread/bcbb3b7cc60eb96f
http://groups.google.com/group/minimock-dev/browse_thread/thread/c41cd996735ea1a6

There are special cases, which you have to be aware of, if you use
minimock.
 
Lacrima


Hi Phlip!

Thanks for your exhaustive answer.
Actually, I'll investigate your example with 'frob'. From just reading
the example it's not clear to me what benefit I will get from this
approach.
And I have already given up writing totally isolated tests, because it
looks like a great waste of time.
 
Phlip

Thanks for your exhaustive answer.
Actually, I'll investigate your example with 'frob'. From just reading
the example it's not clear to me what benefit I will get from this
approach.

I can't get a good hit for "construction encapsulation" in Google.
(Although I got some good bad ones!)

This paper _almost_ gets the idea:
http://www.netobjectives.com/download/Code Qualities and Practices.pdf
And I have already given up writing totally isolated tests, because it
looks like a great waste of time.

Do you run your tests after the fewest possible edits? Such as 1-3
lines of code?

I'm not sure why the TDD books don't hammer that point down...
 
Phlip

It only looks like that until you chase your tail in a long, fruitless
debugging session because (you later realise) the behaviour of one test
is being affected by another. Test isolation is essential to ensure that
your tests are doing what you think they're doing.

That is runtime test isolation. It's not the same thing as "unit test
isolation". Just take care in your tearDown() to scrub your
environment.

Google "Mock abuse" from here...
 
Lacrima

This paper _almost_ gets the idea:
http://www.netobjectives.com/download/Code Qualities and Practices.pdf


Do you run your tests after the fewest possible edits? Such as 1-3
lines of code?

Hi!
I run my tests all the time (they have almost replaced the debugger in
my IDE). But there are times when I can't run the tests after just 1-3
lines of code.
For example, I am developing an application that talks to some web
service. One of the methods of a class, which implements the API for
that web service, should parse the XML response from the service. At
first, I hardcoded the returned values, so that they looked already
parsed. But further tests forced me to actually operate on sample XML
data instead of hardcoded values. So I created a sample XML file that
resembled a response from the server. And after that I couldn't just
write 1-3 lines between each test, because I needed to read() the file
and sort it out in a loop (at least 6-9 lines of code for a small XML
file). Only after that procedure could I run my tests, with the hope
that they would all pass.
Maybe it's not proper TDD, but I can't figure out how to reduce the
period between test runs in a case like the one above.
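One way to keep each edit small in a case like that is to inline the sample XML as a string fixture and grow the parsing loop a few lines at a time against it. A sketch (the response shape and all names here are invented, not Lacrima's actual web service):

```python
import xml.etree.ElementTree as ET

# The "sample xml file that resembles a response" as an in-test fixture.
SAMPLE_RESPONSE = """\
<response>
  <item id="1"><name>alpha</name></item>
  <item id="2"><name>beta</name></item>
</response>
"""

def parse_items(xml_text):
    # The read-and-sort-out loop: collect each item into plain
    # data that the rest of the code can use.
    root = ET.fromstring(xml_text)
    return [(item.get("id"), item.findtext("name"))
            for item in root.iter("item")]

assert parse_items(SAMPLE_RESPONSE) == [("1", "alpha"), ("2", "beta")]
```

With the fixture in place, each new tag the parser must handle is again a 1-3 line edit followed by a test run.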
 
Lacrima

It only looks like that until you chase your tail in a long, fruitless
debugging session because (you later realise) the behaviour of one test
is being affected by another. Test isolation is essential to ensure that
your tests are doing what you think they're doing.

--
 \       “A ‘No’ uttered from deepest conviction is better and greater |
  `\       than a ‘Yes’ merely uttered to please, or what is worse, to |
_o__)                              avoid trouble.” —Mohandas K. Gandhi |
Ben Finney

Hi!

Right, isolation is essential. But I can't decide to which extent I
should propagate isolation.
For example, in "Python Testing: Beginner's Guide" by Daniel Arbuckle,
the author suggests that if you do unit testing you should isolate the
smallest units of code from each other. For example, if you have a
class:
class SomeClass(object):
    def method1(self):
        return 5
    def method2(self):
        return self.method1() + 10

According to the book, if you want to test method2, you should isolate
it from method1 and the class instance ('self').
Other books are not so strict...

And what should I follow as newbie?

Currently, I don't create mocks of units if they are within the same
class as the unit under test. If that is not the right approach,
please explain what the best practices are... I am just learning TDD.

with regards,
Maxim
 
Phlip

Lacrima said:
I run my tests all the time (they have almost replaced the debugger in
my IDE). But there are times when I can't run the tests after just 1-3
lines of code. ....
Maybe it's not proper TDD

You are still being too literal. The "1-3 lines of code" guideline is
a guideline, not a rule. It means 1 small edit is best, 2 edits are
mostly harmless, 3 is okay, 4 is acceptable, and so on. It's the peak
of the Zipf's Law curve:

http://www.rimmkaufman.com/content/mobydick.png

You "mocked the wire" with that hardcoded XML so that your subsequent
edits can be very short and reliable. Props!
 
Mark Lawrence

Phlip said:
Please read my reply: Ben is well intentioned but completely wrong
here.

Mock abuse will not cure the runtime isolation problem.

I believe that Ben is perfectly correct, and that you are talking at
cross purposes because you've missed the significance of his
<quote>
(you later realise)
</quote>
within his post.
Runtime test isolation doesn't enter into it, from what I can see.
Can you please clarify the situation one way or the other.

TIA.

Mark Lawrence.
 
Lacrima

Lacrima said:
Right, isolation [of test cases] is essential. But I can't decide to
which extent I should propagate isolation.

You used “propagate” in a sense I don't understand there.
For example, in "Python Testing: Beginner's Guide" by Daniel Arbuckle,
author suggests that if you do unittesting you should isolate the
smallest units of code from each other.

I'm not sure what the author means, but I would say that as it stands
that advice is independent of what testing is being done. In all cases:

* Make your code units small, so each one is not doing much and is easy
  to understand.

* Make the interface of units at each level as narrow as feasible, so
  they're not brittle in the face of changes to the implementation.
For example, if you have a
class:
class SomeClass(object):
    def method1(self):
        return 5
    def method2(self):
        return self.method1() + 10
According to the book, if you want to test method2, you should isolate
it from method1 and class instance('self').

I don't really know what that means.

Remember that each test case should not be “test method1”. That is far
too broad, and in some cases too narrow. There is no one-to-one mapping
between methods and unit test cases.

Instead, each test case should test one true-or-false assertion about
the behaviour of the code. “When we start with this initial state (the
test fixture), and perform this operation, the resulting state is
that.”

It makes a lot of sense to name the test case so the assertion being
made *is* its name: not ‘test frobnicate’ with dozens of assertions, but
one ‘test_frobnicate_with_valid_spangulator_returns_true’ which makes
that assertion, and extra ones for each distinct assertion.

The failure of a unit test case should indicate *exactly* what has gone
wrong. If you want to make multiple assertions about a code unit, write
multiple test cases for that unit and name the tests accordingly.

This incidentally requires that you test something small enough that
such a true-or-false assertion is meaningful, which leads to
well-designed code with small easily-tested code units. But that's an
emergent property, not a natural law.
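Ben's naming advice, sketched with the `unittest` module he mentions (`frobnicate` and the spangulator are his invented names; the toy implementation exists only so the tests have something to exercise):

```python
import unittest

def frobnicate(spangulator):
    # Toy implementation: succeed when a spangulator is supplied.
    return spangulator is not None

class FrobnicateTest(unittest.TestCase):
    # One true-or-false assertion per test case, with the
    # assertion itself as the test's name.
    def test_frobnicate_with_valid_spangulator_returns_true(self):
        self.assertTrue(frobnicate(object()))

    def test_frobnicate_with_missing_spangulator_returns_false(self):
        self.assertFalse(frobnicate(None))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FrobnicateTest)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

When one of these fails, its name alone says exactly which assertion about the code's behaviour no longer holds.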
Currently, I don't create mocks of units if they are within the same
class with the unit under test. If that is not right approach, please,
explain what are best practices... I am just learning TDD..

In the fixture of the unit test case, create whatever test doubles are
necessary to put your code into the initial state you need for the test
case; then tear all those down whatever the result of the test case.

If you need to create great honking wads of fixtures for any test case,
that is a code smell: your code units are too tightly coupled to
persistent state, and need to be decoupled with narrow interfaces.

The Python ‘unittest’ module makes this easier by letting you define
fixtures common to many test cases (the ‘setUp’ and ‘tearDown’
interface). My rule of thumb is: if I need to make different fixtures
for some set of test cases, I write a new test case class for those
cases.

--
 \       “Following fashion and the status quo is easy. Thinking about |
  `\        your users' lives and creating something practical is much |
_o__)                                harder.” —Ryan Singer, 2008-07-09 |
Ben Finney

Hi, Ben!!!

Sorry for such a late reply!!!
Thank you very much for sharing your experience! I still have to grasp
a lot in TDD.
 
Albert van der Horst


Lacrima said:
According to the book, if you want to test method2, you should isolate
it from method1 and the class instance ('self'). Other books are not
so strict...
And what should I follow as a newbie?
Currently, I don't create mocks of units if they are within the same
class as the unit under test. If that is not the right approach,
please explain what the best practices are... I am just learning TDD.

Unit testing is a concept that goes well with functions without
side effects. If you have classes, that doesn't work so well.

For classes, use cases are the way to go.
Think about it. The whole state of an object can affect the
way a method works. So effectively for a unit test you have
to put the object in a whole special fully controlled state.
The point of an object is that it can't have just any state, but only
those attained by calling methods properly. So you may find yourself
artificially imposing states on an object with great effort, and
expecting error detection where the errors cannot in fact occur etc.
etc.

Not that coming up with good use cases is easy, but at least
they are naturally related to your objects.

Groetjes Albert
 
Terry Reedy

Pretty much any test assumes that basic things other than the tested
object work correctly. For instance, any test of method2 will assume
that '+' works correctly. The dependency graph between methods in a
class will nearly always be acyclic, so I would start with the 'leaf'
methods and work up. In the above case, test method1 first and then
method2. The dependence of the test of method2 on the correctness of
method1 is hardly worse, to me, than its dependence on the correctness
of int.__add__. It is just that the responsibility for the latter
falls on the developers, and *their* suite of tests.

Whenever any code test fails, there are two possibilities. The code
itself is buggy, or something it depends on is buggy. I see two reasons
for isolation and mock units: test resource saving (especially time) and
independent development. If you are developing ClassA and someone else
is developing ClassB, you might want to test ClassA even though it
depends on ClassB and ClassB is not ready yet. This consideration is
much less likely to apply to method2 versus method1 of a coherent class.

My current opinions.

Terry Jan Reedy
 
