unit testing advice


Shadowfirebird

Forgive me if this is a stupid question.

(Actually, I know it's a stupid question -- they're the only ones
worth asking...)

* How do you unit test a method whose job is to interface with the
outside world? For example, a method that outputs to a file? *

I've done a lot of coding, but in a dinosaur language -- automated
unit tests are completely new to me. I understand how to use
test/unit; but it seems to me that that's only half the story. I need
some suggestions on how to design my code in such a way that it can be
tested. Suggestions, anyone?

Shadowfirebird
 

Gregory Brown


Usually, mock objects are best for this. There exist some special
purpose ones, but it is relatively straightforward to put one together
using something like Mocha or Flexmock, both available on RubyForge.
If you are using RSpec, support for mock objects is built in.

-greg
 

Shadowfirebird

Many thanks. Clearly I'm going to have to go back to research mode on
this one.

I've found this lovely bit of example code for mocha, though -- see
below. If I understand you correctly, you seem to be saying that you
*don't* test the output routine; you use a mock to fake it so that you
can test the rest of the code?

That would imply that you farm out the difficult bits of the output
routine into other methods in the same way that you would with
functional programming -- ideally, leaving the output routine as
something that is so simple it doesn't need testing?


class Enterprise
  def initialize(dilithium); @dilithium = dilithium; end

  def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end
end


require 'test/unit'
require 'rubygems'
require 'mocha'

class EnterpriseTest < Test::Unit::TestCase

  def test_should_boldly_go
    dilithium = mock()
    # expectation is auto-verified at the end of the test
    dilithium.expects(:nuke).with(:anti_matter).at_least_once
    enterprise = Enterprise.new(dilithium)
    enterprise.go(2)
  end

end
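One way to act on that instinct of farming out the difficult bits is to hand the output routine its IO object instead of letting it open the file itself. This is just a sketch of the idea; `ReportWriter` is a made-up name for illustration:

```ruby
require 'stringio'

# Writes report lines to whatever IO-like object it is given.
# Production code can pass an open File; a test can pass a StringIO.
class ReportWriter
  def initialize(io)
    @io = io
  end

  def write_line(text)
    @io.puts(text)
  end
end

# In a test, capture the output in memory and inspect it:
buffer = StringIO.new
ReportWriter.new(buffer).write_line("hello")
puts buffer.string.inspect  # => "hello\n"
```

The output routine is now so simple there's almost nothing left to test in it; the test exercises the formatting logic without ever touching the filesystem.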





--
Me, I imagine places that I have never seen / The colored lights in
fountains, blue and green / And I imagine places that I will never go
/ Behind these clouds that hang here dark and low
But it's there when I'm holding you / There when I'm sleeping too /
There when there's nothing left of me / Hanging out behind the
burned-out factories / Out of reach but leading me / Into the
beautiful sea
 

Gregory Brown

Many thanks. Clearly I'm going to have to go back to research mode on
this one.

I've found this lovely bit of example code for mocha, though -- see
below. If I understand you correctly, you seem to be saying that you
*don't* test the output routine; you use a mock to fake it so that you
can test the rest of the code?

Mocks check that certain calls are actually made, with constraints
on how they are called and how they respond.
So if you have a method like load_text_file(name) that appends a .txt
to the end of the filename and then reads the file, like so:

def load_text_file(name)
  File.read("#{name}.txt")
end

You'd want to do a mock like:

# Set up the expectation
File.expects(:read).with("foo.txt")
load_text_file "foo"

The point here is that this tests your code without unnecessarily
verifying that File.read() works.

Of course, be sure that your mocks reflect reality when you design
them, as it's possible to build nonsense mocks that lead you astray.
However, as soon as you really try to use your code, you'd notice that
and be able to fix it in your tests...
That would imply that you farm out the difficult bits of the output
routine into other methods in the same way that you would with
functional programming -- ideally, leaving the output routine as
something that is so simple it doesn't need testing?

Essentially the idea behind mocking is that you replace an external
resource with an object that handles the sorts of messages that form
the interface between your external resource and your code. This
should behave in the same way you'd expect your real resource to
behave given the way that you are using it, so that you can test
things like exception handling and also test your code that wraps and
invokes these resources.

From this point of view, the assumption is that you can rely on things
like file handles, database connections, or other external resources
to work as expected, so you don't need to actually test them directly.
What you do need to test is the interaction between your code and
these resources, and for this purpose, a suitable mock object that
verifies these things works great.

And yes, testing this way does encourage you to make your wrappers of
external resources clean and easy to work with, which is a side
benefit.

-greg
 

Robert Dober

def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end
I thought that antimatter consumption was exponential to warp speed!
R.
 

Frederick Cheung


The following article is well worth reading: http://martinfowler.com/articles/mocksArentStubs.html

Fred
 

Phlip

Shadowfirebird said:
I've found this lovely bit of example code for mocha, though -- see
below. If I understand you correctly, you seem to be saying that you
*don't* test the output routine; you use a mock to fake it so that you
can test the rest of the code?

Consider a test T that calls A, which calls B.

Sometimes, a B is cheap to assemble, so T can assemble B, activate A, and test
that A did its thing. T considers B's behavior just a side-effect.

Most of the time, B should be real. And its state should be static (such as a
Rails "fixture" with static data). B should not have runaway dependencies. The
test T should be easy to set up.

You should mock B if it's too expensive to set up. For example, if B reads the
system clock, and if T would prefer the date is 2008 July 31, the test T should
not wait an infinite amount of time, until the cosmos oscillates and 2008 July
31 occurs again. Tests should run as fast as possible, to avoid any hesitation
running them.

You should mock B, so it behaves as if the date is correct. Or you should mock
(in Ruby) Time.now, so all Ruby methods that use the date will read the mocked date.
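Without a mocking library, you can stub Time.now by hand for the duration of a block; this is a rough sketch of what Mocha's stubbing does for you, and `summer_sale?` is an invented example of code under test:

```ruby
# Temporarily replace Time.now with a fixed value, restoring it afterwards.
def with_frozen_time(frozen)
  singleton = class << Time; self; end
  singleton.send(:alias_method, :original_now, :now)
  singleton.send(:define_method, :now) { frozen }
  yield
ensure
  singleton.send(:alias_method, :now, :original_now)
  singleton.send(:remove_method, :original_now)
end

# Invented code under test: its behavior depends on the current date.
def summer_sale?
  Time.now.month == 7
end

with_frozen_time(Time.local(2008, 7, 31)) do
  puts summer_sale?  # => true, without waiting for July to come around
end
```

The `ensure` clause is what keeps a stub like this from leaking into other tests; mocking libraries take care of that teardown automatically.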

Other examples of things too expensive to directly test:

- live users
- random numbers
- hardware - networks, robots, tape drives, the clock, etc
- system errors

If your B object is not on the list, you should not mock it. Unit tests work
best when they cross-test everything. The only thing better than a test that
fails because A broke is many tests that all accurately fail because B broke. If
your B is too expensive to assemble, you should refactor it, so it bypasses the
behavior that T and A did not care about.

class Enterprise
  def initialize(dilithium); @dilithium = dilithium; end

  def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end
end

Very nice. And note it obeys my list, by mocking both hardware and space-time
distortions.
 

Shadowfirebird

Thanks everyone.

Now I need to go meditate to get my head around the idea of designing
methods to be testable. It's quite a shift.

Shadowfirebird.





 

Phlip

Shadowfirebird said:
Now I need to go meditate to get my head around the idea of designing
methods to be testable. It's quite a shift.

It's easy if you write the tests first. Get them to fail for the right reason,
then write code to pass them.
 

David A. Black

Hi --

Consider a test T that calls A, which calls B.

Sometimes, a B is cheap to assemble, so T can assemble B, activate A, and
test that A did its thing. T considers B's behavior just a side-effect.

Most of the time, B should be real. And its state should be static (such as a
Rails "fixture", with static data.) B should not have runaway dependencies.
The test T should be easy to set-up.

You should mock B if it's too expensive to set up.

Mocking is also good for pinpointing exactly what you want to test and
what you don't, even if not mocking wouldn't be that expensive in
resources. For example, there's the classic Rails create method, which
goes like this:

if new record is saved successfully
  go do something
else
  do something else
end

In this case, you can certainly do a real saving of the object. But if
you want to test *only* the conditional logic, and make sure that the
controller takes the right branch given a record whose answer to "Did
you save?" is always "Yes" or always "No", then you can use a mock
object.
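A hand-rolled sketch of that idea: the record double answers "Did you save?" with a canned value, so only the branching logic is exercised. All names here are invented for illustration, not real Rails API:

```ruby
# A record double whose answer to "did you save?" is decided in advance.
class RecordStub
  def initialize(saved)
    @saved = saved
  end

  def save
    @saved
  end
end

# Sketch of the classic create action's conditional.
def create(record)
  if record.save
    :redirect_to_show
  else
    :render_new
  end
end

puts create(RecordStub.new(true))   # takes the success branch
puts create(RecordStub.new(false))  # takes the failure branch
```

Both branches can now be tested without a database, which is the point: the save itself is someone else's responsibility.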
For example, if B reads the system clock, and if T would prefer the
date is 2008 July 31, the test T should not wait an infinite amount
of time, until the cosmos oscillates and 2008 July 31 occurs again.
Tests should run as fast as possible, to avoid any hesitation
running them.

You should mock B, so it behaves as if the date is correct. Or you should
mock (in Ruby) Time.now, so all Ruby methods that use the date will read the
mocked date.

Other examples of things too expensive to directly test:

- live users
- random numbers
- hardware - networks, robots, tape drives, the clock, etc
- system errors

If your B object is not on the list, you should not mock it.

I wouldn't narrow it down that strictly. It can depend on the purpose
of the test, as well as the profile of the thing you're mocking.


David
 

Phlip

David said:
I wouldn't narrow it down that strictly. It can depend on the purpose
of the test, as well as the profile of the thing you're mocking.

You can also avoid mocking the clock by setting a time to 2.minutes.ago, for
example.

(And "hardware" covers "profile". We don't care if B takes a trillion clock
cycles, on a magic CPU that can run them all instantly.)

However, some teams go mock-crazy (even those subjected to high-end
consultants), and mock everything for no reason. Don't do that!
 

David A. Black

Hi --

You can also avoid mocking the clock by setting a time to 2.minutes.ago, for
example.

I've had the experience, as have others I imagine, of putting a
future date in a fixture and then, six months later, wondering why my
test wasn't passing... so I'm all for "ago" and friends :)
(And "hardware" covers "profile". We don't care if B takes a trillion clock
cycles, on a magic CPU that can run them all instantly.)

However, some teams go mock-crazy (even those subjected to high-end
consultants), and mock everything for no reason. Don't do that!

It's all about doing it for a reason; I'm just adding to the list.


David
 

Shadowfirebird

I certainly "get" the idea that it's better to write the tests first.
Unfortunately my dinosaur brain needed to write some code to prove
that my class model was workable...!

What I'm seeing quite clearly now is that the next best thing is
writing the tests while *pretending* that you haven't written the code
yet.

Because the tests should be based on what the code is *supposed* to
do, not what it does. Look at the code when you want to know what it
does; but when you want to know what it's supposed to do, look inside
your head.

I suppose you are all going to say that the tests should show what the
code is supposed to do? I really like that idea...


Hi --



I've had the experience, as have others I imagine, of putting a
future date in a fixture and then, six months later, wondering why my
test wasn't passing... so I'm all for "ago" and friends :)


It's all about doing it for a reason; I'm just adding to the list.


David

--
Rails training from David A. Black and Ruby Power and Light:
* Advancing With Rails August 18-21 Edison, NJ
* Co-taught by D.A. Black and Erik Kastner
See http://www.rubypal.com for details and updates!



 

David A. Black

Hi --

I certainly "get" the idea that it's better to write the tests first.
Unfortunately my dinosaur brain needed to write some code to prove
that my class model was workable...!

I often have to bootstrap myself into an application by getting
something running before I can get my brain into test mode. Certainly
with something like Rails, there's a lot to do before any tests are
written, since you can't write unit tests before you know what your
models are (and generating the model so conveniently writes the test
file for you :)

One thing to remember is that TDD is about development, and that not
all instances of entering code on a keyboard are development. And
we've all, I believe, learned an absolute ton from exploration and
experimentation that wasn't part of the process of ongoing application
development. It's important not to feel like you're being
unprofessional or sloppy if you happen to want to try out some code in
a file, or in irb, and you don't write a test for it.

In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no -- and if that's the
case, it means that there is no evidence for the position that it's
always, automatically bad to write code without a test. That's not
meant to be a counterargument to the idea that testing is important
and a good practice. I do wonder whether things get a bit too
doctrinaire at times, though.
What I'm seeing quite clearly now, is that the next best thing is
writing the tests while *pretending* that you haven't written the code
yet.

Because the tests should be based on what the code is *supposed* to
do, not what it does. Look at the code when you want to know what it
does; but when you want to know what it's supposed to do, look inside
your head.

I suppose you are all going to say that the tests should show what the
code is supposed to do? I really like that idea...

I don't think you have to decide it's one thing or the other. If you
can write tests after you write code, *and* if you feel certain that
the code is as good as it would have been if you had written the tests
first (not all first, but iteratively), that's fine. It's that second
clause that's the issue, though. A lot of people find that writing
tests first helps them get in the "zone" of thinking about *exactly*
what their code is supposed to do, in a very focused way. So it
becomes part of the process of writing the program, not just a way to
put protective armour and/or documentation around it later.


David
 

Phlip

David said:
In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no -- and if that's the
case, it means that there is no evidence for the position that it's
always, automatically bad to write code without a test.

Nobody holds such a position. But when your boss is breathing down the back of
your neck waiting for you to fix a bug (one that someone wrote by avoiding
tests), and you are "wasting time" trying to write a test case, instead of just
fixing it, your boss might need a little help understanding how tests could have
avoided the situation...
 

David A. Black

Nobody holds such a position.

I hope you're right, but I wonder sometimes.
But when your boss is breathing down the back
of your neck waiting for you to fix a bug (one that someone wrote by avoiding
tests), and you are "wasting time" trying to write a test case, instead of
just fixing it, your boss might need a little help understanding how tests
could have avoided the situation...

I agree; in general, it's important (and difficult, in some
situations) to see testing as part of the process and not something
grafted onto it or digressive from it.


David
 

James Britt

David said:
Hi --



I often have to bootstrap myself into an application by getting
something running before I can get my brain into test mode. Certainly
with something like Rails, there's a lot to do before any tests are
written, since you can't write unit tests before you know what your
models are (and generating the model so conveniently writes the test
file for you :)

Isn't the idea of TDD and BDD that you discover what classes and methods
are needed by writing the tests first? So there *should* be no model in
place prior to writing the test; the initial failure of the test is what
drives the creation of the model.

One thing to remember is that TDD is about development, and that not
all instances of entering code on a keyboard are development. And
we've all, I believe, learned an absolute ton from exploration and
experimentation that wasn't part of the process of ongoing application
development. It's important not to feel like you're being
unprofessional or sloppy if you happen to want to try out some code in
a file, or in irb, and you don't write a test for it.

In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no -- and if that's the
case, it means that there is no evidence for the position that it's
always, automatically bad to write code without a test.


Has anyone ever seen a Learn To Program or Learn Language Blub book that
did TDD? I doubt such a thing exists. Instead, people are shown code,
encouraged to write code, then (in so many words) told that what they
were shown and told is not the right way to code. On the other hand,
having a unit test for the 1-liner helloworld.rb seems massively goofy.

A more useful view is that TDD and company are for code you plan to
keep and maintain, and if you find supposedly transitory code lingering
a bit too long you need to consider re-writing via TDD or retro-fitting
tests. (And even that should be tempered by the size of the code in
question. )
That's not
meant to be a counterargument to the idea that testing is important
and a good practice. I do wonder whether things get a bit too
doctrinaire at times, though.

What I've often heard is that folks (such as myself) will hack out a
running version of something as a means of exploratory coding; it's like
doing sketches prior to starting The Big Fresco. Then (so some claim),
that code is chucked and actual TDD begins: tests, failure, code to make
the tests pass.

In reality I think most exploratory coders will salvage the nicer parts
of the code and tack on unit tests, then move forward with TDD. There
may be some bias at play, where the time invested in creating that
initial code colors how one determines its quality.
I don't think you have to decide it's one thing or the other. If you
can write tests after you write code, *and* if you feel certain that
the code is as good as it would have been if you had written the tests
first (not all first, but iteratively), that's fine. It's that second
clause that's the issue, though. A lot of people find that writing
tests first helps them get in the "zone" of thinking about *exactly*
what their code is supposed to do, in a very focused way. So it
becomes part of the process of writing the program, not just a way to
put protective armour and/or documentation around it later.

That sounds about right. Testing is not the goal; robust, accurate,
maintainable code is the goal.

--
James Britt

www.happycamperstudios.com - Wicked Cool Coding
www.jamesbritt.com - Playing with Better Toys
www.ruby-doc.org - Ruby Help & Documentation
www.rubystuff.com - The Ruby Store for Ruby Stuff
 

Phlip

James said:
Isn't the idea of TDD and BDD that you discover what classes and methods
are needed by writing the tests first? So there *should* be no model in
place prior to writing the test; the initial failure of the test is what
drives the creation of the model.

When "greenfield" coding, one way to design is to force an unpredictable design
to emerge via TDD. That's the high-end rationale for TDD. You can also sketch a
design, then see if you can write the right tests to force it to emerge.

When writing new code that addresses some existing library or module, you often
do this:

def test_learn_foo
  foo = assemble_foo
  result = foo.activate
  p result
end

Now you noodle around inside foo.activate - essentially learning what it can do.
Then you pin down your research with assertions. It's all good.
 

David A. Black

Hi --

Isn't the idea of TDD and BDD that you discover what classes and methods
are needed by writing the tests first? So there *should* be no model in
place prior to writing the test; the initial failure of the test is what
drives the creation of the model.

That may be the ideal for some people, but not for me when I'm
modeling a domain and especially when I'm working out a database
schema. I don't consider unit tests to have superseded the other tools
that support those activities, including blank pieces of paper and
index cards and so on. I guess you could write unit tests for a Rails
app before creating the models, if you mocked up the objects'
attributes, their associated objects, and so on (since there would
presumably be no database yet), and then rename your files so they
don't get clobbered when you generate the test files... but it seems
like it would be terribly arduous, with no real gain, and I don't
think I've ever seen anyone do it.
Has anyone ever seen a Learn To Program or Learn Language Blub book that did
TDD? I doubt such a thing exists. Instead, people are shown code,
encouraged to write code, then (in so many words) told that what they were
shown and told is not the right way to code. On the other hand, having a
unit test for the 1-liner helloworld.rb seems massively goofy.

The thing is, all the people who argue that testing first is the right
way to code did not themselves learn to code that way. That doesn't
prove or disprove anything, but it does make me wonder whether
teaching someone test methodology right out of the starting gate is
demonstrably the best way to go about teaching someone programming. I
tend to think it isn't, though I also think that there are good and
bad ways to introduce testing into the mix (the best way probably
being to present it as essentially what they've been doing all along,
but more structured; and the worst way being the "OK, the fun is over,
now let's get serious" stuff).


David
 

Shadowfirebird

I wish I had learned this stuff ten years ago.

For almost my entire career people have assumed that I can look at
their code and tell what it is supposed to do. I can't; no-one can,
with any accuracy. All I can do is look at code and say what it
*does*. Which is probably not the same thing, or I wouldn't be
looking at the code in the first place...

But if there are automated tests, *they* will tell you what the code
should do. And, better yet, if the programmers have been using them,
they will be accurate and up to date. They might not be complete, of
course, but two out of three is a hell of a good hit rate in this
subject.

As far as I am concerned, that's good enough to spark a religious conversion.


As for test-driven-design, I think it's a fine ideal. Mostly when I
code I know in advance where I'm going, and in that case it pays to
use TDD. But sometimes I haven't the faintest idea what I'm doing; I
don't know how many classes I need or what jobs go in what classes --
or even if the idea is at all workable. I suspect that TDD isn't of
much use in those times.

Shadowfirebird.



David A. Black said...

I started on coding sheets. We wrote tests first. We used to
"desk check" our code too. We had to; it might take a couple of days to
get the cards punched before getting one, perhaps two, compilations per
day. I'm in my forties, so this is ancient, but not prehistoric, history.

That said, I lean towards what David is saying, that it's a skill to be
learned after experimenting/playing with code. After all, we all learn
to speak before we write. And being able to write well takes a lot more
skill than simply being able to write what we say.



 
