I don't get rspec

Patrick Fernie

Is there an equivalent to autotest for rspec (autospec?)? I've googled
around, but only came across a Rails plugin, which (perhaps obviously)
is very Rails-centric.

-P

They can contain anything except NUL bytes (the extended symbol
syntax). They are turned into class and method names automatically
(we need some special formatting for that due to Test::Unit
restrictions).


test/spec has allowed them since 0.2, and I use them to group contexts
that go together. Note that nothing is inherited; it's purely
namespacing. RSpec doesn't have them, and they aren't interested in
implementing them AFAICT.

That's correct. We've really struggled with this because users want a
means of sharing context material, but we believe that nesting
contexts is going to lead to specs that are confusing to look at. We
believe that clarity supersedes DRY in specs, and so we haven't
supported this yet.

That said, you can include modules and get reuse that way:

module SomeHelperMethodsUsefulAcrossContexts
end

context "some context" do
  include SomeHelperMethodsUsefulAcrossContexts
end
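
For example, with a hypothetical helper (the names here are made up for
illustration, and the specify syntax is approximated from the RSpec of
this era):

module StackHelpers
  # Build a stack pre-loaded with the given items.
  def stack_with(*items)
    stack = Stack.new
    items.each { |item| stack.push item }
    stack
  end
end

context "a stack built by a shared helper" do
  include StackHelpers

  specify "should not be empty" do
    stack_with(1, 2, 3).should_not_be_empty
  end
end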

I'd be surprised if test/spec doesn't already support this out of the box.

Cheers,
David
 
Jim Weirich

Wayne said:
I don't look at this as "add a test to the spec/add a trivial fix to
the code" but instead as "add a test to the spec/do the simplest thing
that could possibly work to get the tests to pass", and I find
substantial value in this approach.

When I do TDD, I approach it with a Jekyll and Hyde perspective. When I
write a test, I am writing a specification of how the code should work.

When I start writing code, I become the laziest programmer you can
imagine, writing only enough code to pass the specification as it is
written at that moment. If it is not called out in the spec, it doesn't
get written.

When I switch back to test writing, I cuss out that lazy programmer
(i.e. me just a few moments ago) and write additional specs to force the
code to do exactly what I want.

The tension between Dr. Jekyll (the spec writer) and Mr. Hyde (the
programmer) causes Dr. Jekyll to write better specs and Mr. Hyde to
write lean, correct code.
 
David Chelimsky

When I do TDD, I approach it with a Jekyll and Hyde perspective. When I
write a test, I am writing a specification of how the code should work.

When I start writing code, I become the laziest programmer you can
imagine, writing only enough code to pass the specification as it is
written at that moment. If it is not called out in the spec, it doesn't
get written.

When I switch back to test writing, I cuss out that lazy programmer
(i.e. me just a few moments ago) and write additional specs to force the
code to do exactly what I want.

The tension between Dr. Jekyll (the spec writer) and Mr. Hyde (the
programmer) causes Dr. Jekyll to write better specs and Mr. Hyde to
write lean, correct code.

With me it's more like Abbott and Costello!

Seriously, thanks for describing this so well.

Cheers,
David
 
Joe Van Dyk

Yes. This has always bothered me about the zentest tool. I love
many of the tools in that package, but the zentest tool seems to
encourage bad testing habits. The video linked to earlier in the
thread explains why.

What's wrong with zentest, specifically?

Thanks,
Joe
 
James Edward Gray II

What's wrong with zentest, specifically?

You will probably understand better by watching the movie, but
zentest encourages you to make a test case for each class and a test
method for each method. BDD teaches that these decisions are pretty
arbitrary.

The true goal of testing is to test the "behaviors" of your object.
If that means interacting with five methods or just one possible call
sequence of a method that has several, that's what you should really
be testing. What do you do if you need to work with two classes at once
to test something?

This class and method organization also encourages you to use so-
so names for your tests at best. If you have a Calendar class and a
TestCalendar, that's pretty much saying the same thing twice (a
violation of DRY) and it doesn't tell you much. BDD encourages you
to actually express what you are testing with your naming. This
really helps you focus on the goal of the process and is much less
arbitrary.
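
To make the naming contrast concrete, here is a sketch (the Calendar
methods and descriptions are hypothetical):

# Test::Unit style: the test names restate the class and its methods.
class TestCalendar < Test::Unit::TestCase
  def test_add_event
    # ...
  end
end

# BDD style: the names describe the behavior being specified.
context "A calendar with no events" do
  specify "should list an added event on its date" do
    # ...
  end
end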

There you have it, in my own words. Just to be clear, the above is
my opinion through and through. I do not speak for the zentest or
BDD teams.

James Edward Gray II
 
Eric Hodel

You will probably understand better by watching the movie, but
zentest encourages you to make a test case for each class and a
test method for each method.

Which will match up 1:1 if you factor your code well.

BDD teaches that these decisions are pretty arbitrary.

I don't use BDD because I already have my tests broken up by units of
behavior. I call them methods.

BDD presents a method of thinking about test design to help you focus
on better object design and object modeling. Unit testing doesn't
have this method of thinking built-in; you have to discover it.
(Although it makes itself evident if you pay attention to how
difficult it is to test something.)

The true goal of testing is to test the "behaviors" of your
object. If that means interacting with five methods or just one
possible call sequence of a method that has several, that's what
you should really be testing.

I think there are two true goals of testing. One is to enumerate
your edge cases so each one behaves as you expect. Another is to get
feedback about how well you've designed your program. Both are
equally important.

If you want to write specifications to enumerate your edge cases,
great. However, if you've factored poorly neither BDD nor TDD will
give you small, concise tests or a small, concise implementation.

What do you do if you need to work with two classes at once to test something?

Then your classes are probably coupled too tightly.

This class and method organization also encourages you to use so-so
names for your tests at best. If you have a Calendar class and
a TestCalendar, that's pretty much saying the same thing twice (a
violation of DRY) and it doesn't tell you much. BDD encourages you
to actually express what you are testing with your naming. This
really helps you focus on the goal of the process and is much less
arbitrary.

Mapping tests to classes 1:1 is a sign of proper design. Your
implementation's names should reflect the behavior implemented
within. If you have to give different names to your tests and your
implementation you've probably named your implementation poorly.
Instead you should refactor your implementation so that it can be
named well.

Read the tutorial on the rspec site:

http://rspec.rubyforge.org/tutorials/index.html

The specifications match up with the implementation very well, so the
Stack is probably well-designed. If they didn't, that would be a good
sign of a code smell.

(You get the same code smell out of unit testing, it just smells a
little different.)
 
Pat Maddox

Which will match up 1:1 if you factor your code well.

The point is that you shouldn't try to shoehorn your tests into this
1:1 approach. You brought up the tutorial as an example of specs that
match up nicely with the code, but even that ends up with a few
contexts. If you just have a StackTest class, you won't get the same
logical separation. How many programmers are going to write an
EmptyStackTest class, a OneItemStackTest class, a FullStackTest class,
etc?
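
With contexts, that separation is nearly free. A sketch along the lines
of the tutorial (paraphrased from the Stack example, not a verbatim
excerpt, with spec syntax approximated from the RSpec of this era):

context "An empty stack" do
  setup do
    @stack = Stack.new
  end

  specify "should be empty" do
    @stack.should_be_empty
  end
end

context "A stack with one item" do
  setup do
    @stack = Stack.new
    @stack.push 1
  end

  specify "should not be empty" do
    @stack.should_not_be_empty
  end
end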

As your behavior becomes more complex - by interacting with other
objects - you need to be able to drill a bit deeper with your
specifications. I often have one complete context for an individual
method. Not many programmers will create a new test class just to
spec a method. And that doesn't mean that this method is too tightly
coupled to other objects, it just means that you need more
finely-grained specifications.

BDD presents a method of thinking about test design to help you focus
on better object design and object modeling. Unit testing doesn't
have this method of thinking built-in, you have to discover it.

The bottom line is that if you're using either Test::Unit or RSpec
most effectively, you're probably doing things nearly the same way
that you would with the other framework. The difference is that RSpec
guides you to the proper way. Unit testing doesn't have the proper
thinking built-in, as you said. The BDD guys give us a testing tool
with some of the lessons they've learned over the years built-in.
It's just a different path to the same place.

Pat
 
Eric Hodel

The point is that you shouldn't try to shoehorn your tests into this
1:1 approach.

Shoehorning is design smell, so I refactor. The 1:1 falls out
because of good design. It's not something I aim for; it's something I
refactor to.

You brought up the tutorial as an example of specs that
match up nicely with the code, but even that ends up with a few
contexts. If you just have a StackTest class, you won't get the same
logical separation. How many programmers are going to write an
EmptyStackTest class, a OneItemStackTest class, a FullStackTest class,
etc?

BDD and TDD are not 1:1 in this manner, so you've overdesigned your
unit tests.

Instead, do the simplest thing that works:

require 'rubygems'
require 'test/unit'
require 'test/zentest_assertions'

class TestStack < Test::Unit::TestCase

  def setup
    @s = Stack.new
  end

  def test_empty_eh
    assert @s.empty?
    @s.push 1
    deny @s.empty?
  end

  def test_push
    @s.push 1
    deny @s.empty?
    assert_equal 1, @s.top
  end

  def test_top
    assert_nil @s.top
    @s.push 1
    assert_equal 1, @s.top
  end

end

As your behavior becomes more complex - by interacting with other
objects - you need to be able to drill a bit deeper with your
specifications. I often have one complete context for an individual
method. Not many programmers will create a new test class just to
spec a method. And that doesn't mean that this method is too tightly
coupled to other objects, it just means that you need more
finely-grained specifications.

This is where I create new methods or new classes on the
implementation side. As the number of edge cases increases I pull out
bits that are resistant to testing by refactoring. Creating multiple
test classes to test one implementation class is design smell.
 
Devin Mullins

Eric said:
class TestStack < Test::Unit::TestCase

  def setup
    @s = Stack.new
  end

  def test_empty_eh
    assert @s.empty?
    @s.push 1
    deny @s.empty?
  end

  def test_push
    @s.push 1
    deny @s.empty?
    assert_equal 1, @s.top
  end

  def test_top
    assert_nil @s.top
    @s.push 1
    assert_equal 1, @s.top
  end

end

Each one of these test methods invokes multiple methods on Stack. You're
not testing a method, you're testing the interaction between several
methods -- i.o.w. a call sequence. Looking at the contents of each, it's
not clear that one method really is prominent -- i.e. is test_push
testing #push, #empty?, or #top?

Question -- what would you name:

def test_???
  @s.push 1
  @s.pop
  assert @s.empty?
end

or would you merge it in with test_empty_eh ?
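
(For comparison, a BDD-style name sidesteps the question by describing
the scenario rather than a method. A hypothetical spec, syntax
approximated from early RSpec:)

context "A stack that has been pushed onto and then popped" do
  setup do
    @s = Stack.new
    @s.push 1
    @s.pop
  end

  specify "should be empty" do
    @s.should_be_empty
  end
end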

Devin
 
David Chelimsky

Which will match up 1:1 if you factor your code well.

Eric,

I realize that you might have been put on the defensive here, but the
suggestion that 1:1 is the only natural conclusion of well factored
code denies a world of experience.

In my own experience, the ratio of test cases to classes usually
starts off close to 1:1, but as a project evolves it naturally moves
away from that. If you see that a class is growing, for example, it is
common to start refactoring out methods into new classes as those
classes make themselves apparent.

Some people will add new tests for the new class as a matter of
course. If you do that, then you are testing the same code in two
different tests. Multiply that over a large system and you're
increasing the time it takes your test suites to run exponentially.
Slower test suites get run less often, providing less timely feedback.

Also, 1:1 mapping means that every time you want to refactor you are
burdened with refactoring your tests. 1:1 as a policy has a tendency
to push people towards doing less refactoring and leaving code in a
state that is less well factored than they might envision.

Others are satisfied that the new class is already tested through the
classes that are using it. The risk they run is that tests get further
and further away from the code they are testing, and problems become
progressively more difficult to isolate.

Which is right? Neither 100% of the time. The skill is in finding the
right balance in each different situation and understanding how to
make good decisions about when new tests resulting from refactoring
are a good idea and when they're not.

I don't use BDD because I already have my tests broken up by units of
behavior. I call them methods.

BDD presents a method of thinking about test design to help you focus
on better object design and object modeling. Unit testing doesn't
have this method of thinking built-in; you have to discover it.

I'm not sure I understand the distinction you're trying to make here.
TDD/BDD is all about discovery.

(Although it makes itself evident if you pay attention to how
difficult it is to test something.)


I think there are two true goals of testing. One is to enumerate
your edge cases so each one behaves as you expect. Another is to get
feedback about how well you've designed your program. Both are
equally important.

Agree.

If you want to write specifications to enumerate your edge cases,
great. However, if you've factored poorly neither BDD nor TDD will
give you small, concise tests or a small, concise implementation.

Agree, though I do believe that writing the tests first helps to push
you in the right direction.

Then your classes are probably coupled too tightly.

Let's say you have a Money class that handles things like rounding,
math, conversions, etc. This is all very well tested. Now you are
building an Account class that interacts with an instance of Money.
Would you mock that out in every Account test?
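
Concretely, that kind of mocking might look like this with RSpec's mock
API (Account#deposit and its use of Money are hypothetical):

context "An Account" do
  specify "should delegate deposit arithmetic to its Money" do
    # Stand-in for the already-tested Money class.
    balance = mock("money")
    balance.should_receive(:+).with(50).and_return(mock("new balance"))
    account = Account.new(balance)
    account.deposit 50
  end
end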

Mapping tests to classes 1:1 is a sign of proper design.

Again, I disagree. 1:1 mapping is a sign that you are good at
following policy, nothing more. If I write a giant class that has 50
methods with a single test case with 50 corresponding test methods
I've met 1:1, but I still probably have a shitty design.

If I have some other, looser ratio of tests/classes and every time I
need to add features to my app it takes 10 times what I expect it to,
I probably have a shitty design.

On the flip side, if I have a design that is easy for me or any of my
colleagues to extend as business requirements change, then I have a
good design. For that to happen there is probably good test coverage,
but I really don't think that 1:1 mapping has much to do with it.

But that's just me.

Cheers,
David
 
Eric Hodel

I realize that you might have been put on the defensive here,

I don't really care if people use zentest or autotest or unit_diff or
Test::Rails or not, but I will say that you're missing out if you're
not testing.

but the suggestion that 1:1 is the only natural conclusion of well
factored code denies a world of experience.

I'm going to cut and ignore some of your points to respond to this,
which seems to be the core of your message. I think the parts I've
ignored are tangent to the central issue.

(If you want to see how I handle testing specific cases you can go
through tests in my various projects. You'll see I don't really do
1:1 testing, and some are barely tested at all!)

I'm not sure I understand the distinction you're trying to make here.
TDD/BDD is all about discovery.

I added back some context. I believe James' assertion was that
ZenTest's approach was different from BDD's approach. They are the
same, but test/unit has less philosophy standing behind it, and less
guidance about how things should be done.

Generally, implementing a BDD specification would end up as a method
for me. This happens because I use feedback from both the tests and
the implementations to improve my code.

Again, I disagree. 1:1 mapping is a sign that you are good at
following policy, nothing more. If I write a giant class that has 50
methods with a single test case with 50 corresponding test methods
I've met 1:1, but I still probably have a shitty design.

Following policy is not listening to your code, which I touched on in
the context below (and hopefully enough above). I listen to my code,
and 50 methods tells me it's screaming in pain.

If I have some other, looser ratio of tests/classes and every time I
need to add features to my app it takes 10 times what I expect it to,
I probably have a shitty design.

On the flip side, if I have a design that is easy for me or any of my
colleagues to extend as business requirements change, then I have a
good design. For that to happen there is probably good test coverage,
but I really don't think that 1:1 mapping has much to do with it.

But that's just me.

Following rules sucks, and honestly I really don't do 1:1 the way you
present it (I probably misrepresented myself due to inexact
expression). It just falls out that way for me because of my style
and the way I listen to my code and my tests.

I don't want to say, "YOU MUST TEST 1:1!" since that misses out on
half of the information you get from testing. I want to say
something like, "write your tests, listen to your code, listen to
your tests, then refactor and make it simpler. Then do it again and
again and again. Once you're in the habit you'll just have to do the
first four", but even that doesn't get what I want to convey across.

Oh, and any time your code does something unexpected, write a test.

I still feel that what you test (or specify) should translate very
closely to what you implement, unit to unit (or behavior to
behavior). For me this smells good, feels good, flows well, and is a
pleasure to work with. It took me a long time (at least three years)
to figure out what I was supposed to be doing, and I've still got
years to go to get it right. Most of it was from doing unpleasant
things and getting stuck in crappy code. (Basically, the easier it
is, the more I think I'm doing something right.)

We have these marvelous tools (be they test/unit or BDD-style) and it
takes a while to realize we're probably using only half of what they
can tell us (the tests all pass, done!) and ignoring the half that
can make code beautiful. Not because of ignorance, but because the
information is conveyed in a way that's unusual at first.

It took effort for me to look at code and say "it would be so much
cleaner if...", "it would be so much simpler if...", "I can remove
this whole class by...".

Now I can look at something and say "oh, eww, I could just do this
and this falls off, then that falls off then ...".

I don't know how to teach people to see it, but it's there, if you
look and listen.
 
David Chelimsky

I don't really care if people use zentest or autotest or unit_diff or
Test::Rails or not, but I will say that you're missing out if you're
not testing.


I'm going to cut and ignore some of your points to respond to this,
which seems to be the core of your message. I think the parts I've
ignored are tangent to the central issue.

(If you want to see how I handle testing specific cases you can go
through tests in my various projects. You'll see I don't really do
1:1 testing, and some are barely tested at all!)


I added back some context. I believe James' assertion was that
ZenTest's approach was different from BDD's approach. They are the
same, but test/unit has less philosophy standing behind it, and less
guidance about how things should be done.

Generally, implementing a BDD specification would end up as a method
for me. This happens because I use feedback from both the tests and
the implementations to improve my code.


Following policy is not listening to your code, which I touched on in
the context below (and hopefully enough above). I listen to my code,
and 50 methods tells me it's screaming in pain.


Following rules sucks, and honestly I really don't do 1:1 the way you
present it (I probably misrepresented myself due to inexact
expression). It just falls out that way for me because of my style
and the way I listen to my code and my tests.

I don't want to say, "YOU MUST TEST 1:1!" since that misses out on
half of the information you get from testing. I want to say
something like, "write your tests, listen to your code, listen to
your tests, then refactor and make it simpler. Then do it again and
again and again.

Hear, hear!

Once you're in the habit you'll just have to do the
first four", but even that doesn't get what I want to convey across.

Oh, and any time your code does something unexpected, write a test.

+1

I still feel that what you test (or specify) should translate very
closely to what you implement, unit to unit (or behavior to
behavior). For me this smells good, feels good, flows well, and is a
pleasure to work with. It took me a long time (at least three years)
to figure out what I was supposed to be doing, and I've still got
years to go to get it right. Most of it was from doing unpleasant
things and getting stuck in crappy code. (Basically, the easier it
is, the more I think I'm doing something right.)

I think that's generally true, with the addition that the "easy"
measurement becomes more meaningful over time. If a month later it's
easy for me to revisit a place in the code to add or change something,
then I must have been doing something right a month ago.

Not to belittle the flow issue. Flow, how it feels as you're doing it,
is very, very important as well. I've just learned over time that, for
me, not everything that feels right today proves itself out in a month
or later. That said, most things that "feel wrong" today end up being
wrong for a long time.

We have these marvelous tools (be they test/unit or BDD-style) and it
takes a while to realize we're probably using only half of what they
can tell us (the tests all pass, done!) and ignoring the half that
can make code beautiful. Not because of ignorance, but because the
information is conveyed in a way that's unusual at first.

It took effort for me to look at code and say "it would be so much
cleaner if...", "it would be so much simpler if...", "I can remove
this whole class by...".

Now I can look at something and say "oh, eww, I could just do this
and this falls off, then that falls off then ...".

I don't know how to teach people to see it, but it's there, if you
look and listen.

Eric,

Thanks for taking the time for such a thoughtful and thorough reply. I
was definitely reacting to two statements you had made:

"Which will match up 1:1 if you factor your code well."
"Mapping tests to classes 1:1 is a sign of proper design."

Together, these seemed to advocate a position that 1:1 is good,
anything else is bad. I can see clearly from your response that I
misread your intent, and that we are primarily in agreement.

The overriding message (which I hope I'm reading correctly this time)
with which I agree wholeheartedly is that tools are exactly that:
tools. They can make some things easier. They can make things more
fun. They can shed light on the things you're interested in lighting.

But they can't make you produce good code. Only time and experience
and diligent attention to the code itself can get you there.

Cheers,
David
 
David Chelimsky

It works exactly the same in test/spec.

Glad to hear it. Do you have any feeling about which approach to reuse
you personally prefer? Nested contexts or included modules?
 
Luke Ivers

Glad to hear it. Do you have any feeling about which approach to reuse
you personally prefer? Nested contexts or included modules?

I know I'm not the person that this was targeted at, but I would absolutely
love to see nested contexts included in rspec... I was just wishing I could
do that yesterday.
 
David Chelimsky

As said, nesting is purely for organizational and namespacing purposes.
I use modules.

Interesting. I thought it was for context state sharing like this:

context "a User" do
setup do
user = User.new
end

context "with a valid email address" do
setup do
user.email = "(e-mail address removed)"
end
...
end

context "with an invalid email address" do
setup do
user.email = "(e-mail address removed)"
end
...
end
end

This leads us quickly down a path in which you have to look up a chain
of several setup methods to understand failures.

If it's just a matter of namespacing for reporting/organization that's
kind of interesting.

David
 
David Chelimsky

I know I'm not the person that this was targeted at, but I would absolutely
love to see nested contexts included in rspec... I was just wishing I could
do that yesterday.

Google this:

pipermail rspec nested contexts

and you'll see what's been said about it before. Feel free to
resurrect the discussion on the rspec-devel or rspec-users list if you
have some new insights to add to the discussion.

Cheers,
David
 
mekondelta

M. Edward (Ed) Borasky said:
I do have a bone to pick with the tutorial (or maybe I don't understand
TDD/BDD yet). In the stack example, I cringe with horror when I see them
setting a method that just returns "true" simply to get a test to pass.
That grates on me, because I *know* I'm actually going to have to write
the real code eventually. It seems like a waste of time to do the
nitty-gritty "add a test to the spec/add a trivial fix to the code" cycle.

The idea behind doing small steps is that you know exactly what went
wrong when a test fails. If you roll several steps into one
test/development cycle, there are many more places to look when
something goes wrong.

Sometimes, it is easier to skip the most trivial steps though!
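
To make the tutorial's "trivial fix" concrete, a sketch using the
tutorial's Stack (spec wording paraphrased):

# The first spec ("a new stack should be empty") can be passed with a
# hard-coded value:
class Stack
  def empty?
    true
  end
end

# It's the next spec ("a stack that has been pushed onto should not be
# empty") that forces the real implementation:
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push item
  end

  def empty?
    @items.empty?
  end
end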
 
Giles Bowkett

The idea behind doing small steps is that you know exactly what went
wrong when a test fails. If you roll several steps into one
test/development cycle, there are many more places to look when
something goes wrong.

Sometimes, it is easier to skip the most trivial steps though!

Also keep in mind that this is just a tutorial example. The goal is to
communicate the process - spec first, make the spec pass, next step.
It's all about a single point of failure, and a clear mapping between
specs and functionality.
 
Rick DeNatale

Also keep in mind that this is just a tutorial example. The goal is to
communicate the process - spec first, make the spec pass, next step.
It's all about a single point of failure, and a clear mapping between
specs and functionality.

Actually I believe that the pure process of
test-driven/behavior-driven development is:

1) Write the test/spec
2) Ensure that it FAILS
3) Write the code to make it pass
4) Goto step 1

Step 2 is to debug the spec.
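
One turn of that loop might look like this (spec syntax approximated
from the RSpec of the time; the failure in step 2 is illustrative):

# 1) Write the spec
specify "pushing an item should make it the top" do
  stack = Stack.new
  stack.push 1
  stack.top.should_equal 1
end

# 2) Run it and watch it FAIL (e.g. NoMethodError for push),
#    which proves the spec actually exercises the code
# 3) Write the code to make it pass
class Stack
  def push(item)
    @top = item
  end

  def top
    @top
  end
end

# 4) Goto step 1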
 
David Chelimsky

Actually I believe that the pure process of
test-driven/behavior-driven development is:

1) Write the test/spec
2) Ensure that it FAILS
3) Write the code to make it pass
4) Goto step 1

Step 2 is to debug the spec.

The reason that we want to see the test fail is so that we can be
confident that when it passes, it's passing because of the code we
wrote in step 3. Otherwise, we could write code that the test doesn't
interact with in any way, resulting in a useless test and dead code.

Is that what you mean by "debug the spec"? If not, can you please explain?

Thx,
David
 
