Test::Unit assertion pass scenario

Discussion in 'Ruby' started by John Smith, Dec 30, 2009.

  1. John Smith

    John Smith Guest

    When using a Test::Unit assertion such as assert_equal, the script
    will throw a failure if the test condition (assertion) is not met. If it
    does pass, no output is displayed.

    Is there a way to force the results of the test to display both passes
    and failures?

    Thanks in advance!
    --
    Posted via http://www.ruby-forum.com/.
     
    John Smith, Dec 30, 2009
    #1

  2. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 12:08 , John Smith wrote:

    > When using a Test::Unit assertion such as assert_equal, the script
    > will throw a failure if the test condition (assertion) is not met. If it
    > does pass, no output is displayed.
    >
    > Is there a way to force the results of the test to display both passes
    > and failures?


    how are you using it?? Normally it displays something like:

    > /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -w -I../../minitest/dev/lib:lib:ext:bin:test -e 'require "rubygems"; require "minitest/autorun"; require "test/test_autotest.rb"; require "test/test_focus.rb"; require "test/test_unit_diff.rb"; require "test/test_zentest.rb"; require "test/test_zentest_mapping.rb"'
    > Loaded suite -e
    > Started
    > .......................................................................................................
    > Finished in 0.214105 seconds.
    >
    > 103 tests, 259 assertions, 0 failures, 0 errors, 0 skips


    (this is for minitest, not test/unit, but the output is very similar)

    Your test should be set up like:

    > # test_blah.rb:
    > require 'test/unit'
    >
    > class TestThingy < Test::Unit::TestCase
    >   def test_thingy
    >     assert_equal 2, 1+1
    >   end
    > end


    Here is a run:

    > % ruby
    > require 'test/unit'
    >
    > class TestThingy < Test::Unit::TestCase
    >   def test_thingy
    >     assert_equal 2, 1+1
    >   end
    > end
    > ^d
    > Loaded suite -
    > Started
    > .
    > Finished in 0.000706 seconds.
    >
    > 1 tests, 1 assertions, 0 failures, 0 errors
     
    Ryan Davis, Dec 30, 2009
    #2

  3. John Smith

    John Smith Guest

    Yep, the example below is exactly the way I'm using it.
    However, as demonstrated in your example, the 259 assertions that were
    run (and passed) do not display any kind of passing checkpoint, the way
    it would have if any of those assertions failed.

    Basically, I am looking for a way to provide info for both passed
    and failed assertions, similar to what is done when an assertion fails.

    Thanks again!

    Ryan Davis wrote:
    > On Dec 30, 2009, at 12:08 , John Smith wrote:
    >
    >> When using a Test::Unit assertion such as assert_equal, the script
    >> will throw a failure if the test condition (assertion) is not met. If it
    >> does pass, no output is displayed.
    >>
    >> Is there a way to force the results of the test to display both passes
    >> and failures?

    >
    > how are you using it?? Normally it displays something like:
    >
    >> /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -w -I../../minitest/dev/lib:lib:ext:bin:test -e 'require "rubygems"; require "minitest/autorun"; require "test/test_autotest.rb"; require "test/test_focus.rb"; require "test/test_unit_diff.rb"; require "test/test_zentest.rb"; require "test/test_zentest_mapping.rb"'
    >> Loaded suite -e
    >> Started
    >> .......................................................................................................
    >> Finished in 0.214105 seconds.
    >>
    >> 103 tests, 259 assertions, 0 failures, 0 errors, 0 skips

    >
    > (this is for minitest, not test/unit, but the output is very similar)
    >
    > Your test should be set up like:
    >
    >> # test_blah.rb:
    >> require 'test/unit'
    >>
    >> class TestThingy < Test::Unit::TestCase
    >>   def test_thingy
    >>     assert_equal 2, 1+1
    >>   end
    >> end

    >
    > Here is a run:


    --
    Posted via http://www.ruby-forum.com/.
     
    John Smith, Dec 31, 2009
    #3
  4. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 17:41 , John Smith wrote:

    > Yep, the example below is exactly the way I'm using it.
    > However, as demonstrated in your example, the 259 assertions that were
    > run (and passed) do not display any kind of passing checkpoint, the way
    > it would have if any of those assertions failed.
    >
    > Basically, I am looking for a way to provide info for both passed
    > and failed assertions, similar to what is done when an assertion fails.

    Your use of "info" is pretty nebulous.

    >>> .......................................................................................................
    >>> Finished in 0.214105 seconds.
    >>>
    >>> 103 tests, 259 assertions, 0 failures, 0 errors, 0 skips


    All of that is "info".

    What do you want it to do differently, and (more importantly) WHY?
     
    Ryan Davis, Dec 31, 2009
    #4
  5. On 31.12.2009 02:41, John Smith wrote:
    > Yep, the example below is exactly the way I'm using it.
    > However, as demonstrated in your example, the 259 assertions that were
    > run (and passed) do not display any kind of passing checkpoint, the way
    > it would have if any of those assertions failed.
    >
    > Basically, I am looking for a way to provide info for both passed
    > and failed assertions, similar to what is done when an assertion fails.


    Why?

    If an assertion passes, everything is well (within the test parameters,
    anyway ;) ), and no action is required.

    In fact, you'd drown out actual failures, since you would degrade the
    signal/noise ratio.

    If you are looking for a way to see what code gets exercised (and
    whether all code gets tested), RCov used to be a good solution for code
    coverage (alas, it hasn't been updated since 2007).
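
    If you do go down the coverage route, wiring rcov into a Rakefile is
    roughly this -- a sketch only, assuming rcov's Rake task mirrors
    Rake::TestTask; check the rcov docs for the exact options:

      # Rakefile (sketch)
      require 'rcov/rcovtask'

      Rcov::RcovTask.new do |t|
        t.test_files = FileList['test/test_*.rb']  # tests to run under rcov
        t.verbose = true
      end

    If memory serves, that gives you a "rake rcov" task that writes an HTML
    report showing which lines the suite actually executed.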

    --
    Phillip Gawlowski
     
    Phillip Gawlowski, Dec 31, 2009
    #5
  6. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 17:47 , Phillip Gawlowski wrote:

    > If you are looking for a way to see what code gets exercised (and
    > whether all code gets tested), RCov used to be a good solution for code
    > coverage (alas, it hasn't been updated since 2007).

    not true:

    > rcov (0.9.7.1)
    > Platform: ruby, java
    > Authors: Relevance, Chad Humphries (spicycode), Aaron Bedra
    > (abedra), Jay McGaffigan, Mauricio Fernandez
    > Homepage: http://github.com/relevance/rcov
    >
    > Code coverage analysis tool for Ruby


    see http://gemcutter.org/gems/rcov

    > Versions
    > • 0.9.7.1 December 29, 2009
    > • 0.9.7.1 December 29, 2009 java
    > • 0.9.7 December 27, 2009
    > • 0.9.7 December 27, 2009 java
    > • 0.9.6 May 12, 2009



    But I feel I should point out: rcov doesn't tell you that your tests are
    any good... it is only good for "what code gets exercised" but not
    "[all] code gets tested".
     
    Ryan Davis, Dec 31, 2009
    #6
  7. John Smith

    John Smith Guest

    Why is a good question. First, the extra info is not for myself, nor
    would it be for any of the devs who may run it. The theory is that
    anyone who writes the tests or uses them regularly should be familiar
    with what is being tested anyway, and hence, only the failures really
    need further investigation.

    It's more of a CYA item for those who are, shall we say, not in the
    know.
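
    Concretely, something like this is the kind of thing I have in mind.
    The assert_equal_loudly helper is made up for illustration, not
    something test/unit ships with:

      require 'test/unit'

      class TestThingy < Test::Unit::TestCase
        # delegates to assert_equal, so failures behave exactly as before,
        # and prints a checkpoint whenever the assertion passes
        def assert_equal_loudly(expected, actual, message = nil)
          assert_equal(expected, actual, message)
          puts "PASS: expected #{expected.inspect}, got #{actual.inspect}"
        end

        def test_thingy
          assert_equal_loudly 2, 1 + 1
        end
      end

    That way whoever reads the log sees a line per assertion, pass or fail.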

    Ryan Davis wrote:
    > On Dec 30, 2009, at 17:41 , John Smith wrote:
    >
    >> Yep, the example below is exactly the way I'm using it.
    >> However, as demonstrated in your example, the 259 assertions that were
    >> run (and passed) do not display any kind of passing checkpoint, the way
    >> it would have if any of those assertions failed.
    >>
    >> Basically, I am looking for a way to provide info for both passed
    >> and failed assertions, similar to what is done when an assertion fails.

    >
    > Your use of "info" is pretty nebulous.
    >
    >>>> .......................................................................................................
    >>>> Finished in 0.214105 seconds.
    >>>>
    >>>> 103 tests, 259 assertions, 0 failures, 0 errors, 0 skips

    >
    > All of that is "info".
    >
    > What do you want it to do differently, and (more importantly) WHY?


    --
    Posted via http://www.ruby-forum.com/.
     
    John Smith, Dec 31, 2009
    #7
  8. Ryan Davis wrote:
    > But I feel I should point out: rcov doesn't tell you that your tests
    > are any good... it is only good for "what code gets exercised" but
    > not "[all] code gets tested".


    Simple proof: take a hypothetical "perfect" test suite with 100%
    coverage. Remove all assertions. Still 100% coverage, but *nothing*
    gets tested anymore.
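
    For instance (Calculator is a made-up class):

      require 'test/unit'

      class Calculator
        def add(a, b)
          a + b
        end
      end

      class TestCalculator < Test::Unit::TestCase
        # every line of Calculator#add is executed, so a coverage tool
        # reports 100% coverage -- yet nothing is asserted, so a broken
        # add would still "pass"
        def test_add_without_assertions
          Calculator.new.add(1, 2)
        end
      end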

    jwm
     
    Jörg W Mittag, Dec 31, 2009
    #8
  9. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 20:35 , Jörg W Mittag wrote:

    > Ryan Davis wrote:
    >> But I feel I should point out: rcov doesn't tell you that your tests
    >> are any good... it is only good for "what code gets exercised" but
    >> not "[all] code gets tested".

    >
    > Simple proof: take a hypothetical "perfect" test suite with 100%
    > coverage. Remove all assertions. Still 100% coverage, but *nothing*
    > gets tested anymore.


    _Exactly_
     
    Ryan Davis, Dec 31, 2009
    #9
  10. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 20:05 , John Smith wrote:

    > Why is a good question. First, the extra info is not for myself, nor
    > would it be for any of the devs who may run it. The theory is that
    > anyone who writes the tests or uses them regularly should be familiar
    > with what is being tested anyway, and hence, only the failures really
    > need further investigation.
    >
    > It's more of a CYA item for those who are, shall we say, not in the
    > know.


    Some sort of detailed report of exactly what assertions you're running
    isn't a very good CYA. You might be better off with:

    + # of tests
    + # of assertions (or better: assertions / test)
    + % of coverage (possibly add heckle #'s, but that's a serious PITA)
    + loc test / loc impl (but please for gods' sake refactor both sides)
    + test time

    and then graph that over time.
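
    A rough sketch of collecting the first two of those after each run; the
    log and output file names are made up, adapt them to your setup:

      # pull the counts out of a saved test/unit run and append them to a
      # tab-separated file you can graph over time
      log = File.read('test_run.log')
      if log =~ /(\d+) tests, (\d+) assertions/
        tests, assertions = $1.to_i, $2.to_i
        File.open('metrics.tsv', 'a') do |f|
          f.puts [Time.now.strftime('%Y-%m-%d'), tests, assertions,
                  '%.2f' % (assertions.to_f / tests)].join("\t")
        end
      end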
     
    Ryan Davis, Dec 31, 2009
    #10
  11. On 31.12.2009 04:35, Ryan Davis wrote:
    >
    > On Dec 30, 2009, at 17:47 , Phillip Gawlowski wrote:
    >
    >> If you are looking for a way to see what code gets exercised (and whether all code gets tested), RCov used to be a good solution for code coverage (alas, it hasn't been updated since 2007).

    >
    > not true:


    Someone needs to update eigenclass.org's RCov page, then.

    --
    Phillip Gawlowski
     
    Phillip Gawlowski, Dec 31, 2009
    #11
  12. Ryan Davis

    Ryan Davis Guest

    On Dec 30, 2009, at 22:19 , Phillip Gawlowski wrote:

    > On 31.12.2009 04:35, Ryan Davis wrote:
    >>
    >> On Dec 30, 2009, at 17:47 , Phillip Gawlowski wrote:
    >>
    >>> If you are looking for a way to see what code gets exercised (and
    >>> whether all code gets tested), RCov used to be a good solution for
    >>> code coverage (alas, it hasn't been updated since 2007).
    >>
    >> not true:
    >
    > Someone needs to update eigenclass.org's RCov page, then.


    He's not responding to anyone. Which is why it has new parents.
     
    Ryan Davis, Dec 31, 2009
    #12
  13. On 31.12.2009 09:42, Ryan Davis wrote:

    > He's not responding to anyone. Which is why it has new parents.


    Aha! Thanks. :)

    --
    Phillip Gawlowski
     
    Phillip Gawlowski, Dec 31, 2009
    #13
