Assertions in principle

Discussion in 'C++' started by jeffc226@yahoo.com, Mar 2, 2007.

  1. Guest

    This might be less of a design issue than a C++ language issue per se,
    but I have a problem with assertions. I mean, they work, of course.
    But there's something I'm uncomfortable with that's never been
    explained to my satisfaction.

    I recently read this explanation. "There is an effective litmus test
    to differentiate the cases in which you need to use assert and when
    you need to use genuine error checking: you use error checking for
    things that could happen, even very improbably. You use assert only
    for things that you truly believe cannot possibly happen under any
    circumstances. An assertion that fails always signals a design or a
    programmer error - not a user error."

    OK I can buy that. But what keeps going unmentioned is that
    assertions are debug-mode only. Well, it's mentioned, but usually in
    the context of increased efficiency at runtime since they're compiled
    out.

    I understand the idea that even the things you take for granted as
    being true might not be true. Software has bugs, and today almost
    every decent sized application uses third party supplements and
    interfaces. Interfaces change, code versions change, versions of code
    get mismatched. New permutations of hardware and software mixes are
    constantly occurring.

    And where does all this manifest itself? At the developer's box?
    Sometimes. But the majority of time in a modern large application
    with many users, it's very likely for a bug to show itself in the
    field. And that is exactly where the assertion does not exist. Even
    most test departments use release code, not debug code. What exactly
    is the point of checking for "impossible" error situations only at the
    developer's desk? That just doesn't make sense to me. The code gets
    executed far too much outside of that environment, in ways the
    original developer might not even have imagined, for that to be good
    enough.

    I would go so far as to say the original developer's box is precisely
    where assertions are NOT needed, because that's the only place where a
    debugger is available. (I see how they can come in handy, and force
    you to resist making assumptions.) But you really need assertions (or
    something) in environments where the debugger isn't available.
     
    , Mar 2, 2007
    #1

  2. Dennis Jones Guest

    <> wrote in message
    news:...

    <snip>

    > I recently read this explanation. "There is an effective litmus test
    > to differentiate the cases in which you need to use assert and when
    > you need to use genuine error checking: you use error checking for
    > things that could happen, even very improbably. You use assert only
    > for things that you truly believe cannot possibly happen under any
    > circumstances. An assertion that fails always signals a design or a
    > programmer error - not a user error."
    >
    > OK I can buy that. But what keeps going unmentioned is that
    > assertions are debug-mode only. Well, it's mentioned, but usually in
    > the context of increased efficiency at runtime since they're compiled
    > out.


    <snip>

    > I would go so far as to say the original developer's box is precisely
    > where assertions are NOT needed, because that's the only place where a
    > debugger is available. (I see how they can come in handy, and force
    > you to resist making assumptions.) But you really need assertions (or
    > something) in environments where the debugger isn't available.


    I agree. I think asserts should be left in release code, and you can
    accomplish this by writing your own assert macro (appropriately for your
    platform):

    #define Assert(p) ((p) ? (void)0 : _assert(#p, __FILE__, __LINE__))

    Of course, there may be cases where the assert condition is a time-consuming
    task that you would not want your users to suffer in the field. I'm not
    sure I know how best to handle that case, but one way might be to enable
    asserts with a runtime flag that can be enabled at the user's discretion.
    This has the advantage that the assert code is available if you need it, but
    has the disadvantage that it doesn't help when you have a
    difficult-to-reproduce error condition. Fortunately, most conditions for
    which assert is beneficial are in fact programmer errors, and can be
    reproduced readily.
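
    For illustration, the handler behind that macro might look something
    like this (a sketch only - the message format and the call to abort are
    just one possibility, and the underscore name simply mirrors the macro
    above even though such global names are technically reserved):

    #include <cstdio>
    #include <cstdlib>

    // Called by the Assert macro when the condition fails. Not guarded by
    // NDEBUG, so it stays active in release builds.
    void _assert(const char* expr, const char* file, int line)
    {
        std::fprintf(stderr, "Assertion failed: %s, file %s, line %d\n",
                     expr, file, line);
        std::abort();
    }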

    - Dennis
     
    Dennis Jones, Mar 2, 2007
    #2

  3. Guest

    On Mar 2, 8:08 am, "" <> wrote:
    > This might be less of a design issue than a C++ language issue per se,
    > but I have a problem with assertions. I mean, they work, of course.
    > But there's something I'm uncomfortable with that's never been
    > explained to my satisfaction.
    >
    > I recently read this explanation. "There is an effective litmus test
    > to differentiate the cases in which you need to use assert and when
    > you need to use genuine error checking: you use error checking for
    > things that could happen, even very improbably. You use assert only
    > for things that you truly believe cannot possibly happen under any
    > circumstances. An assertion that fails always signals a design or a
    > programmer error - not a user error."
    >
    > OK I can buy that. But what keeps going unmentioned is that
    > assertions are debug-mode only. Well, it's mentioned, but usually in
    > the context of increased efficiency at runtime since they're compiled
    > out.
    >
    > I understand the idea that even the things you take for granted as
    > being true might not be true. Software has bugs, and today almost
    > every decent sized application uses third party supplements and
    > interfaces. Interfaces change, code versions change, versions of code
    > get mismatched. New permutations of hardware and software mixes are
    > constantly occurring.
    >
    > And where does all this manifest itself? At the developer's box?
    > Sometimes. But the majority of time in a modern large application
    > with many users, it's very likely for a bug to show itself in the
    > field. And that is exactly where the assertion does not exist. Even
    > most test departments use release code, not debug code. What exactly
    > is the point of checking for "impossible" error situations only at the
    > developer's desk? That just doesn't make sense to me. The code gets
    > executed far too much outside of that environment, in ways the
    > original developer might not even have imagined, for that to be good
    > enough.
    >
    > I would go so far as to say the original developer's box is precisely
    > where assertions are NOT needed, because that's the only place where a
    > debugger is available. (I see how they can come in handy, and force
    > you to resist making assumptions.) But you really need assertions (or
    > something) in environments where the debugger isn't available.


    It's definitely a balancing act rather than a black and white thing.
    I normally have (at least) two kinds of assert macros defined (so I
    define my own rather than just use the standard 'assert'). One of
    these will be turned off for a release/optimized build and one kept
    on. Typically you want to put asserts all over the place -- it's
    common to have one at the start of every function checking arguments
    and at the end of the function checking the 'post condition'. With
    so many asserts the performance can suffer for code that gets called a
    lot. So in places with high risk, like major interfaces to other people's
    code, I might use an assert which stays active all the time; in places
    more internal to my own code, I will disable the asserts for a release
    build.

    Sure I could still suffer problems after release in my more internal
    code, but I generally consider the risk versus cost acceptable for that
    code if I can do good testing before release.

    BTW, I often have a third type of assert that calls special debug-mode
    functions that check the integrity of data structures. These are very
    expensive, but are great for catching some problems. I may keep these
    disabled even during some debugging to keep performance up.
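
    A rough sketch of how such a tiered setup might look (the macro names
    and the ASSERT_SLOW_CHECKS switch here are invented just to illustrate
    the idea, not taken from any particular codebase):

    #include <cstdio>
    #include <cstdlib>

    // Shared failure handler: report the failed check and stop.
    inline void assert_fail(const char* expr, const char* file, int line)
    {
        std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",
                     expr, file, line);
        std::abort();
    }

    // Always active, even in optimized/release builds: for high-risk spots
    // such as major interfaces to other people's code.
    #define ASSERT_ALWAYS(p) \
        ((p) ? (void)0 : assert_fail(#p, __FILE__, __LINE__))

    // Compiled out when NDEBUG is defined, like the standard assert: for
    // checks internal to my own code.
    #ifdef NDEBUG
    #define ASSERT_DEBUG(p) ((void)0)
    #else
    #define ASSERT_DEBUG(p) ASSERT_ALWAYS(p)
    #endif

    // Expensive data-structure integrity checks; off unless explicitly
    // requested, even in many debug builds.
    #ifdef ASSERT_SLOW_CHECKS
    #define ASSERT_SLOW(p) ASSERT_ALWAYS(p)
    #else
    #define ASSERT_SLOW(p) ((void)0)
    #endif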
     
    , Mar 2, 2007
    #3
  4. Guest

    On Mar 2, 12:19 pm, "Dennis Jones" <> wrote:
    > I agree. I think asserts should be left in release code, and you can
    > accomplish this by writing your own assert macro (appropriately for your
    > platform):


    I like that idea.

    >I'm not
    > sure I know how best to handle that case, but one way might be to enable
    > asserts with a runtime flag that can be enabled at the user's discretion.


    I like that idea too.

    >Fortunately, most conditions for
    > which assert is beneficial are in fact programmer errors, and can be
    > reproduced readily.


    Actually I was specifically thinking of all the cases I've run into
    that *can't* be reproduced back at the developer's desk, due to the
    vast number of variables and permutations in some of today's scenarios
    (I say "scenario" instead of simply "application" because it's
    becoming increasingly valid these days to think in terms of the entire
    system, of which the application is just one part.) But putting
    custom macros in release code, to be turned on when more feedback is
    needed for a problem in the field, sounds like a good solution.
     
    , Mar 2, 2007
    #4
  5. <> wrote in message
    news:...

    > OK I can buy that. But what keeps going unmentioned is that
    > assertions are debug-mode only.


    What do you mean by this statement? The C++ standard has no notion of
    "debug mode."

    It is true that by setting a preprocessor variable named NDEBUG, you make
    assertions go away. But it's up to you whether to do that. If you don't
    think it's a good idea, don't do it.
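
    To make that concrete, a minimal example (mine, not part of the
    standard's wording):

    // #define NDEBUG      // uncomment (or compile with -DNDEBUG) and the
                           // assert below compiles away to nothing
    #include <cassert>

    int divide(int a, int b)
    {
        assert(b != 0);    // active only while NDEBUG is not defined
        return a / b;
    }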
     
    Andrew Koenig, Mar 2, 2007
    #5
  6. Kaz Kylheku Guest

    On Mar 2, 8:08 am, "" <> wrote:
    > This might be less of a design issue than a C++ language issue per se,
    > but I have a problem with assertions. I mean, they work, of course.
    > But there's something I'm uncomfortable with that's never been
    > explained to my satisfaction.
    >
    > I recently read this explanation. "There is an effective litmus test
    > to differentiate the cases in which you need to use assert and when
    > you need to use genuine error checking: you use error checking for
    > things that could happen, even very improbably. You use assert only
    > for things that you truly believe cannot possibly happen under any
    > circumstances. An assertion that fails always signals a design or a
    > programmer error - not a user error."
    >
    > OK I can buy that. But what keeps going unmentioned is that
    > assertions are debug-mode only.


    The ANSI/ISO C assert, inherited into C++, is disabled by the presence
    of the NDEBUG macro. This is not the same thing as being ``debug-mode
    only''.

    Debug mode is whatever your compiler, build environment and local
    conventions dictate debug mode is.

    Where I work, both debug and optimized builds have assertions enabled.

    C. A. R. Hoare (of quicksort fame) wrote this sometime in the early
    1970's:

    It is absurd to make elaborate security checks on debugging runs,
    when no trust is put in the results, and then remove them in
    production runs, when an erroneous result could be expensive
    or disastrous. What would we think of a sailing enthusiast who
    wears his life-jacket when training on dry land but takes it
    off as soon as he goes to sea?

    So your argument finds good company, indeed.

    > I would go so far as to say the original developer's box is precisely
    > where assertions are NOT needed, because that's the only place where a
    > debugger is available. (I see how they can come in handy, and force
    > you to resist making assumptions.) But you really need assertions (or
    > something) in environments where the debugger isn't available.


    You need both. Under debugger, an assertion triggers the type of event
    that causes the program to stop so that it can be debugged. Without
    the assertion, the program will keep executing; it won't stop until
    some fault occurs. By that time, in spite of the debugger being
    available at that point, it may be harder to locate the root cause.
    Assertions go off closer to the root causes of defects.
     
    Kaz Kylheku, Mar 3, 2007
    #6
  7. Alan Johnson Guest

    Kaz Kylheku wrote:
    > C. A. R. Hoare (of quicksort fame) wrote this sometime in the early
    > 1970's:
    >
    > It is absurd to make elaborate security checks on debugging runs,
    > when no trust is put in the results, and then remove them in
    > production runs, when an erroneous result could be expensive
    > or disastrous. What would we think of a sailing enthusiast who
    > wears his life-jacket when training on dry land but takes it
    > off as soon as he goes to sea?
    >
    > So your argument finds good company, indeed.


    I'm not sure I like Hoare's analogy. Consider this one: When you are
    learning to ride a bicycle you attach training wheels to catch you when
    you make a mistake. When you are competing in the Tour de France you no
    longer use the training wheels. Analogously, when you are writing code
    you use asserts to catch when your invariants stop being invariant.
    After you are satisfied the code is correct you disable them for
    performance.

    That said, I do find the rest of the arguments compelling.

    --
    Alan Johnson
     
    Alan Johnson, Mar 3, 2007
    #7
  8. Chris Theis Guest

    "Alan Johnson" <> wrote in message
    news:...
    > Kaz Kylheku wrote:
    >> C. A. R. Hoare (of quicksort fame) wrote this sometime in the early
    >> 1970's:
    >>
    >> It is absurd to make elaborate security checks on debugging runs,
    >> when no trust is put in the results, and then remove them in
    >> production runs, when an erroneous result could be expensive
    >> or disastrous. What would we think of a sailing enthusiast who
    >> wears his life-jacket when training on dry land but takes it
    >> off as soon as he goes to sea?
    >>
    >> So your argument finds good company, indeed.

    >
    > I'm not sure I like Hoare's analogy. Consider this one: When you are
    > learning to ride a bicycle you attach training wheels to catch you when
    > you make a mistake. When you are competing in the Tour de France you no
    > longer use the training wheels. Analogously, when you are writing code
    > you use asserts to catch when your invariants stop being invariant. After
    > you are satisfied the code is correct you disable them for performance.

    [SNIP]

    But how do you actually "assert" that the code is correct? Running your own
    tests, extended beta-testing etc. is certainly mandatory, but it does not
    at all guarantee correct code in the sense that it will act correctly
    under each and every circumstance that might come along.

    IMO the sailing analogy hits the mark. Staying with the analogies - even
    those competing in the TDF wear helmets for safety (though not all of
    them). Of course one would disable asserts in time-critical parts of the
    production code, but this is not necessarily useful throughout the whole
    program. Especially when implementing complex parser/compiler systems
    they come in extremely handy in code that has already shipped to customers.

    Cheers
    Chris
     
    Chris Theis, Mar 3, 2007
    #8
  9. On 2 Mar 2007 08:08:55 -0800, "jeffc226@...com" wrote:
    >This might be less of a design issue than a C++ language issue per se,
    >but I have a problem with assertions. I mean, they work, of course.
    >But there's something I'm uncomfortable with that's never been
    >explained to my satisfaction.
    >
    >I recently read this explanation. "There is an effective litmus test
    >to differentiate the cases in which you need to use assert and when
    >you need to use genuine error checking: you use error checking for
    >things that could happen, even very improbably. You use assert only
    >for things that you truly believe cannot possibly happen under any
    >circumstances. An assertion that fails always signals a design or a
    >programmer error - not a user error."


    This is a very good explanation (source?). assert is for finding bugs
    in a program. Input validation, error handling, ... is not done with
    asserts. Therefore asserts should never be used in production
    (release) code.

    >OK I can buy that. But what keeps going unmentioned is that
    >assertions are debug-mode only. Well, it's mentioned, but usually in
    >the context of increased efficiency at runtime since they're compiled
    >out.


    Yes, that's their purpose. Actually, assert should be renamed to
    debug_assert.

    >I understand the idea that even the things you take for granted as
    >being true might not be true. Software has bugs, and today almost
    >every decent sized application uses third party supplements and
    >interfaces. Interfaces change, code versions change, versions of code
    >get mismatched. New permutations of hardware and software mixes are
    >constantly occurring.


    Well tested software has so few bugs that it can be smoothly used by
    the client. And what have interface changes and mismatched code to do
    with asserts?

    >And where does all this manifest itself? At the developer's box?
    >Sometimes. But the majority of time in a modern large application
    >with many users, it's very likely for a bug to show itself in the
    >field. And that is exactly where the assertion does not exist.


    The bug crashes the program, assert calls abort which crashes the
    program. So?

    >Even most test departments use release code, not debug code.


    Of course, they test the program that is shipped, not a debug version.

    >What exactly
    >is the point of checking for "impossible" error situations only at the
    >developer's desk?


    To find bugs in the program. Nothing more, nothing less. Together with
    unit tests, functional tests, integration tests, ... it is one step
    towards producing a deployable program. Actually, assert can be seen as
    an automated test built into the code.

    >That just doesn't make sense to me. The code gets
    >executed far too much outside of that environment, in ways the
    >original developer might not even have imagined, for that to be good
    >enough.


    That just doesn't make sense to me. You write code that consists of
    (public) interfaces to encapsulated code. You validate the input.
    Since software is (should be) reusable it's no problem (even desirable)
    that it is used in ways 'the original developer might not even have
    imagined'. You check the input as part of the contract between your
    code and the caller. When the input is within the allowed range your
    code must work.
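
    A small sketch of that separation (my own example): the public interface
    genuinely checks its caller's input as part of the contract, while the
    assert only restates what the class itself already guarantees internally:

    #include <cassert>
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    class Buffer {
    public:
        // Public interface: bad input "could happen, even very improbably",
        // so it gets genuine error checking.
        int at(std::size_t i) const
        {
            if (i >= data_.size())
                throw std::out_of_range("Buffer::at: index out of range");
            return unchecked_at(i);
        }

    private:
        // Internal helper: only reached with an index the class has already
        // validated, so a failure here signals a programmer error.
        int unchecked_at(std::size_t i) const
        {
            assert(i < data_.size());
            return data_[i];
        }

        std::vector<int> data_;
    };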

    >I would go so far as to say the original developer's box is precisely
    >where assertions are NOT needed, because that's the only place where a
    >debugger is available. (I see how they can come in handy, and force
    >you to resist making assumptions.) But you really need assertions (or
    >something) in environments where the debugger isn't available.


    Using a debugger is the most inefficient activity in programming. It
    is entirely resistant to automation. asserts help you avoid using the
    debugger.
    BTW, have you ever seen a program/library that prints/displays asserts
    in production/release mode? I would be very reluctant to use that
    program because the code quality most probably is very poor.

    Best wishes,
    Roland Pibinger
     
    Roland Pibinger, Mar 3, 2007
    #9
  10. Chris Theis Guest

    "Roland Pibinger" <> wrote in message
    news:...
    [SNIP]
    >
    > This is a very good explanation (source?). assert is for finding bugs
    > in a program. Input validation, error handling, ... is not done with
    > asserts. Therefore asserts should never be used in production
    > (release) code.


    I absolutely agree on the point that input validation & user error handling
    is not done with asserts. But in contrast to what you've stated I do not
    quite see the causality why this would exclude using them in production
    code. I'd very much appreciate it if you could elaborate on this.

    [SNIP]
    > Using a debugger is the most inefficient activity in programming.


    Sorry, I might seem to be thick but this is a statement which I do not
    understand at all. What exactly do you mean by inefficient? In my
    understanding the term inefficient relates to wasting time and considering
    the use of a debugger as a waste of time sounds rather awesome to me. But
    there might be a misunderstanding here.

    > It is entirely resistant to automation. asserts help you avoid using the
    > debugger


    Let me summarize, you use the assert to avoid using the debugger and not
    using the debugger helps you to be more efficient in programming. I actually
    wonder how you check the state/value of variables, objects etc. in your
    code.

    > BTW, have you ever seen a program/library that prints/displays asserts
    > in production/release mode? I would be very reluctant to use that
    > program because the code quality most probably is very poor.


    It actually makes sense to have production/release code printing asserts, or
    rather dumping them in a file. Especially for large scale systems with lots
    of modules, dynamic libs etc. it can come in very handy if the user has the
    possibility to enable this assertion/logging mechanism while trying to
    reproduce faulty behavior. It saves the developer a lot of time if such a
    warning/error log is sent to him while he is trying to fix things remotely,
    far from the customer.
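
    One possible shape for such a mechanism, purely as a sketch (the
    environment variable name and the log format are made up for the example):

    #include <cstdio>
    #include <cstdlib>

    // Report an assertion failure. If the user has set MYAPP_ASSERT_LOG to
    // a file name, append the report there so it can be sent back to the
    // developer; otherwise fall back to stderr. The program keeps running,
    // so this is a logging aid rather than a hard stop.
    inline void report_assert(const char* expr, const char* file, int line)
    {
        const char* path = std::getenv("MYAPP_ASSERT_LOG");
        std::FILE* out = path ? std::fopen(path, "a") : stderr;
        if (!out)
            out = stderr;
        std::fprintf(out, "Assertion failed: %s (%s:%d)\n", expr, file, line);
        if (out != stderr)
            std::fclose(out);
    }

    #define LOG_ASSERT(p) \
        ((p) ? (void)0 : report_assert(#p, __FILE__, __LINE__))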

    Best regards
    Chris
     
    Chris Theis, Mar 3, 2007
    #10
  11. Ian Collins Guest

    Roland Pibinger wrote:
    > On 2 Mar 2007 08:08:55 -0800, "jeffc226@...com" wrote:
    >
    >>I recently read this explanation. "There is an effective litmus test
    >>to differentiate the cases in which you need to use assert and when
    >>you need to use genuine error checking: you use error checking for
    >>things that could happen, even very improbably. You use assert only
    >>for things that you truly believe cannot possibly happen under any
    >>circumstances. An assertion that fails always signals a design or a
    >>programmer error - not a user error."

    >
    > This is a very good explanation (source?). assert is for finding bugs
    > in a program. Input validation, error handling, ... is not done with
    > asserts. Therefore asserts should never be used in production
    > (release) code.
    >

    I can't agree with that. Asserts should never be used for input
    validation, error handling, or anything else related to external input to
    the application. But they do have value for trapping "impossible"
    internal states.
    >
    >>And where does all this manifest itself? At the developer's box?
    >>Sometimes. But the majority of time in a modern large application
    >>with many users, it's very likely for a bug to show itself in the
    >>field. And that is exactly where the assertion does not exist.

    >
    > The bug crashes the program, assert calls abort which crashes the
    > program. So?
    >

    Assert can abort the program in a controlled way, and it could generate an
    error report.

    One very reliable embedded product my team developed started rebooting
    in the field, luckily for us we were logging asserts in non-volatile
    storage so we were able to trace the cause - a faulty batch of parts
    that reported an impossible state, checked by an assert.

    --
    Ian Collins.
     
    Ian Collins, Mar 3, 2007
    #11
  12. Ian Collins Guest

    Chris Theis wrote:
    >
    >> Using a debugger is the most inefficient activity in programming.

    >
    >
    > Sorry, I might seem to be thick but this is a statement which I do not
    > understand at all. What exactly do you mean by inefficient? In my
    > understanding the term inefficient relates to wasting time and
    > considering the use of a debugger as a waste of time sounds rather
    > awesome to me. But there might be a misunderstanding here.
    >

    If your code has a decent set of unit tests, you should rarely, if ever,
    have to resort to the debugger. If you change some code and the tests
    fail, revert the change and try again. If your test harness is too
    cumbersome to run the tests after each small change, replace it with
    something that isn't.

    --
    Ian Collins.
     
    Ian Collins, Mar 3, 2007
    #12
  13. Chris Theis Guest

    "Ian Collins" <> wrote in message
    news:...
    > Chris Theis wrote:
    >>
    >>> Using a debugger is the most inefficient activity in programming.

    >>
    >>
    >> Sorry, I might seem to be thick but this is a statement which I do not
    >> understand at all. What exactly do you mean by inefficient? In my
    >> understanding the term inefficient relates to wasting time and
    >> considering the use of a debugger as a waste of time sounds rather
    >> awesome to me. But there might be a misunderstanding here.
    >>

    > If your code has a decent set of unit tests, you should rarely, if ever,
    > have to resort to the debugger. If you change some code and the tests
    > fail, revert the change and try again. If your test harness is too
    > cumbersome to run the tests after each small change, replace it with
    > something that isn't.


    I do concur that the debugger is not necessarily the best tool of choice to
    test code, but rather to debug code! Unit tests etc. will verify compliance
    with a set of rules/contract guidelines, but that's about it. This might be
    nit-picking but the OT referred to using a debugger as the most inefficient
    activity in programming and not testing.

    Cheers
    Chris
     
    Chris Theis, Mar 4, 2007
    #13
  14. Ian Collins Guest

    Chris Theis wrote:
    >
    > "Ian Collins" <> wrote in message
    > news:...
    >> Chris Theis wrote:
    >>>
    >>>> Using a debugger is the most inefficient activity in programming.
    >>>
    >>> Sorry, I might seem to be thick but this is a statement which I do not
    >>> understand at all. What exactly do you mean by inefficient? In my
    >>> understanding the term inefficient relates to wasting time and
    >>> considering the use of a debugger as a waste of time sounds rather
    >>> awesome to me. But there might be a misunderstanding here.
    >>>

    >> If your code has a decent set of unit tests, you should rarely, if ever,
    >> have to resort to the debugger. If you change some code and the tests
    >> fail, revert the change and try again. If your test harness is too
    >> cumbersome to run the tests after each small change, replace it with
    >> something that isn't.

    >
    > I do concur that the debugger is not necessarily the best tool of choice
    > to test code, but rather to debug code! Unit tests etc. will verify
    > compliance with a set of rules/contract guidelines, but that's about it.


    No, if you practice Test Driven Development (TDD) your unit tests take
    on a far more important role. Done well, one seldom has to debug.

    > This might be nit-picking but the OT referred to using a debugger as the
    > most inefficient activity in programming and not testing.
    >

    He was correct, if you frequently resort to the debugger, something in
    your process is broken. Stepping through code "to check that it works"
    is an incredible waste of time.

    --
    Ian Collins.
     
    Ian Collins, Mar 4, 2007
    #14
  15. On Sun, 04 Mar 2007 09:35:54 +1300, Ian Collins wrote:
    >Assert can abort the program in a controlled way, and it could generate an
    >error report.
    >One very reliable embedded product my team developed started rebooting
    >in the field, luckily for us we were logging asserts in non-volatile
    >storage so we were able to trace the cause - a faulty batch of parts
    >that reported an impossible state, checked by an assert.


    What you describe (logging, error report) is classic error handling
    rather than detection of bugs.

    Best regards,
    Roland Pibinger
     
    Roland Pibinger, Mar 4, 2007
    #15
  16. On Sun, 4 Mar 2007 01:52:07 +0100, "Chris Theis" wrote:
    >I do concur that the debugger is not necessarily the best tool of choice to
    >test code, but rather to debug code! Unit tests etc. will verify compliance
    >with a set of rules/contract guidelines, but that's about it. This might be
    >nit-picking but the OT referred to using a debugger as the most inefficient
    >activity in programming and not testing.


    When you debug a program you always start anew. You cannot automate
    debugging. asserts and unit tests find bugs automatically and are
    therefore more efficient than debugging.

    Best wishes,
    Roland Pibinger
     
    Roland Pibinger, Mar 4, 2007
    #16
  17. Ian Collins Guest

    Roland Pibinger wrote:
    > On Sun, 04 Mar 2007 09:35:54 +1300, Ian Collins wrote:
    >
    >>Assert can abort the program in a controlled way, and it could generate an
    >>error report.
    >>One very reliable embedded product my team developed started rebooting
    >>in the field, luckily for us we were logging asserts in non-volatile
    >>storage so we were able to trace the cause - a faulty batch of parts
    >>that reported an impossible state, checked by an assert.

    >
    >
    > What you describe (logging, error report) is classic error handling
    > rather than detection of bugs.
    >

    I am describing using asserts to enforce the contract between the
    application and its operating environment. If a device or library
    specification specifies a valid set of output values, assert is a good
    sanity check.
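
    A tiny sketch of that kind of sanity check (the register read and its
    documented range are invented for the example):

    #include <cassert>
    #include <cstdint>

    // Hypothetical device interface: assume the data sheet says the status
    // register only ever reports values 0..3.
    std::uint8_t read_status_register();   // provided elsewhere

    void on_interrupt()
    {
        std::uint8_t status = read_status_register();

        // The device specification says anything else "cannot happen"; if
        // it does (e.g. a faulty part), the assert traps the impossible
        // state instead of letting it propagate.
        assert(status <= 3);

        // ... handle the interrupt based on status ...
    }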

    --
    Ian Collins.
     
    Ian Collins, Mar 4, 2007
    #17
  18. Chris Theis Guest

    "Ian Collins" <> wrote in message
    news:...
    > Chris Theis wrote:
    >>
    >> "Ian Collins" <> wrote in message
    >> news:...
    >>> Chris Theis wrote:
    >>>>
    >>>>> Using a debugger is the most inefficient activity in programming.
    >>>>
    >>>> Sorry, I might seem to be thick but this is a statement which I do not
    >>>> understand at all. What exactly do you mean by inefficient? In my
    >>>> understanding the term inefficient relates to wasting time and
    >>>> considering the use of a debugger as a waste of time sounds rather
    >>>> awesome to me. But there might be a misunderstanding here.
    >>>>
    >>> If your code has a decent set of unit tests, you should rarely, if ever,
    >>> have to resort to the debugger. If you change some code and the tests
    >>> fail, revert the change and try again. If your test harness is too
    >>> cumbersome to run the tests after each small change, replace it with
    >>> something that isn't.

    >>
    >> I do concur that the debugger is not necessarily the best tool of choice
    >> to test code, but rather to debug code! Unit tests etc. will verify
    >> compliance with a set of rules/contract guidelines, but that's about it.

    >
    > No, if you practice Test Driven Development (TDD) your unit tests take
    > on a far more important role. Done well, one seldom has to debug.
    >
    >> This might be nit-picking but the OT referred to using a debugger as the
    >> most inefficient activity in programming and not testing.
    >>

    > He was correct, if you frequently resort to the debugger, something in
    > your process is broken. Stepping through code "to check that it works"
    > is an incredible waste of time.


    But that's exactly what I meant - using the debugger to "check that it
    works" is certainly not the best approach. For that there are other
    methods, and this is what I (probably a personal opinion) consider
    "testing": checking that modules work as they are supposed to. However,
    using the debugger in programming, and this IMHO refers to the whole
    context of development & bug fixing, is certainly not a waste of time.
    When you've had that problem that your software rebooted at the client,
    what did you do? I guess you didn't start to run all the unit tests
    again, because if you could have spotted the problem with them you
    would have done that before shipping the product, wouldn't you?

    Don't get me wrong, I highly value unit tests as an important & useful
    approach, but I also regard a debugger as one of the most important tools at
    your fingertips. Still, one has to know where each tool has its application.
    It's for sure that having a hammer doesn't mean that everything looks like a
    nail.

    Cheers
    Chris
     
    Chris Theis, Mar 4, 2007
    #18
  19. On Sun, 04 Mar 2007 23:14:14 +1300, Ian Collins wrote:
    >I am describing using asserts to enforce the contract between the
    >application and its operating environment. If a device or library
    >specification specifies a valid set of output values, assert is a good
    >sanity check.


    Is a contract violation a bug or an expected runtime scenario? IMO,
    the latter.

    Best regards,
    Roland Pibinger
     
    Roland Pibinger, Mar 4, 2007
    #19
  20. Ian Collins Guest

    Chris Theis wrote:
    > "Ian Collins" <> wrote in message
    > news:...
    >
    >> Chris Theis wrote:
    >>>
    >>> I do concur that the debugger is not necessarily the best tool of choice
    >>> to test code, but rather to debug code! Unit tests etc. will verify
    >>> compliance with a set of rules/contract guidelines, but that's about it.

    >>
    >>
    >> No, if you practice Test Driven Development (TDD) your unit tests take
    >> on a far more important role. Done well, one seldom has to debug.
    >>
    >>> This might be nit-picking but the OT referred to using a debugger as the
    >>> most inefficient activity in programming and not testing.
    >>>

    >> He was correct, if you frequently resort to the debugger, something in
    >> your process is broken. Stepping through code "to check that it works"
    >> is an incredible waste of time.

    >
    > But that's exactly what I meant - using the debugger to "check that it
    > works" is certainly not the best approach. For that there are other
    > methods, and this is what I (probably a personal opinion) consider
    > "testing": checking that modules work as they are supposed to.


    That's where we differ: I consider them an important part of the design
    process.

    > However, using the debugger in
    > programming, and this IMHO refers to the whole context of development &
    > bug fixing, is certainly not a waste of time. When you've had that
    > problem that your software rebooted at the client, what did you do? I
    > guess you didn't start to run all the unit tests again, because if you
    > could have spotted the problem with them you would have done that before
    > shipping the product, wouldn't you?
    >

    In that case, the debugger was of no use. We had to run the target with
    an ICE and a logic analyser for several days waiting for the fault to occur!

    Unit tests were irrelevant in this example, the assert was there to
    validate the input from the device (in this case, the value of a
    register when an interrupt occurred). No one tests for all possible
    invalid input - remember this was an input outside of the device's
    specifications.

    > Don't get me wrong, I highly value unit tests as an important & useful
    > approach, but I also regard a debugger as one of the most important
    > tools at your fingertips.


    I'm currently working in a language where I don't have a debugger (PHP),
    so I have to rely on my unit tests. If these are fine grained enough,
    it's fairly easy to find a fault when one fails.


    --
    Ian Collins.
     
    Ian Collins, Mar 4, 2007
    #20
