Our OWN 'Deep C Secrets'

Discussion in 'C Programming' started by Kamilche, May 8, 2004.

  1. Kamilche

    Kamilche Guest

    You know, all this furor over this book caused me to go look it up on
    Amazon. I've never read this book... but from what I can see from the
    legally available table of contents, excerpt, and index at Amazon, it
    looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
    MYSTERIES OF C!!" sort of affair.

    I've got a better idea! Let's discuss some 'Deep C Secrets' of our
    own! I'll start.

    Testing - If you haven't tested to prove it right, it can almost
    certainly be proven to be WRONG. It's terribly easy to screw
    something up in C, causing buffer overruns and undefined behavior.
    It's gotten to the point where I don't feel comfortable calling coding
    done until the tests are written. There's too many places C can bite
    you.
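
    For instance (a made-up sketch; copy_name and the buffer sizes are
    invented just for illustration), an off-by-one in a copy loop is all
    it takes to trash memory, and a one-line boundary test is enough to
    catch it:

        #include <assert.h>
        #include <string.h>

        /* Hypothetical example: copy a name into a fixed-size buffer.
           A version that looped while i < bufsize and then wrote the
           terminator would store one byte past the end of buf, which
           is undefined behaviour that may "work" until it doesn't. */
        static void copy_name(char *buf, size_t bufsize, const char *name)
        {
            size_t i;

            for (i = 0; i + 1 < bufsize && name[i] != '\0'; i++)
                buf[i] = name[i];
            buf[i] = '\0';                      /* always terminate */
        }

        int main(void)
        {
            char buf[16];

            /* boundary test: a 16-character name must be truncated to
               15 characters plus the terminator, not overrun the buffer */
            copy_name(buf, sizeof buf, "0123456789abcdef");
            assert(strlen(buf) == 15);
            return 0;
        }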

    I wasn't this paranoid in other programming languages... you know, the
    ones that take 1/10 the time to write, and run 100x slower, like VB.
    Because you didn't have fine-grained control over the code, there were
    fewer places to screw up something fundamental. But you paid for it in
    loss of control, loss of speed, and a less stable development
    environment, because you had to rely on OTHER programmers' buggy code.
    If it's YOUR buggy code, you at least have a fighting chance of
    figuring out what's wrong with it and fixing it.

    Uh, look there. A pearl of wisdom! Grab it and start your OWN book.
    :p
    Kamilche, May 8, 2004
    #1

  2. Kamilche wrote:
    > You know, all this furor over this book caused me to go look it up on
    > Amazon. I've never read this book... but from what I can see from the
    > legally available table of contents, excerpt, and index at Amazon, it
    > looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
    > MYSTERIES OF C!!" sort of affair.
    >
    > I've got a better idea! Let's discuss some 'Deep C Secrets' of our
    > own! I'll start.
    >
    > Testing - If you haven't tested to prove it right, it can almost
    > certainly be proven to be WRONG. It's terribly easy to screw
    > something up in C, causing buffer overruns and undefined behavior.


    Except you can almost *never* prove anything right by testing it. That
    is a very dangerous mind set that is bound to screw you over sooner or
    later. The only sure way to prove a program correct is to write a
    mathematical proof. Testing only works if the size of the input is
    bounded and it can all be tested. (But tests are good because they can
    prove something is *wrong* with a piece of code.)
    --
    Daniel Sjöblom
    Remove _NOSPAM to reply by mail
    Daniel Sjöblom, May 9, 2004
    #2

  3. Daniel Sjöblom <_NOSPAM> writes:

    > Except you can almost *never* prove anything right by testing it. That
    > is a very dangerous mind set that is bound to screw you over sooner or
    > later. The only sure way to prove a program correct is to write a
    > mathematical proof.


    However, a mathematical proof created by a human can contain logical
    fallacies, so you also have to prove the proof correct. And as the
    proof of the proof can again contain logical fallacies, you also have
    to prove it correct, and so on ad infinitum.

    > Testing only works if the size of the input is bounded and it can all
    > be tested. (But tests are good because they can prove something is
    > *wrong* with a piece of code.)


    Given the choice between a 10 million lines of code program which comes
    with a mathematical proof of correctness, but has not been tested, and
    a thoroughly tested 10 million lines of code program which has not been
    proven correct, I'd trust the latter more.

    As Stanford mathematician Donald E. Knuth once put it [1]: "Beware of
    bugs in the above code; I have only proved it correct, not tried it." :)

    Martin


    [1] http://www-cs-faculty.stanford.edu/~knuth/faq.html

    --
    ,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
    / ,- ) http://www.zero-based.org/ ((_/)o o(\_))
    \ `-' `-'(. .)`-'
    `-. Debian, a variant of the GNU operating system. \_/
    Martin Dickopp, May 9, 2004
    #3
  4. CBFalconer

    CBFalconer Guest

    Daniel Sjöblom wrote:
    > Kamilche wrote:
    >

    .... snip ...
    >>
    >> Testing - If you haven't tested to prove it right, it can almost
    >> certainly be proven to be WRONG. It's terribly easy to screw
    >> something up in C, causing buffer overruns and undefined behavior.

    >
    > Except you can almost *never* prove anything right by testing it.
    > That is a very dangerous mind set that is bound to screw you over
    > sooner or later. The only sure way to prove a program correct is
    > to write a mathematical proof. Testing only works if the size of
    > the input is bounded and it can all be tested. (But tests are
    > good because they can prove something is *wrong* with a piece of
    > code.)


    The technique of loop invariants can prove some assertions about
    various routines. They can also help to document the
    preconditions required. Unfortunately virtually nobody bothers.
    Having proved those results, they can then be used in larger
    areas, provided you keep the overall system well and cleanly
    structured.
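
    For illustration only (find() and the little driver are invented for
    this sketch, and assert() is merely standing in for real verification
    tools), here is what a documented precondition plus a checked loop
    invariant can look like:

        #include <assert.h>
        #include <stddef.h>

        /* Precondition (documented, not proven here): a[0..n-1] is
           sorted in ascending order. */
        static int find(const int *a, size_t n, int key)
        {
            size_t lo = 0, hi = n;              /* search a[lo..hi-1] */

            while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;

                /* loop invariant: if key is anywhere in a[0..n-1],
                   then it is in a[lo..hi-1] */
                assert(lo <= mid && mid < hi);

                if (a[mid] < key)
                    lo = mid + 1;
                else if (a[mid] > key)
                    hi = mid;
                else
                    return (int)mid;            /* found */
            }
            return -1;                          /* not found */
        }

        int main(void)
        {
            int a[] = { 2, 3, 5, 7, 11 };

            assert(find(a, 5, 7) == 3);
            assert(find(a, 5, 4) == -1);
            return 0;
        }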

    --
    fix (vb.): 1. to paper over, obscure, hide from public view; 2.
    to work around, in a way that produces unintended consequences
    that are worse than the original problem. Usage: "Windows ME
    fixes many of the shortcomings of Windows 98 SE". - Hutchison
    CBFalconer, May 9, 2004
    #4
  5. Kamilche

    Kamilche Guest

    Daniel Sjöblom <_NOSPAM> wrote in message news:<409e3c6e$0$2438$>...

    > Except you can almost *never* prove anything right by testing it. That
    > is a very dangerous mind set that is bound to screw you over sooner or
    > later. The only sure way to prove a program correct is to write a
    > mathematical proof. Testing only works if the size of the input is
    > bounded and it can all be tested. (But tests are good because they can
    > prove something is *wrong* with a piece of code.)


    Yes, that's true - testing doesn't prove the absence of errors, it
    only stops the stupidest ones. A programmer who does boundary
    testing, tests for success, tests for failures, and stress testing,
    can rest easier at night knowing they've made an honest effort.
    Believe it or not, in my experience, most programmers DON'T unit test
    their programs... they only do so if there's a bug so hard-to-find
    that normal testing doesn't catch it. Sad, but true.
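
    Something along these lines (parse_port() is invented purely for the
    example) covers success, both boundaries, and the failure cases in a
    handful of lines:

        #include <assert.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Invented example: parse a TCP port number, or return -1. */
        static int parse_port(const char *s)
        {
            char *end;
            long v = strtol(s, &end, 10);

            if (end == s || *end != '\0' || v < 1 || v > 65535)
                return -1;
            return (int)v;
        }

        int main(void)
        {
            assert(parse_port("80")    ==    80);   /* success       */
            assert(parse_port("1")     ==     1);   /* low boundary  */
            assert(parse_port("65535") == 65535);   /* high boundary */
            assert(parse_port("0")     ==    -1);   /* just below    */
            assert(parse_port("65536") ==    -1);   /* just above    */
            assert(parse_port("abc")   ==    -1);   /* garbage       */
            assert(parse_port("")      ==    -1);   /* empty input   */
            puts("all parse_port tests passed");
            return 0;
        }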

    What's the testing policy in YOUR shop?
    Kamilche, May 10, 2004
    #5
  6. John L

    John L Guest

    "Kamilche" <> wrote in message news:...
    >
    > Yes, that's true - testing doesn't prove the absence of errors, it
    > only stops the stupidest ones. A programmer who does boundary
    > testing, tests for success, tests for failures, and stress testing,
    > can rest easier at night knowing they've made an honest effort.
    > Believe it or not, in my experience, most programmers DON'T unit test
    > their programs... they only do so if there's a bug so hard-to-find
    > that normal testing doesn't catch it. Sad, but true.
    >


    One thing that can undermine testing is management merely
    adding discovered flaws to a list of bugs (or "issues") rather
    than immediately fixing them.

    Another is not automating the analysis of test output. It can
    be too easy to take any plausible output as being correct,
    especially for our own code. A former colleague once asked
    me to help track down an elusive bug in his hash table code.
    We reran all his unit tests, one of which involved storing and
    retrieving one word beginning with each letter of the alphabet.
    Fortunately, I knew that there should have been 26 words output,
    not 25. The programmer was a lot brighter than I am but was
    faced with the classic problem of proofreading his own work:
    he saw what he knew should have been there.
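
    The cure is to let the machine do the counting. A rough sketch (the
    tiny linear-probing table below is only a stand-in for the real hash
    table, whose code isn't shown here):

        #include <assert.h>
        #include <string.h>

        #define SLOTS 64
        static const char *slot[SLOTS];

        static void table_store(const char *key)
        {
            unsigned h = (unsigned char)key[0] % SLOTS;

            while (slot[h] != NULL)             /* linear probing */
                h = (h + 1) % SLOTS;
            slot[h] = key;
        }

        static int table_lookup(const char *key)
        {
            for (unsigned i = 0; i < SLOTS; i++)
                if (slot[i] != NULL && strcmp(slot[i], key) == 0)
                    return 1;
            return 0;
        }

        int main(void)
        {
            static const char *words[] = {
                "ant", "bee", "cat", "dog", "eel", "fox"
            };
            int n = (int)(sizeof words / sizeof words[0]);
            int found = 0;

            for (int i = 0; i < n; i++) {
                table_store(words[i]);
                found += table_lookup(words[i]);
            }
            /* the assert, not a human reader, notices a missing word */
            assert(found == n);
            return 0;
        }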

    --
    John.
    John L, May 10, 2004
    #6
  7. Martin Dickopp wrote:
    > Daniel Sjöblom <_NOSPAM> writes:
    >
    >
    >>Except you can almost *never* prove anything right by testing it. That
    >>is a very dangerous mind set that is bound to screw you over sooner or
    >>later. The only sure way to prove a program correct is to write a
    >>mathematical proof.

    >
    >
    > However, a mathematical proof created by a human can contain logical
    > fallacies, so you also have to prove the proof correct.


    Actually no. A proof cannot contain logical fallacies. If it did, it
    wouldn't be a proof :)

    >
    >
    >>Testing only works if the size of the input is bounded and it can all
    >>be tested. (But tests are good because they can prove something is
    >>*wrong* with a piece of code.)

    >
    >
    > Given the choice between a 10 million lines of code program which comes
    > with a mathematical proof of correctness, but has not been tested, and
    > a thoroughly tested 10 million lines of code program which has not been
    > proven correct, I'd trust the latter more.


    You read too much into my comment. I never said it was an either or
    choice. I'd prefer both tested and proved code.
    --
    Daniel Sjöblom
    Remove _NOSPAM to reply by mail
    Daniel Sjöblom, May 10, 2004
    #7
  8. On Sat, 8 May 2004, Kamilche wrote:

    > You know, all this furor over this book caused me to go look it up on
    > Amazon. I've never read this book... but from what I can see from the
    > legally available table of contents, excerpt, and index at Amazon, it
    > looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
    > MYSTERIES OF C!!" sort of affair.
    >
    > I've got a better idea! Let's discuss some 'Deep C Secrets' of our
    > own! I'll start.
    >
    > Testing - If you haven't tested to prove it right, it can almost
    > certainly be proven to be WRONG. It's terribly easy to screw
    > something up in C, causing buffer overruns and undefined behavior.
    > It's gotten to the point where I don't feel comfortable calling coding
    > done until the tests are written. There's too many places C can bite
    > you.


    I'd have to disagree with this. Far too often I have seen people use
    testing to make themselves feel good about their code. I believe testing
    is just a way to audit your design. It is a way to make sure you did the
    right thing. If you did not design the right thing then how are you going
    to test it? In other words, if I didn't think to code to prevent X then
    why would I test for X?

    Design and attention to detail are important. Empirical testing cannot
    give 100% certainty. The number of combinations in just a simple project
    is incredibly high. If there is user input then the numbers sky-rocket.

    You need to think about all the things the program is capable of. Define
    what you want it to do for all those capabilities. Then testing becomes a
    way to confirm you did what you planned.

    If you have introduced undefined behaviour into your program, testing might
    not catch it. One of the most common mistakes is that programmers test and
    test and test and test. After testing to the point of insanity they decide
    it is good and ship it. First problem, how many different configurations
    did they test on? Probably one or two. Maybe ten to twenty different
    configurations. Second problem, how many total person hours went into
    testing? Let's say 10 people testing every day for 8 hours. They test for
    a month (20 business days) for a total of 1600 hours. You ship it and sell
    1000 copies. In 2 hours there has been more 'testing' by the customers
    than you did in a month. There will be a lot of duplication with the
    customers' use but within a month or two they will find things you did not.

    If the program was badly designed or requirements were not well defined,
    I'd consider testing the same as using a bucket to keep a leaky boat from
    sinking.

    Some companies do a 'beta' release just to get more empirical testing on
    the product. To me this is using a bigger bucket.

    > I wasn't this paranoid in other programming languages... you know, the
    > ones that take 1/10 the time to write, and run 100x slower, like VB.
    > Because you didn't have fine-grained control over the code, there were
    > fewer places to screw up something fundamental. But you paid for it in
    > loss of control, loss of speed, and a less stable development
    > environment, because you had to rely on OTHER programmers' buggy code.
    > If it's YOUR buggy code, you at least have a fighting chance of
    > figuring out what's wrong with it and fixing it.
    >
    > Uh, look there. A pearl of wisdom! Grab it and start your OWN book.
    > :p
    >


    --
    Send e-mail to: darrell at cs dot toronto dot edu
    Don't send e-mail to
    Darrell Grainger, May 10, 2004
    #8
  9. On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:

    > Martin Dickopp wrote:
    >> Daniel Sjöblom <_NOSPAM> writes:
    >>
    >>
    >>>Except you can almost *never* prove anything right by testing it. That
    >>>is a very dangerous mind set that is bound to screw you over sooner or
    >>>later. The only sure way to prove a program correct is to write a
    >>>mathematical proof.

    >>
    >>
    >> However, a mathematical proof created by a human can contain logical
    >> fallacies, so you also have to prove the proof correct.

    >
    > Actually no. A proof cannot contain logical fallacies. If it did, it
    > wouldn't be a proof :)


    Nonsense. If you call it a proof, I have to disprove your logic to
    contradict you. If I don't follow your logic, or if your logical method is
    wrong in a way we both miss, your proof is worthless yet we will both
    believe it is valid.

    Running a suite of tests is (or should be) trivial, and it should catch
    all obvious possible errors and all errors related to outliers in the
    possible input sets. If errors persist, the only reasonable way to catch
    them is through real-life testing and debugging. Which is what really
    counts anyway: Real-life performance, not ivory tower mathematics.

    In any case, how do we test airframes and cars and other physical objects?
    Crash testing, proving grounds, and other engineering methods. Mathematics
    only describes what we already know, and it can exhaustively search spaces
    we have already tested physically. In an absurd case, that might even be
    useful. Actual testing, on the other hand, tells us something new.

    --
    yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
    To email me, rot13 and convert spelled-out numbers to numeric form.
    "Makes hackers smile" makes hackers smile.
    August Derleth, May 11, 2004
    #9
  10. On Tue, 11 May 2004, August Derleth wrote:
    >
    > On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:
    > > Martin Dickopp wrote:
    > >> Daniel Sjöblom <_NOSPAM> writes:
    > >>> The only sure way to prove a program correct is to write a
    > >>> mathematical proof.
    > >>
    > >> However, a mathematical proof created by a human can contain logical
    > >> fallacies, so you also have to prove the proof correct.

    > >
    > > Actually no. A proof cannot contain logical fallacies. If it did, it
    > > wouldn't be a proof :)

    >
    > Nonsense. If you call it a proof, I have to disprove your logic to
    > contradict you.


    Calling a tail a proof doesn't make it one. :)

    -Arthur
    Arthur J. O'Dwyer, May 11, 2004
    #10
  11. Kamilche

    Kamilche Guest

    (Darrell Grainger) wrote in message news:<>...

    > I'd have to disagree with this. Far too often I have seen people use
    > testing to make themselves feel good about their code...


    > ... Second problem, how many total person hours went into
    > testing? Let's say 10 people testing every day for 8 hours. They test for
    > a month (20 business days) for a total of 1600 hours. You ship it and sell
    > 1000 copies. In 2 hours there has been more 'testing' by the customers
    > than you did in a month. There will be a lot of duplication with the
    > customers' use but within a month or two they will find things you did not.
    >
    > If the program was badly designed or requirements were not well defined,
    > I'd consider testing the same as using a bucket to keep a leaky boat from
    > sinking.


    You can't artificially inflate the numbers to support your argument,
    and use 'many customers will test for me' as a justification to get
    out of doing unit testing. Unit testing is done by the programmer, not
    users... and trying to 'pass the buck' to quality control or
    end-users, is one of the reasons the software industry is in such bad
    shape today. There's no way ten people SHOULD be testing every day for
    8 hours, for a month, just to make up for a lazy programmer who
    refuses to unit test. The programmer who wrote the code, should be the
    programmer who writes the unit tests, and the tests should run
    automatically, either at code compilation time, or once at startup.
    Live human testers should be reserved for system integration tests,
    the 'big picture' bugs that programmers don't often catch on their
    own.

    It sounds to me, like you're trying to feel good WITHOUT testing your
    code... and you really shouldn't. You say your code works? Prove it.
    When embedded in the source code, the test code serves double duty as
    a usage example, as well, so it certainly won't be wasted effort.
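
    One cheap way to arrange that (a sketch only; the SELF_TEST macro and
    clamp() are invented for the example, not anyone's actual project) is
    to compile the tests into the program and run them once at startup:

        #include <assert.h>
        #include <stdio.h>

        static int clamp(int v, int lo, int hi)
        {
            return v < lo ? lo : v > hi ? hi : v;
        }

        #ifdef SELF_TEST
        /* The tests double as usage examples for clamp(). */
        static void self_test(void)
        {
            assert(clamp( 5, 0, 10) ==  5);     /* in range      */
            assert(clamp(-1, 0, 10) ==  0);     /* low boundary  */
            assert(clamp(99, 0, 10) == 10);     /* high boundary */
            fputs("self tests passed\n", stderr);
        }
        #else
        static void self_test(void) { }
        #endif

        int main(void)
        {
            self_test();                        /* runs once at startup */
            printf("%d\n", clamp(42, 0, 10));
            return 0;
        }

    Build with -DSELF_TEST during development and without it for release,
    or just leave it on; the checks cost next to nothing here.
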
    Kamilche, May 11, 2004
    #11
  12. "Arthur J. O'Dwyer" <> writes:

    > On Tue, 11 May 2004, August Derleth wrote:
    >>
    >> On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:
    >> > Martin Dickopp wrote:
    >> >> Daniel Sjöblom <_NOSPAM> writes:
    >> >>> The only sure way to prove a program correct is to write a
    >> >>> mathematical proof.
    >> >>
    >> >> However, a mathematical proof created by a human can contain logical
    >> >> fallacies, so you also have to prove the proof correct.
    >> >
    >> > Actually no. A proof cannot contain logical fallacies. If it did, it
    >> > wouldn't be a proof :)

    >>
    >> Nonsense. If you call it a proof, I have to disprove your logic to
    >> contradict you.

    >
    > Calling a tail a proof doesn't make it one. :)


    True. So we can only conclude that no mathematical proofs are
    guaranteed to exist. Anything created by a human being [*] as a proof
    could contain logical fallacies, and is therefore not guaranteed to be
    a proof. It is therefore futile to argue for proving a program correct,
    if there is no guarantee that the result will indeed be a proof. :)

    Martin


    [*] This also applies recursively to machines etc. created by human
    beings.


    --
    ,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
    / ,- ) http://www.zero-based.org/ ((_/)o o(\_))
    \ `-' `-'(. .)`-'
    `-. Debian, a variant of the GNU operating system. \_/
    Martin Dickopp, May 11, 2004
    #12
  13. On Tue, 11 May 2004, Kamilche wrote:

    > (Darrell Grainger) wrote in message news:<>...
    >
    > > I'd have to disagree with this. Far too often I have seen people use
    > > testing to make themselves feel good about their code...

    >
    > > ... Second problem, how many total person hours went into
    > > testing? Let's say 10 people testing every day for 8 hours. They test for
    > > a month (20 business days) for a total of 1600 hours. You ship it and sell
    > > 1000 copies. In 2 hours there has been more 'testing' by the customers
    > > than you did in a month. There will be a lot of duplication with the
    > > customers' use but within a month or two they will find things you did not.
    > >
    > > If the program was badly designed or requirements were not well defined,
    > > I'd consider testing the same as using a bucket to keep a leaky boat from
    > > sinking.

    >
    > You can't artificially inflate the numbers to support your argument,
    > and use 'many customers will test for me' as a justification to get
    > out of doing unit testing. Unit testing is done by the programmer, not
    > users... and trying to 'pass the buck' to quality control or
    > end-users, is one of the reasons the software industry is in such bad
    > shape today. There's no way ten people SHOULD be testing every day for
    > 8 hours, for a month, just to make up for a lazy programmer who
    > refuses to unit test. The programmer who wrote the code, should be the
    > programmer who writes the unit tests, and the tests should run
    > automatically, either at code compilation time, or once at startup.
    > Live human testers should be reserved for system integration tests,
    > the 'big picture' bugs that programmers don't often catch on their
    > own.
    >
    > It sounds to me, like you're trying to feel good WITHOUT testing your
    > code... and you really shouldn't. You say your code works? Prove it.
    > When embedded in the source code, the test code serves double duty as
    > a usage example, as well, so it certainly won't be wasted effort.


    You have snipped a good deal of the original message. If you go back and
    re-read the message you will see that I never said that you should not
    test. I was trying to stress the point that testing just confirms your
    design and it is the design and requirements that are most important.

    The impression I received from the original posting was that with C
    language you had to test more to make sure you caught defects. I was
    saying that with C language the DESIGN is more important. Without design
    what are you testing? If you didn't do a design or didn't think about
    something in your design then how on earth are you going to think to test
    for it?

    Additionally, I was NOT saying that testing was pointless and you should
    just rely on quality control or the end user to test your product. The
    point I was trying to make was that if you rely on JUST testing and don't
    have a solid design to guide your testing you will fail. Random or even
    ad hoc testing will not work. The numbers I was giving were to show that if
    you rely on random or ad hoc testing, i.e. testing that is not guided by a
    good design, then you might as well leave it to the customer to test for
    you.

    When I code a function I design and code it with testing in mind. By the
    time the code is complete I already know all the tests that have to be
    run. Testing is a by-product that comes out of a good design.

    --
    Send e-mail to: darrell at cs dot toronto dot edu
    Don't send e-mail to
    Darrell Grainger, May 11, 2004
    #13
  14. On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
    <> wrote:

    >Running a suite of tests is (or should be) trivial, and it should catch
    >all obvious possible errors and all errors related to outliers in the
    >possible input sets.


    /Running/ the tests is indeed trivial. Defining all possible input sets is
    the tricky part. For most programmes, that's an infinite set.

    --
    Mark McIntyre
    CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
    CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>


    Mark McIntyre, May 11, 2004
    #14
  15. Martin Dickopp wrote:
    > "Arthur J. O'Dwyer" <> writes:
    >
    >
    >>On Tue, 11 May 2004, August Derleth wrote:
    >>
    >>>On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:
    >>>
    >>>>Martin Dickopp wrote:
    >>>>
    >>>>>Daniel Sjöblom <_NOSPAM> writes:
    >>>>>
    >>>>>>The only sure way to prove a program correct is to write a
    >>>>>>mathematical proof.
    >>>>>
    >>>>>However, a mathematical proof created by a human can contain logical
    >>>>>fallacies, so you also have to prove the proof correct.
    >>>>
    >>>>Actually no. A proof cannot contain logical fallacies. If it did, it
    >>>>wouldn't be a proof :)
    >>>
    >>>Nonsense. If you call it a proof, I have to disprove your logic to
    >>>contradict you.

    >>
    >>Calling a tail a proof doesn't make it one. :)

    >
    >
    > True. So we can only conclude that no mathematical proofs are
    > guaranteed to exist. Anything created by a human being [*] as a proof
    > could contain logical fallacies, and is therefore not guaranteed to be
    > a proof. It is therefore futile to argue for proving a program correct,
    > if there is no guarantee that the result will indeed be a proof. :)


    Bah. If you want to get metaphysical, you cannot be sure you are talking
    to me, so the correctness of a proof should be the least of your
    concerns wrt certainty :)

    More seriously, you are just being defeatist. Of course I can make a
    mistake in a proof, just like I can make many other mistakes in my life.
    Does that mean I should do none of the things I may fail at and stay in
    bed all day long instead? I'm frankly quite amazed at how many
    programmers believe that testing is a replacement for thinking up front
    (since you guys can't seem to stand the word proof.) How do you suppose
    a programmer writing his own unit tests will write tests to test stuff
    that he didn't think of while coding?

    And again, I'm not discouraging testing, I'm merely pointing out that it
    is not a silver bullet.
    --
    Daniel Sjöblom
    Remove _NOSPAM to reply by mail
    Daniel Sjöblom, May 11, 2004
    #15
  16. On Tue, 11 May 2004 22:27:52 +0100, Mark McIntyre wrote:

    > On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
    > <> wrote:
    >
    >>Running a suite of tests is (or should be) trivial, and it should catch
    >>all obvious possible errors and all errors related to outliers in the
    >>possible input sets.

    >
    > /Running/ the tests is indeed trivial. Defining all possible input sets is
    > the tricky part. For most programmes, that's an infinite set.


    Indeed, and that's /another/ reason why proofs are rather futile: A proof
    holds water only if you can either prove something about the complete set
    of possible inputs and build from there, or if you can construct a proof
    that will hold for all inputs no matter how cockeyed.

    Testing the outliers, on the other hand, is stress testing, which is done
    on a routine basis in every engineering shop worth its calipers. (You
    can't test every possible wind velocity, so you test the most extreme
    velocities you can until something breaks down. If the breakdown happens
    at a too-low velocity, you fix it. If the breakdown happens at an absurdly
    high velocity, you pat yourself on the back and record the tested limit
    for that component or system.)

    (Something else occurs to me: A proof is rather useless unless you can
    prove things about the hardware. Things can't be virtual all the way down.)

    --
    yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
    To email me, rot13 and convert spelled-out numbers to numeric form.
    "Makes hackers smile" makes hackers smile.
    August Derleth, May 12, 2004
    #16
  17. Richard Bos

    Richard Bos Guest

    Daniel Sjöblom <_NOSPAM> wrote:

    > I'm frankly quite amazed at how many
    > programmers believe that testing is a replacement for thinking up front
    > (since you guys can't seem to stand the word proof.)


    In practice, "proof" does not mean "thinking up front", it means
    "justifying a posteriori". Most program proofs I've seen were nothing
    but a display of the isomorphism of _this_ program to _that_ formal
    specification, which specification was then blithely assumed to be
    correct. Unfortunately, in general, that is inevitable; any proof will
    be more complex than the algorithm it "proves", and therefore less, not
    more, likely to be understandable and correct.

    Richard
    Richard Bos, May 12, 2004
    #17
  18. August Derleth <> wrote in message news:<>...
    > On Tue, 11 May 2004 22:27:52 +0100, Mark McIntyre wrote:
    > > On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
    > > <> wrote:


    > >>Running a suite of tests is (or should be) trivial, and it should catch
    > >>all obvious possible errors and all errors related to outliers in the
    > >>possible input sets.

    > >
    > > /Running/ the tests is indeed trivial. Defining all possible input sets is
    > > the tricky part. For most programmes, that's an infinite set.

    >
    > Indeed, and that's /another/ reason why proofs are rather futile: A proof
    > holds water only if you can either prove something about the complete set
    > of possible inputs and build from there, or if you can construct a proof
    > that will hold for all inputs no matter how cockeyed.


    proofs can deal with infinite things

    x^2 + y^2 = z^2

    is saying something about *all* right angled triangles. To test it
    would involve an infinite set of cases. (Ignoring for the moment
    that numbers are normally finite on computers).
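
    A brute-force check in C makes the contrast concrete (illustrative
    only; LIMIT is an arbitrary cut-off). However large you make LIMIT,
    the loops only ever sample a finite slice of the claim:

        #include <stdio.h>

        #define LIMIT 50        /* arbitrary bound on the legs */

        int main(void)
        {
            /* enumerate Pythagorean triples with both legs <= LIMIT;
               no finite enumeration like this one can substitute for
               a proof about *all* right triangles */
            for (int a = 1; a <= LIMIT; a++)
                for (int b = a; b <= LIMIT; b++)
                    for (int c = b; c <= 2 * LIMIT; c++)
                        if (a * a + b * b == c * c)
                            printf("%d^2 + %d^2 = %d^2\n", a, b, c);
            return 0;
        }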

    > Testing the outliers, on the other hand, is stress testing, which is done
    > on a routine basis in every engineering shop worth its calipers. (You
    > can't test every possible wind velocity, so you test the most extreme
    > velocities you can until something breaks down. If the breakdown happens
    > at a too-low velocity, you fix it. If the breakdown happens at an absurdly
    > high velocity, you pat yourself on the back and record the tested limit
    > for that component or system.)


    Software is discrete rather than continuous. If your bridge works at
    20 tons and 30 tons, it's likely OK at 25. Software isn't like this.
    What is odd is that those people with the wind tunnel don't despise
    mathematics, whilst much of the software world does.

    > (Something else occurs to me: A proof is rather useless unless you can
    > prove things about the hardware. Things can't be virtual all the way down.)


    There have been attempts at formal descriptions of hardware. There was
    even an attempt to prove a CPU formally correct (or construct one that
    was formally correct). Viper, I believe. I think the Transputer was
    partially proven correct (being partially non-formal is like being a
    little bit pregnant). I understand modern hardware is "compiled" from
    formal descriptions that look a bit like programming languages.


    --
    Nick Keighley
    Nick Keighley, May 12, 2004
    #18
  19. On Wed, 12 May 2004 04:51:12 -0600, in comp.lang.c , August Derleth
    <> wrote:

    >at a too-low velocity, you fix it. If the breakdown happens at an absurdly
    >high velocity, you pat yourself on the back and record the tested limit
    >for that component or system.)


    no, you reduce the cost^wrobustness of the materials a bit... :)


    --
    Mark McIntyre
    CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
    CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>


    Mark McIntyre, May 12, 2004
    #19
  20. Richard Bos wrote:
    > Daniel Sjöblom <_NOSPAM> wrote:
    >
    >
    >>I'm frankly quite amazed at how many
    >>programmers believe that testing is a replacement for thinking up front
    >>(since you guys can't seem to stand the word proof.)

    >
    >
    > In practice, "proof" does not mean "thinking up front", it means
    > "justifying a posteriori". Most program proofs I've seen were nothing
    > but a display of the isomorphism of _this_ program to _that_ formal
    > specification, which specification was then blithely assumed to be
    > correct. Unfortunately, in general, that is inevitable; any proof will
    > be more complex than the algorithm it "proves", and therefore less, not
    > more, likely to be understandable and correct.


    I cannot disagree with that.

    But assuming a proof is too complex, there are still some uses for the
    proof, correct or not:

    1. Because it's hard to write a big proof, the program will tend to be
    as modular as possible, so that each module can be proven separately.

    2. A proof by its very nature requires that all the possible input is
    defined (note, this is not equivalent to enumerating the input). This
    naturally leads to code that documents its behaviour and possible
    undefined behaviour on different inputs.

    3. Sooner or later, most software will have to be maintained. A proof
    can be very useful in assuring that a maintenance procedure
    (refactoring, dead code removal or whatever) does not break the code.

    4. In general, a proof will tend to be as straightforward as possible,
    leading to code that is straightforward and easy to maintain. I've
    seen much code that is three times longer (sometimes three times
    slower) than it should be, because certain conditions are asserted
    multiple times when they shouldn't be, or the author did not clearly
    understand the problem.

    5. Finally, simple or not so simple proofs are required in standard
    program optimizations, if they are to be safe.

    Testing will not do anything about issues 1, 2, 4 and 5 (5 possibly
    being the compiler's job instead).
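
    As an illustration of point 2 (the function and its bounds are
    invented for this sketch), "all the possible input is defined" tends
    to look like this in C:

        #include <assert.h>

        /* Domain: 0 <= n <= 12, because 13! overflows a 32-bit int.
           Behaviour outside that domain is defined here as an assertion
           failure rather than silent overflow. */
        static int factorial(int n)
        {
            int result = 1;

            assert(n >= 0 && n <= 12);
            for (int i = 2; i <= n; i++)
                result *= i;
            return result;
        }

        int main(void)
        {
            assert(factorial(0)  == 1);
            assert(factorial(12) == 479001600);
            return 0;
        }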

    I understand that for most projects there is no point in attempting a
    whole program proof. It is just too expensive and difficult, and often
    not even possible because there are factors that are outside the control
    of the programmers. I do think it is critical in software that can
    directly affect the lives of people.
    --
    Daniel Sjöblom
    Remove _NOSPAM to reply by mail
    Daniel Sjöblom, May 13, 2004
    #20