Real world coding standards implementation feedback


Noah Roberts

James said:
Define "documenting unit tests". A unit test doesn't document
anything. And passing a unit test doesn't prove readability.
Unit tests are an essential part of the development process (and
given the possible standards cited by the original poster, and
the environment from which they come, I'm rather certain he's
aware of this). But you still need guidelines for such basic
things as naming conventions, indentation, file organization...

Another agile development paradigm. The unit tests, since they cover
the entire API, serve as documentation for said API. If I want to know
how a particular class is meant to be used, I can look at the unit
tests. Of course, the unit tests often need some sort of documentation
of their own.

I don't totally disagree with the reasoning. Quite often the best way
to learn how to use something is to see it used. That's what the unit
tests do. I don't agree though that it works in all cases or for all
people. Thus we do unit tests AND Doxygen AND UML AND comments where
something is weird.
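
As a minimal sketch of the "tests as usage examples" idea (std::stack stands
in here for the class under test, and plain assert() for the test framework),
the test names and bodies double as documentation of how the API is used:

#include <cassert>
#include <stack>

// Each test reads as a small usage example for the container's API.
void test_new_stack_is_empty() {
    std::stack<int> s;
    assert(s.empty());
}

void test_push_then_top_returns_last_value() {
    std::stack<int> s;
    s.push(42);
    assert(s.top() == 42);
    assert(s.size() == 1);
}

int main() {
    test_new_stack_is_empty();
    test_push_then_top_returns_last_value();
}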
 

Noah Roberts

Ian said:
James Kanze wrote:

You don't, you set your editor to do it for you!

A common editor setup or pretty-print on check-in is the place to impose
such mechanical rules.

Except I've yet to see one that doesn't hose any formatting you've done
to make your template code readable.

Frankly, I think most developers are overly obsessed with formatting.
It doesn't actually matter that much. Space or no space before the
semicolon has no effect on readability, so it really doesn't matter.
The only thing a standard on that level does is keep people from being
annoyed by differences in penmanship, because that's all it is.

Naming conventions, on the other hand, do make a difference, since you
don't want to have to keep checking whether a function is camelCase or
spelled_with_underscores. Mixing that stuff in a project is a major
pain...but spaces, indentation, etc. are not a big deal.
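
A contrived sketch of that mixing, with invented names, just to show why it
grates:

// Three different styles in one (invented) class: every call site becomes
// a guessing game over which spelling a given member uses.
class Portfolio {
public:
    double totalValue() const { return 0.0; }    // camelCase
    double total_cost() const { return 0.0; }    // underscores
    double GetNetGain() const { return 0.0; }    // PascalCase with a Get prefix
};

int main() {
    Portfolio p;
    return static_cast<int>(p.totalValue() + p.total_cost() + p.GetNetGain());
}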

Coding Standard #0: "Don't sweat the small stuff."
 

Bart van Ingen Schenau

Phlip said:
What I meant was the Prime Directive of any coding standard should
_not_ be where to place the {}, it should be "test-first".

I strongly disagree with that. A coding standard should _not_ describe
the process of how you create your source code, be that "test first",
"design first", or "hack around".
The coding standard is there to document the agreements within the team
regarding naming conventions, code layout and (un-)acceptable coding
constructs.
And nobody
in TDD-land practices "we just fixed it until it finally works". If
your code is in a rut, you revert back to where it worked better, and
then start again.

Actually, the TDD process of
- write a failing testcase
- fix the code until all testcases pass again
- repeat.

sounds very much like "we just fixed it until it finally works",
although you would probably replace "fix" with "refactor" which does not
really change the meaning. :)
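
Compressed into a toy example (a hypothetical add() function, tested with
plain assert), one turn of that loop looks like this:

#include <cassert>

// One turn of the loop, compressed:
//   1. Write the failing test first (add() does not exist yet, so this
//      does not even link).
//   2. Write just enough of add() to make the test pass.
//   3. Refactor while the tests stay green, then repeat with the next test.
int add(int a, int b) { return a + b; }   // the "fix until it passes" step

int main() {
    assert(add(2, 2) == 4);    // the test that was written first
    assert(add(-1, 1) == 0);   // the next test, in the next iteration
}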

Bart v Ingen Schenau
 

Phlip

Bart said:
I strongly disagree with that. A coding standard should _not_ describe
the process of how you create your source code, be that "test first",
"design first", or "hack around".
The coding standard is there to document the agreements within the team
regarding naming conventions, code layout and (un-)acceptable coding
constructs.

I missed whether the OP requested technical or aesthetic style guides. The technical
guide should indeed be more nebulous, because it should always be "the best
practice you can learn for each situation."
Actually, the TDD process of
- write a failing testcase
- fix the code until all testcases pass again
- repeat.

sounds very much like "we just fixed it until it finally works",
although you would probably replace "fix" with "refactor" which does not
really change the meaning. :)

I look forward to pairing with you about that.
 

Noah Roberts

Bart said:
Phlip wrote:

Actually, the TDD process of
- write a failing testcase
- fix the code until all testcases pass again
- repeat.

sounds very much like "we just fixed it until it finally works",

What's wrong with "fix it until it works"? Do you propose some other
criterion for when to stop fixing? I generally stop fixing things when
they work and no sooner.

BTW, refactoring can only be done when the code is already working. You
stop refactoring when the change you need to perform satisfies the
open-closed principle.
 

Phlip

Jeff said:
What hell-hole have you been trapped in, where {} placement is still a
contentious topic?

Must you be so susceptible to teasing?
Note that "test first" is not the same thing as "test driven." "Test
first" is usually a good thing. "Test driven" is naive.

I was trying to state it technically.
 

Phlip

Noah said:
What's wrong with "fix it until it works"? Do you propose some other
criterion for when to stop fixing? I generally stop fixing things when
they work and no sooner.

BTW, refactoring can only be done when the code is already working. You
stop refactoring when the change you need to perform satisfies the
open-closed principle.

Nonono if you were as smart as me, you could think about the design first,
before you implement it. Then you wouldn't have to fix it, or refactor it,
afterwards!

(-:
 

joshuamaurice

Same comment that I have put in another form elsewhere, but this "It is
much better to die..." is only true if the potential consequences of
dying are less than the potential consequences of attempting to restore
a sane state. In the general case, that is probably correct. But in
each and every case... ?

Asserts are a rather hot topic. You're right that not everything
which someone calls an assert should kill the process. However, if I
roll my own hashmap class, I might put in an internal sanity check,
such as verifying that the cached size actually equals the number of
stored elements. If this sanity check fails, I want my process to die,
either so I can debug and fix the hashmap, or so I can find the memory
/ process corruption.
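
Roughly the shape of the check meant here; the toy container below is
invented for illustration, and only the invariant matters:

#include <cassert>
#include <cstddef>
#include <list>
#include <utility>
#include <vector>

// Toy hash map that keeps a cached element count next to its buckets.
class ToyHashMap {
    std::vector<std::list<std::pair<int, int> > > buckets_;
    std::size_t cached_size_;

    // Internal sanity check: the cached count must equal the number of
    // elements actually stored.  For a correct implementation this can
    // never fire; if it does, either ToyHashMap has a bug or memory has
    // been corrupted, and dying on the spot is the most useful outcome.
    void check_invariant() const {
        std::size_t counted = 0;
        for (std::size_t i = 0; i < buckets_.size(); ++i)
            counted += buckets_[i].size();
        assert(counted == cached_size_);
    }

public:
    ToyHashMap() : buckets_(16), cached_size_(0) {}

    void insert(int key, int value) {
        buckets_[static_cast<std::size_t>(key) % buckets_.size()]
            .push_back(std::make_pair(key, value));
        ++cached_size_;
        check_invariant();
    }

    std::size_t size() const { return cached_size_; }
};

int main() {
    ToyHashMap m;
    m.insert(1, 10);
    m.insert(2, 20);
    assert(m.size() == 2);
}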

Sometimes (graphics is the common example) doing something half right
is better than doing nothing at all. I find such an example contrived
at best. What exactly do you mean? If your check fails, I'd guess
there's something more wrong than a graphical display glitch.

And finally, for systems with a genuine requirement of never dying,
this is always accomplished with independent backups, or three
independent systems with a voting scheme, and never by "don't use
kill-the-process / debug-the-process asserts".
 

joshuamaurice

Maybe, but there should be tests that trigger those asserts.

Again, I think you're missing the point. If I roll my own hashmap
class and have an assert that the cached size actually equals the
number of stored elements, it's impossible to write a test case that
hits that assert for a correct implementation. Its usefulness lies only
in going from a possibly broken implementation to a correct one.

I'm not saying you shouldn't unit test it. Yes, you should unit test
it. However, it is essentially impossible to write a test that triggers
such an assert without that implying there is a bug in your code. That
is the definition of an assert in programming circles, though people
seem apt to redefine it as something else.
 

Ian Collins

Jeff said:
If you can provide some input that causes an assertion to fail, then by
definition, it should not have been an assertion in the first place.

Not true. We had a number of units rebooting on an assert in a driver
(something like asserting that data was ready when the data read bit
was set), which turned out to be a hardware bug. The assert was
justified: the device guaranteed data under that condition, and the
assert proved it was breaking that guarantee.
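
The shape of it was roughly this; the register and bit names below are
invented, not the actual driver code:

#include <cassert>

// Hypothetical status bits for the device.
const unsigned READ_REQUESTED = 0x1u;
const unsigned DATA_READY     = 0x2u;

// Stub standing in for a memory-mapped register read.
unsigned read_status_register() { return READ_REQUESTED | DATA_READY; }

void poll_device() {
    unsigned status = read_status_register();
    if (status & READ_REQUESTED) {
        // The data sheet guarantees DATA_READY is set once a requested
        // read has completed.  The assert encodes that contract; on the
        // faulty units it fired, which proved the hardware was breaking
        // its own guarantee rather than the driver being wrong.
        assert(status & DATA_READY);
        // ... consume the data ...
    }
}

int main() { poll_device(); }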
[1] The code is simply not reachable. It doesn't exist for the compiler,
only for the human beings who read the code.

Can you provide an example?
 

Phlip

Jeff said:
You have a habit of advocating your favorite ideas by comparing them
with straw men.

I also have a habit of getting fired for attempting to enforce valid practices,
such as unit testing, and then nobody here believing me. Yes, folks, it happens!
 

Alf P. Steinbach

* Phlip:
I also have a habit of getting fired for attempting to enforce valid
practices, such as unit testing, and then nobody here believing me. Yes,
folks, it happens!

Ouch.

Is this true?

Perhaps a good guideline is: never take a development job with a boss from a
firm that doesn't have a dedicated Quality Assurance team. :)


Cheers & hth.,

- Alf (the "do as I say, not as I do" man)
 

peter koch

I am with you until this last point. Here, however, I have to disagree
that the only correct answer to a programming error is to terminate the
program. This is domain and requirement specific. If, for example, you
are writing cash point software, you may decide that in case of a
programming error, the best thing to do is to terminate the program,
not wanting to risk corrupting anything more than may already have been
corrupted. This might very well be the most common "best" answer to a
software error.

However, if you are writing, say, landing software for a
fly-by-wire Airbus, maybe terminating the software and making the
plane impossible to control would not be the correct answer.

Yannick

Which is why you have redundancy. A possible solution is to have three
systems: two identical systems in a master-slave configuration, and an
independent backup system, typically with far fewer features, stepping
in if both master and slave die.
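
A very rough sketch of the voting idea, reduced to a single reading; real
systems vote on far richer state and add health monitoring, so this only
shows the shape:

#include <cmath>

// 2-out-of-3 style vote on a single reading: if the two primary channels
// agree (within a tolerance), use them; if they disagree, fall back to the
// simpler, independent backup channel.
double vote(double master, double slave, double backup) {
    const double tolerance = 1e-6;
    if (std::fabs(master - slave) <= tolerance)
        return (master + slave) / 2.0;
    return backup;   // primaries disagree: trust the independent backup
}

int main() {
    double altitude = vote(1000.2, 1000.2, 998.0);
    return altitude > 0.0 ? 0 : 1;
}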

/Peter
 

Ian Collins

Jeff said:
If the processor is hosed, then you still can't count on the code
behaving properly, no matter how much unit testing you've done.

Hence the assert.
[1] The code is simply not reachable. It doesn't exist for the
compiler, only for the human beings who read the code.

Can you provide an example?

if (condition) {
    // ...
} else {
    assert(!condition);
    // ...
}

That code would exist.
 

joshuamaurice

Jeff said:
Ian said:
Jeff Schwab wrote:
Ian Collins wrote:
Maybe, but there should be tests that trigger those asserts.
[1] The code is simply not reachable.  It doesn't exist for the
compiler, but for the human beings who read the code.
Can you provide an example?
if (condition) {
    // ...
} else {
    assert(!condition);
    // ...
}

That code would exist.

It depends on what you mean. That assert will not make it into the
compiled code with basic optimizations on, for a simple enough
"condition" like a plain bool. The compiler will come along, expand the
assert, see the if from the expanded assert, do some simple flow
analysis, and eliminate it as dead code.

All basic sanity-check asserts and asserts on internal invariants are
like this. They are effectively dead code for a correct implementation.
Only when the code is broken (or when the condition is complex enough
to escape compiler optimization) do they make it into the compiled
executable, and once they're hit, the code is fixed and the assert is
again eliminated as dead code.
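
In other words, roughly what the compiler sees once the macro has been
expanded (the real expansion is implementation-specific; std::abort stands in
here for the assert handler):

#include <cstdlib>

// In the else branch the compiler already knows that condition is false,
// so the inner if can never be taken: it is provably dead and a basic
// optimizer removes it.  Only broken code, or a condition too complex for
// the analysis, leaves the check in the executable.
void f(bool condition) {
    if (condition) {
        // ...
    } else {
        if (condition) {   // what assert(!condition) boils down to
            std::abort();
        }
        // ...
    }
}

int main() {
    f(true);
    f(false);
}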
 

James Kanze

Another "agile development" term.


Looks like the first Chapter, a sample refactoring session, is
at least partially there.

I know what refactoring normally means. If, in the linked-to
document, it means what it normally means, then it's wrong.
When I refactor code into a function, I remove the original
code, replacing it with a call to the function. When I insert an
assert to enforce a documented pre-condition, I definitely
should not remove the documentation of the pre-condition.

[...]
There's nothing that says you need to be doing agile
development to use these techniques or concepts.

In fact, many of the "agile development" techniques are just a
relabeling of what was common practice 30 or 40 years ago.
The idea of refactoring, as applied in agile development, does
require full unit test coverage though.

No more so than anything else. You always need full unit test
coverage. (Again, that was an established fact long before
"agile" came into fashion.)
The whole point is that you change your code and run the same
tests that existed before...thus telling you that you probably
haven't broken anything.

Yes. Of course, the tests aren't sufficient in themselves; you
also need code review. But the tests are necessary.
 

James Kanze

I am with you until this last point. Here, however, I have to
disagree that the only correct answer to a programming error
is to terminate the program. This is domain and requirement
specific. If, for example, you are writing cash point
software, you may decide that in case of a programming error,
the best thing to do is to terminate the program, not wanting
to risk corrupting anything more than may already have been
corrupted. This might very well be the most common "best"
answer to a software error.

I know. There are exceptions. (IIRC, game software is a good
example.)
However, if you are writing, say, landing software for a
fly-by-wire Airbus, maybe terminating the software and making
the plane impossible to control would not be the correct
answer.

If you're writing anything critical like that, terminating the
program as quickly as possible is a requirement, so that the
back-up system can take over. It was working on such systems
that taught me the importance of terminating the program in
such cases.
 

James Kanze

On 20 May, 18:58, (e-mail address removed) (Yannick Tremblay) wrote:

[...]
Which is why you have redundancy. A possible solution is to have three
systems: two identical systems in a master-slave configuration, and an
independent backup system, typically with far fewer features, stepping
in if both master and slave die.

Not identical. In avionics, it's typically a requirement that
the two systems in the master-slave configuration be written by
two different, independent teams, and in at least one case I know
of, that they be programmed in different languages.
 

James Kanze

Another agile development paradigm. The unit tests, since
they cover the entire API, serve as documentation for said
API.

And that, of course, is just bullshit. A trumped-up excuse for
not writing the documentation. Unit tests are unit tests.
Documentation is documentation. They serve two different roles,
and are two very different things.
If I want to know how a particular class is meant to be
used, I can look at the unit tests. Of course, the unit tests
often need some sort of documentation of their own.

And of course, trying to figure out what the exact pre- and
post-conditions of a function are from unit tests is a herculean
task, whereas one line of English text may express them clearly
and concisely.
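
For comparison, the kind of one-line contract meant here, written as
Doxygen-style \pre and \post comments on an invented container; recovering
the same two lines from a pile of unit tests takes far more reading:

#include <cassert>
#include <cstddef>
#include <vector>

// Invented container, used only to carry the contract comments.
class IntStack {
    std::vector<int> data_;
public:
    /// Pushes \p value onto the stack.
    /// \post size() has grown by one and top() == value.
    void push(int value) { data_.push_back(value); }

    /// Removes the top element.
    /// \pre  !empty()   (enforced with an assert in this sketch)
    /// \post size() has shrunk by one.
    void pop() { assert(!empty()); data_.pop_back(); }

    bool empty() const { return data_.empty(); }
    std::size_t size() const { return data_.size(); }
    int top() const { assert(!empty()); return data_.back(); }
};

int main() {
    IntStack s;
    s.push(1);
    s.pop();
}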
I don't totally disagree with the reasoning. Quite often the
best way to learn how to use something is to see it used.

That's something else. Although you'd certainly want your unit
tests to be more complete than just a simple example.
That's what the unit tests do. I don't agree though that it
works in all cases or for all people. Thus we do unit tests
AND Doxygen AND UML AND comments where something is weird.

Exactly.

If I have to maintain the code, I'm definitely going to look at
the unit tests---I'll almost certainly have to expand them. If
I'm just a user, however, I shouldn't have to look at any code
(except maybe, as you say, some simple examples---but most
classes should be simple enough that you don't need those).
 
