Real world coding standards implementation feedback


daniel.coudriet

Hello all,

we are in the process of selecting / devising a C++ coding standard to
use internally at our company, or to be able to provide feedback to our
customers about.

We are looking to settle on a commonly accepted standard in the
embedded software industry and, while we are seriously considering
both MISRA C++ 2008 and JSF++ Rev. D as likely candidates, we would
like to assess the relevance and adoption of those standards for general
purpose embedded systems development.

Hence my question: have you or someone within your organization
already deployed such a standard, and, should this be the case, what
were the pitfalls you faced in doing so?

TIA.

Best regards,
 

joshuamaurice

daniel.coudriet said:
Hello all,

we are in the process of selecting / devising a C++ coding standard to
use internally at our company, or to be able to provide feedback to our
customers about.

We are looking to settle on a commonly accepted standard in the
embedded software industry and, while we are seriously considering
both MISRA C++ 2008 and JSF++ Rev. D as likely candidates, we would
like to assess the relevance and adoption of those standards for general
purpose embedded systems development.

Hence my question: have you or someone within your organization
already deployed such a standard, and, should this be the case, what
were the pitfalls you faced in doing so?

I've found that spending too much time on a coding standard places more
emphasis on following the coding standard than on writing good code. Be
careful that the coding standard doesn't become its own end.

Also, for "statically strongly typed" languages like C++, the
consensus on these boards is to avoid Hungarian notation and instead
use meaningful names.
 

Phlip

daniel.coudr... said:
we are in the process of selecting / devising a C++ coding standard to
use internally at our company, or to be able to provide feedback to our
customers about.

How about "every line of code, every branch, and every argument shall have
documenting unit tests"?
 

Phlip

Jeff said:
100% code coverage means you screwed something up. Any non-trivial
project contains unreachable code. That code exists only to handle
assertion failures, unexpected default cases in switch statements, etc.
It's never meant to be executed; it's only there as documentation.

That's where the bugs roll in from the field - the program just threw an
exception and died. The programmer thought nobody could reach a given block,
and just wrote a throw 'death' there. Then the program got there.

Branching into untested error handling code is the oldest trick in the book for
getting your pager to go off at night...
 

Phlip

Jeff said:
100% code coverage means you screwed something up. Any non-trivial
project contains unreachable code. That code exists only to handle
assertion failures, unexpected default cases in switch statements, etc.
It's never meant to be executed; it's only there as documentation.

Profound irony: while some platforms are indulging in Mock Abuse - using mocks
to avoid improving their code through refactoring, despite TDD...

...Some other platforms might not be using enough mocks for one of their actual
purposes: simulating hardware errors so you can TDD their handlers.
 

joshuamaurice

Phlip said:
Profound irony: while some platforms are indulging in Mock Abuse - using mocks
to avoid improving their code through refactoring, despite TDD...

...Some other platforms might not be using enough mocks for one of their actual
purposes: simulating hardware errors so you can TDD their handlers.

I hope you're not saying that c-style asserts killing the process have
no place in real code. I would disagree with you, sir. Are you?
 

andreas.koestler

Phlip said:
My highest mission is to disagree (accurately) with anything anyone says on the
interthing. Even when I can't decipher their polarities.

However, we might possibly find common ground somewhere in here:

   http://c2.com/cgi/wiki?DoNotUseAssertions

I added the following verbiage, where "assertion" specifically meant the C
assert.h macro:

   Step zero: Refactor comments into assertions
   Step one: Refactor assertions out of the code into unit tests
   Step three: Escalate the remaining assertions into program exceptions

I'm aware the count is off.

So you see that software's state is a flow: at any moment a piece of software
could conceivably have an assert() in live code, in debug mode, possibly with a
surprise awaiting the end-users in production mode. Yet the flow of software
development must always trend away from that situation...

I had to make a decision on whether to use the MISRA or JSF coding
standard (purely due to regulatory reasons) for a safety-critical
project. Imho MISRA is not as mature and thought-through as JSF is.
Keep in mind though that they both contain 200+ rules, so unless you're
using a static code analysis tool (like the LDRA tool suite) you will
have trouble enforcing the coding standard. If coding standard
compliance is critical, this might become a problem.
 

Ian Collins

Jeff said:
100% code coverage means you screwed something up. Any non-trivial
project contains unreachable code.

Any badly implemented project contains unreachable code. If code is
unreachable, it should never get written, and if it does, the compiler
and/or lint will complain.
That code exists only to handle
assertion failures, unexpected default cases in switch statements, etc.
It's never meant to be executed; it's only there as documentation.

But it can and should be tested. One day a hapless maintainer is going
to change something that causes the code to be run.
 

joshuamaurice

Ian Collins said:
Any badly implemented project contains unreachable code. If code is
unreachable, it should never get written, and if it does, the compiler
and/or lint will complain.

But it can and should be tested. One day a hapless maintainer is going
to change something that causes the code to be run.

I think the claim meant that c-style asserts used to assert that an
internal class invariant is true results in code which is never
executed. The invariant is always true, so the code which brings up
the debugger or terminates when the assert fails is unreachable for
correct code.

It is my belief that given equal resources to write the code, one can
write more robust code by using c-style asserts as sanity checks and
for class invariants. These are the kind of checks which are very
useful for development, to catch programmer bugs. The alternative is
basically unthinkable: to allow every single method everywhere to
return a failure code if an internal invariant is broken. Allowing
everything to return an error code (or throw an exception) introduces
much complexity for asserts which are basically untrippable in correct
code. That is the claim made.

The claim is not that we should use c-style asserts to validate user
input, input from untrusted sources, or input to a general-purpose
library (though in select cases that may be appropriate as well, such
as when the mutex library detects a deadlock). The argument is that
c-style asserts are an incredibly useful tool when used to verify
invariants which are only modifiable from a small number of places,
such as an internal class invariant.

Finally, note that I very much oppose returning an error code or
throwing an exception when such a c-style assert is hit for C or C++
code. If such an assert is triggered, it's quite possible that memory
corruption or some other kind of corruption has destroyed your
process. It is much better to die, possibly by going into a debuggable
state, than it is to continue along in a corrupted state. Continuing
along when in a corrupted state risks corruption of user data, etc.
 

dancoud

Hello Phlip,

your statement is relevant; however, it somehow misses the point, as
both your suggestions and the enforcement of a well thought-out coding /
design standard contribute to improving software quality and software
engineering efficiency.

I would rather avoid failing a test altogether thanks to proper
coding / design practices than make sure there is 100% U.T. coverage
and fix my broken code afterwards, i.e. when something fails during
testing (be it Unit, Integration or V&V testing).

What's more, coding standards, when properly designed and enforced,
have an educational purpose that 100% test coverage does not exhibit:
they tend to lead to good code which works, and not only code that
works because, though it might have been poorly devised, there was
such a test effort that "we fixed it so that it finally works".

In the end, both approaches are complementary and useful; I am just
trying to figure out the coding standard aspect at the moment,
testing being addressed separately.

Best regards,
 

dancoud

Hello,

I do agree with your point regarding Hungarian notation.

As to a coding standard being an end in itself, we (within my
organization) are well aware of that pitfall and are focused on
helping software engineers produce better code rather than getting good
metrics w/o a purpose.

Regarding not spending too much effort on devising a coding standard,
my whole point is there: I'd like to leverage existing experience /
feedback stemming from actual deployment of such coding standards to
be able to make a quick decision with a reasonable likelihood that the
selected standard will prove relevant and effective for the kind of
software we customarily engineer.

Best regards,
 

Ian Collins

joshuamaurice said:
I think the claim meant that c-style asserts used to assert that an
internal class invariant is true results in code which is never
executed. The invariant is always true, so the code which brings up
the debugger or terminates when the assert fails is unreachable for
correct code.

Maybe, but there should be tests that trigger those asserts.
 

Ian Collins

dancoud wrote:
[please don't top-post]
> I do agree with your point regarding Hungarian notation.
>
> As to a coding standard being an end in itself, we (within my
> organization) are well aware of that pitfall and are focused on
> helping software engineer produce better code rather than getting good
> metrics w/o a purpose.

A good start.
> Regarding not spending to much effort into devising a coding standard,
> my whole point is there: I'd like to leverage existing experience /
> feedback stemming from actual deployment of such coding standards to
> be able to make a quick decision with a reasonable likelihood that the
> selected standard will prove relevant and effective for the kind of
> software we customarily engineer.

Some organisations get too hung up on petty details in coding standards.
My last team started off with a 20-odd page standard (written by me)
and ended up with a wiki best-practice page.

After a relatively short time, a common standard just evolved, they were
happy with it and I was happy to ditch my document. As long as all the
code looks like it had been written by the same person and can be
understood by all the team, the need for petty rules vanishes.

The most important things to document are agreed dos and don'ts.
 

James Kanze

Define "documenting unit tests". A unit test doesn't document
anything. And passing a unit test doesn't prove readability.
Unit tests are an essential part of the development process (and
given the possible standards cited by the original poster, and
the environment from which they come, I'm rather certain he's
aware of this). But you still need guidelines for such basic
things as naming conventions, indentation, file organization...
Jeff said:
100% code coverage means you screwed something up. Any
non-trivial project contains unreachable code. That code
exists only to handle assertion failures, unexpected default
cases in switch statements, etc.
It's never meant to be executed; it's only there as documentation.

That's true to a point, but an assertion is more than
documentation---a failed assertion means that there is an error
elsewhere in the code. And while I don't normally plan it that
way, the failed case in the assertion often ends up being
"tested" before the working case.
 

James Kanze

Phlip said:
My highest mission is to disagree (accurately) with anything
anyone says on the interthing. Even when I can't decipher
their polarities.
However, we might possibly find common ground somewhere in
here:

The author of which obviously doesn't understand software
engineering, or reliability issues.
I added the following verbiage, where "assertion" specifically
meant the C assert.h macro:
Step zero: Refactor comments into assertions

Good idea when possible. Preconditions, for example, should be
expressed as assertions, when possible. Sometimes, it's not
practical for performance reasons. Something like a binary
search requires that the array passed to it be sorted; asserting
this more or less defeats the purpose of the binary search,
since it requires linear execution time. And of course, some
things simply cannot be verified from within the program. (A
precondition of, say, std::vector<>::push_back is that no other
thread is currently accessing the vector; that if other threads
can access the vector, then the accesses must be externally
synchronized.)

I'm not sure about your use of "refactor" here, though. Just
because you add assertions (in the implementation of the
function, thus in the source file) doesn't mean you should
remove the comments (in the header file).
Step one: Refactor assertions out of the code into unit tests

Again, I question the use of the word "refactor". And I'm not
too sure what you're really asking for: client code of the
function with assertions should definitely have unit tests to
ensure that it never triggers the assertion. But testing isn't
perfect, shit happens, and the assertion should stay in so that
it triggers if an error occurs after delivery.
Step three: Escalate the remaining assertions into program exceptions

If you're talking about C++ exceptions here, this is definitely
wrong. An assertion is a check for a possible software error.
If there is a software error, you want to terminate the program
as quickly as possible, executing as little additional code as
possible. The last thing you want to do is a stack walkback,
calling destructors on possibly corrupt objects.
I'm aware the count is off.
So you see that software's state is a flow: at any moment a
piece of software could conceivably have an assert() in live
code, in debug mode, possibly with a surprise awaiting the
end-users in production mode. Yet the flow of software
development must always trend away from that situation...

If the code has been carefully tested and reviewed, the
probability of an assertion triggering in the field is very,
very low. But you still have to keep it in there, just in
case---no process is perfect.
 

James Kanze

dancoud wrote:

[...]
After a relatively short time, a common standard just evolved,
they were happy with it and I was happy to ditch my document.
As long as all the code looks like it had been written by the
same person and can be understood by all the team, the need
for petty rules vanishes.

A written standard is still useful for new programmers, so that
they know what is expected of them.
The most important things to document are agreed dos and
don'ts.

Exactly. (And if all of the code looks like it was written by
the same person, there are probably more of those than you
think. Some of them pretty petty: do you put a space before a
semi-colon or not?)
 

Ian Collins

James said:
dancoud wrote:
[...]
After a relatively short time, a common standard just evolved,
they were happy with it and I was happy to ditch my document.
As long as all the code looks like it had been written by the
same person and can be understood by all the team, the need
for petty rules vanishes.

A written standard is still useful for new programmers, so that
they know what is expected of them.
The most important things to document are agreed dos and
don'ts.

Exactly. (And if all of the code looks like it was written by
the same person, there are probably more of those than you
think. Some of them pretty petty: do you put a space before a
semi-colon or not?)

You don't, you set your editor to do it for you!

A common editor setup or pretty-print on check-in is the place to impose
such mechanical rules.
 

Phlip

dancoud said:
I would rather avoid failing a test altogether thanks to proper
coding / design practices

I'm pimping Test-Driven Development. Testing is not a phase, it is how you write
the code. So many developer tests will fail, just before you write the code to
pass them. You are testing the tests, at the most valuable point in the process.

That, in turn, has a profound effect on design quality.
What's more, coding standards, when properly designed and enforced,
have an educational purpose that 100% test coverage does not exhibit:
they tend to lead to good code which works, and not only code that
works because, though it might have been poorly devised, there was
such a test effort that "we fixed it so that it finally works".

What I meant was the Prime Directive of any coding standard should _not_ be
where to place the {}, it should be "test-first". And nobody in TDD-land
practices "we just fixed it until it finally works". If your code is in a rut,
you revert back to where it worked better, and then start again.
 

dancoud

Phlip said:
I'm pimping Test-Driven Development. Testing is not a phase, it is how you write
the code. So many developer tests will fail, just before you write the code to
pass them. You are testing the tests, at the most valuable point in the process.

That, in turn, has a profound effect on design quality.


What I meant was the Prime Directive of any coding standard should _not_ be
where to place the {}, it should be "test-first". And nobody in TDD-land
practices "we just fixed it until it finally works". If your code is in a rut,
you revert back to where it worked better, and then start again.

Phlip,

I am not ditching TDD (actually I would advocate it).

Nevertheless my initial question still holds: do you guys, out there
in Embedded Software land, have feedback to provide about the real
world deployment of either MISRA C++ 2008 and / or JSF++ in your R&D
organizations?

I am not intending to advocate coding standards over other QA
practices, but merely to figure out adoption levels and known pitfalls
of well-defined ones.

BTW, if you look at either one, you'll soon realize that there is much
more at stake there than where to place a curly brace ...

Best regards,
 

Noah Roberts

James said:
Again, I question the use of the word "refactor".

Another "agile development" term.

http://en.wikipedia.org/wiki/Code_refactoring
http://www.refactoring.com/

http://books.google.com/books?id=1M...5vHnDQ&sa=X&oi=book_result&ct=result&resnum=4

Looks like the first Chapter, a sample refactoring session, is at least
partially there.

It's probably a rather common technique even among those who don't know
what it's called. I was using it before I learned there were actually people
studying how to do it more effectively and safely.

You should also google "code smell". Smells are often tied to the
refactor that is meant to get rid of them.

There's nothing that says you need to be doing agile development to use
these techniques or concepts. The idea of refactoring, as applied in
agile development, does require full unit test coverage though. The
whole point is that you change your code and run the same tests that
existed before...thus telling you that you probably haven't broken anything.
 
