Coding skills


Kelsey Bjarnason

[snips]

I've been around for some time, and I have yet to see a single case where
a customer (external or internal) knows upfront what requirements she
actually has.
It is a sign of maturity in a programmer to anticipate (and suggest!)
changes in specifications, and write code with guards against
attempted violations.

Sure. Now let me ask you this: how often do *you* write your code to,
oh, cache all the local OS files so that if something goes wrong, you can
recover?

Right, you don't. Part of the contract to which the software is designed
is that it has a functioning OS. It's not the application's job to ensure
this.
If things /really/ work, the skills have been adequate (by definition).
Not every parking garage has to be an architectural marvel. None should
collapse under its own weight.
Coding skills stand out immediately in e.g. code reviews.

This assumes the reviewers know what they're looking for, and at, which
is not always the case.

Take the case in question: every predictable failure - server outages,
for example - is dealt with in the code, which is *very* robust about
such things.

So what caused our failure? Was it a server outage? No. It was someone
breaking the contract. The code is designed a particular way, based on a
particular flow of data through the system. That flow was tampered with,
with undesirable results. Yet normal operation - "normal" including
all the failure modes that can actually be expected in that environment -
causes no such problem.

So, is this a case of a bad design, bad code? No; it's a case of someone
violating the contract to which the code is written. So what'll a code
review show? Oh, yes, the code does not deal with the case of someone
with admin level access injecting invalid data into the system. Well, of
course not; it wasn't designed to, as in normal operation it does not
*get* invalid data, and in every error mode which can be predicted, it
simply gets no data at all.

You cannot prevent all failures; you can only prevent the ones which are
predictable. Nor can you even *detect* all failures, only the ones which
stand out in some algorithmically detectable way - but even there, the
effort to do so may simply not be worth it: if it requires some thumb-
fingered twit to manually mess things up in the first place, do you spend
an extra three weeks writing code to deal with that, or do you just tell
said thumb-fingered twit "don't do that"?
 

Ark Khasin

Kelsey said:
[snips]
changes in specifications, and write code with guards against
attempted violations.

Sure. Now let me ask you this: how often do *you* write your code to,
oh, cache all the local OS files so that if something goes wrong, you can
recover?

Right, you don't. Part of the contract to which the software is designed
is that it has a functioning OS. It's not the application's job to ensure
this.
Disputable. Haven't you seen application-level workarounds for known OS
err... quirks? Including inspection of OS version etc.?
If you /assume/ a functioning OS, the OS is included in your product V&V.
It's trivially a quality issue. A quality level acceptable for a word
processor is different from that of an aircraft engine controller. But
you knew that.
This assumes the reviewers know what they're looking for, and at, which
is not always the case.
We aren't talking just pro forma reviews, are we?
Take the case in question: every predictable failure - server outages,
for example - is dealt with in the code, which is *very* robust about
such things.

So what caused our failure? Was it a server outage? No. It was someone
breaking the contract. The code is designed a particular way, based on a
particular flow of data through the system. That flow was tampered with,
with undesirable results. Yet normal operation - "normal" including
all the failure modes that can actually be expected in that environment -
causes no such problem.

So, is this a case of a bad design, bad code? No; it's a case of someone
violating the contract to which the code is written. So what'll a code
review show? Oh, yes, the code does not deal with the case of someone
with admin level access injecting invalid data into the system. Well, of
course not; it wasn't designed to, as in normal operation it does not
*get* invalid data, and in every error mode which can be predicted, it
simply gets no data at all.
I would suggest separating the discussion of bad designs from that of bad code.
I've seen horrific code implementing a sensible design (the thing
magically works but isn't maintainable) and elegant code implementing an
idiotic design (the thing doesn't work, but programmers' artifacts are
salvaged for the next spin of the design).
You cannot prevent all failures; you can only prevent the ones which are
predictable. Nor can you even *detect* all failures, only the ones which
stand out in some algorithmically detectable way - but even there, the
effort to do so may simply not be worth it: if it requires some thumb-
fingered twit to manually mess things up in the first place, do you spend
an extra three weeks writing code to deal with that, or do you just tell
said thumb-fingered twit "don't do that"?
Think of a deliberate attack on your system. How clever must an attacker
be to break it? How protective should you be? Is the corresponding
decision-making a part of the design process? Is it a part of the
`contract', whatever the term means?
There is a practitioner's technique - FMEDA - not ideal but sufficiently
practical, which includes brainstorming on everything that can possibly
go wrong. Then you classify all failures as detected or undetected in
your design. You do it consciously. If your work affects the lives or
well-being of people (e.g., managing medical records or warehousing of
warheads), you additionally classify failures as dangerous or not. You
then explicitly accept a tolerable risk level for each class of failures
as your design input.
If that becomes part of the `contract' and is accepted by your customer,
then indeed breaking it is no fault of yours.
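
For what it's worth, here is a toy sketch in C of the kind of
bookkeeping that comes out of such an exercise - the failure modes and
rates are invented for illustration, not taken from any real standard
or project:

#include <stdio.h>

/* Each brainstormed failure mode is classified up front; the accepted
   rates, especially for the undetected classes, are what the customer
   signs off on as a design input. */
enum detection { DETECTED, UNDETECTED };
enum severity  { BENIGN, DANGEROUS };

struct failure_mode {
    const char     *description;
    enum detection  det;
    enum severity   sev;
    double          tolerable_rate;   /* accepted failures per hour */
};

static const struct failure_mode table[] = {
    { "server outage",                        DETECTED,   BENIGN,    1e-3 },
    { "malformed record rejected on read",    DETECTED,   BENIGN,    1e-4 },
    { "plausible bad data injected by admin", UNDETECTED, DANGEROUS, 1e-7 },
};

int main(void)
{
    size_t i;

    for (i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-40s %s, %s, tolerable rate %g/h\n",
               table[i].description,
               table[i].det == DETECTED  ? "detected"  : "undetected",
               table[i].sev == DANGEROUS ? "dangerous" : "benign",
               table[i].tolerable_rate);
    return 0;
}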
 

Kelsey Bjarnason

[snips]

Disputable. Haven't you seen application-level workarounds for known OS
err... quirks?

Sure. Now, in Windows terms, delete the registry and see how well things
work. Oh, wait, they don't. So is an application obligated to cache
the entire registry and be able to reinstall it as needed? Which would
also mean every application runs at the highest privilege levels?

No, that's silly. The application's job is to _do its job_, not to
babysit the OS.

Think of a deliberate attack on your system. How clever must an attacker
be to break it?

In our case, very. He has to be inside the LAN. Not just the normal
office LAN, either, but inside a private sub-LAN to which only two
machines have access; one is not on the main LAN at all, the other is
well secured.
How protective should you be?

Against admins who have and need access to do their jobs? The very
people who are paid to administer those systems? Protecting yourself
from the very people paid to work on the systems seems a bit silly.
Is the corresponding
decision-making a part of the design process? Is it a part of the
`contract', whatever the term means?

You're unfamiliar with the concept? I'll explain it.

A program is expected to do certain things, certain ways. In order to do
that, it needs certain basic guarantees: a machine to run on, with a
working OS, power, whatever usernames and passwords and the like are
required, the ability to connect to other machines it needs to
communicate with and so forth.

It also needs to know other conditions it will deal with. If it is a
world-facing program, it needs to know that, as the security issues are
different from those applying to a machine which is locked in a vault
with no network access.

It also needs to know other issues, such as "the power here goes down for
four hours every Thursday night" or "sometimes the AC fails and when it
does, the UPS goes into thermal shutdown, taking the system with it."

It also needs to know what it is expected to do, and how.

Collectively, these are the "contract". The program honours its side of
the contract by doing what it's supposed to do, correctly, while coping
with the known failure conditions, as well as any unknown but predictable
failure conditions. It should be able to, say, cope with an unexpected
power failure, if the nature of the application is such that a power
failure would have unacceptable consequences.

The flip side of that is that the program needs the contract to be
honoured by the environment in which it works. Thus the occasional
unscheduled power outage, or the network going down now and then, or a
server it needs to talk to failing to respond, these are predictable
failures.

Some bonehead coming along and injecting bogus data into the primary
database, outside the program's control, is *not* such a condition. The
only ones who have the access to do that also, in theory, have the
knowledge not to. It is an unrealistic demand to have the program guard
itself against that in general, even more so when it is nigh-on
impossible to detect the problem except when it comes to the final
output, many steps later - many programs later.
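
To make that concrete, here is a minimal sketch - the names and the
shape of the code are invented for illustration, not taken from the
actual system:

#include <stdio.h>

struct record { int id; double reading; };

enum fetch_status { FETCH_OK, FETCH_NO_DATA, FETCH_NO_CONNECTION };

/* Stand-in for the real data source. */
static enum fetch_status fetch_record(struct record *out)
{
    out->id = 1;
    out->reading = 42.0;
    return FETCH_OK;
}

static int get_record(struct record *out)
{
    int attempt;

    for (attempt = 0; attempt < 3; attempt++) {
        switch (fetch_record(out)) {
        case FETCH_OK:
            /* A well-formed record full of values someone injected by
               hand looks exactly like real data; there is nothing to
               check against without a second source of truth. */
            return 0;
        case FETCH_NO_DATA:
        case FETCH_NO_CONNECTION:
            /* The predictable failures: we simply get no data, so we
               retry and, failing that, report the outage. */
            break;
        }
    }
    fprintf(stderr, "server unreachable; giving up\n");
    return -1;
}

int main(void)
{
    struct record r;

    if (get_record(&r) == 0)
        printf("got record %d: %.1f\n", r.id, r.reading);
    return 0;
}

Everything the switch handles on the error side is inside the contract;
the comment in the FETCH_OK branch marks the part that can't be.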
There is a practitioner's technique - FMEDA - not ideal but sufficiently
practical, which includes brainstorming on everything that can possibly
go wrong. Then you classify all failures as detected or undetected in
your design. You do it consciously. If your work affects the lives or
well-being of people (e.g., managing medical records or warehousing of
warheads), you additionally classify failures as dangerous or not. You
then explicitly accept a tolerable risk level for each class of failures
as your design input.
If that becomes part of the `contract' and is accepted by your customer,
then indeed breaking it is no fault of yours.

Exactly. So let's take an example.

Several years ago, I read a book about Three Mile Island. According to
it (I don't know the veracity of what it said, I merely use it here as an
example) the designs of the plant included valve sensors to determine
whether the valves were open or closed. The designs were such that a
valve would only report as closed (or open) if it actually *was* closed
(or open).

One can envision any of a number of ways to achieve this, not least of
which is simply having a portion of the valve make or break an electrical
contact as it moves up to open or down to close. If it ain't closed,
there's no circuit on the "closed" side, so the indicator circuit doesn't
operate.

Instead, according to the book, it was actually built a little
differently, with the indicators tied to the power side of the valves.
Thus when power was applied, the indicator reported "closed"; when there
was no power, it reported "open".

One slight difference in operation: in the first case, the indicators
would never report "closed" unless they were; a stuck valve, for example,
would result in no contact being made, no "closed" report being issued.
In the latter case, however, a stuck valve still had power; it just
wasn't actually closed - but because of the power, it would report
"closed".

Suppose you're writing the software for such a system. You know that if
temperatures go above a certain point, you need to close some valves,
open others. You know - as part of the contract - that when a valve says
"closed" it really is closed. So you check the state of the valves,
close the ones which need closing, check to ensure they are, in fact,
closed, and all is good.

Except it ain't, because the valve monitors are lying to you; they're
telling you valves are closed when they're not.
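
In code, the control loop looks perfectly reasonable - something like
this sketch (the names are invented, obviously not the real plant
software) - and the flaw is invisible from inside it:

#include <stdio.h>

#define NVALVES 4

/* Stand-ins for the plant interface. */
static void command_valve_closed(int v)
{
    (void)v;    /* energise the close circuit for valve v */
}

static int valve_reports_closed(int v)
{
    /* Contract: return 1 only if valve v is physically closed.
       In the as-built, power-sensed design this effectively returns 1
       whenever the close circuit is energised, so a stuck valve still
       "passes" the check below. */
    (void)v;
    return 1;
}

int main(void)
{
    int v;

    for (v = 0; v < NVALVES; v++) {
        command_valve_closed(v);
        if (valve_reports_closed(v))
            printf("valve %d reports closed\n", v);
        else
            printf("valve %d failed to close - raise the alarm\n", v);
    }
    return 0;
}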

So, do we blame the code? Or do we blame the thumb-fingered idiot who
installed the wrong sort of sensor? The code lived up to its side of the
contract, the other side didn't. The contract was violated.

You can put the best programmer in the world on a project and still get
bogus results, if the contract to which he designed the program is
violated. This doesn't make him a bad programmer; it means he wrote the
program to work in one set of conditions - a set of conditions which
changed.

Perhaps the simplest example of that is hiring someone to write a Windows
program then complaining when it doesn't work in Linux, or on a Mac.
Well, of course not; the contract the program was written to said
"Windows" not "An arbitrarily changing operating system". Is it the
code's fault it doesn't work? No. It's the fault of a violated contract.
 

mirzamisamhusain

Malcolm McLean wrote, On 17/02/08 22:45:

[snips]
I suggest you sue the BCS for misleading advertising then, since under
"About us" on their home page they say, "BCS is the leading professional
body for those working in IT."


If you are going to use your own definitions for terms then you need to
state what definition you are using, otherwise no one will know what you
mean. So in this case what you intended is true, but what you actually
stated is clearly false. Tell me, have you redefined all the terms used
in your field of work as well, such as molecule?

I want to know how I can create a program in C. I am unable to
understand C programming, and I have joined a computer institute, so I
want to ask a question about it.
Thanks
MIRZA MISAM HUSAIN <[email protected]>
 

santosh

(e-mail address removed) wrote:

I want to know how I can create a program in C. I am unable to
understand C programming, and I have joined a computer institute, so I
want to ask a question about it.
Thanks
MIRZA MISAM HUSAIN <[email protected]>

Start with this tutorial. If you have any difficulty ask here, but
provide details.

<http://www.eskimo.com/~scs/cclass/cclass.html>

Also see the comp.lang.c FAQ, but it will only make sense after you have
learned the basics from the above tutorial.

<http://www.c-faq.com/>
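
That tutorial, like most, begins with something very close to the
traditional first program; if you can get this to compile and run, you
are set up to follow the rest of it:

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}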
 

Flash Gordon

santosh wrote, On 27/02/08 07:32:
(e-mail address removed) wrote:

[snips]
Start with this tutorial. If you have any difficulty ask here, but
provide details.

<http://www.eskimo.com/~scs/cclass/cclass.html>

That is a good tutorial.
Also see the comp.lang.c FAQ, but it will only make sense after you have
learned the basics from the above tutorial.

<http://www.c-faq.com/>

That is good reference material.

However, in my opinion you should have a good text book/reference as
well as a good tutorial. K&R2 is good for this (the bibliography in the
FAQ tells you what K&R2 is).
 

Keith Thompson

Flash Gordon said:
santosh wrote, On 27/02/08 07:32: [...]
Start with this tutorial. If you have any difficulty ask here, but
provide details.

<http://www.eskimo.com/~scs/cclass/cclass.html>

That is a good tutorial.

I haven't looked at it much, but I have little doubt that you're
correct.
That is good reference material.

Agreed, but it's not a general reference for the C language (and it's
not intended to be one). Its intent is to clear up points of
confusion, not to teach or fully describe the language. It's
extremely useful *after* you've started to learn the language.
However, in my opinion you should have a good text book/reference as
well as a good tutorial. K&R2 is good for this (the bibliography in
the FAQ tells you what K&R2 is).

Yes, K&R2 is excellent -- but I'd say it's primarily a tutorial,
though it also has a reference section at the back. In that sense,
K&R2 and <http://www.eskimo.com/~scs/cclass/cclass.html> probably
serve much the same purpose.

H&S5 (Harbison & Steele, 5th edition) is a good reference.

The definitive reference, of course, is the language standard, but
it's emphatically not a tutorial.
 
