status of Programming by Contract (PEP 316)?


Terry Reedy

| This really intrigues me - how do you program a dam? - and why is it
| critical?
|
| Most dams just hold water back.

Most big dams also generate electricity. Even without that, dams do not
just hold water back, they regulate the flow over a year or longer cycle.
A full dam is great for power generation and useless for flood control. An
empty dam is great for flood control and useless for power generation. So
both power generation and bypass release must be regulated in light of
current level, anticipated upstream precipitation, and downstream
obligations. Downstream obligations can include both a minimum flow rate
for downstream users and a maximum rate so as to not flood downstream
areas.

tjr
 

Alex Martelli

Ricardo Aráoz said:
We should remember that the level
of security of a 'System' is the same as the level of security of its
weakest component,

Not true (not even for security, much less for reliability which is
what's being discussed here).

It's easy to see how this assertion of yours is totally wrong in many
ways...

Example 1: a toy system made up of subsystem A (which has a probability
of 90% of working right) whose output feeds into subsystem B (which has
a probability of 80% of working right). A's failures and B's failures
are statistically independent (no common-mode failures, &c).

The ``level of goodness'' (probability of working right) of the weakest
component, B, is 80%; but the whole system has a ``level of goodness''
(probability of working right) of just 72%, since BOTH subsystems must
work right for the whole system to do so. 72 != 80 and thus your
assertion is false.

More generally: subsystems "in series" with independent failures can
produce a system that's weaker than its weakest component.
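
A quick Python check of that arithmetic (the variable names are just
illustrative):

# Two subsystems "in series": the whole works only if BOTH work,
# and independence lets us simply multiply the probabilities.
p_a = 0.90           # probability subsystem A works right
p_b = 0.80           # probability subsystem B works right
p_system = p_a * p_b
print(p_system)      # 0.72 -- lower than the weakest component's 0.80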


Example 2: another toy system made up of subsystems A1, A2 and A3, each
trying to transform the same input supplied to all of them into a 1-bit
result; each of these systems works right 80% of the time, statistically
independently (no common-mode failures, &c). The three subsystems'
results are reconciled by a simple majority-voting component M which
emits as the system's result the bit value that's given by two out of
three of the Ai subsystems (or, of course, the value given unanimously
by all) and has extremely high reliability thanks to its utter
simplicity (say 99.9%, high enough that we can ignore M's contribution
to system failures in a first-order analysis).

The whole system will fail when all three Ai fail together (probability
0.2**3 = 0.008) or when two of them fail while the third one is working
(probability 3*0.8*0.2**2 = 0.096), for a total failure probability of
0.104.

So, the system as a whole has a "level of goodness" (probability of
working right) of almost 90% -- again different from the "weakest
component" (each of the three Ai's), in this case higher.

More generally: subsystems "in parallel" (arranged so as to be able to
survive the failure of some subset) with independent failures can
produce a system that's stronger than its weakest component.
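
And the corresponding Python check for the 2-out-of-3 voter (again,
names merely illustrative):

# Triple-modular redundancy: the system is right when all three Ai
# are right, or when exactly two are right (three ways that can happen).
p = 0.80                               # each Ai works right, independently
p_system = p**3 + 3 * p**2 * (1 - p)   # = 0.512 + 0.384
print(p_system)                        # 0.896 -- stronger than any single Ai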


Even in the field of security, which (changing the subject...) you
specifically refer to, similar considerations apply. If your assertion
were correct, then removing one component would never WEAKEN a system's
security -- it might increase it if it was the weakest, otherwise it
would leave it intact. And yet, a strong and sound tradition in
security is to require MULTIPLE components to be all satisfied e.g. for
access to secret information: e.g. the one wanting access must prove
their identity (say by retinal scan), possess a physical token (say a
key) AND know a certain secret (say a password). Do you really think
that, e.g., removing the need for the retinal scan would make the
system's security *STRONGER*...? It would clearly weaken it, as a
would-be breaker would now need only to purloin the key and trick the
secret password out of the individual knowing it, without the further
problem of falsifying a retinal scan successfully. Again, such security
systems exist and are traditional exactly because they're STRONGER than
their weakest component!


So, the implication accompanying your assertion, that strengthening a
component that's not the weakest one is useless, is also false. It may
indeed have extremely low returns on investment, depending on the system's
structure and exact circumstances, but then again, it may not; nothing
can be inferred about this ROI issue from the consideration in question.


Alex
 

Alex Martelli

Russ said:
Oops! I didn't think of that. The idea of putting one before every
return certainly doesn't appeal to me. So much for that idea.

try:
    blah blah with as many return statements as you want
finally:
    something that gets executed unconditionally at the end

You'll need some convention such as "all the return statements are of
the same form ``return result''" (where the result may be computed
differently each time), but that's no different from the conventions you
need anyway to express such things as ``the value that foobar had at the
time the function was called''.
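
For instance, a minimal sketch of that convention, with an illustrative
clamp function (not from the thread): every return is of the form
``return result'', so the finally clause can check a post-condition on
result no matter which return fired.

def clamp(x, lo, hi):
    assert lo <= hi, "pre: lo must not exceed hi"  # pre-condition
    try:
        if x < lo:
            result = lo
            return result
        if x > hi:
            result = hi
            return result
        result = x
        return result
    finally:
        # post-condition: executed unconditionally, after any return
        assert lo <= result <= hi, "post: result out of range"

print(clamp(15, 0, 10))  # 10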


Alex
 

Carl Banks

Who said they were the same?

The word I used was "comparable".
I said that just because it doesn't take
lives doesn't mean it isn't important. I wasn't going to reply, so as
not to extend this, but this misunderstanding of yours was bugging me.

Well, I wasn't talking about "importance", actually. Importance is
really a matter for sociologists. Perhaps you can find a sociologist who
would agree that a health monitoring system is more important than an
aircraft control system (probably wouldn't be too hard, actually,
especially if it's a military aircraft). I could hardly argue with that.

But, frankly, importance to society is only a small part of what
determines criticalness of the application; and criticalness is what
factors into a decision on what programming language to use. Here are
some of the main factors that determine criticalness:

How much time is there between failure and catastrophe?
What is the cost (societal and/or economic) of a catastrophe?
How recoverable is a failure?
What is the degree of difficulty of the programming?
Do small errors accumulate?
How many government regulations does the code have to meet?
What is the acceptable failure rate before an application is allowed to deploy?
How much money is being spent to ensure flawless operation before it even deploys?

By these criteria, Google web search, your health monitoring system, a
bank transaction system, etc., hardly compare to something like aircraft
control. I'm sorry.

I use Python on systems that deal with human health, where wrong
calculations may have a severe impact on a good-sized population. Using
Python.

Cool. Not that that, by itself, would be enough to make me feel good
about Python on airplanes.

As with nuclear reactors, dams, airplanes and so on we have a lot of
redundancy and a lot of checkpoints. No one is crazy enough to take
them out, or even to remove some kind of device that allows manual
intervention at critical points.

Really? I must work with crazy people then, because we are working on a
full authority control system: no bypassing the computer, no manual
intervention.


Carl Banks
 

Carl Banks

Carl Banks wrote:

20 years ago, there was *no* computer at all in nuclear reactors.

But they had electronic (analog) systems that were (supposedly) just as
heavily regulated and scrutinized as the digital computers of today; and
were a lot more scrutinized than, say, the digital computers that banks
were using.


Carl Banks
 

Ricardo Aráoz

Alex said:
Not true (not even for security, much less for reliability which is
what's being discussed here).

(snip)

You win the argument, and thanks, you prove my point. You typically
concerned yourself with the technical part of the matter, yet you
completely ignored the point I was trying to make.
That is that in real-world applications, formally proving the
application is not only an enormous waste of resources but also pays
very little, if at all. I think most cases will fall under point (1) of
your post, and since so many other not-formally-proven subsystems are at
work at the same time, there will be hardly any gain in doing it.
Point (2) of your post would be my preferred solution, and what is
usually done in hardware.
In the third part of your post, regarding security, I think you went off
the road. The weakest component would not be one of the requisites of
access, the weakest component I was referring to would be an actual
APPLICATION, e.g. an ftp server. In that case, if you have several
applications running, your security will be the security of the weakest
of them.
 

Alex Martelli

Ricardo Aráoz said:
...
You win the argument, and thanks, you prove my point. You typically
concerned yourself with the technical part of the matter, yet you
completely ignored the point I was trying to make.

That's because I don't particularly care about "the point you were
trying to make" (either for or against -- as I said, it's a case of ROI
for different investments [in either security, or, more germanely to
this thread, reliability] rather than of useful/useless classification
of the investments), while I care deeply about proper system thinking
(which you keep failing badly on, even in this post).
In the third part of your post, regarding security, I think you went off
the road. The weakest component would not be one of the requisites of
access, the weakest component I was referring to would be an actual
APPLICATION,

Again, F- at system thinking: a system's components are NOT just
"applications" (what's the alternative to their being "actual", btw?),
nor is it necessarily likely that an application would be the weakest
one of the system's components (these wrong assertions are in addition
to your original error, which you keep repeating afterwards).

For example, in a system where access is gained *just* by knowing a
secret (e.g., a password), the "weakest component" is quite likely to be
that handy but very weak architectural choice -- or, seen from another
viewpoint, the human beings that are supposed to know that password,
remember it, and keep it secret. If you let them choose their password,
it's too likely to be "fred" or other easily guessable short word; if
you force them to make it at least 8 characters long, it's too likely to
be "fredfred"; if you force them to use length, mixed case and digits,
it's too likely to be "Fred2Fred". If you therefore decide that
passwords chosen by humans are too weak and generate one for them,
obtaining, say, "FmZACc2eZL", they'll write it down (perhaps on a
post-it attached to their screen...) because they just can't commit to
memory a lot of long really-random strings (and nowadays the poor users
are all too likely to need to memorize far too many passwords). A
clever attacker has many other ways to try to steal passwords, from
"social engineering" (pose as a repair person and ask the user to reveal
their password as a prerequisite of obtaining service), to keystroke
sniffers of several sorts, fake applications that imitate real ones and
steal the password before delegating to the real apps, etc, etc.

Similarly, if all that's needed is a physical token (say, some sort of
electronic key), that's relatively easy to purloin by traditional means,
such as pickpocketing and breaking-and-entering; certain kind of
electronic keys (such as the passive unencrypted RFID chips that are
often used e.g. to control access to buildings) are, in addition,
trivially easy to "steal" by other (technological) means.

Refusing to admit that certain components of a system ARE actually part
of the system is weak, blinkered thinking that just can't allow halfway
decent system design -- be that for purposes of reliability, security,
availability, or whatever else. Indeed, if certain parts of the system's
architecture are OUTSIDE your control (because you can't redesign the
human mind, for example;-), all the more important then to make them the
focus of the whole design (since you must design AROUND them, and any
amelioration of their weaknesses is likely to have great ROI -- e.g., if
you can make the users take a 30-minutes short course in password
security, and accompany that with a password generator that makes
reasonably memorable though random ones, you're going to get substantial
returns on investment in any password-using system's security).
e.g. an ftp server. In that case, if you have several
applications running, your security will be the security of the weakest
of them.

Again, false as usual, and for the same reason I already explained: if
your system can be broken by breaking any one of several components,
then it's generally WEAKER than the weakest of the components. Say that
you're running on the system two servers, an FTP one that can be broken
into by 800 hackers in the world, and a SSH one that can only be broken
into by 300 hackers in the world; unless every single one of the hackers
who are able to break into the SSH server is *also* able to break into
the FTP one (a very special case indeed!), there are now *MORE* than 800
hackers in the world that can break into your system as a whole -- in
other words, again and no matter how often you repeat falsities to the
contrary without a shred of supporting argument, your assertion is
*FALSE*, and in this case your security is *WEAKER* than the security of
the weaker of the two components.
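
A quick Python sketch of that counting argument (the 800 and 300 come
from the example above; the attacker ids and their overlap are made up
for illustration):

# Model each server's potential intruders as a set of attacker ids:
# 800 can break the FTP server, 300 can break the SSH server.
ftp_attackers = set(range(800))        # attackers 0..799
ssh_attackers = set(range(600, 900))   # attackers 600..899, partial overlap

# The system falls if EITHER server falls, so the threat is the union.
print(len(ftp_attackers | ssh_attackers))  # 900 > 800: weaker than weakest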

I do not really much care what point(s) you are trying to make through
your glib and false assertions: I *DO* care that these falsities, these
extremely serious errors that stand in the way of proper system
thinking, never be left unchallenged and uncorrected. Unfortunately a
*LOT* of people (including, shudder, ones who are responsible for
architecting, designing and implementing some systems) are under very
serious misapprehensions that impede "system thinking", some of the same
ilk as your falsities (looking at only PART of the system and never the
whole, using far-too-simplified rules of thumb to estimate system
properties, and so forth), some nearly reversed (missing opportunities
to make systems *simpler*, overly focusing on separate components, &c).

As to your specific point about "program proofs" being likely overkill
(which doesn't mean "useless", but rather means "low ROI" compared to
spending comparable resources in other reliability enhancements), that's
probably true in many cases. But when a probably-true thesis is being
"defended" by tainted means, such as false assertions and excessive
simplifications that may cause serious damage if generally accepted and
applied to other issues, debunking the falsities in question is and
remains priority number 1 for me.


Alex
 

Russ

On Sep 1, 4:25 am, Bryan Olson wrote:
Design-by-contract (or programming-by-contract) shines in large
and complex projects, though it is not a radical new idea in
software engineering. We pretty much generally agree that we want
strong interfaces to encapsulate implementation complexity.
That's what design-by-contract is really about.

There is no strong case for adding new features to Python
specifically for design-by-contract. The language is flexible
enough to support optionally-executed pre-condition and
post-condition checks, without any extension. The good and bad
features of Python for realizing reliable abstraction are set
and unlikely to change. Python's consistency and flexibility
are laudable, while duck-typing is a convenience that will
always make it less reliable than more type-strict languages.


Excellent points. As for "no strong case for adding new features to
Python specifically for design-by-contract," if you mean adding
something to the language itself, I agree, but I see nothing wrong with
adding it to the standard libraries, if that is possible without
changing the language itself. Someone please correct me if I am wrong,
but I think a PEP adds only to the libraries.
 

Russ

On Sep 1, 6:51 pm, Alex Martelli wrote:
try:
    blah blah with as many return statements as you want
finally:
    something that gets executed unconditionally at the end

Thanks. I didn't think of that.

So design by contract *is* relatively easy to use in Python already.
The main issue, I suppose, is one of aesthetics. Do I want to use a
lot of explicit function calls for pre- and post-conditions and
"try/finally" blocks in my code to get DbC (not to mention a global
variable to enable or disable it)?

I suppose if I want it badly enough, I will. But I also happen to be a
bit obsessive about the appearance of my code, and this does
complicate it a bit. The nice thing about having it in the doc string
(as per PEP 316) is that, while it is inside the function, it is also
separate from the actual code in the function. I like that. As far as
I am concerned, the self-test code shouldn't be tangled up with the
primary code.
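
For reference, PEP 316 expresses the contract as pre: and post: clauses
inside the docstring, with __return__ naming the function's result --
roughly like the sketch below (my illustration; check the exact clause
syntax against the PEP itself):

def sqrt_floor(x):
    """Return the largest integer whose square does not exceed x.

    pre: x >= 0
    post: __return__ ** 2 <= x < (__return__ + 1) ** 2
    """
    result = 0
    while (result + 1) ** 2 <= x:
        result += 1
    return result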

By the way, I would like to make a few comments about the
"reliability" of Python code. Apparently I offended you the other day
by claiming or implying that Python code is inherently unreliable. I
think it is probably possible to write very reliable code in Python,
particularly for small to medium sized applications, but you probably
need top notch software engineers to do it. And analyzing code or
*proving* that a program is correct is technically harder without
static typing. In highly regulated, safety-critical domains, you need
more than just reliable code; you need to *demonstrate* or *prove* the
reliability somehow.

I personally use Python for its clean syntax and its productivity with
my time, so I am certainly not denigrating it. For the R&D work I do,
I think it is very appropriate. But I did raise a few eyebrows when I
first started using it. I used C++ several years ago, and I thought
about switching to Ada a few years ago, but Ada just seems to be
fading away (which I think is very unfortunate, but that's another
story altogether).

In any case, when you get right down to it, I probably don't know what
the hell I'm talking about anyway, so I will bring this rambling to a
merciful end.

Oh, one more thing. I see that the line wrapping on Google Groups is
finally working for me after many months. Fantastic! I can't help but
wonder if my mentioning it to you a few days ago had anything to do
with it.
 

Michele Simionato

Someone please correct me if I am wrong,
but I think a PEP adds only to the libraries.

You are wrong: PEPs also add to the core language. Why don't you take a
look at the PEP parade on python.org?

Michele Simionato
 

Russ

Oh, one more thing. I see that the line wrapping on Google Groups is
finally working for me after many months. Fantastic! I can't help but
wonder if my mentioning it to you a few days ago had anything to do
with it.

Well, it's working on the input side anyway.
 

Paul Rubin

Russ said:
Thanks. I didn't think of that.
So design by contract *is* relatively easy to use in Python already.
The main issue, I suppose, is one of aesthetics. Do I want to use a
lot of explicit function calls for pre- and post-conditions and
"try/finally" blocks in my code to get DbC (not to mention a global
variable to enable or disable it)?

I still don't understand why you don't like the decorator approach,
which can easily implement the above.
I personally use Python for its clean syntax and its productivity with
my time, so I am certainly not denigrating it. For the R&D work I do,
I think it is very appropriate. But I did raise a few eyebrows when I
first started using it. I used C++ several years ago, and I thought
about switching to Ada a few years ago, but Ada just seems to be
fading away (which I think is very unfortunate, but that's another
story altogether).

It seems to be getting displaced by Java, which has some of the same
benefits and costs as Ada does.

I've gotten interested in static functional languages (including proof
assistants like Coq, that can generate certified code from your
mechanically checked theorems). But I haven't done anything serious
with any of them yet. I think we're in a temporary situation where
all existing languages suck (some more badly than others) but the
functional languages seem like a more promising direction to get out
of this hole.
 

Russ

I still don't understand why you don't like the decorator approach,
which can easily implement the above.

Well, maybe decorators are the answer. If a function needs only one
decorator for all the conditions and invariants (pre and post-
conditions), and if it can just point to functions defined elsewhere
(rather than defining everything inline), then perhaps they make
sense. I guess I need to read up more on decorators to see if this is
possible.

In fact, the ideal would be to have just a single decorator type, say
"contract" or "self_test", that takes an argument that points to the
relevant functions to use for the function that the decorator applies
to. Then the actual self-test functions could be pushed off somewhere
else, and the "footprint" on the primary code would be minimal.
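
Something close to that ideal is already expressible; here is a minimal
sketch (the contract decorator, the CONTRACTS_ENABLED flag, and the
condition functions are all made up for illustration; it restates the
floor-square-root contract shown earlier, now in decorator form):

CONTRACTS_ENABLED = True  # the global enable/disable switch mentioned above

def contract(pre=None, post=None):
    """Attach externally defined pre/post condition callables to a function."""
    def decorate(func):
        if not CONTRACTS_ENABLED:
            return func  # checks vanish entirely when disabled
        def wrapper(*args):
            if pre is not None:
                assert pre(*args), "pre-condition failed"
            result = func(*args)
            if post is not None:
                assert post(result, *args), "post-condition failed"
            return result
        return wrapper
    return decorate

# The self-test functions live elsewhere, away from the primary code.
def sqrt_floor_pre(x):
    return x >= 0

def sqrt_floor_post(result, x):
    return result ** 2 <= x < (result + 1) ** 2

@contract(pre=sqrt_floor_pre, post=sqrt_floor_post)
def sqrt_floor(x):
    result = 0
    while (result + 1) ** 2 <= x:
        result += 1
    return result

print(sqrt_floor(10))  # 3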
 

Ricardo Aráoz

Alex said:
Ricardo Aráoz said:
...
You win the argument, and thanks, you prove my point. You typically
concerned yourself with the technical part of the matter, yet you
completely ignored the point I was trying to make.

That's because I don't particularly care about "the point you were
trying to make" (either for or against -- as I said, it's a case of ROI
for different investments [in either security, or, more germanely to
this thread, reliability] rather than of useful/useless classification
of the investments), while I care deeply about proper system thinking
(which you keep failing badly on, even in this post).

And here you start, followed by 'F- at system thinking', 'glib and false
assertions', 'falsities', etc.
I don't think you meant anything personal, how could you, we don't know
each other. But the outcome feels like a personal attack instead of an
attack on the ideas expressed.
If that's not what you intended, you should check your communication
skills and see what went wrong. If that is what you meant, well...

So I will not answer your post. I'll let it rest for a while till I no
longer feel the sting, then I'll re-read it and try to learn as much as
I can from your thoughts (thank you for them). And even though I find
some of your thinking process objectionable, I will not comment on it,
as I'm sure that would start some new flame exchange which would have a
lot to do with ego and nothing to do with Python.
 

Aahz

Excellent points. As for "no strong case for adding new features to
Python specifically for design-by-contract," if you mean adding
something to the language itself, I agree, but I see nothing wrong with
adding it to the standard libraries, if that is possible without
changing the language itself. Someone please correct me if I am wrong,
but I think a PEP adds only to the libraries.

You're wrong, but even aside from that, libraries need to prove
themselves useful before they get added.
--
Aahz ([email protected]) <*> http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer
http://www.lysator.liu.se/c/ten-commandments.html
 

Bruno Desthuilliers

Russ wrote:
(snip)
Frankly, Mr. Holden, I'm getting a bit tired of the clannish behavior
here, where "outsiders" like me are held to a higher standard than your
"insider" friends. I don't know who you are, nor do I care what you and
your little group think about me.

If you took the time to follow discussions on this group, you'd notice
that newcomers are usually welcome. It's not a problem of being an
"outsider" or an "insider", and there's nothing clannish about it (or
very little, especially when compared to some other places on usenet);
it's mostly a problem with your attitude. And please notice, once
again, that I'm not talking about *you* - as a person - but about how
you behave(d).

(snip)
 
