PEP 3107 and stronger typing (note: probably a newbie question)


Lenard Lindstrom

Hendrik said:
Does anybody know for sure if it is in fact possible to
design a language completely free from conditional jumps?

At the lower level, I don't think you can get away with
conditional calls - hence the "jumps with dark glasses",
continue and break.

I don't think you can get that functionality in another way.

Think of solving the problem of reading a text file to find
the first occurrence of some given string - can it be done
without either break or continue? (given that you have to
stop when you have found it)

Pascal has no break, continue or return. Eiffel doesn't even have a
goto. In such imperative languages boolean variables are used a lot.

from StringIO import StringIO

lines = StringIO("one\ntwo\nthree\nfour\n")
line_number = 0
eof = False
found = False
while not (eof or found):
    line = lines.readline()
    eof = line == ""
    found = line == "three\n"
    if not found:
        line_number += 1

if found:
    print "Found at", line_number
else:
    print "Not found"
# prints "Found at 2"
 

Paul Rubin

Ben Finney said:
This seems to make the dangerous assumption that the programmer has
the correct program in mind, and needs only to transfer it correctly
to the computer.

Well, I hope the programmer can at least state some clear
specifications that a correct program should implement, e.g. it should
not freeze or crash.
I would warrant that anyone who understands exactly how a program
should work before writing it, and makes no design mistakes before
coming up with a program that works, is not designing a program of any
interest.

I'm not sure what the relevance of this is. Programmers generally
have an idea of what it is that they're trying to write. They then
have to make a bunch of design and implementation decisions as they go
along. Sometimes those decisions are subtly incorrect, and the more
errors the computer can identify automatically, the fewer errors are
likely to remain in the program.
Certainly. Which is why the trend continues toward developing programs
such that mistakes of all kinds cause early, obvious failure -- where
they have a much better chance of being caught and fixed *before*
innocent hands get ahold of them.

Here is what Edward Lee wrote about developing a multi-threaded Java
app (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf):

A part of the Ptolemy Project experiment was to see whether
effective software engineering practices could be developed for an
academic research setting. We developed a process that included a
code maturity rating system (with four levels, red, yellow, green,
and blue), design reviews, code reviews, nightly builds,
regression tests, and automated code coverage metrics [43]. The
portion of the kernel that ensured a consistent view of the
program structure was written in early 2000, design reviewed to
yellow, and code reviewed to green. The reviewers included
concurrency experts, not just inexperienced graduate students
(Christopher Hylands (now Brooks), Bart Kienhuis, John Reekie, and
myself were all reviewers). We wrote regression tests that
achieved 100 percent code coverage. The nightly build and
regression tests ran on a two processor SMP machine, which
exhibited different thread behavior than the development machines,
which all had a single processor. The Ptolemy II system itself
began to be widely used, and every use of the system exercised
this code. No problems were observed until the code deadlocked on
April 26, 2004, four years later.

It is certainly true that our relatively rigorous software
engineering practice identified and fixed many concurrency
bugs. But the fact that a problem as serious as a deadlock that
locked up the system could go undetected for four years despite
this practice is alarming. How many more such problems remain? How
long do we need to test before we can be sure to have discovered all
such problems? Regrettably, I have to conclude that testing may
never reveal all the problems in nontrivial multithreaded code.

Now consider that the code you test today on a 2-core processor may be
running on a 100-core processor in 4 years, making any concurrency
errors that much worse. You can't test on the 100-core cpu today
because it doesn't exist yet, but your code still has to run on it
correctly once it's out there. Heck, if you're writing in CPython (no
parallelism at all, due to the GIL) you might be in for a surprise as
soon as real parallelism arrives on even 2 cores. It gets worse,
maybe you're writing operating system or language runtime code
(e.g. parallel garbage collector for Java). That means there will be
hostile code on the same machine, written by diabolical maniacs who
know every detail of your implementation, and who deliberately do
stuff (e.g. write code to hit specific cache lines or cause page
faults at fiendishly precisely chosen moments) specifically to cause
weird timing conditions and trigger bugs. Testing is not enough to
ensure that such attacks won't succeed.
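
As a toy illustration (mine, not from Lee's article), here is the classic
lost-update race: a read-modify-write that is not atomic, so the outcome
depends on thread scheduling rather than on anything one test run can pin down.

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in xrange(n):
            counter += 1   # load, add, store: another thread can interleave between them

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print counter   # may be less than 200000, depending entirely on timing

A test suite can run this thousands of times on one machine and never see a
bad interleaving, which is exactly the point about the deadlock that hid for
four years.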

I remember an old programming book (might even have been Knuth vol. 3)
recommending using the Shell sorting algorithm (a very simple exchange
sort that runs in about O(n**1.4) time) instead of an O(n log n)
algorithm because Shell's algorithm (being simpler) was less likely to
end up with implementation errors. Today I think an experienced
programmer needing to code a sort routine wouldn't hesitate to code a
good algorithm, because our languages and coding/testing methods are
better than they were in the Fortran and assembly language era when
that book was written. But there are still more complex problems
(like shared memory multiprocessing) that we're advised to stay away
from because they're so fraught with hazards. We instead use
approaches that are simpler but take performance penalties. (As a
non-concurrency example, we might implement geometric search through
some kind of list scanning algorithm instead of using more efficient
but complicated k-d trees). With higher reliability languages we'll
hopefully be in better shape to tackle those complex problems instead
of retreating from them, without expecting to spend eternity hunting
down mysterious bugs.
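
For reference, a minimal sketch of Shell's method in Python (using the simple
halving gap sequence; better sequences exist, but this is the flavor of the
algorithm that book was recommending):

    def shell_sort(a):
        """Sort the list a in place using Shell's method with halving gaps."""
        gap = len(a) // 2
        while gap > 0:
            # insertion sort over elements that are 'gap' apart
            for i in range(gap, len(a)):
                value = a[i]
                j = i
                while j >= gap and a[j - gap] > value:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = value
            gap //= 2
        return a

    print shell_sort([5, 2, 4, 1, 3])   # prints [1, 2, 3, 4, 5]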
 

Hendrik van Rooyen

Pascal has no break, continue or return. Eiffel doesn't even have a
goto. In such imperative languages boolean variables are used a lot.

Thanks, did not know this.
from StringIO import StringIO

lines = StringIO("one\ntwo\nthree\nfour\n")
line_number = 0
eof = False
found = False
while not (eof or found):
    line = lines.readline()
    eof = line == ""
    found = line == "three\n"
    if not found:
        line_number += 1

if found:
    print "Found at", line_number
else:
    print "Not found"
# prints "Found at 2"

All right, so it is possible, with some legerdemain,
forcing the main loop to stop when the condition
is met.
However, I find the above as clear as mud, compared to:

line_number = 0

for line in my_file:
    line_number += 1
    if "the Target" in line:
        break

print line_number, line

- Hendrik
 

Lenard Lindstrom

Hendrik said:
Thanks, did not know this.


All right, so it is possible, with some legerdemain,
forcing the main loop to stop when the condition
is met.
However, I find the above as clear as mud, compared to:

line_number = 0

for line in my_file:
    line_number += 1
    if "the Target" in line:
        break

print line_number, line

- Hendrik

I am not defending Pascal or similar languages. Pascal was an
educational language designed to teach structured programming by
removing temptation. The same example in Pascal would be less
convoluted, though, since Pascal has an explicit end-of-file test
function (No, this is not a Python feature request.) Personally I don't
miss the continuous retesting of conditions made necessary in Pascal.
 

Paul Rubin

Diez B. Roggisch said:
For example, SPARK does not support dynamic allocation of memory so
things such as pointers and heap variables are not supported.

Right, Spark uses a profile intended for embedded systems, so
no unpredictable gc delays etc.
Which is not to say that trivial code couldn't have errors, and if
it's extremely crucial code, it's important that it has no errors. But
all I can see is that you can create trustable, simple sub-systems in
such a language. And by building upon them, you can ensure that at
least these won't fail.

Yes, that's the usual approach.
But to stick with the example: if the path planning of the UAV, which
involves tracking a number of enemy aircraft not known in advance, steers
the UAV into the ground, no proven-not-to-fail radius calculation will
help you.

Whether the program uses dynamic memory allocation or not, there
either has to be some maximum number of targets that it can be
required to track, or else it's subject to out-of-memory errors.
If a maximum number is specified, then memory requirements can
be computed from that.
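
A minimal sketch of the idea (the names and the capacity are invented for
illustration): choose the bound up front and allocate everything once, so the
memory requirement is known at design time and running out of slots is an
explicit, handleable condition rather than a surprise at runtime.

    MAX_TARGETS = 64                   # capacity chosen from the memory budget

    tracks = [None] * MAX_TARGETS      # all storage allocated once, up front

    def add_track(track):
        """Claim a free slot for a new track, or report saturation to the caller."""
        for i in range(MAX_TARGETS):
            if tracks[i] is None:
                tracks[i] = track
                return i
        return None                    # caller must decide what to do when full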
 

Juergen Erhard

Guy Steele used to describe functional programming -- the evaluation
of lambda-calculus without side effects -- as "separation of Church
and state", a highly desirable situation ;-).

(For non-FP nerds, the above is a pun referring to Alonzo Church, who
invented lambda calculus in the 1920's or so).

Wow, I didn't realize I was an FP nerd :)

On proving programs correct... from my CS study days I distinctly
remember thinking "sure, you can prove it correct, but you cannot do
actual useful stuff with it". We might have come a long way since
then (late 80s :p), but I don't hold out much hope (especially since
the halting problem does exist, and forever will).

Bye, J
 

John Nagle

Juergen said:
On proving programs correct... from my CS study days I distinctly
remember thinking "sure, you can prove it correct, but you cannot do
actual useful stuff with it". We might have come a long way since
then (late 80s :p), but I don't hold out much hope (especially since
the halting problem does exist, and forever will).

The halting problem only applies to systems with infinite memory.

Proof of correctness is hard, requires extensive tool support, and
increases software development costs. But it does work.

Check out the "Spec#" effort at Microsoft for current work.
Work continues in Europe on Extended Static Checking for Java,
which was developed at DEC before DEC disappeared.

John Nagle
 

Kay Schluehr

The halting problem only applies to systems with infinite memory.

Sure. But knowing that memory is limited doesn't buy you much because
you achieve an existential proof at best: you can show that the
program must run out of memory but you have to run the program to know
where this happens for arbitrary input values. Moreover you always get
different values for different memory limits. So in the general case
you can't do better than just letting the program run and seeing what
happens.
 

Diez B. Roggisch

Paul said:
Right, Spark uses a profile intended for embedded systems, so
no unpredictable gc delays etc.


Yes, that's the usual approach.


Whether the program uses dynamic memory allocation or not, there
either has to be some maximum number of targets that it can be
required to track, or else it's subject to out-of-memory errors.
If a maximum number is specified, then memory requirements can
be computed from that.

In other words: you pre-allocate everything and can then prove what -
that the program is going to stop working?

What does that buy you - how is "I crashed because I ran out of
memory trying to evade the seventh MiG" better than "sorry, you will be
shot down because I'm not capable of processing more enemy fighters -
but hey, at least I'm still here to tell you!"?

Sorry, but if that's supposed to make a point for static typechecking, I
don't buy it. Crippling the execution model so that it fits a
proving-scheme which essentially says "don't do anything complex with
me" is no good.

Diez
 

Paul Rubin

Kay Schluehr said:
Sure. But knowing that memory is limited doesn't buy you much because
you achieve an existential proof at best: you can show that the
program must run out of memory but you have to run the program to know
where this happens for arbitrary input values. Moreover you always get
different values for different memory limits. So in the general case
you can't do better than just letting the program run and seeing what
happens.

Program verification is nothing like the halting problem. It's not
done by dropping the program into some tool and getting back a proof
of correctness or incorrectness. The proof is developed, by the
programmer, at the same time the program is developed. Verifiability
means that the programmer-supplied proof is precise enough to be
checked by the computer.

Think of when you first learned to program. Most non-programmers have
an idea of what it means to follow a set of steps precisely. For
example they could tell you an algorithm for sorting a shelf full of
books alphabetically by author. Programming means expressing such
steps in much finer detail than humans usually require, to the point
where a machine can execute it; it takes a lot of patience and
knowledge of special techniques, more than non-programmers know how to
do, but we programmers are used to it and even enjoy it.

Now think of something like a code review. There is a line of code in
your program (maybe even an assert statement), that depends on some
variable "x" being greater than 3. You must have had a reason for
thinking x>3 there, so if the reviewer asks why you thought that, you
can explain your argument until the reviewer is convinced.

Do you see where this is going? Code verification means expressing
such an argument in much finer detail than humans usually require, to
the point where a machine can be convinced by it; it takes a lot of
patience and knowledge of special techniques, more than most of us
regular "practical" programmers currently know how to do, but the
methods are getting more accessible and are regularly used in
high-assurance software. In any case there is nothing magic about
it--just like programming, it's a means of expressing precisely to a
machine what you already know informally how to express to a human.
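
As a toy example of that kind of obligation (the function and the x > 3
condition are made up, in the spirit of the code-review scenario above),
contract-style assertions are the informal end of the same spectrum: the
argument a verifier would check mechanically, written down where it matters.

    def scale_readings(readings, gain):
        """Scale every reading by gain; callers promise gain > 3."""
        # precondition a reviewer would question and a checker could enforce
        assert gain > 3, "caller must supply a gain above the agreed floor"
        scaled = [r * gain for r in readings]
        # postcondition that is cheap to state and mechanical to check
        assert len(scaled) == len(readings)
        return scaled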
 

Paul Rubin

Diez B. Roggisch said:
What does that buy you - how is "I crashed because I ran out of
memory trying to evade the seventh MiG" better than "sorry, you will
be shot down because I'm not capable of processing more enemy
fighters - but hey, at least I'm still here to tell you!"?

Come on, this is real-time embedded software, not some Visual Basic
app running in Windows. The internal table sizes etc. are chosen
based on the amount of physical memory available, which has tended to
be pretty small until maybe fairly recently.
 

Diez B. Roggisch

Paul said:
Come on, this is real-time embedded software, not some Visual Basic
app running in Windows. The internal table sizes etc. are chosen
based on the amount of physical memory available, which has tended to
be pretty small until maybe fairly recently.

Since when did we restrict ourselves to such an environment? I was under the
impression that this thread is about the merits and capabilities of static
type-checking?

Besides, even if I accept these constraints - I still think the question is
valid: what good is a proof if it essentially proves that the code is
severely limited? After all, memory-allocation _might_ create a crash
depending on the actual circumstances - but it might not. Then, you trade a
mere possibility against a certain limitation.

No, I don't buy that. As I said before: for the most trivial of tasks that
might be a good thing. But I'm not convinced that restricting oneself in
such a manner is going to be a better development path for more complex
tasks.

Diez
 

Steve Holden

John said:
The halting problem only applies to systems with infinite memory.

Proof of correctness is hard, requires extensive tool support, and
increases software development costs. But it does work.

Check out the "Spec#" effort at Microsoft for current work.
Work continues in Europe on Extended Static Checking for Java,
which was developed at DEC before DEC disappeared.
The issue I have with correctness proofs (at least as they were
presented in the 1980s - for all I know the claims may have become more
realistic now) is that the proof of correctness can only relate to some
highly-formal specification so complex that it's only comprehensible to
computer scientists.

In other words, we can never "prove" that a program does what the
customer wants, because the semantics are communicated in different
ways, involving a translation which, since it is necessarily performed
by humans using natural language, will always be incomplete at best and
inaccurate at worst.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
 

Paul Rubin

Steve Holden said:
The issue I have with correctness proofs (at least as they were
presented in the 1980s - for all I know the claims may have become
more realistic now) is that the proof of correctness can only relate
to some highly-formal specification so complex that it's only
comprehensible to computer scientists.

Just like software as it is now, is only comprehensible to
programmers. A long while back there was a language called COBOL
designed to be comprehensible to non-programmers, and it actually did
catch on, but it was cumbersome and limited even in its time and in
its application era, not so readable to non-programmers after all, and
is not used much in new projects nowadays.
In other words, we can never "prove" that a program does what the
customer wants, because the semantics are communicated in different
ways, involving a translation which, since it is necessarily performed
by humans using natural language, will always be incomplete at best
and inaccurate at worst.

The types of things one tries to prove are similar to what is less
formally expressed as localized test cases in traditional software.
E.g. for a sorting routine you'd want to prove that output[n+1] >=
output[n] for all n and for all inputs. As the Intel FDIV bug
incident reminds us, even billions of test inputs are not enough to
show that the routine does the right thing for EVERY input.
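
For contrast, here is what the informal, single-run version of that property
looks like as ordinary test code (a proof establishes it for every input; a
check like this covers only the inputs you happen to run):

    def is_sorted(output):
        """Check the property output[n+1] >= output[n] for one particular output."""
        return all(output[n + 1] >= output[n] for n in range(len(output) - 1))

    assert is_sorted(sorted([3, 1, 4, 1, 5]))   # one input among infinitely many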
 

John Nagle

Paul said:
Program verification is nothing like the halting problem. It's not
done by dropping the program into some tool and getting back a proof
of correctness or incorrectness. The proof is developed, by the
programmer, at the same time the program is developed. Verifiability
means that the programmer-supplied proof is precise enough to be
checked by the computer.

Think of when you first learned to program. Most non-programmers have
an idea of what it means to follow a set of steps precisely. For
example they could tell you an algorithm for sorting a shelf full of
books alphabetically by author. Programming means expressing such
steps in much finer detail than humans usually require, to the point
where a machine can execute it; it takes a lot of patience and
knowledge of special techniques, more than non-programmers know how to
do, but we programmers are used to it and even enjoy it.

Now think of something like a code review. There is a line of code in
your program (maybe even an assert statement), that depends on some
variable "x" being greater than 3. You must have had a reason for
thinking x>3 there, so if the reviewer asks why you thought that, you
can explain your argument until the reviewer is convinced.

Do you see where this is going? Code verification means expressing
such an argument in much finer detail than humans usually require, to
the point where a machine can be convinced by it; it takes a lot of
patience and knowledge of special techniques, more than most of us
regular "practical" programmers currently know how to do, but the
methods are getting more accessible and are regularly used in
high-assurance software. In any case there is nothing magic about
it--just like programming, it's a means of expressing precisely to a
machine what you already know informally how to express to a human.

Ah. At last, an intelligent comment about program proving.

One of the biggest problems with proof of correctness is that,
as a first step, you need a language where you don't have to "lie to
the language". That is, the language must have sufficient power to
express even the ugly stuff.

Python isn't bad in that regard. A good example is the "struct"
module, where you can explicitly convert a string of bytes into a
Python tuple. That's an inherently risky operation, but Python has
a safe and unambiguous way to express it that doesn't depend on the
machine representation of Python types. In C and C++, this is often
done using casts, and what's going on is neither explicit nor checked.
Note that it's quite possible to implement something like the "struct"
module for C, and it could even be compiled into hard code. (GCC,
for example, knows about "printf" strings at compile time, so that's
not an unacceptable idea.)
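
For example, unpacking a fixed-layout binary record with the struct module
makes the conversion explicit and checked; the particular field layout below
is just an illustration.

    import struct

    # "<" little-endian, "I" unsigned 32-bit, "h" signed 16-bit, "4s" four raw bytes
    record = struct.pack("<Ih4s", 1234, -7, "ABCD")
    ident, delta, tag = struct.unpack("<Ih4s", record)
    print ident, delta, tag        # 1234 -7 ABCD
    # struct.unpack raises struct.error if the byte string has the wrong length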

The big problem with C and C++ in this area is the "array=pointer"
convention. Again, that's lying to the compiler. Programmers have to
write "char* p" which is a pointer to a char, when they mean an array
of characters. That's inherently wrong. It was a good idea in the 1970s
when it was developed, but we know better now.

If you were doing proofs of correctness for Python, the biggest headache
would be checking for all the places where some object like a module or function
might be dynamically modified and making sure nobody was patching the code.

Enough on this issue, though.

John Nagle
 

Paul Rubin

Diez B. Roggisch said:
Since when did we restrict ourselves to such an environment? I was under the
impression that this thread is about the merits and capabilities of static
type-checking?

One branch of the discussion brought up Spark Ada, and I think the
cited article said they used the Ravenscar profile, which means
embedded applications.
No, I don't buy that. As I said before: for the most trivial of
tasks that might be a good thing. But I'm not convinced that
restricting oneself in such a manner is going to be a better
development path for more complex tasks.

There are applications where you care first and foremost about the
reliability of the end result, and accept whatever difficulty that
might cause for the development path. Not all programming is Visual
Basic.
 

Steve Holden

Paul said:
Just like software as it is now, is only comprehensible to
programmers. A long while back there was a language called COBOL
designed to be comprehensible to non-programmers, and it actually did
catch on, but it was cumbersome and limited even in its time and in
its application era, not so readable to non-programmers after all, and
is not used much in new projects nowadays.
The trouble there, though, is that although COBOL was comprehensible (to
a degree) relatively few people have the rigor of thought necessary to
construct, or even understand, an algorithm of any kind.
In other words, we can never "prove" that a program does what the
customer wants, because the semantics are communicated in different
ways, involving a translation which, since it is necessarily performed
by humans using natural language, will always be incomplete at best
and inaccurate at worst.

The types of things one tries to prove are similar to what is less
formally expressed as localized test cases in traditional software.
E.g. for a sorting routine you'd want to prove that output[n+1] >=
output[n] for all n and for all inputs. As the Intel FDIV bug
incident reminds us, even billions of test inputs are not enough to
show that the routine does the right thing for EVERY input.

Well I can buy that as a reasonable answer to my point. The early claims
of the formal proof crowd were somewhat overblown and unrealistic, but
that certainly doesn't negate the value of proofs such as the one you
describe.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
 

Kay Schluehr

As the Intel FDIV bug
incident reminds us, even billions of test inputs are not enough to
show that the routine does the right thing for EVERY input.

If I remember correctly, the FDIV bug was due to a wrongly filled
lookup table and occurred only for certain bit patterns in the divisor.
I'm not sure how a formal proof on the structure of the algorithm
could help here? Intel repaired the table i.e. the data not the
algorithm that acted upon it.
 

Paul Rubin

Kay Schluehr said:
If I remember correctly, the FDIV bug was due to a wrongly filled
lookup table and occurred only for certain bit patterns in the divisor.
I'm not sure how a formal proof on the structure of the algorithm
could help here? Intel repaired the table i.e. the data not the
algorithm that acted upon it.

I would expect the table contents to be part of the stuff being
proved. I hear that they do use formal proofs for their floating
point stuff now, as does AMD. AMD apparently uses ACL2 (a theorem
prover written in Common Lisp, info about AMD is from the ACL2 web
site) but I don't know what Intel uses. I've figured that the FDIV
problem was one of the motivations for this but I'm not certain of it.
 

John Nagle

No, Intel had assumed a symmetry that wasn't actually true.
I would expect the table contents to be part of the stuff being
proved. I hear that they do use formal proofs for their floating
point stuff now, as does AMD. AMD apparently uses ACL2 (a theorem
prover written in Common Lisp, info about AMD is from the ACL2 web
site) but I don't know what Intel uses. I've figured that the FDIV
problem was one of the motivations for this but I'm not certain of it.

Yes, here are two of the papers describing proofs of AMD
floating point unit correctness.

Division: "http://citeseer.ist.psu.edu/53148.html"

Square root: "http://portal.acm.org/citation.cfm?id=607571"

A longer paper:

http://www.computationallogic.com/news/amd.html

This stuff is hard to do, but notice that AMD hasn't had any recalls
for floating point problems.

John Nagle
 
