PEP 3107 and stronger typing (note: probably a newbie question)

Steve Holden

Donn said:
Someday we will look at "variables" like we look at goto.
How very functional. I believe some people naturally think in terms of
state transformations and some in terms of functional evaluation. I am
pretty firmly in the former camp myself, so variables are a natural
repository for state to me.
I've wondered if programmers might differ a lot in how much they
dread errors, or how they react to different kinds of errors.
For example, do you feel a pang of remorse when your program
dies with a traceback - I mean, what a horrible way to die?
Do you resent the compiler's reprimands about your code errors,
or nagging about warnings? Maybe the language implementation's
job has as much to do with working around our feelings as anything
else.
That's an interesting point of view. I certainly don't take the somewhat
anthropomorphic approach you describe above. My first response to any
"error" message is to ask myself "what did I do wrong *now*?" - the very
fact that we talk about "error messages" implies a point of view that's
discouraging to new users: "you did something wrong, fix it and try again".

I do think that most language implementations could spend more time, and
more sympathetic thought, on creating messages that were less pejorative
and more indicative of the required corrective actions. But after forty
years programming I know myself well enough to understand that I am the
most likely cause of incorrect results.

It's always amusing to see a newbie appear on this list and suggest that
there's a bug in some long-standing feature of the language. It's always
a possibility, but the probability is pretty low.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
 
Alex Martelli

Donn Cave said:
I've wondered if programmers might differ a lot in how much they
dread errors, or how they react to different kinds of errors.

That's quite possible. I'm reminded of a by-now commonplace theory,
well summarized at
<http://www.clemmer-group.com/articles/article_128.aspx>:

"""
The only place you should try "doing it right the first time" is with
established, repetitive processes. Beyond that, this quality improvement
cliché can kill innovation. A major study from the American Quality
Foundation concluded, "We don't do things right the first time. Trial
and error -- making mistakes, experiencing failures, and learning from
them -- is how we improve. We need mistakes in order to learn; they are
an integral part of how we get better."
"""

If what you wonder about, and the theory mentioned by Clemmer and
detailed by the AQF, are both true, then this may help explain why some
programmers are fiercely innovative while other, equally intelligent ones,
prefer to stick with some plodding, innovation-killing process that only
works well for repetitive tasks: the latter group may be the ones who
"dread errors", and therefore miss the "making mistakes, experiencing
failures, and learning from them" that is "how we improve".


Alex
 
Hendrik van Rooyen

John Nagle said:
I've worked in big mainframe shops, where an operating system
crash caused everything to suddenly freeze, red lights came on all
over the building, and a klaxon sounded. I've worked for aerospace
companies, where one speaks of "defects", not "bugs", and there
are people around whose job involves getting in high performance
aircraft and flying with the software written there. I've worked
with car-sized robot vehicles, ones big enough to injure people.

This gives one a stronger feeling about wanting to eliminate
software defects early.

Does this actually cause you to make fewer goofy mistakes?

- I find that no matter how careful I try to be, I still f***k up.
Not so much in the big things; it's the little things that get me...

And the errors seem to come in runs - almost like a "bad hair day".

- Hendrik
 
Paul Rubin

If what you wonder about, and the theory mentioned by Clemmer and
detailed by the AQF, are both true, then this may help explain why some
programmers are fiercely innovative while other, equally intelligent ones,
prefer to stick with some plodding, innovation-killing process that only
works well for repetitive tasks: the latter group may be the ones who
"dread errors", and therefore miss the "making mistakes, experiencing
failures, and learning from them" that is "how we improve".

The idea of designing languages with more and more support for
ensuring program correctness is to put the established, repetitive
processes into the computer where they belong, freeing the programmer
to be innovative while still providing high assurance that the
program will be right the first time. And some of the most innovative
work in software is going into this area today.

Also, taking a learn-from-mistakes approach is fine and dandy if the
consequences of the mistakes stay contained to those who make them.
It's completely different if the consequences are imposed on other
people who weren't part of the process. Vast amounts of software
today (and I mean the stuff that clpy denizens write for random web
servers or desktop apps, not just scary stuff like nuclear reactor
code) has the potential to screw people who had nothing to do with the
development process. It's hardly reassuring to hear the developers say
"oh cool, we learned from the mistake" when that happens. So, it's
irresponsible to deliberately choose development processes that
externalize risks onto outsiders that way.
 
Paul Rubin

Donn Cave said:
I've wondered if programmers might differ a lot in how much they
dread errors, or how they react to different kinds of errors.
For example, do you feel a pang of remorse when your program
dies with a traceback - I mean, what a horrible way to die?

I'm reminded of the time I found out that a program I had worked on
had crashed due to a coding bug. It was the control software for an
ATM switch. I had moved on from that job a year or so earlier, but I
found out about the crash because it took out vast swaths of data
communications for the whole US northeast corridor for 2+ days (almost
like the extended power outage of 2003) and it was on the front page
of the New York Times. The first thing I thought of was that a
certain subroutine I had rewritten was the culprit. I got on the
phone with a guy I had worked with to ask what the situation was, and
I was very relieved to find out that the error was in a part of the
code that I hadn't been anywhere near.

That program was a mess of spaghetti C code but even more carefully
written code keeps crashing the same way. It was one of the incidents
that now has me interested in the quest for type-safe languages with
serious optimizing compilers that will allow us to finally trash
every line of C code currently in existence ;-).
 
Ben Finney

Paul Rubin said:
The idea of designing languages with more and more support for
ensuring program correctness is to put the established, repetitive
processes into the computer where they belong, freeing the programmer
to be innovative while still providing high assurance that the
program will be right the first time.

This seems to make the dangerous assumption that the programmer has
the correct program in mind, and needs only to transfer it correctly
to the computer.

I would warrant that anyone who understands exactly how a program
should work before writing it, and makes no design mistakes before
coming up with a program that works, is not designing a program of any
interest.
Also, taking a learn-from-mistakes approach is fine and dandy if the
consequences of the mistakes stay contained to those who make them.
It's completely different if the consequences are imposed on other
people who weren't part of the process.

Certainly. Which is why the trend continues toward developing programs
such that mistakes of all kinds cause early, obvious failure -- where
they have a much better chance of being caught and fixed *before*
innocent hands get ahold of them.
 
Donn Cave

How very functional. I believe some people naturally think in terms of
state transformations and some in terms of functional evaluation. I am
pretty firmly in the former camp myself, so variables are a natural
repository for state to me.

Don't worry - there will be a state transformation monad for you!

Nature or nurture? It would be interesting to see some identical twin
studies on novice programmers. Since few of us were exposed first
to strictly functional programming, though, you have to wonder. In
its day, goto was of course very well loved.

Donn Cave, (e-mail address removed)
 
Donn Cave

Ben Finney said:
This seems to make the dangerous assumption that the programmer has
the correct program in mind, and needs only to transfer it correctly
to the computer.

I would warrant that anyone who understands exactly how a program
should work before writing it, and makes no design mistakes before
coming up with a program that works, is not designing a program of any
interest.

I don't get it. Either you think that the above-mentioned support
for program correctness locks program development into a frozen
stasis at its original conception, in which case you don't seem to
have read or believed the whole paragraph and haven't been reading
much else in this thread. Certainly up to you, but you wouldn't be
in a very good position to be drawing weird inferences as above.

Or you see original conception of the program as so inherently
suspect, that random errors introduced during implementation can
reasonably be seen as helpful, which would be an interesting but
unusual point of view.

Donn Cave, (e-mail address removed)
 
Paul Rubin

Donn Cave said:
Don't worry - there will be a state transformation monad for you!

Nature or nurture? It would be interesting to see some identical twin
studies on novice programmers. Since few of us were exposed first
to strictly functional programming, though, you have to wonder. In
its day, goto was of course very well loved.

Guy Steele used to describe functional programming -- the evaluation
of lambda-calculus without side effects -- as "separation of Church
and state", a highly desirable situation ;-).

(For non-FP nerds, the above is a pun referring to Alonzo Church, who
invented the lambda calculus around 1930).
 
Chris Mellon

I don't get it. Either you think that the above-mentioned support
for program correctness locks program development into a frozen
stasis at its original conception, in which case you don't seem to
have read or believed the whole paragraph and haven't been reading
much else in this thread. Certainly up to you, but you wouldn't be
in a very good position to be drawing weird inferences as above.

Or you see original conception of the program as so inherently
suspect, that random errors introduced during implementation can
reasonably be seen as helpful, which would be an interesting but
unusual point of view.

You can't prove a program to be correct, in the sense that it's proven
to do what it's supposed to do and only what it's supposed to do. You
can prove type-correctness, and the debate is really over the extent
to which a type-correct program is also behavior-correct.

Personally, I remain unconvinced, but am open to evidence - I've only
heard anecdotes which are readily discounted by other anecdotes. I
absolutely do not believe that pure functional style programming,
where we shun variables, is the best model for computing, either now
or in the future.
 
Paul Rubin

Chris Mellon said:
You can't prove a program to be correct, in the sense that it's proven
to do what it's supposed to do and only what it's supposed to do. You
can prove type-correctness, and the debate is really over the extent
to which a type-correct program is also behavior-correct.

But every mathematical theorem corresponds to a type, so if you can
formalize an argument that your code has a certain behavior, then a
type checker can statically verify it. There are starting to be
programming languages with embedded proof assistants, like Concoqtion
(google for it).
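
To make the theorems-as-types idea concrete, here is a minimal sketch in
Python, using PEP 3107-style annotations together with the much later
typing module (an anachronism for this thread, offered purely as
illustration): read through Curry-Howard, a total function of type
(A -> B) -> (B -> C) -> (A -> C) is a proof that implication is transitive.

from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # The only way to build an A -> C from these ingredients is to
    # chain f and g, mirroring the proof that "A implies B" and
    # "B implies C" together yield "A implies C".
    return lambda a: g(f(a))

A type checker that accepts this definition has, in effect, verified that
small theorem; proof assistants scale the same idea to richer properties.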

Of course you can only prove formal properties of programs, which says
nothing about whether the application is doing the right thing for
what the user needs. However, you're still way ahead of the game if
you have a machine-checked mathematical proof that (say) your
multi-threaded program never deadlocks or clobbers data through race
conditions, instead of merely a bunch of test runs in which you didn't
observe deadlock or clobbered data. Similarly for things like
floating-point arithmetic: Intel and AMD now use formal verification
on their designs, apparently unlike in the days of the notorious
original Pentium FDIV implementation, which passed billions of test
vectors and then shipped with an error. Larger applications like the
Javacard bytecode interpreter have been certified for various
properties and pretty soon we may start seeing certified compilers and
OS kernels. Things have come a long way since the 1970's.
Personally, I remain unconvinced, but am open to evidence - I've only
heard anecdotes which are readily discounted by other anecdotes. I
absolutely do not believe that pure functional style programming,
where we shun variables, is the best model for computing, either now
or in the future.

I wonder if you looked at the Tim Sweeney presentation that I linked
to a few times upthread.
 
John Nagle

Donn said:
In its day, goto was of course very well loved.

No, it wasn't. By 1966 or so, "GOTO" was starting to look
like a bad idea. It was a huge hassle for debugging.

It took another decade to get the iteration constructs
approximately right (FORTRAN was too restrictive, ALGOL
was too general), and to deal with problems like the
"dangling ELSE", but it was clear quite early that GOTO
was on the way out.

There was a detour through BASIC and its obsession with
line numbers ("GOSUB 14000" - bad subroutine call construct)
that took a while to overcome. Much of that was from trying
to cram interactive systems into computers with very modest
capacity per user.

John Nagle
 
John Nagle

Chris said:
You can't prove a program to be correct, in the sense that it's proven
to do what it's supposed to do and only what it's supposed to do.

Actually, you can prove quite a bit about programs with the right tools.
For example, proving that a program cannot subscript out of range is
quite feasible. (Although not for C, where it's most needed; that language
is just too ill-defined.) You can prove that certain assertions about
an object always hold when control is outside the object. You can
prove that certain entry conditions for a function are satisfied by
all its callers.
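
A rough run-time analogue of those entry conditions can be sketched in
plain Python; the decorator name "requires" below is made up for
illustration, and unlike tools such as Spec# it merely checks the condition
on every call rather than proving it statically for all callers:

def requires(predicate):
    # Attach a precondition to a function; a violating caller fails
    # immediately and loudly instead of corrupting state later.
    def decorate(func):
        def wrapper(*args):
            assert predicate(*args), "precondition violated: " + func.__name__
            return func(*args)
        return wrapper
    return decorate

@requires(lambda items, index: 0 <= index < len(items))
def element_at(items, index):
    # The precondition guarantees the subscript is in range.
    return items[index]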

Take a look at what the "Spec#" group at Microsoft is doing.
There's also some interesting USAF work on avionics at

http://www.stsc.hill.af.mil/crossTalk/2006/09/0609SwardGerkenCasey.html

But it's not a very active field right now. The basic problem is that
C and C++ aren't well suited to proof of correctness, yet none of the
other hard-compiled languages have any significant market share left.

This is irrelevant to Python issues, though.

John Nagle
 
Hendrik van Rooyen

Donn Cave said:
In its day, goto was of course very well loved.

Does anybody know for sure if it is in fact possible to
design a language completely free from conditional jumps?

At the lower level, I don't think you can get away with
conditional calls - hence the "jumps with dark glasses",
continue and break.

I don't think you can get that functionality in another way.

Think of solving the problem of reading a text file to find
the first occurrence of some given string - can it be done
without either break or continue? (given that you have to
stop when you have found it)

I can't think of a way, even in assembler, to do this without
using a conditional jump - but maybe my experience has
poisoned my mind, as I see the humble if statement as a plethora
of local jumps...

- Hendrik
 
Ben Finney

John Nagle said:
No, it wasn't. By 1966 or so, "GOTO" was starting to look like a
bad idea. It was a huge hassle for debugging.

This is interesting. Do you have any references we can read about this
assertion -- specifically, that "GOTO" was not well loved (I assume
"by the programming community at large") even by around 1966?
 
Paul Rubin

Ben Finney said:
This is interesting. Do you have any references we can read about this
assertion -- specifically, that "GOTO" was not well loved (I assume
"by the programming community at large") even by around 1966?

Dijkstra's famous 1968 "GOTO considered harmful" letter
(http://www.acm.org/classics/oct95/) quotes a 1966 article by N. Wirth
and C.A.R. Hoare:

The remark about the undesirability of the go to statement is far from
new. I remember having read the explicit recommendation to restrict
the use of the go to statement to alarm exits, but I have not been
able to trace it; presumably, it has been made by C. A. R. Hoare. In
[1, Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
direction in motivating the case construction: "Like the conditional,
it mirrors the dynamic structure of a program more clearly than go to
statements and switches, and it eliminates the need for introducing a
large number of labels in the program."

Reference: 1. Wirth, Niklaus, and Hoare, C. A. R. A contribution
to the development of ALGOL. Comm. ACM 9 (June 1966), 413-432.
 
Antoon Pardon

Does anybody know for sure if it is in fact possible to
design a language completely free from conditional jumps?

I think you have to be clearer about what you mean. I would consider a
while loop a conditional jump, but I have the impression you don't.
Is that correct?
At the lower level, I don't think you can get away with
conditional calls - hence the "jumps with dark glasses",
continue and break.

Would you consider raise as belonging in this collection?
I don't think you can get that functionality in another way.

Think of solving the problem of reading a text file to find
the first occurrence of some given string - can it be done
without either break or continue? (given that you have to
stop when you have found it)

It depends on what the language offers. Should PEP 315 be implemented, the
code would look something like:

do:
    line = fl.readline()
while st not in line:
    pass
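
For comparison, a sketch of the same search in Python as it exists today,
with no break, continue, or hypothetical syntax (fl and st are the file
object and search string from the snippet above): a generator expression
paired with next() stops reading at the first hit.

first_hit = next((line for line in fl if st in line), None)
# first_hit is the first matching line, or None if no line matches.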
 
Marc 'BlackJack' Rintsch

Does anybody know for sure if it is in fact possible to
design a language completely free from conditional jumps?

GOTO is unconditional. I guess it depends on what you call a jump.
At the lower level, I don't think you can get away with
conditional calls - hence the "jumps with dark glasses",
continue and break.

I don't think you can get that functionality in another way.

Don't think in terms of calls and jumps, but in terms of conditionally
evaluating functions with no side effects.
Think of solving the problem of reading a text file to find
the first occurrence of some given string - can it be done
without either break or continue? (given that you have to
stop when you have found it)

Finding an element in a list in Haskell:

findFirst needle haystack = f 0 haystack where
    f _ [] = -1
    f i (x:xs) | x == needle = i
               | otherwise = f (i+1) xs

No ``break`` or ``continue`` but a recursive function.
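
For readers who don't speak Haskell, a rough Python transcription of the
same recursive idea (a sketch only; recursing over list slices is faithful
to the original but not idiomatic or efficient Python):

def find_first(needle, haystack):
    # Return the index of the first element equal to needle, or -1,
    # mirroring the recursive Haskell definition above.
    def f(i, xs):
        if not xs:
            return -1
        if xs[0] == needle:
            return i
        return f(i + 1, xs[1:])
    return f(0, haystack)
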
I can't think of a way, even in assembler, to do this without
using a conditional jump - but maybe my experience has
poisoned my mind, as I see the humble if statement as a plethora
of local jumps...

Technically yes, and with exceptions we have non-local jumps all over the
place.

You are seeing the machine language behind it, and of course there are lots
of GOTOs there, but not "uncontrolled" ones. The language above hides them
and allows only a limited set of jumps, which eases the proof that the
program is correct. If you have a conditional call, you can prove both
alternative calls separately and then build on those proofs for the
function that combines the alternatives.

Trying to guess what some source would look like in machine language isn't
so easy any more as the abstraction level of the programming language
rises. In Haskell, for example, you have to keep in mind that evaluation is
lazy and that the compiler knows a great deal about the program's
structure, data types, and flow when it optimizes.

Ciao,
Marc 'BlackJack' Rintsch
 
Diez B. Roggisch

John said:
Actually, you can prove quite a bit about programs with the right
tools.
For example, proving that a program cannot subscript out of range is
quite feasible. (Although not for C, where it's most needed; that language
is just too ill-defined.)

Can you, even when there is input involved?
You can prove that certain assertions about
an object always hold when control is outside the object. You can
prove that certain entry conditions for a function are satisfied by
all its callers.

Take a look at what the "Spec#" group at Microsoft is doing.
There's also some interesting USAF work on avionics at

http://www.stsc.hill.af.mil/crossTalk/2006/09/0609SwardGerkenCasey.html

"""
For example, SPARK does not support dynamic allocation of memory so
things such as pointers and heap variables are not supported.
"""

Pardon me, but if those are the restrictions one has to impose on one's
programs to make them verifiable, I'm not convinced that this is the way
to go for Python - or any other language - that needs programs beyond
the most trivial tasks.

Which is not to say that trivial code couldn't have errors, and if it's
extremely crucial code, it's important that it have none. But all I
can see is that you can create trustworthy, simple sub-systems in such a
language, and by building upon them you can ensure that at least these
won't fail.

But to stick with the example: if the path planning of the UAV, which
involves tracking a number of enemy aircraft not known in advance, steers
the UAV into the ground, no proven-not-to-fail radius calculation will
help you.

Diez
 
Donn Cave

Paul Rubin said:
Ben Finney said:
This is interesting. Do you have any references we can read about this
assertion -- specifically, that "GOTO" was not well loved (I assume
"by the programming community at large") even by around 1966?

Dijkstra's famous 1968 "GOTO considered harmful" letter
(http://www.acm.org/classics/oct95/) quotes a 1966 article by N. Wirth
and C.A.R. Hoare:

The remark about the undesirability of the go to statement is far from
new. I remember having read the explicit recommendation to restrict
the use of the go to statement to alarm exits, but I have not been
able to trace it; presumably, it has been made by C. A. R. Hoare. In
[1, Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
direction in motivating the case construction: "Like the conditional,
it mirrors the dynamic structure of a program more clearly than go to
statements and switches, and it eliminates the need for introducing a
large number of labels in the program."

Reference: 1. Wirth, Niklaus, and Hoare, C. A. R. A contribution
to the development of ALGOL. Comm. ACM 9 (June 1966), 413-432.

So, all I need is comments from a computer scientist or two,
pointing out problems with the procedural/imperative programming
model, and we can establish that it was already hated and on its
way out by the late 90's? How about OOP? Already on its way out
by the time Python 1.0 hit the streets?

The thing that allows us to be so smug about the follies of the
past is that we can pretend everyone knew better.

Donn Cave, (e-mail address removed)
 
