Python from Wise Guy's Viewpoint

Kenny Tilton

Fergus said:
No, that is exactly right. Like the man said, read the archives for
comp.lang.ada.

Yep, I was wrong. They /did/ handle the overflow by leaving the
operation unguarded, trusting it to bring down the system eventually,
which was their design goal. Apologies to Dennis.

That's all true, but it is only part of the story, and selectively
quoting just that part is misleading in this context.

I quoted the entire paragraph and it seemed conclusive, so I did not
read the rest of the report. ie, I was not being selective, I just
assumed no one would consider crashing to be a form of error-handling.
My mistake, they did.

Well, the original question was, "Would Lisp have helped?". Let's see.
They dutifully went looking for overflowable conversions and decided
what to do with each, deciding in this case to do something appropriate
for the A4, which was then inappropriately allowed by management to go
into the A5 unexamined.

In Lisp, well, there are two cases. Did they have to dump a number into
a 16-bit hardware channel? There was some reason for the conversion. If
not, no Operand Error arises. It is an open question whether they would
decide to check anyway for large values and abort if found, but this
one arose only during a sweep of all such conversions, so probably not.

But suppose they did have to dance to the 16-bit tune of some hardware
blackbox. They would go thru the same reasoning and decide to shut down
the system. No advantage to Lisp. But they'd have to do some work to
bring the system down, because there would be no overflow. So:

(define-condition e-hardware-broken (e-pre-ignition e-fatal)
  ((component-id :initarg :component-id :reader component-id)
   (bad-value :initarg :bad-value :initform nil :reader bad-value)
   ...etc etc...

And then they would have to kick it off, and the exception handler of
the controlling logic would get a look at the condition on the way out.
Of course, it also sees operand errors, so one can only hope that at
some point during testing they for some reason had /some/ condition of
type e-pre-ignition get trapped by the in-flight supervisor, at which
point someone would have said either "throw it away" or "why is that
module still running?"
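
Sketched in Python for the sake of argument (every name here --
horizontal_bias, supervisor, shut_down -- is hypothetical, not from the
report; the point is just that nothing overflows silently, so crashing
has to be an explicit decision):

import struct

class EFatal(Exception): pass
class EPreIgnition(Exception): pass

class EHardwareBroken(EPreIgnition, EFatal):
    def __init__(self, component_id, bad_value=None):
        super().__init__(component_id, bad_value)
        self.component_id = component_id
        self.bad_value = bad_value

def shut_down(condition):
    # dumber-than-dirt last resort; a real system does much more
    print("shutting down:", condition.component_id, condition.bad_value)

def horizontal_bias(value):
    # Packing into the 16-bit hardware channel fails loudly if the
    # value is out of range; there is no silent wraparound to trust.
    try:
        return struct.pack('>h', value)
    except struct.error:
        raise EHardwareBroken("BH channel", bad_value=value)

def supervisor(task):
    try:
        task()
    except EPreIgnition as c:
        # The controlling logic sees the condition on the way out and
        # gets to ask: why is that module still running at all?
        shut_down(c)

supervisor(lambda: horizontal_bias(40000))   # out of 16-bit range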

Or, if they were as meticulous with their handlers as they were with
numeric conversions, they would, during the inventory of explicit
conditions to handle, have gotten to the pre-ignition module conditions
and decided, "what does that software (which should not even be
running) know about the hardware that the rest of the system does not
know?"

The case is not so strong now, but the odds are still better with Lisp.

kenny
 
Hannu Kankaanpää

Alex Martelli said:
Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names of course;-).
E.g.:
emitwarning URGENT "meltdown imminent!!!"
DOES look nicer to non-programmers than
emitwarning(URGENT, "meltdown imminent!!!")

It depends on the background of the non-programmer. I'd
say most non-programmers who turn into programmers have at
least some math experience, so they won't be scared to type
1 + 2 instead of "give me the answer to one plus two, thank you".
The latter group we can always guide to COBOL ;) (if my
understanding of that language is correct). And the former
group should be familiar with the function notation.

Perhaps, despite Guido's urge for "programming for everyone", Python
has been designed with a group in mind that has at least some hope of
becoming programmers ;)

I think that making return optional is slightly error-prone,
but it DOES make the language easier to learn for newbies --
newbies often err, in Python, by writing such code as
def double(x): x+x
which indicates the lack of 'return' IS more natural than its
mandatory presence.

You're right. That definition of double is closer to what
programming newbies probably have learned in math, than one
with "return". But that's not the point I was arguing really.
It was that Pythonistas prefer the explicit "return" and don't
want it to be changed -- so it's silly to present it as one of
Python's flaws.

Well ok, that was a pretty bold claim with no extensive
studies to back it up, and it even contradicts my previously
expressed need to be compatible with math. So sure, it's
a tradeoff; but unlike the 'no-parens' syntax, explicit
return adds to code readability without affecting the basic
notation as comprehensively as the lack of parens in
function calls does (e.g. by making higher-order functions
less intuitive to use).

Actually my preference is to either always require return when
there's something to return, or never allow return. Making it
optional just leads to less uniformity. And disallowing it
entirely in an imperative language wouldn't be such a wise
move either.
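
The trap itself is easy to show in plain Python (nothing hypothetical
here):

def double_wrong(x):
    x + x           # computes the sum, then silently throws it away

def double(x):
    return x + x    # 'return' is what hands the value back

print(double_wrong(2))   # None -- the silent failure newbies trip over
print(double(2))         # 4

A function body that never reaches a return hands back None, which then
blows up somewhere far from the actual mistake.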
 
Pascal Bourguignon

Fergus Henderson said:

The post at that url writes about the culture of the Ariane team, but
I would say that it's an even more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase the physical margin, but in
this case, adopting a point of view more like _robotics_ could help.
Even in case of hardware failure, there's no reason to shut down the
mind; just go on with what you have.
 
Kenny Tilton

Markus said:
Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
exceptions. The default behaviour of the system was to abort the mission
for safety reasons by blasting the rocket. This wasn't justified in this
case, but one is always more clever after the event...

Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as an indication of a failure so serious that it would justify
aborting the mission.

Yes, I have acknowledged in another post that I was completely wrong in
my guesswork: everything was intentional and signed-off on by many.

A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.

kenny
 
Erann Gat

[Discussing the Ariane failure]

A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.

That's right. This is why hardware folks spend a lot of time thinking
about common mode failures, and why software folks could learn a thing or
two from the hardware folks in this regard.

E.
 
Steve Schafer

Even in case of hardware failure, there's no reason to shut down the
mind; just go on with what you have.

When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

-Steve
 
Terry Reedy

Markus Mottl said:
Note that I am not defending Ada in any way or arguing against FPLs: in
fact, being an FPL-advocate myself, I do think that FPLs (including
Lisp) have an edge when it comes to writing safe code. But the Ariane
example just doesn't support this claim. It was an absolutely horrible
management mistake to not check old code for compliance with the new
spec. End of story...

The investigating commission reported about 5 errors that, in series,
allowed the disaster. As I remember, another non-programming/language
one was in mockup testing. The particular black box, known to be
'good', was not included, but just simulated according to its expected
behavior. If it had been included, and a flight simulated in real
time with appropriate tilting and shaking, it would probably have
given the spurious abort message that it did in the real flight.

TJR
 
Alex Martelli

Gerrit said:
I wonder to what extent this statement is true. I know at least
1 Ruby programmer who came from Python, but this spot check should
not be trusted, since I know only 1 Ruby programmer and only 1
former Python programmer <g>. But I have heard that there are a
lot of former Python programmers in the Ruby community. I think
it is safe to say that of all languages Python programmers migrate
to, Ruby is the strongest magnet. OTOH, the migration of this part
of the Python community to Ruby may have been completed already,
of course.

Python and Ruby are IMHO very close, thus "compete" for roughly
the same "ecological niche". I still don't have enough actual
experience in "production" Ruby code to be able to say for sure,
but my impression so far is that -- while no doubt there's a LOT
of things for which they're going to be equally good -- Python's
simplicity and uniformity help with application development for
larger groups of programmers, while Ruby's extreme dynamism and
more variegated style may be strengths for experimentation, or
projects with one, or few and very well-attuned and experienced,
developers. I keep coming back to Python (e.g. because I have
no gmpy in Ruby for my own pet personal projects...:) but I do
mean to devote more of my proverbial "copious spare time" to
Ruby explorations (e.g., porting gmpy, otherwise it's unlikely
I'll ever get all that much combinatorial arithmetics done...;-).


Alex
 
Joachim Durchholz

Pascal said:
The post at that url writes about the culture of the Ariane team, but
I would say that it's an even more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase the physical margin,

Which is exactly why the margin is kept as small as possible.
Occasionally, it will be /too/ small.

Has anybody seen a car model series where every unit worked perfectly
from the first one?
From what I read, every new model has its small quirks and
"near-perfect" gotchas. The difference is just that you're not allowed
to do that in expensive things like rockets (which is, among many other
things, one of the reasons why space vehicles and aircraft are so d*mn
expensive: if something goes wrong, you can't just drive them onto the
nearest parking lot and wait for maintenance and repair...)
but in this
case, adopting a point of view more like _robotics_ could help. Even
in case of hardware failure, there's no reason to shut down the mind;
just go on with what you have.

As Steve wrote, letting a rocket carry on regardless isn't a good idea
in the general case: it would be a major disaster if it made it to the
nearest coast and crashed into a town. Heck, it would be enough if the
fuel tanks leaked and all the fuel rained down on a ship somewhere in
the Atlantic - most rocket fuels are toxic.

Regards,
Jo
 
Pascal Bourguignon

Steve Schafer said:
When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

You have to define a "dangerous" situation. Remember that this
"safest possible way" is usually to blow the rocket up. AFAIK, while
this parameter was out of range, there was no instability and the
rocket was not uncontrollable.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

My point. This "can't possibly happen" failure did happen, so clearly
it was not a "can't possibly happen" physically, which means that the
problem was with the software. We know it, but what I'm saying is that
smarter software could have deduced it on the fly.

We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.
 
Tim Sweeney

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
90% of the code is function applications. Why not make it convenient?

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)". Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less obvious place.
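
To see that deferral concretely, here is the effect mimicked in Python
with hand-rolled closures (a sketch; Python itself doesn't curry):

def add3(x):
    # hand-curried: each call binds exactly one argument
    return lambda y: lambda z: x + y + z

print(add3(1)(2)(3))   # 6
oops = add3(1)(2)      # forgot the third argument: no error yet,
                       # just a perfectly valid function object
print(oops * 10)       # TypeError only here, far from the real mistake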

I think #9 is inconsistent with #1.

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).
 
Marcin 'Qrczak' Kowalczyk

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried.

No, Lisp doesn't curry. It really writes "(f x y)", which is different
from "((f x) y)" (which is actually Scheme, not Lisp).

In fact the syntax "f x y" without mandatory parens fits non-lispish
non-curried syntaxes too. The space doesn't have to be left- or
right-associative; it just binds all arguments at once, and this
expression is different both from "f (x y)" and "(f x) y".

The only glitch is that you have to express application to 0 arguments
somehow. If you use "f()", you can't use "()" as an expression (for
empty tuple for example). But once you accept that, it works. It's my
favorite function application syntax.
 
Andrew Dalke

Pascal Bourguignon:
We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.

Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered? How would you implement it?
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)

I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)

The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumption architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall), but the article pointed out that the hard part of this
approach is that the result is hard to understand, and it used various
defects on the chip (part of the circuit wasn't used but the chip
wouldn't work without it), which makes the result harder to mass
produce.

Andrew
(e-mail address removed)
 
Steve Schafer

AFAIK, while this parameter was out of range, there was no instability
and the rocket was not uncontrollable.

That's perfectly true, but also perfectly irrelevant. When your
carefully designed software has just told you that your rocket, which,
you may recall, is traveling at several thousand metres per second, has
just entered a "can't possibly happen" state, you don't exactly have a
lot of time in which to analyze all of the conflicting information and
decide which to trust and which not to trust. Whether that sort of
decision-making is done by engineers on the ground or by human pilots or
by some as yet undesigned intelligent flight control system, the answer
is the same: Do the safe thing first, and then try to figure out what
happened.

All well-posed problems have boundary conditions, and the solutions to
those problems are bounded as well. No matter what the problem or its
means of solution, a boundary is there, and if you somehow cross that
boundary, you're toast. In particular, the difficulty with AI systems is
that while they can certainly enlarge the boundary, they also tend to
make it fuzzier and less predictable, which means that testing becomes
much less reliable. There are numerous examples where human operators
have done the "sensible" thing, with catastrophic consequences.

My point.

Well, actually, no. I assure you that my point is very different from
yours.

This "can't possibly happen" failure did happen, so clearly it was not
a "can't possibly happen" physically, which means that the problem was
with the software.

No, it still was a "can't possibly happen" scenario, from the point of
view of the designed solution. And there was nothing wrong with the
software. The difficulty arose because the solution for one problem was
applied to a different problem (i.e., the boundary was crossed).

it would be better to have smarter software that can try and handle
UNEXPECTED error situations

I think you're failing to grasp the enormity of the concept of "can't
possibly happen." There's a big difference between merely "unexpected"
and "can't possibly happen." "Unexpected" most often means that you
haven't sufficiently analyzed the situation. "Can't possibly happen," on
the other hand, means that you've analyzed the situation and determined
that the scenario is outside the realm of physical or logical
possibility. There is simply no meaningful means of recovery from a
"can't possibly happen" scenario. No matter how smart your software is,
there will be "can't possibly happen" scenarios outside the boundary,
and your software is going to have to shut down.

I would feel safer in an AI rocket.

What frightens me most is that I know that there are engineers working
on safety-critical systems that feel the same way. By all means, make
your flight control system as sophisticated and intelligent as you want,
but don't forget to include a simple, reliable, dumber-than-dirt
ejection system that "can't possibly fail" when the "can't possibly
happen" scenario happens.

Let me try to summarize the philosophical differences here: First of
all, I wholeheartedly agree that a more sophisticated software system
_may_ have prevented the destruction of the rocket. Even so, I think the
likelihood of that is rather small. (For some insight into why I think
so, you might want to take a look at Henry Petroski's _To Engineer is
Human_.) Where we differ is how much impact we believe that more
sophisticated software would have on the problem. I get the impression
that you believe that an AI-based system would drastically reduce
(perhaps even eliminate?) the "can't possibly happen" scenario. I, on
the other hand, believe that even the most sophisticated system enlarges
the boundary of the solution space by only a very small amount--the area
occupied by "can't possibly happen" scenarios remains far greater than
that occupied by "software works correctly and saves the rocket"
scenarios.

-Steve
 
Matthew Danish

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
90% of the code is function applications. Why not make it convenient?

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

And lambda notation is: \xy.yx or something like that. Math notation is
rather ad-hoc, designed for shorthand scribbling on paper, and in
general a bad idea to imitate for programming languages which are
written on the computer in an ASCII editor (which is one thing which
bothers me about ML and Haskell).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)".

Here's an "aha" moment for you:

In Haskell and ML, the two biggest languages with built-in syntactic
support for currying, there is also a datatype called a tuple (which is
a record with positional fields). All functions, in fact, only take a
single argument. The trick is that the syntax for tuples and the syntax
for currying combine to form the syntax for function calling:

f (x, y, z) ==> calling f with a tuple (x, y, z)
f x (y, z) ==> calling f with x, and then calling the result with (y, z).

This, I think, is a win for a functional language. However, in a
not-so-functionally-oriented language such as Lisp, this gets in the way
of flexible parameter-list parsing, and doesn't provide that much value.
In Lisp, a form's meaning is determined by its first element, hence (f x
y) has a meaning determined by F (whether it is a macro, or functionally
bound), and Lisp permits such things as "optional", "keyword" (a.k.a. by
name) arguments, and ways to obtain the arguments as a list.

"f x y", to Lisp, is just three separate forms (all symbols).
Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less obvious place.

Nah, it should still be able to report the line number correctly.
Though I freely admit that the error messages spat out of compilers like
SML/NJ are not so wonderful.

I think #9 is inconsistent with #1.

I think that if the parser recognizes that it is directly within a [ ]
form, it can figure out that these are not function calls but rather
elements, though it would require that function calls be wrapped in
( )'s now. And I think the grammar would be made much more complicated.

Personally, I prefer (list a (b c d) e f).

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper. I never really thought of "f x" as using
whitespace as an operator--it's a delimiter in the strict sense. The
grammars of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter. I'm not a big fan of required commas because it gets
annoying when you are editing large tables or function calls with many
parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
with extraneous characters like those, though it does try.
 
Michael Geary

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper. I never really thought of "f x" as using
whitespace as an operator--it's a delimiter in the strict sense. The
grammars of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter...

Did you read the cited paper *all the way to the end*?

-Mike
 
Pascal Bourguignon

Andrew Dalke said:

Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered?

By looking at the big picture!

The blow-up action would be activated only when the big picture shows
that the AI has no control of the rocket and that it is going down.

How would you implement it?

Like any AI.

How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)

In a simulator. In any case, the point is to have software that is
able to handle even unexpected failures.

I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)

XEmacs is not AI.
The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumption architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall), but the article pointed out that the hard part of this
approach is that the result is hard to understand, and it used various
defects on the chip (part of the circuit wasn't used but the chip
wouldn't work without it), which makes the result harder to mass
produce.

Andrew
(e-mail address removed)

In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...
 
Andrew Dalke

Pascal Bourguignon:
In a simulator. In any case, the point is to have software that is
able to handle even unexpected failures.

Like I said, the existing code was not tested in a simulator. Why
do you think some AI code *would* be tested for this same case?
(Actually, I believe that an AI would need to be trained in a
simulator, just like humans, but that it would require so much
testing as to preclude its use, for now, in rocket control systems.)

Nor have you given any sort of guideline on how to implement
this sort of AI in the first place. Without it, you've just restated
the dream of many people over the last few centuries. It's a
dream I would like to see happen, which is why I agreed with you.

XEmacs is not AI.

Yup, which is why the smiley is there. You said that C was
not the language to use (cf your perror/exit comment) and implied
that Ada wasn't either, so I assumed you had a more resilient
programming language in mind. My response was to point
out that Emacs Lisp also crashes (rarely) given unexpected
errors, and so to imply that Lisp is not the answer.

Truly, I believe that programming languages as we know
them are not the (direct) solution, hence my pointers to
evolvable hardware and similar techniques.

Even then, we still have a long way to go before they
can be used to control a rocket. They require a lot of
training (just like people) and software simulators just
won't cut it. The first "AI"s will replace those things
we find simple and commonplace [*] (because our brain
evolved to handle it), and not hard and rare.

Andrew
(e-mail address removed)
[*]
In thinking of some examples, I remembered a passage in
one of Cordwainer Smith's stories. In them, dogs, cats,
eagles, cows, and many other animals were artificially
endowed with intelligence and a human-like shape.
Turtles were bred for tasks which required long patience.
For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.
 
Dennis Lee Bieber

Andrew Dalke fed this fish to the penguins on Monday 20 October 2003
21:41 pm:

For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.
I do hope he was allowed time-out for the occasional lettuce leaf or
other veggies... <G>

 
Matthew Danish

Did you read the cited paper *all the way to the end*?

Why bother? It says "April 1" in the Abstract, and got boring about 2
paragraphs later. I should have scare-quoted "operator" above, or
rather the lack of one, which is interpreted as meaning function
application.
 
