Python from Wise Guy's Viewpoint

Pascal Bourguignon

Garry Hodgson said:
so assume this AI software was running on Ariane 5, and the same
condition occurs. based on the previously referenced design
assumptions, it is told that there's been a hardware failure, and that
numerical calculations can no longer be trusted. how does it cope
with this?

Just yesterday I read an old paper by Sussman about how they designed
a Lisp on a chip, including the garbage collector and the eval
function. Strangely enough, it did not include any ALU (only a test
for zero and an incrementer, for address scanning).

You can implement an eval without arithmetic, and you can implement a
theorem prover on top of it, still without arithmetic. You can do a
great deal of thinking without any arithmetic...

in this case, a restart would cause the same calculations to occur,
and the same failure to be reported.

In this case, since the problem was not in the supposed AI controlling
agent, there would have been no restart.

absolutely right. though, in this case, this wouldn't have helped either.
the fatal error was a process error, and it occurred long before launch.

I think it would have helped. For example, an architecture like the
Shuttle's, with five differently programmed computers, would have
helped, because at least one of the computers would not have had the
Ariane-4 module.
 
Joachim Durchholz

Pascal said:
[...] For example, an architecture like the Shuttle's, with five
differently programmed computers, would have helped, because at least
one of the computers would not have had the Ariane-4 module.

Even the Ariane team is working under budget constraints. Obviously, in
this case, the budget didn't allow a re-check of the SRI design wrt.
Ariane-5 specifications, much less programming the same software five(!)
times over.

Besides, programming the same software multiple times would have helped
regardless of whether you're doing it with an AI or traditionally. I
still don't see how AI could have helped prevent the Ariane-5 crash. As
far as I have seen, any advances in making chips or programs smarter
have consistently been offset by higher testing efforts: you still have
to formally specify what the system is supposed to do, and then test
against that specification.
Actually, AI wouldn't have helped in the least bit here: the
specification was wrong, so even an AI module, at whatever
sophistication level, wouldn't have worked.

The only difference is that AI might allow people to write higher-level
specifications. I.e. something like "the rocket must be stable" instead
of "the rocket must not deviate more than 12.4 degrees from the
vertical"... but even "the rocket must be stable" would have to be
broken down into much more technical terms, with leeway for much the
same design and specification errors as those that caused the Ariane-5
software to lose control.

Regards,
Jo
 
Pascal Costanza

William said:
What do you mean by "reducing the expressive power of the language"? There
are many general-purpose statically typed programming languages that are
Turing complete, so it's not a theoretical limitation, as you imply.

For example, static type systems are incompatible with dynamic
metaprogramming. This is objectively a reduction of expressive power,
because programs that don't allow for dynamic metaprogramming can't be
extended in certain ways at runtime, by definition.
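As a concrete sketch of the dynamic-metaprogramming point in Python (the class and method names here are invented for illustration): a class is extended at run time, after instances of it already exist, which a type system that fixes a class's interface at compile time cannot admit.

```python
# Runtime (dynamic) metaprogramming: extend a class after the fact.

class Account:
    def __init__(self, balance):
        self.balance = balance

# Later -- say, in a plugin loaded at run time -- a new method is
# attached to the already-defined class:
def audit(self):
    return f"balance is {self.balance}"

Account.audit = audit  # existing instances gain the method too

acct = Account(100)
print(acct.audit())  # prints: balance is 100
```

A statically typed language would have to anticipate this extension point (e.g. via an interface) at compile time; here nothing in `Account` had to be prepared for it.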
Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system. On the
contrary, every type error the compiler catches in my code indicates code
that *doesn't make sense*. I'd hate to imagine code that doesn't make
sense passing into regression testing. What if i forget to test a
non-sensical condition?

You need some testing discipline, which is supported well by unit
testing frameworks.
On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that weren't
caught until run time, which significantly increased development time
and difficulty.

Furthermore, thinking about types during the development process keeps me
honest: i'm much more likely to write code that works if i've spent some
time understanding the problem and the types involved. This sort of
pre-development thinking helps to *eliminate* potential sources for bugs,
not introduce them. Even Scheme advocates encourage this (as in Essentials
of Programming Languages by Friedman, Wand, and Haynes).

Yes, thinking about a problem to understand it better occasionally helps
to write better code. This has nothing to do with static typing. This
could also be achieved by placing some other arbitrary restrictions on
your coding style.
When do programmers know better? An int is an int and a string is a
string, and ne'er the twain shall be treated the same. I would rather
``1 + "bar"'' signal an error at compile time than at run time.

Such code would easily be caught very soon in your unit tests.
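As a sketch of that claim in Python (`add_totals` is an invented example function): a short unit test surfaces the mixed-type call the first time the suite runs.

```python
import unittest

def add_totals(a, b):
    # No type declarations: nothing stops a caller from mixing types.
    return a + b

class AddTotalsTest(unittest.TestCase):
    def test_mixed_types_fail_fast(self):
        # At run time, 1 + "bar" raises TypeError, so the mistake
        # shows up as soon as the test suite is executed.
        with self.assertRaises(TypeError):
            add_totals(1, "bar")
```

Run with `python -m unittest`; in a statically typed language the equivalent misuse would instead be rejected at compile time, before any test runs.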


Pascal
 
Pascal Costanza

Andrew said:
Given what I know of embedded systems, I can effectively
guarantee you that all the code on the rocket was proven
to halt in not only a finite amount of time but a fixed amount of
time.

Yes, this is a useful restriction for a certain scenario. I don't have
anything against restrictions put on code, provided these restrictions
are justified.

Static type systems are claimed to generally improve your code. I don't
see that.


Pascal
 
Joachim Durchholz

Pascal said:
....because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

The final statement is correct, but you don't need to solve the halting
problem: it's enough to allow the specification of some easy-to-prove
properties, without hindering the programmer too much.

Most functional languages with a static type system don't require that
the programmer writes down the types, they are inferred from usage. And
the type checker will complain as soon as the usage of some data item is
inconsistent.
IOW if you write
a = b + "asdf"
the type checker will infer that both a and b are strings; however, if
you continue with
c = a + b + 3
it will report a type error because 3 and "asdf" don't have a common
supertype with a "+" operation.
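A rough Python rendering of that two-line example (`demo` is an invented name; the static check assumes an external tool such as mypy, which infers types from usage much as Joachim describes):

```python
def demo(b):
    a = b + "asdf"   # usage fixes b (and therefore a) as strings
    c = a + b + 3    # str + int: an ML-style checker would flag
    return c         # this line without running anything

# Without static checking, the mistake surfaces only on execution:
try:
    demo("hello")
except TypeError as err:
    print("caught at run time:", err)
```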

It's the best of both worlds: no fuss with type declarations (which is
one of the less interesting things one spends time with) while getting
good static checking.
(Nothing is as good in practice as it sounds in theory, and type
inference is no exception. Interpreting type error messages requires
some getting used to - just like interpreting syntax error messages is a
bit of an art, leaving one confounded for a while until one "gets it".)
(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)

My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12 months.

Regards,
Jo
 
Fergus Henderson

Pascal Costanza said:
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency. Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.

Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.
 
Pascal Costanza

Matthias said:
It depends a whole lot on what you consider "expressive". In my book,
static type systems (at least some of them) work by increasing the
expressive power of the language because they let me express certain
intended invariants in a way that a compiler can check (and enforce!)
statically, thereby expediting the discovery of problems by shortening
the edit-compile-run-debug cycle.

The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs that
can be statically checked. So dynamically typed languages allow me to
express more useful programs than statically typed languages.
In my own experience they seem to cover at least 99%.

I don't question that. If this works well for you, keep it up. ;)
(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)

I don't need a study for that statement because it's a simple argument:
if the language doesn't allow me to express something in a direct way,
but requires me to write considerably more code then I have considerably
more opportunities for making mistakes.


Pascal
 
Pascal Costanza

Fergus said:
Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency.

+ Design process: There are clear indications that processes like
extreme programming work better than processes that require some kind of
specification stage. Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
immediately having to write the corresponding target code.

+ Documentation: Comments are usually better for handling documentation.
;) If you want your "comments" checked, you can add assertions.

+ Error checking: I can only guess what you mean by this. If you mean
something like Java's checked exceptions, there are clear signs that
this is a very bad feature.

+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottlenecks of
your program. No amount of static checks can identify bottlenecks; you
have to actually run the program to determine them.
Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.

I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)
Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.

I don't think so.


Pascal
 
Pascal Costanza

Joachim said:
Most functional languages with a static type system don't require that
the programmer writes down the types, they are inferred from usage. And
the type checker will complain as soon as the usage of some data item is
inconsistent.

I know about type inference. The set of programs that can be checked
with type inference is still a subset of all useful programs.
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12
months.

I am not surprised. :)


Pascal
 
Fergus Henderson

Joachim Durchholz said:
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12 months.

While I tend to agree that such complaints are rare, such complaints also
tend to be language-specific, and thus get posted to language-specific
forums, e.g. the Haskell mailing list, the Clean mailing list, the OCaml
mailing list, etc., rather than to more general forums like
comp.lang.functional.
 
Matthew Danish

Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency. Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.

The trouble with these `dynamic' extensions is that they are `dynamic
type systems' from a statically typed viewpoint. A person who uses
truly dynamically typed languages would not consider them to be the
same thing.

In SML, for example, such an extension might be implemented using a sum
type, or even using an `exn' type so that it can be extended in separate
places. The moment this system fails (where a true dynamic system
carries on) is when such a type is redefined. The reason is that the
new type is not considered to be the same as the old type, due to the
generativity of type names, and old code requires recompilation.

I'm told Haskell has extensions that will work around even this, but the
last time I tried to play with those, it failed miserably because
Haskell doesn't really support an interactive REPL so there was no way
to test it. (Maybe this was ghc's fault?)

As for Java/C#, downcasting is more of an example of static type systems
getting in the way of OOP rather than of a dynamic type system. (It's
because those languages are the result of an unholy union between the
totally dynamic Smalltalk and the awkwardly static C++).
Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.

I view static typing as an added analysis stage. In that light, it
makes no sense to `add' dynamic typing to it. Also, I think that static
typing should be part of a more comprehensive static analysis phase
which itself is part of a greater suite of tests.
 
Matthias Blume

Pascal Costanza said:
The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs
that can be statically checked.

By whose definition? What *is* your definition of "useful"? It is
clear to me that static typing improves maintainability, scalability,
and helps with the overall design of software. (At least that's my
personal experience, and as others can attest, I do have reasonably
extensive experience either way.)

A 100,000 line program in an untyped language is useless to me if I am
trying to make modifications -- unless it is written in a highly
stylized way which is extensively documented (and which usually means
that you could have captured this style in static types). So under
this definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.
So dynamically typed languages allow
me to express more useful programs than statically typed languages.

There are also programs which I cannot express at all in a purely
dynamically typed language. (By "program" I mean not only the executable
code itself but also the things that I know about this code.)
Those are the programs which are protected against certain bad things
from happening without having to do dynamic tests to that effect
themselves. (Some of these "bad things" are, in fact, not dynamically
testable at all.)
I don't question that. If this works well for you, keep it up. ;)

Don't fear. I will.
I don't need a study for that statement because it's a simple
argument: if the language doesn't allow me to express something in a
direct way, but requires me to write considerably more code then I
have considerably more opportunities for making mistakes.

This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else. This
is a highly dubious assumption. In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types). If you get them wrong *or* if you got the
original code wrong, then the compiler can tell you. Without the
extra tokens, the compiler is helpless in this regard.
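Matthias's "few extra tokens" can be seen even in Python, where annotations are optional (a sketch; the static checking assumes an external tool such as mypy, and `mean` is an invented example):

```python
def mean(xs: list[float]) -> float:
    # The annotation costs a handful of tokens, but it is exactly what
    # lets a checker verify every call site:
    #   mean([1.0, 2.0])   accepted
    #   mean("12")         rejected before the program ever runs
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))  # prints: 2.0
```

Strip the annotations and the function still works, but the checker is, as Matthias puts it, helpless in this regard.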

To make a (not so far-fetched, btw :) analogy: Consider logical
statements and formal proofs. Making a logical statement is easy and
can be very short. It is also easy to make mistakes without noticing;
after all saying something that is false while still believing it to
be true is extremely easy. Just by looking at the statement it is
also often hard to tell whether the statement is right. In fact,
computers have a hard time with this task, too. Theorem-proving is
hard.
On the other hand, writing down the statement with a formal proof is
impossible to get wrong without anyone noticing because checking the
proof for validity is trivial compared to coming up with it in the
first place. So even though writing the statement with a proof seems
harder, once you have done it and it passes the proof checker you can
rest assured that you got it right. The longer "program" will have fewer
"bugs" on average.

Matthias
 
Andrew Dalke

Pascal Bourguignon:
You can implement an eval without arithmetic, and you can implement a
theorem prover on top of it, still without arithmetic. You can do a
great deal of thinking without any arithmetic...

But theorem proving and arithmetic are isomorphic. TP -> arithmetic
is obvious; arithmetic -> TP is through Gödel.
I think it would have helped. For example, an architecture like the
Shuttle's, with five differently programmed computers, would have
helped, because at least one of the computers would not have had the
Ariane-4 module.

Manned space projects get a lot more money for safety checks.
If a few rockets blow up for testing then it's still cheaper than
quintupling the development costs.

Andrew
(e-mail address removed)
 
Andrew Dalke

Pascal Costanza:
The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs that
can be statically checked. So dynamically typed languages allow me to
express more useful programs than statically typed languages.

Ummm, both are infinite, and both are countably infinite, so those sets
are the same size. You're falling for Hilbert's paradox of the Grand Hotel.

Also, while I don't know of a proof, I'm pretty sure that type inference
can do addition (and theorem proving), so it is equal in power to
programming.
I don't need a study for that statement because it's a simple argument:
if the language doesn't allow me to express something in a direct way,
but requires me to write considerably more code then I have considerably
more opportunities for making mistakes.

The size comparisons I've seen (like the great programming language
shootout) suggest that OCaml and Scheme require about the same amount
of code to solve small problems. Yet, last I saw, OCaml is strongly typed
at compile time. Why do you assume, then, that strongly and statically
typed languages require "considerably more code"?

Andrew
(e-mail address removed)
 
Marshall Spight

Pascal Costanza said:
Such code would easily be caught very soon in your unit tests.

Provided you think to write such a test, and expend the effort
to do so. Contrast to what happens in a statically typed language,
where this is done for you automatically.

Unit tests are great; I heartily endorse them. But they *cannot*
do everything that static type checking can do. Likewise,
static type checking *cannot* do everything unit testing
can do.

So again I ask, why is it either/or? Why not both? I've had
*great* success building systems with comprehensive unit
test suites in statically typed languages. The unit tests catch
some bugs, and the static type checking catches other bugs.


Marshall
 
Lulu of the Lotus-Eaters

Joachim Durchholz said:
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12
months.

I also read c.l.functional (albeit only lightly). In the last 12
months, I have encountered dozens of complaints about over-restrictive
type systems in Haskell, OCaml, SML, etc.

The trick is that these complaints are not phrased in precisely that
way. Rather, someone is trying to do some specific task, and has
difficulty arriving at a usable type needed in the task. Often posters
provide good answers--Durchholz included. But the underlying complaint
-really was- about the restrictiveness of the type system.

That's not even to say that the overall advantages of a strong type
system are not worthwhile--even perhaps better than more dynamic
languages. But it's quite disingenuous to claim that no one ever
complains about it. Obviously, no one who finds a strong static type
system unacceptable is going to be committed to using, e.g.
Haskell--the complaint doesn't take the form of "I'm taking my marbles
and going home".

Yours, Lulu...
 
Dennis Lee Bieber

Pascal Bourguignon fed this fish to the penguins on Wednesday 22
October 2003 13:44 pm:

I think it would have helped. For example, an architecture like the
Shuttle's, with five differently programmed computers, would have
helped, because at least one of the computers would not have had the
Ariane-4 module.

Are you sure? What if all the variants reused code from variants of
the A-4... They could all have had different versions of the same
problem...

Merely comparing the A-4 requirements to the A-5 requirements and then
testing the code associated with different performance limits would
also have found the problem...

There isn't much you can do, as a programmer, if the bean counters up
above decree: Use this module -- unchanged -- since it works on the
previous generation... No, you don't need to test it -- we said it
works...

--
 
Donn Cave

Quoth Lulu of the Lotus-Eaters <[email protected]>:
|> My 100% subjective private study reveals not a single complaint about
|> over-restrictive type systems in comp.lang.functional in the last 12
|> months.
|
| I also read c.l.functional (albeit only lightly). In the last 12
| months, I have encountered dozens of complaints about over-restrictive
| type systems in Haskell, OCaml, SML, etc.
|
| The trick is that these complaints are not phrased in precisely that
| way. Rather, someone is trying to do some specific task, and has
| difficulty arriving at a usable type needed in the task. Often posters
| provide good answers--Durchholz included. But the underlying complaint
| -really was- about the restrictiveness of the type system.
|
| That's not even to say that the overall advantages of a strong type
| system are not worthwhile--even perhaps better than more dynamic
| languages. But it's quite disingenuous to claim that no one ever
| complains about it. Obviously, no one who finds a strong static type
| system unacceptable is going to be committed to using, e.g.
| Haskell--the complaint doesn't take the form of "I'm taking my marbles
| and going home".

No one said that strict typing is free, that it requires no effort or
learning from the programmer. That would be ridiculous - of course a
type system is naturally restrictive; that's its nature. It's a
restrictive system that imposes constraints on the programmer, who needs
to learn about them in order to use the language effectively.
`Over-restrictive' is different. If there are questions about static
typing, it does not follow that it's over-restrictive, nor that the
questions constitute a complaint to that effect.

Donn Cave, (e-mail address removed)
 
Marshall Spight

Pascal Costanza said:
I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)

Huh? The explicit-downcast construct present in Java is the
programmer saying to the compiler: "trust me; you can accept
this type of parameter." In a dynamically-typed language, *every*
call is like this! So if this is a source of errors (which I believe it
is) then dynamically-typed languages have this potential source
of errors with every function call, vs. statically-typed languages
which have them only in those few cases where the programmer
explicitly puts them in.
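Marshall's point rendered in Python (the class and function names are invented): every call site implicitly asserts the argument's type, and nothing checks that assertion until the attribute is actually used.

```python
class Invoice:
    def total(self):
        return 42

def print_total(doc):
    # Implicit "downcast": we simply trust that doc has a .total()
    # method. Nothing verifies this until the line executes.
    print(doc.total())

print_total(Invoice())             # fine
try:
    print_total("not an invoice")  # the misuse surfaces only here,
except AttributeError as err:      # at run time
    print("run-time failure:", err)
```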


Marshall
 
