Python's "only one way to do it" philosophy isn't good?

WaterWalk

I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer language
Python it is claimed: 'There should be one--and preferably only
one--obvious way to do it.'[25] Science does not usually proceed this way:
In classical mechanics, for example, one can construct equations of motion
using Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
variational formulation.[30] In the cases where all three approaches are
applicable they are equivalent, but each has its advantages in particular
contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.
 

Gabriel Genellina

I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer language
Python it is claimed: 'There should be one--and preferably only
one--obvious way to do it.'[25] Science does not usually proceed this way:
In classical mechanics, for example, one can construct equations of motion
using Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
variational formulation.[30] In the cases where all three approaches are
applicable they are equivalent, but each has its advantages in particular
contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.

I think the key is the word you omitted in the subject: "obvious". There
should be one "obvious" way to do it. For what I can remember of my first
love (Physics): if you have a small ball moving inside a spherical cup, it
would be almost crazy to use cartesian orthogonal coordinates and Newton's
laws to solve it - the "obvious" way would be to use spherical coordinates
and the Lagrangian formulation (or at least I hope so - surely
knowledgeable people will more "obviously" find which is the right way).
All classical mechanics formulations may be equivalent, but in certain
cases one is much more suited than the others.
 

Terry Reedy

| I've just read an article "Building Robust Systems" by Gerald Jay
| Sussman. The article is here:
|
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
|
| In it there is a footnote which says:
| "Indeed, one often hears arguments against building flexibility into an
| engineered system. For example, in the philosophy of the computer
| language Python it is claimed:

For him to imply that Python is anti-flexibility is wrong. Very wrong.
He should look in a mirror. See below.

| 'There should be one--and preferably only one--obvious way to do
| it.'[25] Science does not usually proceed this way: In classical
| mechanics, for example, one can construct equations of motion using
| Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
| variational formulation.[30] In the cases where all three approaches
| are applicable they are equivalent, but each has its advantages in
| particular contexts."

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.

So I dispute that science is as different as he claims. And I do not see
any real value in the statement in that I do not see it saying anything
useful to the reader, at least not in this snippet.

| I'm not sure how reasonable this statement is and personally I like
| Python's simplicity, power and elegance. So I put it here and hope to
| see some inspiring comments.

How much has Mr. Sussman actually programmed in Python and what actual
problems did he find with the *implementation* of the philosophy? Without
something concrete, the complaint is rather bogus.

But here is what I find funny (and a bit maddening): G. J. Sussman is one
of the inventors of the Lisp dialect Scheme, a minimalist language that for
some things has only one way to do it, let alone one obvious way. Scheme
would be like physics with only one of the three ways. After all, if they
are equivalent, only one is needed.

For example, consider scanning the items in a collection. In Python, you
have a choice of recursion (normal or tail), while loops, and for
statements. For statements are usually the obvious way, but the other two
are available for personal taste and for special situations such as
walking a tree (where one might use recursion to write the generator that
can then be used by for loops). In Scheme, I believe you just have
recursion. Since iteration and recursion are equivalent, why have both?
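A quick sketch of the three choices (the summing task and names here are
just for illustration):

```python
# Three ways to sum the items in a list -- the for statement being
# the obvious one.
items = [1, 2, 3, 4, 5]

# 1. The for statement (the obvious way).
total_for = 0
for x in items:
    total_for += x

# 2. A while loop over an explicit index.
total_while = 0
i = 0
while i < len(items):
    total_while += items[i]
    i += 1

# 3. Tail-style recursion carrying an accumulator.
def total_rec(seq, acc=0):
    if not seq:
        return acc
    return total_rec(seq[1:], acc + seq[0])

total_recursive = total_rec(items)
```

All three produce the same sum; the for statement just says it most
directly.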

Terry Jan Reedy
 

James Stroud

Terry said:
In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

James


=====

Disclaimer: Mention of more than one programming language in this post does
not imply author's desire to begin language v. language holy battle. The
author does not program in [some or all of the other languages mentioned
aside from the language topical to the newsgroup] and has no opinions on
the merits or shortcomings of said language or languages.

=====
 

Cousin Stanley

....
In scheme, I believe you just have recursion.
....

Cousin TJR ....

I'm a total scheme rookie starting only about 3 days ago
and one of the mechanisms I went looking for was a technique
for iteration ....

Found in the scheme docs about iteration supplied
via the reduce package ....

"Iterate and reduce are extensions of named-let
for writing loops that walk down one or more sequences
such as the elements of a list or vector, the characters
read from a port, or arithmetic series .... "

The following scheme session illustrates a trivial example ....

> ,open reduce
>
> ( define ( list_loop this_list )
    ( iterate loop
      ( ( list* this_item this_list ) )            ; iterate expression
      ( ( new_list '( ) ) )                        ; state expression
      ( loop ( cons ( * 2 this_item ) new_list ) ) ; body expression
      ( reverse new_list ) ) )                     ; final expression
; no values returned
>
> ( define L '( 1 2 3 4 5 ) )
; no values returned
>
> ( define result_i ( list_loop L ) )
; no values returned
>
> result_i
'(2 4 6 8 10)

However, just as in Python the map function
might be both easier to code and more readable
in many cases ....
> ( define ( x2 n ) ( * 2 n ) )
; no values returned
>
> ( define result_m ( map x2 L ) )
; no values returned
>
> result_m
'(2 4 6 8 10)

Note ....

No lambdas in my scheme code either .... ;-)
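For comparison, a rough Python analogue of the session above (just a
sketch; the names mirror the Scheme ones, and no lambdas here either):

```python
# Named helper, mirroring the Scheme ( define ( x2 n ) ( * 2 n ) ).
def x2(n):
    return 2 * n

L = [1, 2, 3, 4, 5]

# map is the direct analogue of Scheme's map.
result_m = list(map(x2, L))

# The iterate/named-let loop corresponds roughly to an ordinary
# accumulating loop.
result_i = []
for this_item in L:
    result_i.append(2 * this_item)
```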
 

Bjoern Schliessmann

Gabriel said:
For what I can
remember of my first love (Physics): if you have a small ball
moving inside a spherical cup, it would be almost crazy to use
cartesian orthogonal coordinates and Newton's laws to solve it -
the "obvious" way would be to use spherical coordinates and the
Lagrangian formulation (or at least I hope so

Yep, that's right.
- surely knowledgeable people will more "obviously" find which is
the right way).

No, this case is IMHO almost classical. Movement with such constraints
can be solved quite easily using Lagrange.
All classical mechanics formulations may be equivalent, but
in certain cases one is much more suited than the others.

Or: Lagrange is the only obvious way to describe movement with
constraints.

Regards,


Björn
 

Kay Schluehr

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not.

Proof by rumour? You can use first class continuations in Ruby to
eliminate tail calls and define higher order function wrappers
( like Python decorators ). But I wouldn't call this "fully
optimized".
Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry?

No.
 

Terry Reedy

| Terry Reedy wrote:
| > In Python, you have a choice of recursion (normal or tail)
|
| Please explain this.

I am working on a paper for Python Papers that will. It was inspired by
the question 'why doesn't Python do tail-recursion optimization'.

tjr
 

Terry Reedy

| > In scheme, I believe you just have recursion.

I was referring to the original minimalist core language developed by Guy
Steele and Sussman, as I remember it being used in the original edition of
SICP (see Wikipedia). I also remember statements explaining (truthfully)
that builtin iteration is not needed because it can be defined in terms of
tail recursion, which in Scheme is required to be optimized to be just as
space efficient.

I see in Wikipedia that Scheme has do loops (all versions?), but I do not
know if that was original or added. If the former, it was de-emphasized.
Hence my belief, even if mistaken.

| Cousin TJR ....
|
| I'm a total scheme rookie starting only about 3 days ago
| and one of the mechanisms I went looking for was a technique
| for iteration ....
|
| Found in the scheme docs about iteration supplied
| via the reduce package ....

Right. An add-on library package, not part of the core;-)

In Python, modules can add functions (and classes, etc), but not statement
syntax, so adding while statements defined in terms of recursion is not
possible.
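As a sketch of that distinction: the *behaviour* of a while statement can
be captured by an ordinary function, even though new statement syntax
cannot be added. The `while_` helper below is hypothetical, purely for
illustration:

```python
# A while "statement" as a plain function: the condition and body are
# callables over an explicit state value.
def while_(cond, body, state):
    while cond(state):
        state = body(state)
    return state

# Count up from 0 while the value is below 5.
final = while_(lambda n: n < 5, lambda n: n + 1, 0)
```

This gives you the semantics as a library function, but it never looks
like a statement the way a Scheme macro would.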

Scheme is quite elegant and worth learning at least the basics of. My only
point was that Sussman is an odd person to be criticizing (somewhat
mistakenly) Python for being minimalist.

tjr
 

John Nagle

Bjoern said:
Gabriel Genellina wrote:

Having actually solved that problem in simulation, I can report
that it's easier in Cartesian coordinates. I used to use this as
a test of Falling Bodies, one of the first physics engines that
really worked on the hard cases.

Spherical coordinates seem attractive until you have to deal
with friction between the ball and cup. The ball is rotating, too,
and may be slipping with respect to the cup. Then the simple
Physics 101 approach isn't so simple any more.

John Nagle
Animats
 

Josiah Carlson

James said:
Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

Note that Terry said that you could do normal or tail recursion, he
didn't claim that either were optimized. As for why tail calls are not
optimized out, it was decided that being able to have the stack traces
(with variable information, etc.) was more useful than offering tail
call optimization (do what I say).

- Josiah
 

Dennis Lee Bieber

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.
Not to be confused with Balaam's ass...


(Why do I visualize Eddie Murphy's "Donkey" -- from the Shrek movies
-- as Buridan's ass? "Oh, that looks delicious" <step> "Wait, that one
looks even better" <two-steps other way> "no..." ad infinitum [Shrek:
"Make up your mind, Donkey... Before I feed them to you from both
ends"])
--
Wulfraed Dennis Lee Bieber KD6MOG
HTTP://wlfraed.home.netcom.com/
HTTP://www.bestiaria.com/
 

Alexander Schmolck

Josiah Carlson said:
Note that Terry said that you could do normal or tail recursion, he didn't
claim that either were optimized.

Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?
As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this. What's more important, making code not fail arbitrarily
(and thus making approaches to certain problems feasible that otherwise
wouldn't be) or making it a bit easier to debug code that will fail
arbitrarily? Why not only do tail-call optimization in .pyo files and get
the best of both worlds?
(do what I say).

Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?

'as
 

Josiah Carlson

Alexander said:
Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?

Normal and tail recursion are different, based upon whether or not one
can technically be considered done with the stack frame:

def normal(arg):
    if arg == 1:
        return 1
    return arg * normal(arg - 1)

def tail(arg, default=1):
    if arg == 1:
        return arg * default
    return tail(arg - 1, default * arg)

I don't buy this. What's more important, making code not fail arbitrarily
(and thus making approaches to certain problems feasible that otherwise
wouldn't be) or making it a bit easier to debug code that will fail
arbitrarily? Why not only do tail-call optimization in .pyo files and get
the best of both worlds?

I didn't make the decisions, I'm just reporting what was decided upon.

Personally, I have never found the lack of tail call optimization an
issue for two reasons. The first is because I typically write such
programs in an iterative fashion. And generally for recursion for which
tail call optimization doesn't work (the vast majority of recursive
algorithms I use), I typically write the algorithm recursively first,
verify its correctness, then convert it into an iterative version with
explicit stack. I find it is good practice, and would be necessary
regardless of whether Python did tail call optimization or not.
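A minimal sketch of that recursive-to-iterative conversion (the
nested-tuple tree format here is just an assumption for illustration):

```python
# A recursive tree walk, then the same walk with an explicit stack.
# Each node is (value, children), children a list of nodes.

def walk_recursive(node, out):
    value, children = node
    out.append(value)
    for child in children:
        walk_recursive(child, out)
    return out

def walk_iterative(node, out):
    stack = [node]
    while stack:
        value, children = stack.pop()
        out.append(value)
        # Push children in reverse so they pop in original order.
        stack.extend(reversed(children))
    return out

tree = (1, [(2, [(4, [])]), (3, [])])
preorder_rec = walk_recursive(tree, [])
preorder_iter = walk_iterative(tree, [])
```

The iterative version visits nodes in the same order but is bounded only
by heap memory, not by the interpreter's recursion limit.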

Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?

By virtue of Python's underlying implementation, Python "does what I
say", it doesn't "do what I mean". While I may not have explicitly
stated "run out of stack space", the underlying implementation *has*
limited stack space. You are stating that when you write a tail
recursive program, you want Python to optimize it away by destroying the
stack frames. And that's fine. There are tail-call optimization
decorators available, and if you dig into sourceforge, there should even
be a full patch to make such things available in a previous Python.
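One common shape for such a decorator is a trampoline. The sketch below
is only an illustration of the idea, not one of the actual recipes
referred to above, and it assumes the wrapped function never legitimately
returns a callable:

```python
# Trampoline sketch: the recursive step returns a thunk instead of
# calling itself, and the driver loop unrolls the calls so the stack
# never grows with the recursion depth.
def trampoline(func):
    def driver(*args, **kwargs):
        result = func(*args, **kwargs)
        while callable(result):   # assumes real results are never callable
            result = result()
        return result
    return driver

def _fact(n, acc=1):
    if n <= 1:
        return acc
    return lambda: _fact(n - 1, acc * n)   # thunk, not a direct call

factorial = trampoline(_fact)
```

Here factorial(5) is 120, and much deeper calls run without hitting the
recursion limit, since each thunk is only ever one frame deep.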

However, Python is not Lisp and is not partially defined by infinite
recursion (see sys.setrecursionlimit() ). Instead, it is limited by the
C call stack (in CPython), and makes guarantees regarding what will
always be available during debugging (the only thing that optimization
currently does in Python is to discard docstrings). If you
want to change what is available for debugging (to make things like tail
call optimization possible), you are free to write and submit a PEP.

In the mean time, you may need to do some source conversion.


- Josiah
 

Steven D'Aprano

I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?

What's more important, making code not fail arbitrarily (and
thus making approaches to certain problems feasible that otherwise
wouldn't be) or making it a bit easier to debug code that will fail
arbitrarily? Why not only do tail-call optimization in .pyo files and
get the best of both worlds?

Are you volunteering? If you are, I'm sure your suggestion will be
welcomed gratefully.

Where did you say run out of memory and fail? More importantly how do
you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic, then the world won't end if tail recursion isn't optimized
away by the compiler. You can always hand-optimize it yourself.

dont_run_out_of_memory_and_fail = 10**(10**100) # please?
 

Steven D'Aprano

the only thing that optimization
currently does in Python at present is to discard docstrings

Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compile-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.

Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)


Some dead code is also optimized away:

>>> def f():
...     if 0:
...         print "dead code"
...     return 2
...
>>> dis.dis(f)
  4           0 LOAD_CONST               1 (2)
              3 RETURN_VALUE


Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:

# Python 2.4.3
  1           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BINARY_ADD
              7 RETURN_VALUE

# Python 2.5
  1           0 LOAD_CONST               2 (3)
              3 RETURN_VALUE


The above all holds for CPython. Other Pythons may implement other
optimizations.
 

James Stroud

Kay said:
Proof by rumour?

"Proof" if you define "proof" by asking for clarification about a vague
recollection of an even more vague posting. I think now I'll prove
Fermat's Last Theorem by hailing a cab.
 

bruno.desthuilliers

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

I'm afraid Terry is wrong here, at least if he meant that CPython had
tail recursion *optimization*.

(and just for those who don't know yet, it's not a shortcoming, it's a
design choice.)
 

Josiah Carlson

Steven said:
Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compile-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.

Oh yeah, asserts. I never run with -O, and typically don't use asserts,
so having or not having either isn't a big deal for me.
Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)

I would guess it is because some other data types may have side-effects.
On the other hand, a peephole optimizer could be written to trim out
unnecessary LOAD_CONST/POP_TOP pairs.
Some dead code is also optimized away:

Obviously dead code removal happens regardless of optimization level in
current Pythons.
Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:

Constant folding happens regardless of optimization level in current
Pythons.


So really, assert and docstring removals. Eh.

- Josiah
 