Python syntax in Lisp and Scheme


Sander Vesik

In comp.lang.scheme Grzegorz Chrupala said:
Scheme
(define vector-fill!
  (lambda (v x)
    (let ((n (vector-length v)))
      (do ((i 0 (+ i 1)))
          ((= i n))
        (vector-set! v i x)))))

Python
def vector_fill(v, x):
    for i in range(len(v)):
        v[i] = x

To me the Python code is easier to read, and I can't possibly fathom
how somebody could think the Scheme code is easier to read. It truly
boggles my mind.


Pick a construct your pet language has specialized support for, write an
ugly equivalent in a language that does not specifically support it,
and you have proved your pet language to be superior to the other
language. (I myself have never used the "do" macro in Scheme and my
impression is few people do. I prefer "for-each", named "let" or the
CL-like "dotimes" for looping).


While true, if solving a problem requires you to use a lot of constructs
that one language provides and for which you have to do lots of extra work
in the other, one might as well take the pragmatic approach that the other
language is better for the given problem at hand.
 

Jason Creighton

And this touches on yet another point of the Zen of Python:
explicit is better than implicit. Having a function
implicitly return the last expression it computes would
violate this point (and is in fact somewhat error-prone,
in my experience, in the several languages that adopt
this rule).
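For reference, Python takes the opposite rule: a function that falls off its
end returns None, so the last expression is never implicitly returned (a
minimal sketch):

```python
def last_expression_lost():
    2 + 2          # computed, then discarded; there is no implicit return

def explicit():
    return 2 + 2   # Python requires the return to be spelled out

assert last_expression_lost() is None
assert explicit() == 4
```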

I don't mean to start a flamewar....ah, who am I kidding, of *course* I
mean to start a flamewar. :)

I just wish the Zen of Python (try "import this" on a Python interpreter
for those who haven't read it.) would make it clearer that "Explicit is
better than implicit" really means "Explicit is better than implicit _in
some cases_"

Look here:

>>> [x*x for x in range(1, 4)]
[1, 4, 9]

Good grief! How could someone who doesn't understand list comprehensions
*ever* read and understand this? We'd better do it the explicit way:

>>> ary = []
>>> for x in range(1, 4):
...     ary.append(x*x)
...

*Much* better! Now you don't have to understand list comprehensions to
read this code!

</sarcasm>

Of course, nobody is going to seriously suggest that list
comprehensions be removed from Python. (I hope). The point here is that,
for any level of abstraction, you have to understand how it works. (I
think) what the author of "The Zen of Python" (Tim Peters) means when
he says "Explicit is better than implicit" is "Don't do weird crazy
things behind the programmer's back, like automatically having variables
initialized to different datatypes depending on how they are used, like
some programming language we could mention."

I agree with most of the rest of "The Zen of Python", except for the
"There should be one-- and preferably only one --obvious way to do it."
bit. I think it should be "There should be one, and preferably only one,
*easy* (and it should be obvious, if we can manage it) way to do it."

For instance, let us take the ternary operator. Ruby has at least two
constructs that will act like the ternary operator.

if a then b else c end

and

a ? b : c

The "if a then b else c end" bit works because of Ruby's "return value is
last expression" policy.

In a recent thread in comp.lang.ruby, you (Alex Martelli) said:

But for the life of me I just can't see why, when one has
"if a then b else c end" working perfectly as both an expression
and a control statement, one would WANT to weigh down the language
with an alternative but equivalent syntax "a?b:c".

<end quote>

The reason I would want to weigh down the language with an alternative
syntax is because sometimes a ? b : c is the *easy* way to do it.
Sometimes you don't want to say:

obj.method(arg1, (if boolean then goober else lala end))

Sometimes you just want to be able to say:

obj.method(arg1, arg2, boolean ? goober : lala)

But the Python folks seem to like having only one way to write
something, which I agree with, so long as we have at least one easy way
to write something.
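As an aside, Python itself eventually grew a second spelling here: the
conditional expression `x if c else y`, added in Python 2.5 (after this
thread). A sketch of the Ruby example in that style, with `boolean`,
`goober`, and `lala` as hypothetical stand-in values:

```python
boolean = True
goober, lala = "goober", "lala"   # hypothetical stand-ins from the Ruby example

# Python's spelling of Ruby's  boolean ? goober : lala
result = goober if boolean else lala
assert result == "goober"
assert (goober if False else lala) == "lala"
```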

So there is a balance to be struck here. Some people like the way Python
does things; some people do not. This is why we all hate each other. :)

No, really, that's why we have different languages.
In Ruby, the spaces of methods and data are separate (i.e.,
most everything is "an object" -- but, differently from
Python, methods are not objects in Ruby), and I do not
think, therefore, that you can write a method that builds
and returns another method, and bind the latter to a name --
but you can return an object with a .call method, a la:

def outer(a) proc do |b| a+=b end end

I would probably define this as:

def outer(a)
proc { |b| a+=b }
end

I prefer the { } block syntax for one-line blocks like that. And I don't
like sticking a whole function definition on one line like that. It makes it
harder to read, IMHO.

x = outer(23)
puts x.call(100) # emits 123
puts x.call(100) # emits 223

[i.e., I can't think of any way you could just use x(100)
at the end of such a snippet in Ruby -- perhaps somebody
more expert of Ruby than I am can confirm or correct...?]

I will go on a little ego trip here and assume I'm more of a Ruby expert
than you are. :)

Yes, you are pretty much correct. There are some clever hacks you could
do, but for the most part, functional objects in Ruby come without
sugar.
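For contrast, a Python closure really can be called as x(100) directly, since
functions are first-class objects; a sketch using Python 3's nonlocal to
mirror the Ruby a += b:

```python
def outer(a):
    def inner(b):
        nonlocal a   # rebind the enclosing a, as Ruby's block does implicitly
        a += b
        return a
    return inner

x = outer(23)
assert x(100) == 123   # no .call or [] needed
assert x(100) == 223
```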

Jason Creighton
 

Christoph

def outer(a) proc do |b| a+=b end end

x = outer(23)
puts x.call(100) # emits 123
puts x.call(100) # emits 223

[i.e., I can't think of any way you could just use x(100)
at the end of such a snippet in Ruby -- perhaps somebody
more expert of Ruby than I am can confirm or correct...?]

Guy is probably thinking about something like this

---
def outer(sym, a)
  Object.instance_eval {
    private                        # define a private method
    define_method(sym) { |b| a += b }
  }
end

outer(:x, 24)

p x(100) # 124
p x(100) # 224
---


but there is no way to write a ``method returning
method'' ::outer in Ruby that could be used in the form

----
x = outer(24)
x(100)
----

On the other hand, using []-calling convention
and your original definition, you get - at least
visually - fairly close.

---
def outer(a) proc do |b| a+=b end end

x = outer(23)
puts x[100] # emits 123
puts x[100] # emits 223
 

Sander Vesik

In comp.lang.scheme David Rush said:
Emacs. I've noticed over the years that people don't really get Emacs
religion until they've started hacking elisp. I know that the frustration
of having almost-but-not-quite the behavior I wanted on top of having all
that source code was a powerful incentive for me to learn Lisp. Of course
my appreciation of Emacs only increased as I went...

I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...
 

Kenny Tilton

Sander said:
I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...

Mileage does vary :): http://alu.cliki.net/RtL Emacs Elisp

That page lists people who actually cite Elisp as at least one way they
got turned on to Lisp. I started the survey when newbies started showing
up on the c.l.l. door in still small but (for Lisp) significantly larger
numbers. Paul Graham holds a commanding lead, btw.

kenny
 

Andrew Dalke

Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:
Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.

Since the quadratic formula yields two results, I expect most
people write it more like

droot = sqrt(b*b - 4*a*c)   # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)

possibly using a temp variable for the 4*a*c term, for a
slight bit better performance.
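Spelled out as a runnable function (using the conventional 2*a denominator):

```python
import math

def quadratic(a, b, c):
    droot = math.sqrt(b*b - 4*a*c)   # square root of the discriminant
    x_plus = (-b + droot) / (2*a)
    x_minus = (-b - droot) / (2*a)
    return x_plus, x_minus

# x^2 - 3x + 2 = (x - 1)(x - 2), so the roots are 2 and 1
assert quadratic(1, -3, 2) == (2.0, 1.0)
```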
It occurred to me that, if the computation
is indeed the moral equivalent of the quadratic formula, calling various
lower-level functions instead of arithmetic operators, then it is
/worse/ to be reading a flattened version in which subexpression results
are pulled into local variable, because then one has to mentally
decipher the actual hierarchical computation from the bogus flat sequence.

But isn't that flattening *exactly* what occurs in math. Let me pull
out my absolute favorite math textbook - Bartle's "The Elements
of Real Analysis", 2nd ed.

I opened to page 213, which is in the middle of the book.

29.1 Definition. If P is a partition of J, then a Riemann-Stieltjes sum
of f with respect to g and corresponding to P = (x_0, x_1, ..., x_n) is a
real number S(P; f, g) of the form

                  n
   S(P; f, g) = SIGMA f(eta_k){g(x_k) - g(x_{k-1})}
                 k = 1

Here we have selected numbers eta_k satisfying

   x_{k-1} <= eta_k <= x_k    for k = 1, 2, ..., n

There's quite a bit going on here behind the scenes which is
the same flattening you talk about. For example: the definition
of "partition" is given elsewhere, as are the notations of what f and g
mean, and the nomenclature "SIGMA".


Let's try mathematical physics, so I pulled out Arfken's
"Mathematical Methods for Physicists", 3rd ed.

About 1/3rd of the way through the book this time, p 399

Exercise 7.1.1 The function f(z) expanded in a Laurent series exhibits
a pole of order m at z = z_0. Show that the coefficient of (z-z_0)**-1,
a_{-1}, is given by

               1       d[m-1]
   a_{-1} = ------- * -------- ( (z-z_0)**m * f(z) ),  evaluated at z -> z_0
             (m-1)!   dz[m-1]

This requires going back to get the definition of a Laurent series,
and of a pole, knowing how to evaluate a function at a limit point,
and remembering the bits of notation which are so hard to express
in 2D ASCII. (the d[m-1]/dz[m-1] is meant to be the d/dz operator
taken m-1 times).

In both cases, the equations are flattened. They aren't pure trees
nor are they absolutely flat. Instead, names are used to represent
certain ideas -- that is, flatten them. Yes, it requires people to
figure out what these names mean, but on the other hand, that's part
of training.

And part of that training is knowing which terms are important
enough to name, and the balance between using old
names and symbols and creating new ones.
So if we have:

(defun some-vital-result (x y z)
(finally-decide
(if (serious-concern x)
(just-worry-about x z)
(whole-nine-yards x
(composite-concern y z)))))

...well, /that/ visually conveys the structure of the algorithm, almost
as well as a flowchart (as well if one is accustomed to reading Lisp).
Unwinding that into an artificial flattening /hides/ the structure.
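The same call tree can be kept nested in Python as well; a sketch in which
every helper function is a hypothetical stand-in stub, just to make the
shape runnable:

```python
def serious_concern(x):            # hypothetical stand-in helpers
    return x > 0

def just_worry_about(x, z):
    return x + z

def composite_concern(y, z):
    return y * z

def whole_nine_yards(x, c):
    return x - c

def finally_decide(v):
    return v

def some_vital_result(x, y, z):
    # the same call tree as the Lisp version, nested rather than flattened
    return finally_decide(
        just_worry_about(x, z) if serious_concern(x)
        else whole_nine_yards(x, composite_concern(y, z)))

assert some_vital_result(1, 2, 3) == 4
assert some_vital_result(-1, 2, 3) == -7
```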

"Flat is better than nested." does not mean nested is always and
forever wrong. "Better" means there's a balance.

"Readability counts." is another guideline.
I do not know what Zen is, but I do know that this is not Zen.

The foreigner came to the monastery, to learn more of the
ways of Zen. He listened to the monks then sat cross-legged
for a day, in the manner of initiates. Afterwards he complained
to the master saying that it would be impossible for him to reach
Nirvana because of the pains in his legs and back. Replied the
master, "try using a comfy chair," but the foreigner returned
home to his bed.

The Zen of Python outlines a set of guidelines. They are not
orthogonal and there are tensions between them. You can
take one to an extreme but the others suffer. That balance
is different for different languages. You judge the Zen of Python
using the Zen of Lisp.

Andrew
(e-mail address removed)
 

Paul Rubin

Kenny Tilton said:
That page lists people who actually cite Elisp as at least one way
they got turned on to Lisp. I started the survey when newbies started
showing up on the c.l.l. door in still small but (for Lisp)
significantly larger numbers. Paul Graham holds a commanding lead, btw.

I'd fooled around with other lisp systems before using GNU Emacs, but
reading the Emacs source code was how I first got to really understand
how Lisp works.
 

Robin Becker

Andrew Dalke said:
Since the quadratic formula yields two results, I expect most
people write it more like

droot = sqrt(b*b - 4*a*c)   # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)

possibly using a temp variable for the 4*a*c term, for a
slight bit better performance.

perhaps we should be using computer algebra as suggested in this paper
http://www.mmrc.iss.ac.cn/~ascm/ascm03/sample.pdf on computing the
solutions of quadratics.
 

David Eppstein

I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...

Mileage does vary :): http://alu.cliki.net/RtL Emacs Elisp

That page lists people who actually cite Elisp as at least one way they
got turned on to Lisp. I started the survey when newbies started showing
up on the c.l.l. door in still small but (for Lisp) significantly larger
numbers. Paul Graham holds a commanding lead, btw.

Heh. Does that mean former TECO programmers will get turned on to Perl?
Hasn't had that effect for me yet...
 

Sander Vesik

In comp.lang.scheme David Rush said:
yes, via continuations which reify the
fundamental control operators in all languages

Pointing at the exceptions SRFI and saying it is there as an extension
would imho be a better answer.
too many to count. The FAQ lists over twenty. IMO
there are about 9 'major' implementations which have
relatively complete compliance to R5RS and/or
significant extension libraries.

And the number is likely to continue to increase over the years. Scheme is
very easy to implement, including as an extension language inside the
runtime of something else. The same doesn't really hold for Common Lisp.
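As a rough illustration of how small a Scheme core can be, here is a toy
s-expression evaluator for integer arithmetic (my own sketch, not a real
implementation; it handles only a few operators and no special forms):

```python
import operator

ENV = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def tokenize(src):
    """Split an s-expression string into parenthesis and atom tokens."""
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    """Read one expression from the token list (consumes tokens in place)."""
    tok = tokens.pop(0)
    if tok == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)          # drop the closing ')'
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok             # a symbol

def evaluate(x, env=ENV):
    if isinstance(x, str):     # symbol lookup
        return env[x]
    if isinstance(x, list):    # application: evaluate operator, then operands
        op = evaluate(x[0], env)
        args = [evaluate(arg, env) for arg in x[1:]]
        return op(*args)
    return x                   # a number evaluates to itself

assert evaluate(parse(tokenize("(+ 1 (* 2 3))"))) == 7
```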
 

Hans Nowak

Lulu said:
(e-mail address removed) (Grzegorz Chrupala) wrote previously:
| shocked at how awkward Paul Graham's "accumulator generator" snippet is
| in Python:
|
| class foo:
|     def __init__(self, n):
|         self.n = n
|     def __call__(self, i):
|         self.n += i
|         return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:

>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...

Shorter, and without an awkward class.

Yah, but instead it abuses a relatively obscure Python feature... the fact that
default arguments are created when the function is created (rather than when it
is called). I'd rather have the class, which is, IMHO, a better way to
preserve state than closures. (Explicit being better than implicit and all
that... :)
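The feature Hans means can be shown in a few lines: a mutable default
argument is created once, when the def statement executes, not once per call
(a minimal sketch):

```python
def remember(x, acc=[]):   # acc is created once, when def executes
    acc.append(x)
    return acc

assert remember(1) == [1]
assert remember(2) == [1, 2]   # the same list persists across calls
assert remember(3, []) == [3]  # an explicit fresh list behaves "normally"
```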
 

Hans Nowak

Grzegorz said:
As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in Scheme
(true lexical scope, implicit return, no expression/statement distinction)
and yet Scheme is widely regarded as one of the simplest programming
languages out there, more so than Python.

Scheme, as a language, is arguably simpler than Python... it takes a few core
concepts and rigorously applies them everywhere. This makes the Scheme
language definition simpler than Python's. However, whether *programming in
Scheme* is simpler than *programming in Python* is a different issue
altogether. To do everyday things, should you really have to grok recursion,
deeply nested expressions, anonymous functions, complex list structures, or
environments? Of course, Python has all this as well (more or less), but they
usually don't show up in Python 101.
 

Terry Reedy

Grzegorz Chrupala said:
As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in Scheme
(true lexical scope, implicit return, no expression/statement distinction)

Another problem with simplicity is that introducing it in one place may
increase complexity in another place. [typos corrected]

[Python simplicity=>complexity example (scopes) snipped]

[I am leaving the reduced newsgroup list as is. If anything I write
below about Lisp does not apply to Scheme specifically, my apologies in
advance.]

There is a basic Lisp example that some Lispers tend to gloss over, I
think to the ultimate detriment of promoting that more people
understand and possibly use Lisp (in whatever version).

Specifically, the syntactic simplification of unifying functions and
statements as S-expressions aids, is made possible by, and comes at
the cost of semantic complexification of the meaning of 'function
call' (or S-expression evaluation).

The 'standard' meaning in the languages I am previously familiar with
(and remember) is simple and uniform: evaluate the argument
expressions and somehow 'pass' the resulting values to the function to
be matched with the formal parameters. The only complication is in
the 'how' of the passing.

Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.

In Python, one must explicitly quote syntactic function arguments
either with quote marks (for later possible eval()ing) or 'lambda :'
(for later possible calling). Implicit quoting requires the alternate
syntax of either operator notation ('and' and 'or'-- but these are
exceptional for operators) or a statement. Most Python statements
implicitly quote at least part of the construct. (A print statement
implicitly stringifies its object values, but this too is special
handling.)
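Terry's 'lambda :' point can be made concrete: wrapping an argument in a
thunk is Python's explicit way to delay evaluation, much as a Lisp special
form does implicitly (a sketch with a hypothetical my_if):

```python
def my_if(cond, then_thunk, else_thunk):
    # only one branch is ever evaluated, as in a special form
    if cond:
        return then_thunk()
    return else_thunk()

assert my_if(True, lambda: "yes", lambda: 1 // 0) == "yes"  # 1 // 0 never runs
assert my_if(False, lambda: 1 // 0, lambda: "no") == "no"
```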

Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?
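The unified assignment statement Terry describes covers all of these target
kinds with one syntax (a small sketch):

```python
class C:
    pass

obj = C()
obj.attr = 1            # attribute target
lst = [0, 0, 0]
lst[1] = 2              # subscript target
lst[0:2] = [3, 4]       # slice target
a, b = 5, 6             # multiple targets in one statement

assert (obj.attr, lst, a, b) == (1, [3, 4, 0], 5, 6)
```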

Terry J. Reedy
 

Steve VanDevender

Terry Reedy said:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.

What you're talking about are called "special forms" and are definitely
not functions, and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').
Programmers create them using the macro facilities of Lisp or Scheme
rather than as function definitions. There are only a handful of
special forms one needs to know in routine programming, and each one has
a clear justification for being a special form rather than a function.

Lisp-family languages have traditionally held to the notion that Lisp
programs should be easily representable using the list data structure,
making it easy to manipulate programs as data. This is probably the
main reason Lisp-family languages have retained the very simple syntax
they have, as well as why there is no different syntax for functions
and special forms.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?

Scheme has 'define', 'set!', and 'lambda' for identifier bindings (from
which 'let'/'let*'/'letrec' can be derived), and a number of mutation
operations for composite data types: 'set-car!'/'set-cdr!' for pairs,
'vector-set!' for mutating elements of vectors, 'string-set!' for
mutating strings, and probably a few others I'm forgetting.
 

Oren Tirosh

have been helped by python. I learned python then got interested in
its functional side and ended up learning Scheme and Common Lisp. A
lot of new Scheme and Common Lisp developers I talk to followed the
same route. Python is a great language and I still use it for some
things.

Python is a gateway drug to much more dangerous stuff. Just say no to
functions as first-class objects. Before you know it you will be
snorting a dozen closing parentheses in a row.

Oren
 

Rene van Bevern

Emacs. I've noticed over the years that people don't really get Emacs
religion until they've started hacking elisp. I know that the frustration
of having almost-but-not-quite the behavior I wanted on top of having all
that source code was a powerful incentive for me to learn Lisp. Of course
my appreciation of Emacs only increased as I went...

Hm. I really like LISP, but still don't get through Emacs. After I
learned a bit of LISP I wanted to try it again, and again I failed ;) I
know vim from the in- and out-side and just feel completely lost in
Emacs.

I also like vim with gtk2 support more. Not because of the menu or toolbar,
which are usually switched off in my config, but because of antialiased
letters. I just don't like coding with bleeding eyes anymore ;)

*To me* vim just looks and feels much more smooth than Emacs, so I don't
think that hacking LISP influences the choice of the editor much. It of
course makes people *try* Emacs because of its LISP support.

Rene
 

Bengt Richter

Bengt Richter wrote:
...

Well, I _like_ vegetables...


Very nice (apart from the yecchy name;-).


I don't see the advantage of explicity using an empty dict and then
updating it with kwds, vs using kwds directly.
^^-- not the same dict, as you've probably thought of by now, but
glad to see I'm not the only one who misread that ;-)

I.e., as you know, the contents of the dict passed to type is used to update the fresh class dict.
It's not the same mutable dict object, (I had to check)
>>> d={'foo':'to check id'}
>>> o = type('Using_d',(),d)()
>>> d['y']='a y value'
>>> o.__class__.__dict__.keys()
['__dict__', '__module__', 'foo', '__weakref__', '__doc__']

(If d were serving as class dict, IWT y would have shown up in the keys).

and also the instance dict is only a glimmer in the trailing ()'s eye
at the point the kwd dict is being passed to type ;-)
>>> class Bunch(object):
...     def __init__(self, **kw): self.__dict__.update(kw)
...
>>> for inst in [mk(x=mk.__name__+'_x_value') for mk in (mkNSC, mkNSO, Bunch)]:
... cls=inst.__class__; classname = cls.__name__
... inst.y = 'added %s instance attribute y'% classname
... print '%6s: instance dict: %r' %(classname, inst.__dict__)
... print '%6s class dict keys: %r' %('', cls.__dict__.keys())
... print '%6s instance attr x: %r' %( '', inst.x)
... print '%6s instance attr y: %r' %( '', inst.y)
... print '%6s class var x : %r' %( '', cls.__dict__.get('x','<x not there>'))
... print
...
NSC: instance dict: {'y': 'added NSC instance attribute y'}
class dict keys: ['__dict__', 'x', '__module__', '__weakref__', '__doc__']
instance attr x: 'mkNSC_x_value'
instance attr y: 'added NSC instance attribute y'
class var x : 'mkNSC_x_value'

NSO: instance dict: {'y': 'added NSO instance attribute y', 'x': 'mkNSO_x_value'}
class dict keys: ['__dict__', '__module__', '__weakref__', '__doc__']
instance attr x: 'mkNSO_x_value'
instance attr y: 'added NSO instance attribute y'
class var x : '<x not there>'

Bunch: instance dict: {'y': 'added Bunch instance attribute y', 'x': 'Bunch_x_value'}
class dict keys: ['__dict__', '__module__', '__weakref__', '__doc__', '__init__']
instance attr x: 'Bunch_x_value'
instance attr y: 'added Bunch instance attribute y'
class var x : '<x not there>'

Note where x and y went. So NSC is nice and compact, but subtly different. E.g.,
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'NSC' object attribute 'x' is read-only

(Is that a new message with 2.3?)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'NSC' object has no attribute 'x'

NS was for Name Space, and C vs O was for Class vs obj dict initialization ;-)

Regards,
Bengt Richter
 

Kenny Tilton

Andrew said:
Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:



Since the quadratic formula yields two results, ...

I started this analogy, didn't I? <g>

I expect most
people write it more like

droot = sqrt(b*b - 4*a*c)   # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)

Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)
possibly using a temp variable for the 4*a*c term, for a
slight bit better performance.

Well it was a bad example because it does require two similar
calculations which can be done /faster/ by pre-computing shared
components. But then the flattening is about performance, and the
subject is whether deeply nested forms are in fact simpler than
flattened sequences where the algorithm itself would be drawn as a tree.
So the example was flawed (or those who stared at it too hard looking for
objections were <g>). Andrew said:
But isn't that flattening *exactly* what occurs in math. Let me pull
out my absolute favorite math textbook - Bartle's "The Elements
of Real Analysis", 2nd ed.

Oh, god. I tapped out after three semesters of calculus. I am in deep
trouble. :)
I opened to page 213, which is in the middle of the book.

29.1 Definition. If P is a partition of J, then a Riemann-Stieltjes sum
of f with respect to g and corresponding to P = (x_0, x_1, ..., x_n) is a
real number S(P; f, g) of the form

                  n
   S(P; f, g) = SIGMA f(eta_k){g(x_k) - g(x_{k-1})}
                 k = 1

Here we have selected numbers eta_k satisfying

   x_{k-1} <= eta_k <= x_k    for k = 1, 2, ..., n

There's quite a bit going on here behind the scenes which is
the same flattening you talk about. For example: the definition
of "partition" is given elsewhere, as are the notations of what f and g
mean, and the nomenclature "SIGMA".

In both cases, the equations are flattened. They aren't pure trees
nor are they absolutely flat. Instead, names are used to represent
certain ideas -- that is, flatten them.

No! Those are like subroutines; they do not flatten, they create call
trees, hiding and encapsulating the details of subcomputations.

We do precisely the same in programming, which is part of why flattening
can be avoided. When any local computation gets too long, there is
probably a subroutine to be carved out, or at least I can take 10 lines
and give it a nice readable name so I can avoid confronting too much
detail at any one time. But I don't throw away the structure of the
problem to get to simplicity.
.. You judge the Zen of Python
using the Zen of Lisp.

Hmmm, Zen constrained by the details of a computing language. Some
philosophy! :) What I see in "flat is better" is the mind imposing
preferred structure on an algorithm which has its own structure
independent of any particular observer/mind.

I am getting excellent results lately by always striving to conform my
code to the structure of the problem as it exists independently of me.
How can I know the structure independently of my knowing? I cannot, but
the problem will tell me if I screw up and maybe even suggest how I went
wrong. I make my code look like my best guess at the problem, then if I
have trouble, I try a different shape. I do not add bandaids and patches
to force my first (apparently mistaken) ideas on the problem. When the
problem stops resisting me, I know I have at least approximated its
shape. Often there is a "pieces falling into place" sensation that gives
me some confidence.

Lisp has both SETF and "all forms return a value", so it does not
interfere in the process. In rare cases where the functional paradigm is
inappropriate, I can run thru a sequence of steps to achieve some end.
Lisp stays out of the way.

Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).

I do not think that heavy-handedness can be defended by saying "Oh, but
this is not Lisp." It is just plain heavy-handed.

kenny

"Be the ball."
- Caddy Shack
 

Shriram Krishnamurthi

Terry Reedy said:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.

I'm sorry -- you appear to be hopelessly confused on this point. I
can't comment on the dark corners of Common Lisp, but I do know all of
those corners of Scheme. Scheme is a true call-by-value language.
There are no functions in Scheme whose arguments are not evaluated.
Indeed, neither a function definition, nor an argument location, has
the freedom to "not evaluate" an argument. We can reason about this
quite easily: the language provides no such syntactic annotation, and
the evaluator (as you might imagine) does not randomly make such a
choice. Therefore, it can't happen.

It is possible that you had a horribly confused, and therefore
confusing, Scheme instructor or text.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.

Again, you're confused. SET, SETQ, etc are not primarily binding
operators but rather mutation operators. The mutation of identifiers
and the mutation of values are fundamentally different concepts.

Shriram
 

A.M. Kuchling

Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).

Oh, for Pete's sake... Python is perfectly capable of manipulating tree
structures, and claiming it "rips a tool from my hand" is simply silly.

The 19 rules are general principles that encapsulate various principles of
Python's design, but they're not hard-and-fast rules to be obeyed like a
legal code, and their meanings are unspecified. I have seen the "flat is
better than nested" rule cited against creating too many submodules in a
package, against nesting loops too deeply, against making code too dense.
You can project any meaning onto them you wish, much like Perlis's epigrams
or Zen koans.

--amk
 
