Python syntax in Lisp and Scheme


Terry Reedy

Steve VanDevender said:

What you're talking about are called "special forms"

Perhaps by you and by Schemers generally, and perhaps even by modern
Common Lispers (I have no idea) but not by Winston and Horn in LISP
(1st edition, now into 3rd): "Appendix 2: Basic Lisp Functions ... A
FSUBR takes a variable number of arguments which may not be
evaluated.", which goes on to list AND, COND, DEFUN, PROG, etcetera,
along with normal SUBR and LSUBR (variable arg number) arg-evaluating
functions.
and are definitely not functions,

That is *just* what I thought, though I mentally used the word
'pseudofunction'.
and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').

Once I understood this, I noticed that the special forms mostly
correspond to Python statements or special operators, which have the
same necessity.
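
A tiny Python sketch of that necessity (my_if and safe_inverse are
made-up names, not from any post in this thread): an ordinary function
cannot stand in for a conditional, because its arguments are all
evaluated before the call happens.

def my_if(cond, then_val, else_val):
    # behaves like a conditional once it is called...
    if cond:
        return then_val
    return else_val

def safe_inverse(x):
    # ...but both branches were already evaluated at the call site,
    # so this raises ZeroDivisionError when x == 0, despite the guard.
    return my_if(x != 0, 1.0 / x, 0.0)

That is exactly why 'if', 'and'/'or' and friends have to be statements
or special operators (or, in Lisp, special forms) rather than plain
functions.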

....
Lisp-family languages have traditionally held to the notion that Lisp
programs should be easily representable using the list data structure,
making it easy to manipulate programs as data.

This is definitely a plus. One of my current interests is in
meta-algorithms that convert between recursive and iterative forms of
expressing 'repetition with variation' (and not just tail recursion).
Better understanding Lisp has helped my thinking about this.

Terry J. Reedy
 

Terry Reedy

I'm sorry -- you appear to be hopelessly confused on this point.

Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least two different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*. See the post by Steve VanDevender and my response
thereto.
There are no functions in Scheme whose arguments are not evaluated.

That depends on who defines 'function'. As you quoted, I said Lisp
(in general) and not Scheme specifically. I repeat my previous note:
"If anything I write below about Lisp does not apply to Scheme
specifically, my apologies in advance."
It is possible that you had a horribly confused, and therefore
confusing, Scheme instructor or text.

I will let you debate this with LISP authors Winston and Horn. I also
read the original SICP (several years ago, and have forgotten some
details) and plan to look at the current version sometime.

Terry J. Reedy
 

prunesquallor

Terry Reedy said:
Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least two different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*. See the post by Steve VanDevender and my response
thereto.

There used to be FEXPR and FSUBRs in MacLisp, but Common Lisp never
had them. They had flags that indicated that their arguments were not
to be evaluated, but were otherwise `normal' functions.

The problem with FEXPRs is when you pass them around as first-class
values. Then it is impossible to know if any particular fragment of
code is going to be evaluated (in fact, it can dynamically change).
Needless to say, this presents problems to the compiler.

It generally became recognized that macros were a better solution.

So FSUBRs, which were primitives that did not evaluate their arguments,
have been superseded by `special forms', which are syntactic constructs.

FEXPRs, which were user procedures that did not evaluate their
arguments, have been superseded by macros.

Macros and special forms are generally not considered `functions'
because they are not first-class objects.
I will let you debate this with LISP authors Winston and Horn. I also
read the original SICP (several years ago, and have forgotten some
details) and plan to look at the current version sometime.

The original Winston and Horn book came out prior to SICP, which
itself came out prior to the creation of Common Lisp.
 

Kenny Tilton

A.M. Kuchling said:
Oh, for Pete's sake... Python is perfectly capable of manipulating tree
structures, and claiming it "rips a tool from my hand" is simply silly.

Well then I am glad I did not say it! :)

I am talking about coding up an algorithm, not manipulating a tree of
data. An example is:

(my-function                        ;; takes three parameters, which follow
  (this-function x yz)              ;; p1
  (case x (:left 1) (:right -1))    ;; p2
  (if (some-other-function 'z)
      42
      'norwegian-blue))             ;; p3

where my-function gets passed the first two computations plus either 42
or 'norwegian-blue, ie, the value returned by the IF form.

Looks simple to me. But IIUC (I may not!) in Python IF is a statement,
so that would not work too well. I need an artificial extra statement to
satisfy an artificial rule.
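
Roughly the same call in Python, with the IF hoisted into its own
statement first (all of the names here are carried over from the Lisp
sketch above as placeholders, not working code):

if some_other_function('z'):
    p3 = 42
else:
    p3 = 'norwegian-blue'

my_function(this_function(x, yz),        # p1
            {'left': 1, 'right': -1}[x], # p2, standing in for the CASE form
            p3)                          # p3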

kenny
 

Russell Wallace

I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants).

I'll claim C's syntax is objectively better - it has a clean
definition whereas Python's hasn't. Python isn't even consistent - it
uses whitespace some of the time and delimiters some of the time; if
it stuck to the decision to use whitespace it might be a bit less
repellent. Also Python's syntax has a whole category of pitfalls that
C's lacks.

In truth, though, my reason for being unwilling to use Python for
anything other than throwaway scripting isn't based on objective
criteria, it's based on a visceral revulsion - it just _feels_ wrong.
If Python feels right to you, then you should by all means use it.
 

Andrew Dalke

Kenny Tilton:
Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)

Indeed no, I don't suspect most people write it this way.

Also, as Robin Becker pointed out in this branch, the above
equation fails numerically for several cases, when intermediate
terms are too large for floats or when b*b is very
much greater than 4*a*c.
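
A quick numeric sketch of that failure mode (my own example, not from
the thread): when b*b dwarfs 4*a*c, the textbook formula cancels away
the smaller root.

from math import sqrt

def naive_roots(a, b, c):
    rad = sqrt(b*b - 4*a*c)
    return (-b + rad) / (2*a), (-b - rad) / (2*a)

# exact roots of x*x + 1e8*x + 1 are about -1e-8 and -1e8, but the
# first value printed here keeps only a digit or so of accuracy,
# because -b + rad subtracts two nearly equal numbers.
print(naive_roots(1.0, 1e8, 1.0))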
Well it was a bad example because it does require two similar
calculations which can be done /faster/ by pre-computing shared
components.

It was a bad example because it can have two return values.
But then the flattening is about performance

That's not relevant. That's why I said "possibly."
and the subject is whether deeply nested forms are in fact
simpler than flattened sequences where the algorithm
itself would be drawn as a tree.

You said your enlightenment came from analogy to
math, where writing everything as one expression is
better than writing things in different parts. I objected,
saying that math also uses ways to remove deep
hierarchies, including ways to flatten those equations.

It is not changing the subject. It is a rebuttal to one of
your justifications and an attempt at doing a comparison
with other fields, to bring in my view that there is a
balance between flat and deep, and the choice of
where the different tradeoffs are made is the essence
of the Zen of Python ... or of any other field of endeavour.
No! Those are like subroutines; they do not flatten, they create call
trees, hiding and encapsulating the details of subcomputations.

But didn't I do that when I said "root = sqrt(b*b-4*a*c)"?
Why isn't that "like [a] subroutine"?
When any local computation gets too long, there is
probably a subroutine to be carved out, or at least I can take 10 lines
and give it a nice readable name so I can avoid confronting too much
detail at any one time.

There are also common subexpressions like my 'root' which
aren't appropriate to compute as a function; that is, it's only
used twice and writing discriminant(a, b, c) is too much hiding
given what it does. But it is one where it may make sense to
use a temporary variable because it makes the expression more
readable ... at least by those who prefer reading flatter equations
than you do. Again, it's that balance thing.

I must agree that the examples weren't the best I could do.
The problem is that books have built up the nomenclature
during the preceding pages, so don't do the "let X be ..."
that you would see elsewhere. The people who do that
the most that I've seen are the fluid dynamics modellers,
so let me quote from a not atypical example I dug up

http://citeseer.nj.nec.com/cache/papers/cs/8949/http:zSzzSzcronos.rutgers.eduzSz~knightzSzpaperszSzaiaa96-0040.pdf/computation-of-d-asymmetric.pdf

===============================
Nomenclature

M_infinity       Mach Number
M_t              Turbulent Mach Number
Re_delta         Reynolds Number, Based on Incoming Boundary Layer Thickness
delta_infinity   Incoming Boundary Layer Thickness
delta^*          Boundary Layer Displacement Thickness
...
The Reynolds-averaged equations for conservation of mass,
momentum and energy are

    \partial_t \bar{\rho} + \partial_k (\bar{\rho} \tilde{u}_k) = 0

    \partial_t (\bar{\rho} \tilde{u}_i) + \partial_k (\bar{\rho} \tilde{u}_i \tilde{u}_k)
        = - \partial_i \bar{p} + \partial_k ( -\overline{\rho u_i'' u_k''} + \bar{\tau}_{ik} )

...
where \partial_t = \partial / \partial t, \partial_k = \partial / \partial x_k and the
Einstein summation convention is employed. The overbar
denotes a conventional Reynolds average, while
the overtilde is used to denote the Favre mass
average. A double superscript '' represents
fluctuations with respect to the Favre average, while
a single superscript ' stands for fluctuations with
respect to the Reynolds average.

In the above equations, \bar{\rho} is the mean density, \tilde{u}_i
is the mass-averaged velocity, \bar{p} is the mean pressure
and \bar{e} is the mass-averaged total energy per unit mass.
The following relations are employed to evaluate
\bar{p} and \bar{e}:

    \bar{p} = \bar{\rho} R \tilde{T}

    \bar{e} = c_v \tilde{T} + \tfrac{1}{2} \tilde{u}_i \tilde{u}_i + \tilde{k}

where \tilde{k} is the mass-averaged turbulence kinetic
energy

    \bar{\rho} \tilde{k} = \tfrac{1}{2} \overline{\rho u_i'' u_i''}

===============================

All those terms, definitions, and relationships are
needed to understand the equations. Some of these are
external variables, others are simple substitutions
(like \bar{p}, which is \bar{\rho} R \tilde{T}), and some require
multiple substitutions, like \bar{e}, which uses \tilde{k}, which is
based on yet another relation.

This could all be written as one set of expressions,
without any of the substituted variables \bar{p} and \bar{e},
but it wouldn't be done, because that would make the structure
of the equations less obvious and overly verbose.
But I don't throw away the structure of the
problem to get to simplicity.

Who said anything about throwing it away? Again,
you interpreted things in an extreme view never
advocated by anyone here, much less me.

The questions are, when does the structure get too
complicated and what substructures are present which
can be used to simplify the overall structure without
losing understanding? And how can it be presented so
that others can more easily understand what's going on?

Hmmm, Zen constrained by the details of a computing language. Some
philosophy! :) What I see in "flat is better" is the mind imposing
preferred structure on an algorithm which has its own structure
independent of any particular observer/mind.

But it's well known there are different but equivalent ways
to approach the same problem. For example, in mechanics you
can use a Newtonian approach or a Lagrangian one and you'll
end up with the same answer even though the approaches are quite
different in style. In quantum mechanics, Schrodinger's wave
equation is identical to Heisenberg's matrix formulation.
In analysis, you can use measure theory or infinitesimals to
solve the same problem. Or as a simpler example, you can use
geometry to solve some problems easier than you can algebra.
(I recall a very nice, concise geometric proof of the Pythagorean
theorem.) Consider Ramanujan, who came up with very different
and powerful ways to think about problems, including alternate
ways to handle existing proofs. In computer science, all
recursive solutions can be made iterative, but there are times
when the recursive solution is easier to think about. We
know that NP-hard problems are identically hard (up to a
polynomial scaling factor) but using an algorithm for solving
a minimax problem might be more appropriate for a given task
than one meant for subgraph isomorphism, even if the two
solutions are equivalent.

By solving the problem using one of these alternatives, then
yes, your mind would impose a preferred structure to the solution.
So what? Sapir-Whorf is wrong and we can come up with
new languages and structures. It's hard to come up with new,
generally powerful solutions, but possible.
I am getting excellent results lately by always striving to conform my
code to the structure of the problem as it exists independently of me.
How can I know the structure independently of my knowing? I cannot, but
the problem will tell me if I screw up and maybe even suggest how I went
wrong.

Suppose you needed to find a given number in an unsorted list
of numbers. You know a priori that the number is in the list.
What algorithm do you use? My guess is you use a linear search,
which is O(N).

However, if you had a general purpose quantum computer available
to you then you could use Grover's algorithm and solve the problem
in O(sqrt(N)) time.

Did you even consider that alternative? Likely no, since
quantum algorithms still have little real life applicability.
(I think the biggest search so far has been in a list of 5
elements.)

If you didn't, then you must agree that your decision of how
to think of the problem and describe it on a computer is
constrained (not limited! - constraints can be broken) by
your training and the tools you have available to you.

Even if you did agree, can you prove there are no other
alternative solutions to your problem which are equally
graceful? If you say yes, I'm sure a Forth fan would disagree.

Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).

That's a false and deliberately antagonistic statement. Python
doesn't constrain you to squat - you're free to use any other
language you want to use. Python is another tool available to you,
and adding it to the universe of tools doesn't take any other one
away from you.

A programming language must balance between many and often
contrary factors. The fact that you might be good at deeply
hierarchical structures does not mean that others share your
abilities or preferences (and studies have shown that people
in general are not good at deep hierarchies - look at how
flat most user directories are) nor does it mean that every
programming language must support your favored way of thinking.

One of the balances is the expressability and power
available to a single user vs. the synergies available to
a group of people with different skills who have a consistent
worldview -- even if imposed from the outside. Ramanujan
made up his own notation for math, which has taken a lot
of time for others to decode, reducing the impact
of his work on the rest of the world. Your viewpoint is
that there is no tradeoff, or rather that the balance point
is not much different than full single-person expressability.
Others here, including me, disagree.

In any case, your argument that mathematics equations can be
used as justification for preferring a deeply hierarchical
description has no basis in how people actually use and
express equations. If there was then FORTRAN would have
looked a lot different.

Andrew Dalke
(e-mail address removed)
 

Andrew Dalke

Jason Creighton:
I just wish the Zen of Python (try "import this" on a Python interpreter
for those who haven't read it.) would make it clearer that "Explicit is
better than implicit" really means "Explicit is better than implicit _in
some cases_"

http://dictionary.reference.com/search?q=better

better
adj. Comparative of good.
1. Greater in excellence or higher in quality.

"X is better than Y" does not mean "eschew Y for X".

Andrew
(e-mail address removed)
 

Andrew Dalke

Russell Wallace:
Also Python's syntax has a whole category of pitfalls that
C's lacks.

To be fair, there are a couple of syntax pitfalls Python doesn't
have that C has, like the dangling else

if (a)
    if (b)
        c++;
else    /* indented incorrectly but valid */
    c--;

and left-right token disambiguation which turns

a+++b

into

a++ + b

(in early days of C, I did a =+1 only to find that =+ was
a deprecated version of += )

or in C++ makes it harder to do double level templates
because of the >>
If Python feels right to you, then you should by all means use it.

Yup, it fits my brain, but my brain ain't yours.

Andrew
(e-mail address removed)
 

Bengt Richter

I started this analogy, didn't I? <g>

I expect most

Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)
So cluttered ;-)
(and you evaluate a+a twice, why not (two-a (+ a a)) in the let ;-)

JFTHOI, a one-liner:

def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]

But that aside, don't you think the following 3 lines are more readable than your 5 above?

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

(Not tested beyond what you see below ;-)

====< quadratic.py >=========================================================
from operator import add,sub
from math import sqrt
def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

def funs(a,b,c,q):
    def y(x): return a*x*x+b*x+c
    def invy(y): return a and q(a,b,c-y) or [(y-c)/b, None]
    return y,invy

def test():
    for q in (quadr, quadratic):
        for coeffs in [(1,2,3), (1,0,0), (0,5,0)]:
            print '\n ff(x)=%s*x*x + %s*x + %s, and inverse is using %s:'%(coeffs+(q.__name__,))
            ff,fi = funs(*(coeffs+(q,)))
            for x in range(-3,4):
                ffx = 'ff(%s)'%x; sy='%s, '%ff(x); fiy='fi(%s)'%(ffx)
                print '%8s => %6s %11s => %s'%(ffx,sy,fiy, fi(ff(x)))

if __name__ == '__main__':
    test()
=============================================================================

test result:

[16:51] C:\pywk\clp>quadratic.py

ff(x)=1*x*x + 2*x + 3, and inverse is using quadr:
ff(-3) => 6, fi(ff(-3)) => [1.0, -3.0]
ff(-2) => 3, fi(ff(-2)) => [0.0, -2.0]
ff(-1) => 2, fi(ff(-1)) => [-1.0, -1.0]
ff(0) => 3, fi(ff(0)) => [0.0, -2.0]
ff(1) => 6, fi(ff(1)) => [1.0, -3.0]
ff(2) => 11, fi(ff(2)) => [2.0, -4.0]
ff(3) => 18, fi(ff(3)) => [3.0, -5.0]

ff(x)=1*x*x + 0*x + 0, and inverse is using quadr:
ff(-3) => 9, fi(ff(-3)) => [3.0, -3.0]
ff(-2) => 4, fi(ff(-2)) => [2.0, -2.0]
ff(-1) => 1, fi(ff(-1)) => [1.0, -1.0]
ff(0) => 0, fi(ff(0)) => [0.0, 0.0]
ff(1) => 1, fi(ff(1)) => [1.0, -1.0]
ff(2) => 4, fi(ff(2)) => [2.0, -2.0]
ff(3) => 9, fi(ff(3)) => [3.0, -3.0]

ff(x)=0*x*x + 5*x + 0, and inverse is using quadr:
ff(-3) => -15, fi(ff(-3)) => [-3, None]
ff(-2) => -10, fi(ff(-2)) => [-2, None]
ff(-1) => -5, fi(ff(-1)) => [-1, None]
ff(0) => 0, fi(ff(0)) => [0, None]
ff(1) => 5, fi(ff(1)) => [1, None]
ff(2) => 10, fi(ff(2)) => [2, None]
ff(3) => 15, fi(ff(3)) => [3, None]

ff(x)=1*x*x + 2*x + 3, and inverse is using quadratic:
ff(-3) => 6, fi(ff(-3)) => (1.0, -3.0)
ff(-2) => 3, fi(ff(-2)) => (0.0, -2.0)
ff(-1) => 2, fi(ff(-1)) => (-1.0, -1.0)
ff(0) => 3, fi(ff(0)) => (0.0, -2.0)
ff(1) => 6, fi(ff(1)) => (1.0, -3.0)
ff(2) => 11, fi(ff(2)) => (2.0, -4.0)
ff(3) => 18, fi(ff(3)) => (3.0, -5.0)

ff(x)=1*x*x + 0*x + 0, and inverse is using quadratic:
ff(-3) => 9, fi(ff(-3)) => (3.0, -3.0)
ff(-2) => 4, fi(ff(-2)) => (2.0, -2.0)
ff(-1) => 1, fi(ff(-1)) => (1.0, -1.0)
ff(0) => 0, fi(ff(0)) => (0.0, 0.0)
ff(1) => 1, fi(ff(1)) => (1.0, -1.0)
ff(2) => 4, fi(ff(2)) => (2.0, -2.0)
ff(3) => 9, fi(ff(3)) => (3.0, -3.0)

ff(x)=0*x*x + 5*x + 0, and inverse is using quadratic:
ff(-3) => -15, fi(ff(-3)) => [-3, None]
ff(-2) => -10, fi(ff(-2)) => [-2, None]
ff(-1) => -5, fi(ff(-1)) => [-1, None]
ff(0) => 0, fi(ff(0)) => [0, None]
ff(1) => 5, fi(ff(1)) => [1, None]
ff(2) => 10, fi(ff(2)) => [2, None]
ff(3) => 15, fi(ff(3)) => [3, None]

Regards,
Bengt Richter
 

Kenny Tilton

Bengt said:
So cluttered ;-)
(and you evaluate a+a twice, why not (two-a (+ a a)) in the let ;-)

I like to CPU-binge every once in a while. :)
JFTHOI, a one-liner:

def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]

I like it! reminds me of COBOL's perform varying from by until. DEC
Basics after Basic Plus also had statement modifiers:

print x ...

Bengt said:
But that aside, don't you think the following 3 lines are more readable than your 5 above?

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

Not bad at all. Now gaze upon true beauty:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b)
                      (* 4 a c)))) ;; much nicer than 4*a*c
        (den (+ a a)))
    (list (/ (+ (- b) rad) den)
          (/ (- (- b) rad) den))))

(Not tested, and without my Lisp editor, parens likely off.)

As for line counting and squeezing semantics into the smallest screen
area possible, (a) I have two monitors for a reason and (b) have you
seen the language K?

:)

kenny
 

Hans Nowak

Russell Wallace wrote:

OK, I'll bite --
I'll claim C's syntax is objectively better - it has a clean
definition whereas Python's hasn't.

It hasn't? What is unclean about it?
Python isn't even consistent - it
uses whitespace some of the time and delimiters some of the time;

Python uses indentation to indicate code blocks. I don't really see what is
inconsistent about it. Indentation doesn't matter inside list, dict and tuple
literals, but then again code blocks don't appear in those.
if
it stuck to the decision to use whitespace it might be a bit less
repellent. Also Python's syntax has a whole category of pitfalls that
C's lacks.

Like what? If you mean inconsistent indentation, that one bites you just the
same in other languages, just in different ways.

Curiously y'rs,
 

Paul Foley

Not bad at all. Now gaze upon true beauty:
(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b)
                      (* 4 a c)))) ;; much nicer than 4*a*c
        (den (+ a a)))
    (list (/ (+ (- b) rad) den)
          (/ (- (- b) rad) den))))
(Not tested, and without my Lisp editor, parens likely off.)

(defun ± (x y)
  (values (+ x y) (- x y)))

(defun quadratic (a b c)
  (± (/ (- b) (* 2 a))
     (/ (sqrt (- (* b b) (* 4 a c))) (* 2 a))))
 

Russell Wallace

(Note that I'm not usually in the habit of coming along to comp.lang.X
and posting criticism of X; I read this thread on comp.lang.lisp
without realizing a poster had set followups to this newsgroup only;
but I'll answer the questions below.)

Russell Wallace wrote:

OK, I'll bite --


It hasn't? What is unclean about it?

The relevant definitions in C:
A program is a stream of tokens, which may be separated by whitespace.
The sequence { (zero or more statements) } is a statement.

What's the equivalent for Python?
Python uses indentation to indicate code blocks. I don't really see what is
inconsistent about it. Indentation doesn't matter inside list, dict and tuple
literals, but then again code blocks don't appear in those.

Except that 'if', 'while' etc lines are terminated with delimiters
rather than newline. Oh, and doesn't Python have the option to use \
or somesuch to continue a regular line?
Like what? If you mean inconsistent indentation, that one bites you just the
same in other languages, just in different ways.

But in ways that are objectively less severe because:

- If the indentation is buggered up, the brackets provide the
information you need to figure out what the indentation should have
been.

- The whole tabs vs spaces issue doesn't arise.
 

gregm

:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
(Unless you mean cannot be added _to_Lisp_ as functions, because I don't
know as much as I'd like to about Lisp's capabilities and limitations.)

: For example, imagine you want to be able to traverse a binary tree and do
: an operation on all of its leaves. In Lisp you can write a macro that
: lets you write:
: (doleaves (leaf tree) ...)
: You can't do that in Python (or any other language).

My Lisp isn't good enough to answer this question from your code,
but isn't that equivalent to the Haskell snippet: (I'm sure
someone here is handy in both languages)

doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)

I'd be surprised if Python couldn't do the above, so maybe doleaves
is doing something more complex than it looks to me to be doing.
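
For what it's worth, the traversal itself is easy to write in Python,
assuming a tree built from nested (left, right) tuples with anything
else treated as a leaf (my encoding, not from the quoted post):

def doleaves(f, tree):
    # a branch is a (left, right) pair; anything else is a leaf
    if isinstance(tree, tuple):
        left, right = tree
        return (doleaves(f, left), doleaves(f, right))
    return f(tree)

print(doleaves(lambda x: x * 10, ((1, 2), (3, (4, 5)))))
# -> ((10, 20), (30, (40, 50)))

That matches the Haskell reading; whether the Lisp DOLEAVES does
something more than this is exactly the question.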

: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
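
In Python the filter reading would be something like this
(some_property here is a stand-in for the predicate in the quoted
snippet, and the file name is whatever SOME-FILE-NAME names):

def some_property(line):
    # stand-in predicate, mirroring SOME-PROPERTY above
    return 'error' in line

def matching_lines(filename):
    return [line for line in open(filename) if some_property(line)]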

: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

-Greg
 

Marco Baringer

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.

well you can pass around code full of lambdas so most macros (except
the ones which perform hairy source transformations) can be rewritten
as functions, but that isn't the point. Macros are about saying what
you mean in terms that makes sense for your particular app.
: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".

no it's not, and the proof is that it wasn't written as a filter. For
whatever reason the author of that snippet decided that the code
should be written with WITH-COLLECTOR and not as a filter, some
languages give you this option, some don't, some people think this is
a good thing, some don't.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

You are confusing the times at which things happen. A macro is
expanded at compile time, there is no such thing as a pointer as far
as macros are concerned (more or less), macros are passed pieces of
source code in the form of lists and atoms and return _source code_ in
the form of lists and atoms. The source code is then compiled (with
further macro expansion if need be) and finally, after the macro has
long since finished working, the code is executed.

Another trivial example:

We often see code like this:

(let ((var (foo)))
  (if var
      (do-stuff-with-var)
      (do-other-stuff)))

So write a macro called IF-BIND which allows you to write this instead:

(if-bind var (foo)
  (do-stuff-with-var)
  (do-other-stuff))

The definition for IF-BIND is simply:

(defmacro if-bind (var condition then &optional else)
  `(let ((,var ,condition))
     (if ,var ,then ,else)))

But what if the condition form returns multiple values which we didn't
want to throw away? Well easy enough:

(defmacro if-bind (var condition then &optional else)
  (etypecase var
    (cons `(multiple-value-bind ,var ,condition
             (if ,(car var) ,then ,else)))
    (symbol `(let ((,var ,condition))
               (if ,var ,then ,else)))))

Notice how we use lisp to inspect the original code and decide what
code to produce depending on whether VAR is a cons or a symbol.

I could get the same effect (from an execution stand point) of if-bind
without the macro, but the source code is very different. Macros allow
me to say what I _mean_, not what the compiler wants.
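
For comparison, the execution-equivalent Python is just the expanded
pattern written out by hand every time it is needed (foo and the two
do_* functions are placeholders standing in for the forms in the Lisp
sketch above):

def foo():
    return 42            # stand-in for (foo)

def do_stuff_with_var(v):
    print("got " + repr(v))

def do_other_stuff():
    print("got nothing")

var = foo()
if var:
    do_stuff_with_var(var)
else:
    do_other_stuff()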

If you want more examples look in Paul Graham's OnLisp
(http://www.paulgraham.com/onlisp.html) book for the chapters on
continuations or multitasking.

--
-Marco
Ring the bells that still can ring.
Forget your perfect offering.
There is a crack in everything.
That's how the light gets in.
-Leonard Cohen
 

Andrew Dalke

[cc'ed since I wasn't sure if you would be tracking the c.l.py thread]

Russell Wallace:
A program is a stream of tokens, which may be separated by whitespace.
The sequence { (zero or more statements) } is a statement.

Some C tokens may be separated by whitespace and some *must* be
separated by whitespace.

static const int i
static const inti

i + + + 1
i ++ + 1

The last case is ambiguous, so the tokenizer has some logic to
handle that -- specifically, a greedy match with no backtracking.
It throws away the ignorable whitespace and gives a stream of
tokens to the parser.
What's the equivalent for Python?

One definition is that "a program is a stream of tokens, some
of which may be separated by whitespace and some which
must be separated by whitespace." Ie, the same as my
reinterpretation of your C definition.

For a real answer, start with

http://python.org/doc/current/ref/line-structure.html
"A Python program is divided into a number of logical lines."

http://python.org/doc/current/ref/logical.html
"The end of a logical line is represented by the token NEWLINE.
Statements cannot cross logical line boundaries except where
NEWLINE is allowed by the syntax (e.g., between statements in
compound statements). A logical line is constructed from one or
more physical lines by following the explicit or implicit line joining
rules."

http://python.org/doc/current/ref/physical.html
"A physical line ends in whatever the current platform's convention
is for terminating lines. On Unix, this is the ASCII LF (linefeed)
character. On Windows, it is the ASCII sequence CR LF (return
followed by linefeed). On Macintosh, it is the ASCII CR (return)
character."

and so on.
Except that 'if', 'while' etc lines are terminated with delimiters
rather than newline. Oh, and doesn't Python have the option to use \
or somesuch to continue a regular line?

The C tokenizer turns the delimiter character into a token.

The Python tokenizer turns indentation level changes into
INDENT and DEDENT tokens. Thus, the Python parser just
gets a stream of tokens. I don't see a deep difference here.

Both tokenizers need to know enough about the respective
language to generate the appropriate tokens.
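
A small sketch of that token stream, using the standard library's own
tokenize and token modules (exact output varies a little between
Python versions):

import token
import tokenize
try:
    from StringIO import StringIO   # Python 2
except ImportError:
    from io import StringIO         # Python 3

src = "if a:\n    b = 1\nc = 2\n"
for tok in tokenize.generate_tokens(StringIO(src).readline):
    print(token.tok_name[tok[0]] + " " + repr(tok[1]))

# among the NAME/OP/NUMBER tokens you will see an INDENT before
# 'b = 1' and a DEDENT before 'c = 2' -- the parser never sees raw
# whitespace, only these tokens.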
But in ways that are objectively less severe because:

- If the indentation is buggered up, the brackets provide the
information you need to figure out what the indentation should have
been.

As I pointed out, one of the pitfalls which does occur in C
is the dangling else

if (a)
    if (b)
        c++;
else    /* indented incorrectly but valid */
    c--;

That mistake does not occur in Python. I personally had
C++ code with a mistake based on indentation. I and
three other people spent perhaps 10-15 hours spread
over a year to track it down. We all knew where the bug
was supposed to be in the code, but the indentation threw
us off.
- The whole tabs vs spaces issue doesn't arise.

That's an issue these days? It's well resolved -- don't
use tabs.

And you know, I can't recall a case where it's ever
been a serious problem for me. I have a couple of times
had a problem, but never such that the code actually
worked, unlike the if/else code I listed above for C.

Andrew
(e-mail address removed)
 

Ingvar Mattsson

Joe Marshall said:
(I'm ignoring the followup-to because I don't read comp.lang.python)

Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context. Additionally, each line encodes
this information independently of the other lines that logically
belong with it, and we all know that when some data is encoded in one
place it may be wrong, but it is never inconsistent.

There is yet one more problem. The various levels of indentation
encode different things: the first level might indicate that it is
part of a function definition, the second that it is part of a FOR
loop, etc. So on any line, the leading whitespace may indicate all
sorts of context-relevant information. Yet the visual representation
is not only identical between all of these, it cannot even be
displayed.

It's actually even worse than you think. Imagine you want "blank
lines" in your code, to act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, with the option
chosen in the Python (the language) community, the listener and the
file reader have different views on blank lines. This makes it harder
than necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.

//ingvar
 

Gerrit Holl

Ingvar said:
It's actually even worse than you think. Imagine you want "blank
lines" in your code, to act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, with the option
chosen in the Python (the language) community, the listener and the
file reader have different views on blank lines. This makes it harder
than necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.

Blank lines are ignored by Python.
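
In a file, at least; the interactive prompt is the case Ingvar
describes, where an empty line does end the current block. A quick
sketch of the file case:

def f(x):
    y = x + 1

    # the blank line above is harmless in a file that is imported or
    # run; typed at the classic ">>>" prompt it would end the body
    return y

print(f(1))   # -> 2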

Gerrit.
 

Gerrit Holl

Jason said:
I agree with most of the rest of "The Zen of Python", except for the
"There should be one-- and preferably only one --obvious way to do it."
bit. I think it should be "There should be one, and preferably only one
, *easy* (And it should be obvious, if we can manage it) way to do it."

Of course, this rule is not to be taken literally.

a = 2
a = 1 + 1
a = math.sqrt(4)
a = int((sys.maxint+1) ** (1/31))

....all mean the same thing.

Gerrit.
 

David Rush

I agree that injudicious use of macros can destroy the readability of
code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.

But it may also be a mistake to use macros for the boilerplate code when
what you really need is a higher-order function...

david rush
 
