Python syntax in Lisp and Scheme

Alex Martelli

Frode said:
I believe it is very unfortunate to view lisp macros as something that
is used to "change the language". Macros allow syntactic abstraction

Maybe "enhance" can sound more positive? An enhancement, of course,
IS a change -- and if one were to perform any change, he'd surely be
convinced it WAS going to be an enhancement. (Whether it really
turned out to be one is another issue).
the same way functions allow functional abstraction, and is almost as
important a part of the programmer's toolchest. While macros _can_ be
used to change the language in the sense of writing your own
general-purpose iteration construct or conditional operator, I believe
this is an abuse of macros, precisely because of the implications this
has for the readability of the code and for the language's user
community.

Sure, but aren't these the examples that are being presented? Isn't
"with-collector" a general purpose iteration construct, etc? Maybe
only _special_ purpose ones should be built with macros (if you are
right that _general_ purpose ones should not be), but the subtleness
of the distinction leaves me wondering about the practice.


Alex
 

Jens Axel Søgaard

Alex said:
Essentially, Guido prefers classes (and instances thereof) to
closures as a way to bundle state and behavior; thus he most
emphatically does not want to add _any_ complication at all,
when the only benefit would be to have "more than one obvious
way to do it".

Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python.

The following is taken from "All Things Pythonic - News from Python UK",
written by Guido van Rossum, April 17, 2003:
http://www.artima.com/weblogs/viewpost.jsp?thread=4550

During Simon's elaboration of an example (a type-safe printf function)
I realized the problem with functional programming: there was a simple
programming problem where a list had to be transformed into a
different list. The code to do this was a complex two-level lambda
expression if I remember it well, and despite Simon's lively
explanation (he was literally hopping around the stage making
intricate hand gestures to show how it worked) I failed to "get" it. I
finally had to accept that it did the transformation without
understanding how it did it, and this is where I had my epiphany about
loops as a higher level of abstraction than recursion - I'm sure that
the same problem would be easily solved by a simple loop in Python,
and would leave no-one in the dark about what it did.
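
Guido doesn't show the example itself, but the contrast he is drawing looks
roughly like this in Python (a made-up transformation, not the one from
Simon's talk):

words = ['Python', 'Lisp', 'Scheme']

# the "simple loop" spelling
lengths = []
for w in words:
    lengths.append(len(w))
print(lengths)                 # [6, 4, 6]

# the denser higher-order-function spelling of the same thing
print(list(map(len, words)))   # [6, 4, 6]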

Hmm.
 

Lulu of the Lotus-Eaters

(e-mail address removed) (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(4)
4

Shorter, and without an awkward class.

Yours, David...

--
Buy Text Processing in Python: http://tinyurl.com/jskh
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <[email protected]> ]--------------------------
 

Alex Martelli

Lulu said:
(e-mail address removed) (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(4)
4

Shorter, and without an awkward class.

There's an important difference: with your approach, you cannot
instantiate multiple independent accumulators, as you can with the
others. After

a = foo(10)
b = foo(23)

in the 'class foo' approach (just as in all of those where foo returns an
inner-function instance), a and b are totally independent accumulator
callables; in your approach, 'foo' itself is the only 'accumulator
callable', and a and b after these two calls are just two numbers.

Making a cookie, and making a cookie-cutter, are quite different issues.
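
To make that concrete, a minimal sketch reusing the 'class foo' from above:

class foo:
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n

a = foo(10)
b = foo(23)
print(a(1))   # 11
print(b(1))   # 24 -- a and b accumulate independently
print(a(1))   # 12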


Alex
 

David Rush

Since no one has done a point-by-point correction of the errors w/rt
Scheme...

Here are a few of the (arguably) notable differences:

                    Scheme                  Common Lisp
Philosophy          minimalism,             comprehensiveness,
                    orthogonality           compromise

Namespaces          one                     two (functions, variables)

    More than two, actually.

Continuations       yes                     no

Object system       no                      yes

    It really depends on how you define 'object system' as to whether or
    not Scheme has one. I personally think it does, but you have to be
    prepared to crawl around the foundations of OOP (and CS generally)
    before this becomes apparent. It helps if you've ever lived with
    unconventional object systems like Self.

Exceptions          no                      yes

    Yes, via continuations, which reify the fundamental control operators
    in all languages.

Macro system        syntax-rules            defmacro

    Most Schemes provide defmacro-style macros as well, since they are
    relatively easy to implement correctly (easier than syntax-rules,
    anyway).

Implementations     >10                     ~4

    Too many to count. The FAQ lists over twenty. IMO there are about 9
    'major' implementations which have relatively complete compliance
    with R5RS and/or significant extension libraries.

Performance         "worse"                 "better"

    This is absolutely wrong. Scheme actually boasts one of the most
    efficient compilers on the planet in the Stalin (Static Language
    Implementation) Scheme system. Larceny, Bigloo, and Gambit are also
    all quite zippy when compiled.

Standards           IEEE                    ANSI

    Hrmf. 'Scheme' and 'standard' are slightly skewed terms. This is
    probably both the greatest weakness of the language and also its
    greatest strength. R5RS is more of a description to programmers of
    how to write portable code than it is a constraint on implementors.
    Scheme is probably more of a "family" of languages than Lisp is, at
    that.

    Anyway, nobody really pays much attention to IEEE, although that may
    change since it's being reworked this year. The real standard thus
    far has been the community consensus document called R5RS, the
    Revised^5 Report on the Algorithmic Language Scheme. There is a
    growing consensus that it needs work, but nobody has yet figured out
    how to make a new version happen. (And I believe that the IEEE effort
    is just bringing IEEE up to date w/R5RS.)

Reference name      R5RS                    CLTL2
Reference length    50pp                    1029pp

Standard libraries  "few"                   "more"

    Well, we're up to SRFI-45 (admittedly a number of them have been
    withdrawn, but the code and specification are still available) and
    there's very little overlap. Most of the SRFIs have highly portable
    implementations.

Support Community   Academic                Applications writers

    In outlook, perhaps, but the academic component has dropped fairly
    significantly over the years. The best implementations still come
    out of academia, but the better libraries are starting to come from
    people in the industry.

    There is also an emphasis on heavily-armed programming which is
    sadly lacking in other branches of the IT industry. Remember - there
    is no Scheme Underground.

david rush
 

David Eppstein

Lulu of the Lotus-Eaters said:
(e-mail address removed) (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(4)
4

Shorter, and without an awkward class.

There's an important difference between these two: the object-based
solution (and the solutions with two nested functions and a closure)
allow more than one accumulator to be created. Yours only creates a
one-of-a-kind accumulator.

I happen to like the object-based solution better. It expresses more
clearly to me the intent of the code. I don't find the class awkward;
to me, a class is what you use when you want to keep some state around,
which is exactly the situation here. "Explicit is better than
implicit." Conciseness is not always a virtue.
 

Kenny Tilton

Alex said:
record as promising that the major focus in the next release
of Python where he can introduce backwards incompatibilities
(i.e. the next major-number-incrementing release, 3.0, perhaps,
say, 3 years from now) will be the _elimination_ of many of
the "more than one way to do it"s that have accumulated along
the years mostly for reasons of keeping backwards compatibility
(e.g., lambda, map, reduce, and filter,

Oh, goodie, that should win Lisp some Pythonistas. :) I wonder if Norvig
will still say Python is the same as Lisp after that.
Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).

In Lisp, all forms return a value. How simple is that? Powerful, too,
because a rule like "flat is better than nested" is flat out dumb, and I
mean that literally. It is a dumb criterion in that it does not consider
the application.

Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.

I was doing an intro to Lisp when someone brought up the question of
reading deeply nested stuff. It occurred to me that, if the computation
is indeed the moral equivalent of the quadratic formula, calling various
lower-level functions instead of arithmetic operators, then it is
/worse/ to be reading a flattened version in which subexpression results
are pulled into local variables, because then one has to mentally
decipher the actual hierarchical computation from the bogus flat sequence.

So if we have:

(defun some-vital-result (x y z)
  (finally-decide
   (if (serious-concern x)
       (just-worry-about x z)
       (whole-nine-yards x
                         (composite-concern y z)))))

...well, /that/ visually conveys the structure of the algorithm, almost
as well as a flowchart (as well if one is accustomed to reading Lisp).
Unwinding that into an artificial flattening /hides/ the structure.
Since when is that "more explicit"? The structure then becomes implicit
in the temp variable bindings and where they get used and in what order
in various steps of a linear sequence forced on the algorithm.
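
In Python terms (an illustrative sketch, not Kenny's code), compare the
nested spelling of one root of the quadratic formula with an artificially
flattened one:

import math

def quad_root(a, b, c):
    # nested: the expression mirrors the formula's own structure
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def quad_root_flat(a, b, c):
    # flattened: the same computation, every subexpression pulled into a
    # temp; the tree structure is now implicit in which temp feeds which
    b_squared = b * b
    four_a_c = 4 * a * c
    discriminant = b_squared - four_a_c
    root = math.sqrt(discriminant)
    numerator = -b + root
    denominator = 2 * a
    return numerator / denominator

print(quad_root(1, -3, 2))        # 2.0
print(quad_root_flat(1, -3, 2))   # 2.0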

I do not know what Zen is, but I do know that is not Zen.

Yes, the initial reaction of a COBOL programmer to a deeply nested form
is "whoa! break it down for me!". But that is just lack of familiarity.
Anyone in a reasonable amount of time can get used to and then benefit
from reading nested code. Similarly with every form returning a
value...the return statement looks silly in pretty short order if one
spends any time at all with a functional language.


kenny
 

Alexander Schmolck

[comp.lang.functional removed]
Peter Seibel said:
which seems pretty similar to the Python version.

(If of course we didn't already have the FILL function that does just
that.)

Just for the record, in python all you'd write is: v[:] = a

'as
 

Marco Antoniotti

Alexander said:
I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants -- comparisons with smalltalk, prolog or lisp OTOH are an entirely
different matter).

The best choice for code indentation in any language is M-C-q in Emacs.

Cheers
 

Mario S. Mommer

I have tried on 3 occasions to become a LISP programmer, based upon
the constant touting of LISP as a more powerful language and that
ultimately S-exprs are a better syntax. Each time, I have been
stopped because the S-expr syntax makes me want to vomit.

:)

Although people are right when they say that S-exprs are simpler, and
once you get used to them they are actually easier to read, I think
the visual impact they have on those not used to it is often
underestimated.

And to be honest, trying to deal with all these parentheses in an
editor which doesn't help you is not an encouraging experience, to say
the least. You need at least a paren-matching editor, and it is a real
big plus if it also can reindent your code properly. Then, very much
like in python, the indent level tells you exactly what is happening,
and you pretty much don't see the parens anymore.

Try it! In emacs, or Xemacs, open a file ending in .lisp and
copy/paste this into it:

;; Split a string at whitespace.
(defun splitatspc (str)
  (labels ((whitespace-p (c)
             (find c '(#\Space #\Tab #\Newline))))
    (let* ((posnew -1)
           (posold 0)
           (buf (cons nil nil))
           (ptr buf))
      (loop while (and posnew (< posnew (length str))) do
            (setf posold (+ 1 posnew))
            (setf posnew (position-if #'whitespace-p str
                                      :start posold))
            (let ((item (subseq str posold posnew)))
              (when (< 0 (length item))
                (setf (cdr ptr) (list item))
                (setf ptr (cdr ptr)))))
      (cdr buf))))

Now place the cursor on the paren just in front of the defun in the
first line and reindent it (M-C-q in Emacs).

As for:

If a set of macros could be written to improve LISP syntax, then I
think that might be an amazing thing. An interesting question to me
is why hasn't this already been done.

Because they are so damned regular. After some time you do not even
think about the syntax anymore.
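
Incidentally, the Python counterpart of the splitatspc function above is
essentially the built-in split method, which by default splits on any run
of whitespace and drops the empty pieces:

s = "foo  bar\tbaz\n"
print(s.split())    # ['foo', 'bar', 'baz']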
 

Alex Martelli

Grzegorz Chrupala wrote:
...
I have some doubts about the notion of simplicity which you (or Guido)
seem to be taking for granted. I don't think it is that straightforward to
agree about what is simpler, even if you do agree that simpler is better.
Unless you objectivize this concept you can argue that a "for" loop is
simpler than a "map" function and I can argue to the contrary and we'll be
talking past each other: much depends on what you are more familiar with
and similar random factors.

I have both learned, and taught, many different languages -- and my
teaching was both to people already familiar with programming, and to
others who were not programmers but had some experience and practice
of "more rigorous than ordinary" thinking (in maths, physics, etc),
and to others yet, of widely varying ages, who lacked any such practise.

I base my notions of what is simple, first and foremost, on the experience
of what has proved easy to teach, easy to learn, and easy for learners to
use. Secondarily, on the experience of helping experienced programmers
design, develop and debug their code (again in many languages, though
nowhere as wide a variety as for the learning and teaching experience).

None of this (like just about nothing in human experiential knowledge about
such complicated issues as the way human beings think and behave) can be
remotely described as "objective".

As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in
Scheme (true lexical scope, implicit return, no expression/statement

Tut-tut. You are claiming, for example, that I mentioned the lack
of distinction between expressions and statements as "too complex for
Python to support": I assert your claim is demonstrably false, and
that I NEVER said that it would be COMPLEX for Python to support such
a lack. What I *DID* say on the subject, and I quote, was:

"""
Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).
"""

Please read what I write rather than putting in my mouth words that
I have never written, thank you. To reiterate, it would have been
quite simple to design Python without any distinction between
expressions and statements; HOWEVER, such a lack of distinction
would have encouraged programs written in Python by others to
break the Python principles that "flat is better than nested"
(by encouraging nesting) and "sparse is better than dense" (by
encouraging high density).
distinction) and yet Scheme is widely regarded as one of the simplest
programming languages out there, more so than Python.

But it does encourage nesting and density. Q.E.D.
Another problem with simplicity is that introducing it in one place may
increase complexity in another place.

It may (which is why "practicality beats purity", yet another Zen of
Python principle...), therefore it becomes important to evaluate the
PRACTICAL IMPORTANCE, in the language's environment, of that "other
place". All engineering designs (including programming languages)
are a rich tapestry of trade-offs. I think Python got its trade-offs
more nearly "right" (for my areas of interest -- particularly for large
multi-author application programs and frameworks, and for learning
and teaching) than any other language I know.
Specifically consider the simple (simplistic?) rule you cite that Python
uses to determine variable scope ("if the name gets bound (assigned to) in
local scope, it's a local variable"). That probably makes the
implementor's job simpler, but it at the same time makes it more complex
and less intuitive for the programmer to code something like the
accumulator generator example -- you need to use a trick of wrapping the
variable in a list.

It makes the _learner_'s job simple (the rule he must learn is simple),
and it makes the _programmer_'s job simple (the rule he must apply to
understand what will happen if he codes in way X is simple) -- those
two are at least as important as simplifying the implementor's job (and
thus making implementations smaller and more bug-free). If the inability
to re-bind outer-scope variables encourages all programmers to use
classes whenever they have to decide how to bundle some code and some
data, i.e. if it makes classes the "one obvious way to do it" for such
purposes, the resulting greater uniformity in Python programs is deemed
to be a GOOD thing in the Python viewpoint. (In practice, there are of
course always "other ways to do it" -- as long as they're "non-obvious",
that's presumably tolerable, even if not ideal:).
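
A tiny sketch of that rule in action (illustrative names, not from the
thread): a nested or following function may freely read an outer name, but
assigning to a name anywhere in a function body makes it local there:

x = 1

def read_it():
    return x        # no binding of x in here, so this reads the outer x

def shadow_it():
    x = 2           # the assignment makes x local to shadow_it
    return x

print(read_it())    # 1
print(shadow_it())  # 2
print(x)            # still 1: the outer x was never rebound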

As for Ruby, I know and quite like it. Based on what you tell me about
Python's philosophy, perhaps Ruby makes more pragmatic choices in where to
make things simple and for whom than Python.

I thought the total inability to nest method definitions (while in Python
you get perfectly normal lexical closures, except that you can't _rebind_
outer-scope names -- hey, in functional programming languages you can't
rebind ANY name, yet nobody ever claimed that this means they "don't have
true lexical closures"...!-), and more generally the deep split between
the space of objects and that of methods (a split that's simply not there
in Python), would have been show-stoppers for a Schemer, but it's always
nice to learn otherwise. I think, however, that deeming the set of
design trade-offs in Ruby as "more pragmatic" than those in Python is
a distorted vision, because it fails to consider the context. If my main
goal in programming was to develop experimental designs in small groups,
I would probably appreciate certain features of Ruby (such as the ability
to change *ANY* method of existing built-in classes); thinking of rather
large teams developing production applications and frameworks, the same
features strike me as a _negative_ aspect. The language and cultural
emphasis towards clarity, simplicity and uniformity, against cleverness,
terseness, density, and "more than one way to do-ity", make Python by
far the most practical language for me to teach, and in which to program
the kind of application programs and frameworks that most interest me --
but if my interest was instead to code one-liner scripts for one-off
system administration tasks, I might find that emphasis abominable...!


Alex
 

Bengt Richter

def make_accumulator(initial_value):
    accumulator = Bunch(value=initial_value)
    def accumulate(addend):
        accumulator.value += addend
        return accumulator.value
    return accumulate

accumulate = make_accumulator(23)
print accumulate(100) # emits 123
print accumulate(100) # emits 223


(using the popular Bunch class commonly defined as:
class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)
). There is, of course, a cultural gulf between this
verbose 6-liner [using an auxiliary class strictly for
reasons of better readability...!] and the terse Ruby
1-liner above, and no doubt most practitioners of both
languages would in practice choose intermediate levels,
such as un-densifying the Ruby function into:
I like the Bunch class, but the name suggests vegetables to me ;-)

Since the purpose (as I see it) is to create a unique object with
an attribute name space, I'd prefer a name that suggests that, e.g., NS,
or NSO or NameSpaceObject, so I am less likely to need a translation.


BTW, care to comment on a couple of close variants of Bunch with per-object class dicts? ...

def mkNSC(**kwds): return type('NSC', (), kwds)()

or, stretching the one line a bit to use the instance dict,

def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o

I'm wondering how much space is actually wasted with a throwaway class. Is there a
lazy copy-on-write kind of optimization for class and instance dicts that prevents
useless proliferation? I.e., a class dict whose only contents are
['__dict__', '__module__', '__weakref__', '__doc__'] seems like it
could be synthesized by the proxy without a real dict until one was
actually needed to hold other state.

For quick-and-dirty namespace objects, I often do

nso = type('',(),{})()
nso.att = 'some_value'

and don't generally worry about the space issue anyway, since I don't make that many.
def outer(a)
  proc do |b|
    a+b
  end
end

or shortening/densifying the Python one into:

def make_accumulator(a):
    value = [a]
    def accumulate(b):
        value[0] += b
        return value[0]
    return accumulate
Or you could make a one-liner (for educational purposes only ;-)
>>> def mkacc(a): return (lambda a,b: a.__setitem__(0,a[0]+b) or a[0]).__get__([a])
...
>>> acc = mkacc(100)
>>> acc(3)
103
>>> acc(5)
108

Same with defining Bunch (or even instantiating via a throwaway). Of course I'm not
suggesting these as models of spelling clarity, but it is sometimes interesting to see
alternate spellings of near-if-not-identical functionality.

but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.
Oh goody, a new word (for me ;-). Would you define "tipization"?

Regards,
Bengt Richter
 

Frode Vatvedt Fjeld

But syntactic abstractions *are* a change to the language, it just
sounds fancier.

Yes, this is obviously true. Functional abstractions also change the
language, even if it's in a slightly different way. Any programming
language is, after all, a set of functional and syntactic
abstractions.
I agree that injudicious use of macros can destroy the readability
of code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.

I suppose this is about the same differentiation I wanted to make by
the terms "syntactic abstraction" (stressing the idea of building a
syntax that matches a particular problem area or programming pattern),
and "changing the language" which is just that, not being part of any
particular abstraction other than the programming language itself.
 

Ed Avis

I'd like to know if it may be possible to add a powerful macro system
to Python, while keeping its amazing syntax,

I fear it would not be. I can't say for certain but I found that the
syntax rules out nesting statements inside expressions (without adding
some kind of explicit bracketing, which rather defeats the point of
Python syntax) and you might run into similar difficulties if adding
macros. It's a very clean syntax (well, with a few anomalies) but
this is at the price of a rigid separation between statements and
expressions, which doesn't fit well with the Lisp-like way of doing
things.
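
For instance (a minimal illustration, not from the original posts): a
lambda body must be a single expression, so statements cannot be nested
inside it:

ok = lambda x: x * 2          # an expression body is fine
print(ok(21))                 # 42

# but a statement inside the expression is a SyntaxError, e.g.:
#   broken = lambda x: y = x + 1
#   also_broken = lambda x: if x: 1
# whereas in Lisp every form is an expression and nests freely.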

Myself I rather like the option chosen by Haskell, to define an
indentation-based syntax which is equivalent to one with bracketing,
and let you choose either. You might do better to add a new syntax to
Lisp than to add macro capabilities to Python. Dylan is one Lisp
derivative with a slightly more Algol-like syntax, heh, Logo is
another; GNU proposed something called 'CTAX', which was a C-like
syntax for Guile Scheme, I don't know if it is usable.

If the indentation thing appeals, maybe you could preprocess Lisp
adding a new kind of bracket - say :) - which closes at the next line
of code on the same indentation level. E.g.

:) hello
   there
(goodbye)

would be equivalent to

(hello
 there)
(goodbye)

I dunno, this has probably already been done.
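
For what it's worth, here is a rough Python sketch of such a preprocessor,
under the stated assumption that ':)' opens a form which is closed just
before the next code line at the same (or a lesser) indentation level. It
is a toy, and the name expand_smileys is made up:

def expand_smileys(text):
    out = []
    open_levels = []                  # indentation of each unclosed ':)' form
    for line in text.splitlines():
        stripped = line.lstrip()
        if not stripped:              # pass blank lines through untouched
            out.append(line)
            continue
        indent = len(line) - len(stripped)
        # close any ':)' forms whose indentation level we have come back to
        while open_levels and indent <= open_levels[-1]:
            open_levels.pop()
            out[-1] += ')'
        if stripped.startswith(':)'):
            open_levels.append(indent)
            line = ' ' * indent + '(' + stripped[2:].lstrip()
        out.append(line)
    while open_levels:                # close anything still open at the end
        open_levels.pop()
        out[-1] += ')'
    return '\n'.join(out)

print(expand_smileys(':) hello\n   there\n(goodbye)'))
# prints:
# (hello
#    there)
# (goodbye)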
 

Alex Martelli

Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)

Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...

def mkNSC(**kwds): return type('NSC', (), kwds)()

Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict,

def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds);
return o

I don't see the advantage of explicitly using an empty dict and then
updating it with kwds, vs using kwds directly.
I'm wondering how much space is actually wasted with a throwaway class. Is
there a lazy copy-on-write kind of optimization for class and instance
dicts that prevents useless proliferation? I.e.,

I strongly doubt there's any "lazy copy-on-write" anywhere in Python.
The "throwaway class" will be its dict (which, here, you need -- that's
the NS you're wrapping, after all) plus a little bit (several dozen bytes
for the typeobject, I'd imagine); an instance of Bunch, probably a bit
smaller. But if you're going to throw either away soon, who cares?
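
For the record, a tiny sketch of the proliferation in question, reusing
Bunch and mkNSC from above: every mkNSC call manufactures its own
throwaway class, while all Bunch instances share one class:

class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

def mkNSC(**kwds):
    return type('NSC', (), kwds)()

a, b = mkNSC(x=1), mkNSC(x=1)
print(type(a) is type(b))   # False: each call built a fresh class object
c, d = Bunch(x=1), Bunch(x=1)
print(type(c) is type(d))   # True: one shared class, per-instance dicts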

Oh goody, a new word (for me ;-). Would you define "tipization"?

I thought I was making up a word, and slipped by spelling it
as in Italiano "tipo" rather than English "type". It appears
(from Google) that "typization" IS an existing word (sometimes
mis-spelled as "tipization"), roughly in the meaning I intended
("characterization of types") -- though such a high proportion
of the research papers, institutes, etc, using "typization",
seems to come from Slavic or Baltic countries, that I _am_
left wondering...;-).


Alex
 

Terry Reedy

David Rush said:
By 'here', I meant comp.lang.python ...
Do you even begin to appreciate how inflammatory such a request is
when posted to both c.l.l and c.l.s?

As implied by 'here', I did not originally notice the cross-posting
(blush, laugh ;<). I am pleased with the straightforward, civil, and
helpful answers I have received, including yours, and have saved them
for future reference.

....
compromise system design, for both good and bad ....
embedded in it. This cuts both ways, mind you.
....

I believe in neither 'one true religion' nor in 'one best
algorithm/computer language for all'. Studying Lisp has helped me
better understand Python and the tradeoffs embodied in its design. I
certainly better appreciate the issue of quoting and its relation to
syntax.

Terry J. Reedy
 

Alex Martelli

Alexander said:
[comp.lang.functional removed]
Peter Seibel said:
which seems pretty similar to the Python version.

(If of course we didn't already have the FILL function that does just
that.)

Just for the record, in python all you'd write is: v[:] = a

'as

I suspect you may intend "v[:] = [a]*len(v)", although a good alternative
may also be "v[:] = itertools.repeat(a, len(v))".
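
Spelled out, assuming the intent is to overwrite every element of an
existing list v with a, in place:

v = [1, 2, 3, 4]
a = 0
v[:] = [a] * len(v)                  # fills v in place; other references to v see it
print(v)                             # [0, 0, 0, 0]

import itertools
v[:] = itertools.repeat(a, len(v))   # same effect, without the temporary list
print(v)                             # [0, 0, 0, 0]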


Alex
 

Frode Vatvedt Fjeld

Alex Martelli said:
Sure, but aren't these the examples that are being presented? Isn't
"with-collector" a general purpose iteration construct, etc? Maybe
only _special_ purpose ones should be built with macros (if you are
right that _general_ purpose ones should not be), but the subtleness
of the distinction leaves me wondering about the practice.

It is a subtle distinction, just like a lot of other issues in
programming are quite subtle. And I think this particular issue
deserves more attention than it has been getting (so far as I know).

As for the current practice, I know that I quite dislike code that
uses things like with-collector, and I especially dislike it when I
have to look at the macro's expansion to see what is going on, and I
know there are perfectly fine alternatives in the standard syntax. On
the other hand, I do like it when I see a macro call that reduces tens
or even hundreds of lines of code to just a few lines that make it
immediately apparent what's happening. And I know I'd never want to
use a language with anything less than lisp's macros.
 
