BIG successes of Lisp (was ...)


Kaz Kylheku

Alex Martelli said:
What's "considerably more complicated" in, say,
my_frobazzer = frobaz_compiler('''
oh my pretty, my beauteous,
my own all my own special unique frambunctious *LANGUAGE*!!!
''')
and later on call my_frobazzer(bim, bum, bam) at need?

The problem is that

program = compiler(character-string)

is too much of a closed design. A more flexible design resembles:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

The input to a compiler should not be a character string, but
structured data.
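As a sketch of what this three-stage pipeline means in practice, here is a toy prefix-notation language in Python; all names (reader, translator, compiler) follow the thread's terminology, and the mini-language itself is invented for illustration:

```python
def reader(text):
    """reader: character string -> abstract syntax tree (nested lists).
    Parses a tiny prefix-notation string like '(+ 1 (* 2 3))'."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(pos):
        if tokens[pos] == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = parse(pos)
                node.append(child)
            return node, pos + 1
        tok = tokens[pos]
        return (int(tok) if tok.lstrip("-").isdigit() else tok), pos + 1
    tree, _ = parse(0)
    return tree

def translator(tree):
    """translator: abstract tree -> target form (here, a Python
    infix-expression string)."""
    if isinstance(tree, list):
        op, *args = tree
        return "(" + (" %s " % op).join(translator(a) for a in args) + ")"
    return str(tree)

def compiler(target):
    """compiler: target form -> runnable program."""
    code = compile(target, "<dsl>", "eval")
    return lambda: eval(code)

program = compiler(translator(reader("(+ 1 (* 2 3))")))
print(program())  # 7
```

The point of the open design is visible here: each intermediate (the nested-list tree, the target form) is ordinary structured data that other code can inspect or rewrite before the final compile step.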
The complexity
of the frobaz_compiler factory callable depends exclusively on the
complexity of the language you want it to parse and compile, and you ...

In Lisp, the complexity of the compiler is constant; it's a language
builtin. The complexity of the reader depends on the complexity of the
lexical properties of the language, and the complexity of the
translator depends on the semantic complexity of the language.
The vast majority of applications has absolutely no need to tinker
with the language's syntax and fundamental semantics.

This is a common fallacy: namely that the difficulty of doing
something, and the consequent infrequency of doing it, constitute
evidence for a lack of need of doing it. In fact this is nothing more
than rationalization: ``I know I have inadequate tools, but I really
don't need them''. It's not unlike ``I can't reach those grapes, but I
know they are sour anyway''.

In Lisp, tinkering with the syntax and semantics is done even in
trivial programs.

By the way, your use of ``fundamental'' suggests a misunderstanding:
namely that some kind of destructive manipulation of the language is
going on to change the foundations, so that existing programs are no
longer understood or change in meaning. This is not so: rather, users
extend the language to understand new constructs. The fundamental
syntax and semantics stay what they are; they are the stable target
language for the new constructs.
When you need
a language with different syntax, this is best treated as a serious
task and devoted all the respect it deserves, NOT treated as "a
breeze".

This is a common viewpoint in computer science. Let me state it like
this: ``language design is a Hard Problem that requires you to whip
out lexical analyzers, parser constructors, complex data structures
for symbol table management, intermediate code generation, target code
generation, instruction selection, optimization, etc.'' But when you
have this:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

you can do most of the language design work in the second step:
translation of syntax trees into other trees. This is where macro
programming is done, and there is a large amount of clever
infrastructure in Lisp which makes it easy to work at this level. Lisp
contains a domain language for language construction.
PARTICULARLY for domain-specific languages, the language's
designers NEED access to domain-specific competence, which typically
they won't have enough of for any domain that doesn't happen to be
what they've spent many years of their life actually DOING (just ...)

Bingo! This is where Lisp comes in; it gives the domain experts the
power to express what they want, without requiring them to become
compiler construction experts. This is why Lisp is used by some
artificial intelligence researchers, biologists, linguists, musicians,
etc.
 

Raffael Cavallaro

IOW: Emacs is BLOATED.

In an era of 250 GB hard drives and GB RAM modules, who cares that
Emacs is about 5 MB in size and uses 4 (real) to 10 (virtual) MB of
RAM?

Moore's law is lisp's friend. ;^)
 

Andrew Dalke

Kaz Kylheku:
The problem is that

program = compiler(character-string)

is too much of a closed design. A more flexible design resembles:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

But you didn't see the implementation of Alex's 'compiler' function.
It looks like

def compiler(character_string):
    return compile_tree(translator(reader(character_string)))

and those intermediates are part of the publicly usable
API.
The input to a compiler should not be a character string, but
structured data.

I think you're arguing naming preferences here. That's fine;
Alex's point remains unchanged. In a dynamic language like
Python, a domain-specific language can be parsed at run time,
converted to a Python parse tree, and compiled to Python byte
codes, just like what you want and in the form you want it.
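Andrew's claim can be illustrated with a hedged sketch in today's Python: a hypothetical frobaz_compiler that translates an invented one-operation-per-line rule language into ordinary compiled Python at run time (the mini-language and all names here are made up for the example):

```python
def frobaz_compiler(dsl_text):
    """Translate a toy rule language ('double' or 'add N', one per line)
    into Python source, then compile it down to normal byte code."""
    body = ["def rule(x):"]
    for line in dsl_text.strip().splitlines():
        op, *arg = line.split()
        if op == "double":
            body.append("    x = x * 2")
        elif op == "add":
            body.append("    x = x + %d" % int(arg[0]))
    body.append("    return x")
    namespace = {}
    # compile() + exec() turn the generated source into a real function
    exec(compile("\n".join(body), "<frobaz>", "exec"), namespace)
    return namespace["rule"]

my_rule = frobaz_compiler("""
double
add 5
""")
print(my_rule(10))  # 25
```

The result is an ordinary Python function object, indistinguishable at the call site from hand-written code.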
In Lisp, the complexity of the compiler is constant; it's a language
builtin. The complexity of the reader depends on the complexity of the
lexical properties of the language, and the complexity of the
translator depends on the semantic complexity of the language.

Replace 'Lisp' with 'Python' and the result is still true. Ditto
for 'C'. So I'm afraid I don't understand your point.
This is a common fallacy: namely that the difficulty of doing
something, and the consequent infrequency of doing it, constitute
evidence for a lack of need of doing it. In fact this is nothing more
than rationalization: ``I know I have inadequate tools, but I really
don't need them''. It's not unlike ``I can't reach those grapes, but I
know they are sour anyway''.

That's not Alex's argument. Python has the ability to do exactly
what you're saying (domain language -> AST -> Python code or AST ->
compiler). It's rarely needed (I've used it twice now in my six years
or so of Python), so why should a language cater to make that
easy at the expense of making frequent things harder?
In Lisp, tinkering with the syntax and semantics is done even in
trivial programs.

And that's a good thing? That means that everyone looking at
new Lisp code needs to understand the modifications to the syntax
and semantics. That may be appropriate for an insular organization,
but otherwise it makes it harder for others to understand any code.
By the way, your use of ``fundamental'' suggests a misunderstanding:
namely that some kind of destructive manipulation of the language is
going on to change the foundations, so that existing programs are no
longer understood or change in meaning. This is not so: rather, users
extend the language to understand new constructs. The fundamental
syntax and semantics stay what they are; they are the stable target
language for the new constructs.

Alex and others have responded to this argument many times. The
summary is that 1) in practice those who extend the language are most
often not the domain experts so the result doesn't correctly capture
the domain, 2) because it's easy to do, different groups end up with
different, incompatible domain-specific modifications, 3) rarely
is that approach better than using OOP, HOF and other approaches,
where better is defined as more flexible, easier to read, more
succinct, etc.
This is a common viewpoint in computer science. Let me state it like
this: ``language design is a Hard Problem that requires you to whip
out lexical analyzers, parser constructors, complex data structures
for symbol table management, intermediate code generation, target code
generation, instruction selection, optimization, etc.''

Actually, language design doesn't require any of those. They
are needed to implement a language.

Let me add that implementing a language *was* a Hard Problem,
but effectively solved in the 1970s. The solutions are now well
known and there are a huge number of tools to simplify all
the steps in that process, books on the subject, and people with
experience in doing it.

There are still hard problems, but they are hard engineering
problems, not hard scientific ones where the theory is not
well understood.
But when you have this:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

you can do most of the language design work in the second step:
translation of syntax trees into other trees. This is where macro
programming is done, and there is a large amount of clever
infrastructure in Lisp which makes it easy to work at this level. Lisp
contains a domain language for language construction.

Python has all that, including the ability to turn a string into
a Python AST, manipulate that tree, and compile that tree.

It's not particularly clever; there's no real need for that. In
general, the preference is to be clear and understandable over
being clever. (The New Jersey approach, perhaps?)

(Though the compiler module is pretty clumsy to use.)
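For the curious, here is roughly what string -> AST -> manipulated tree -> compiled code looks like with the modern ast module (the successor to the thread-era compiler module); the rewrite rule itself is invented purely for illustration:

```python
import ast

class SwapAddToMul(ast.NodeTransformer):
    """Tree-to-tree translation: rewrite every binary '+' into '*'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # rewrite children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("result = 3 + 4")        # string -> AST
tree = SwapAddToMul().visit(tree)         # AST -> rewritten AST
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)  # AST -> byte code
print(namespace["result"])  # 12
```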
Bingo! This is where Lisp comes in; it gives the domain experts the
power to express what they want, without requiring them to become
compiler construction experts. This is why Lisp is used by some
artificial intelligence researchers, biologists, linguists, musicians,
etc.

Speaking as a representative from the biology community, the
Lisp programmers are a minority and far behind C, Fortran, Perl,
and still behind Python, Tcl, and even Ruby.

*If* the domain expert is also an expert Lisp programmer then
what you say is true. It's been my experience that most domain
experts are not programmers -- most domains aren't programming.
Even in what I do, computational life sciences, most chemists
and biologists can do but a smattering of programming. That's
why they hire people like me. And I've found that objects and
functions are good enough to solve the problems in that domain;
from a CS point of view, it's usually pretty trivial.

Andrew
(e-mail address removed)
 

Ville Vainio

Andrew Dalke said:
It's not particularly clever; there's no real need for that. In
general, the preference is to be clear and understandable over
being clever. (The New Jersey approach, perhaps?)

Some people (academics) are paid for being clever. Others (engineers)
are paid for creating systems that work (in the wide meaning of the
word), in a timeframe that the company/client can afford.

In the for-fun area, by analogy, some people get the kick from
creating systems that work (be it a Linux distribution or a network
programming framework), and some from creating uber-3133t hacks in
order to impress their friends.

Macros provide billions of different ways to be "clever", so obviously
Lisp gives greater opportunity of billable hours for people who can
bill for clever stuff. I'm studying Graham's "On Lisp" as bed-time
reading ATM, and can also sympathize w/ people who use Lisp just for
the kicks.

Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater
productivity. Most bosses are more impressed with getting stuff done
fast than getting it done slowly, using gimmicks that would have given
you an A+ if it was a CS research project.
 

Espen Vestre

Not even Windows users use MS-Word to edit program code; this is a
completely irrelevant comparison.

I'm not sure you're right. Using Word's little brother WordPad is just
as weird, and we just had proof of that.
 

Pascal Costanza

Andrew said:
That's not Alex's argument. Python has the ability to do exactly
what you're saying (domain language -> AST -> Python code or AST ->
compiler). It's rarely needed (I've used it twice now in my six years
or so of Python), so why should a language cater to make that
easy at the expense of making frequent things harder?

Maybe you have only rarely used it because it is hard, and therefore
just think that you rarely need it. At least, this is my assessment of
what I have thought to be true before I switched to Lisp.
And that's a good thing? That means that everyone looking at
new Lisp code needs to understand the modifications to the syntax
and semantics. That may be appropriate for an insular organization,
but otherwise it makes it harder for others to understand any code.

That's true for any language. In any language you build new data
structures, classes, methods/functions/procedures, and everyone looking
at new code in any language needs to understand these new definitions.
There is no difference here _whatsoever_.

Modifying syntax and creating new language abstractions only _sounds_
scary, but these things are like any other programming activity that
requires care, provided the language you use supports them well.
Alex and others have responded to this argument many times. The
summary is that 1) in practice those who extend the language are most
often not the domain experts so the result doesn't correctly capture
the domain,

Are these non-experts any better off with just data structures and
functions?
2) because it's easy to do, different groups end up with
different, incompatible domain-specific modifications,

Do they also come up with different APIs? How is divergence of APIs
solved in practice? Can't you use the same solutions for macro libraries?
3) rarely
is that approach better than using OOP, HOF and other approaches,
where better is defined as more flexible, easier to read, more
succinct, etc.

Are you guessing, or is this based on actual experience?

BTW, macros are definitely more flexible and more succinct. The only
claim that I recall being made by Pythonistas is that macros make code
harder to read. The Python argument is that uniformity eases
readability; the Lisp argument is that a better match to the problem
domain eases readability. I think that...

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f=open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying to
solve? Do you really think it does not distract from the problem when
you first encounter that code and try to see the forest from the trees?

BTW, this is one of the typical uses for macros: When designing APIs,
you usually want to make sure that certain protocols are followed. For
the typical uses of your library you can provide high-level macros that
hide the details of your protocol.
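Python's later answer to exactly this pattern of protocol hiding is the context-manager protocol: the open/close bookkeeping is written once and hidden behind a with statement. A minimal sketch using contextlib, with a hypothetical open_buffer helper (invented for this example):

```python
from contextlib import contextmanager
from io import StringIO

@contextmanager
def open_buffer():
    buf = StringIO()
    try:
        yield buf          # the body of the 'with' runs here
    finally:
        buf.close()        # guaranteed cleanup, like an unwind-protect

with open_buffer() as f:
    f.write("hello")
    text = f.getvalue()    # read the contents before cleanup closes it
print(text)  # hello
```

As with the Lisp macro, the caller never writes the finally/close boilerplate and cannot forget it.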

Here is an arbitrary example that I have just picked from the
documentation for a Common Lisp library I have never used before:

(with-transaction
(insert-record :into [emp]
:attributes '(x y z)
:values '(a b c))
(update-records [emp]
:attributes [dept]
:values 50
:where [= [dept] 40])
(delete-records :from [emp]
:where [> [salary] 300000]))

(see
http://www.lispworks.com/reference/lw43/LWRM/html/lwref-460.htm#pgfId-889797
)

What do you think the with-transaction macro does? Do you need any more
information than that to understand the code?

BTW, note that the third line of this example is badly indented. Does
this make reading the code more difficult?



Here is why I think that Python is successful: it's because it favors
dynamic approaches over static approaches (wrt type system, and so on).
I think this is why languages like Ruby, Perl and PHP are also
successful. Languages like Java, C and C++ are very static, and I am
convinced that static approaches create more problems than they solve.

It's clear that Python is a very successful language, but I think this
fact is sometimes attributed to the wrong reasons. I don't think its
success is based on prettier syntax or uniformity. Neither gives you an
objectively measurable advantage.


Pascal
 

Pascal Costanza

Ville said:
Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater
productivity.

This is true for the things that are currently en vogue.
Most bosses are more impressed with getting stuff done
fast than getting it done slowly, using gimmicks that would have given
you an A+ if it was a CS research project.

I have implemented an AOP extension for Lisp that took about a weekend
to implement. The implementation is one page of Lisp code and is rather
efficient (wrt usability and performance) because of macros.

I have heard rumors that the development of an AOP extension for Python
would take considerably longer.
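For comparison, a rough sketch of "around advice" in Python using plain decorators; all names here are invented, and this is nowhere near a full AOP system, just the flavor of the idea:

```python
import functools

def around(advice):
    """Wrap a function so 'advice' runs around every call to it."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return advice(fn, *args, **kwargs)
        return wrapper
    return decorate

calls = []

def logging_advice(fn, *args, **kwargs):
    calls.append(fn.__name__)        # before-advice: record the call
    return fn(*args, **kwargs)       # proceed to the original function

@around(logging_advice)
def add(a, b):
    return a + b

print(add(2, 3), calls)  # 5 ['add']
```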


Pascal
 

prunesquallor

Ville Vainio said:
Some people (academics) are paid for being clever. Others
(engineers) are paid for creating systems that work (in the wide
meaning of the word), in a timeframe that the company/client can
afford.

And we all know that nothing made by academics actually works.
Conversely there is no need to be clever if you are an engineer.
Macros provide billions of different ways to be "clever", so obviously
Lisp gives greater opportunity of billable hours for people who can
bill for clever stuff. I'm studying Graham's "On Lisp" as bed-time
reading ATM, and can also sympathize w/ people who use Lisp just for
the kicks.

Shhh! Don't alert the PHB's to how we pad our hours!
Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater productivity.

For instance, a productivity study done by Erann Gat showed ... no
wait. Where was that productivity study that showed how far behind
Lisp was?
Most bosses are more impressed with getting stuff done fast than
getting it done slowly, using gimmicks that would have given you an
A+ if it was a CS research project.

Which is *precisely* the reason that bosses have adopted C++ over C.
 

Björn Lindberg

Alex Martelli said:
Pascal Costanza wrote:
...

Ah, what a wonderfully meaningful view that is.


I didn't say SMALL. Small or large, it's about alteration to the
syntax. Other lispers have posted (on several of this unending
multitude of threads, many but not all of which I've killfiled)
stating outright that there is no semantic you can only implement
with macros: that macros are ONLY to "make things pretty" for
given semantics. If you disagree with them, I suggest pistols at
ten paces, but it's up to you lispers of course -- as long as
you guys with your huge collective experience of macros stop saying
a million completely contradictory things about them and chastising
me because (due, you all keep claiming, to my lack of experience)
I don't agree with all of them, I'll be glad to debate this again.

Why is this so surprising? Maybe different lispers use macros for
different things, or see different advantages to them? What all have
in common though, is that they all consider macros a valuable and
important part of a programming language. What I have seen in this
thread are your (Alex) lengthy posts where you reiterate the same
uninformed views of macros over and over again. You seem to have made
your mind up already, even though you do not seem to have fully
understood Common Lisp macros yet. Am I wrong?
Till then, this is yet another thread that gets killfiled.

Why do you bother to post if you are not even going to read the
responses?

But, until then -- bye. And now, to killfile this thread too....

What is the point in initiating a subthread by an almost 300-line,
very opinionated post just to immediately killfile it?


Björn
 

Brian Kelley

Pascal said:
BTW, macros are definitely more flexible and more succinct. The only
claim that I recall being made by Pythonistas is that macros make code
harder to read. The Python argument is that uniformity eases
readability; the Lisp argument is that a better match to the problem
domain eases readability. I think that...

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f=open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying to
solve? Do you really think it does not distract from the problem when
you first encounter that code and try to see the forest from the trees?
Your two examples do completely different things and the second is
written rather poorly as f might not exist in the finally block.
It certainly won't if the file doesn't exist.

A better comparison would (*might*, I haven't used lisp in a while) be:

(with-open-file (f filename :direction :output :if-exists :supersede)
  (format f "Here are a couple~%of test data lines~%")) => NIL

if os.path.exists(filename):
    f = open(filename, 'w')
    print >> f, "Here are a couple of test data lines"

I think that the latter is easier to read and I don't have to worry
about f.close(), but that is just me. I don't know what with-open-file
does if filename actually can't be opened but you haven't specified what
to do in this case with your example so I won't dig deeper.
BTW, this is one of the typical uses for macros: When designing APIs,
you usually want to make sure that certain protocols are followed. For
the typical uses of your library you can provide high-level macros that
hide the details of your protocol.
I tend to use a model in this case. For example if I want to always
retrieve a writable stream even if a file isn't openable I just supply a
model function or method.

model.safeOpen(filename)
"""filename -> return a writable file if the file can be opened or a
StringIO buffer object otherwise"""

In this case what happens is explicit as it would be with a macro. Note
that I could have overloaded the built-in "open" function but I don't
really feel comfortable doing that. So far, I haven't encountered an
urgent need to use macros.
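Brian's hypothetical safeOpen can be sketched as a plain function; the fall-back-to-StringIO behavior below is taken from his docstring, and the filename used is deliberately unopenable:

```python
from io import StringIO

def safe_open(filename):
    """filename -> a writable file if the file can be opened,
    or a StringIO buffer object otherwise (per Brian's description)."""
    try:
        return open(filename, "w")
    except OSError:
        return StringIO()

# The parent directory does not exist, so we get the in-memory fallback.
f = safe_open("/nonexistent-dir/out.txt")
f.write("data goes somewhere writable either way\n")
print(type(f).__name__)  # StringIO
```

What happens is explicit at the call site, which is exactly the property Brian argues for.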
Here is an arbitrary example that I have just picked from the
documentation for a Common Lisp library I have never used before:

(with-transaction
  (insert-record :into [emp]
:attributes '(x y z)
                 :values '(a b c))
  (update-records [emp]
                  :attributes [dept]
                  :values 50
                  :where [= [dept] 40])
  (delete-records :from [emp]
                  :where [> [salary] 300000]))

(see
http://www.lispworks.com/reference/lw43/LWRM/html/lwref-460.htm#pgfId-889797
)

What do you think the with-transaction macro does? Do you need any more
information than that to understand the code?
Yep. Where's the database? I have to look at the specification you
provided to realize that it is provided by *default-database*. I also
am not aware from this macro whether the transaction is actually
committed or not and whether it rolls back if anything goes wrong. I
can *assume* this at my own peril but I, of course, had to look at the
documentation to be sure. Now this might be lisp-centric but I, being a
lowly scientist, couldn't use this macro without that piece of
knowledge. Of course, now that I know what the macro does I am free to
use it in the future. And yet, if I actually want to *deal* with errors
that occur in the macro besides just rolling back the transaction I
still need to catch the exceptions ( or write another macro :) )

The macros that you have supplied seem to deal with creating a standard
API for dealing with specific exceptions.

Again, I could create an explicit database model

model.transaction(commands)
"""(commands) Execute a list of sql commands in a transaction.
The transaction is rolled back if any of the commands fail
and the corresponding failed exception is raised"""
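A sketch of that hypothetical model.transaction, written here as a plain function over sqlite3 (the table and commands are invented for illustration):

```python
import sqlite3

def transaction(conn, commands):
    """Execute a list of SQL commands atomically: commit if all succeed,
    roll back and re-raise if any of them fails."""
    try:
        for cmd in commands:
            conn.execute(cmd)
        conn.commit()
    except sqlite3.Error:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (dept INTEGER, salary INTEGER)")
transaction(conn, [
    "INSERT INTO emp VALUES (40, 100)",
    "UPDATE emp SET dept = 50 WHERE dept = 40",
])
print(conn.execute("SELECT dept FROM emp").fetchone()[0])  # 50
```

As Brian notes, the commit/rollback behavior is stated explicitly in the docstring rather than implied by a macro name.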

BTW, note that the third line of this example is badly indented. Does
this make reading the code more difficult?
This is a red herring. Someone had to format this code to make it
suitably readable. I could add a red herring of my own, reformatting
this into an unreadable blob, but that would be rather childish on my
part and completely irrelevant.
Here is why I think that Python is successful: it's because it favors
dynamic approaches over static approaches (wrt type system, and so on).
I think this is why languages like Ruby, Perl and PHP are also
successful. Languages like Java, C and C++ are very static, and I am
convinced that static approaches create more problems than they solve.
We are in agreement here.
It's clear that Python is a very successful language, but I think this
fact is sometimes attributed to the wrong reasons. I don't think its
success is based on prettier syntax or uniformity. Neither give you an
objectively measurable advantage.
It all depends on your perspective. I think that I have limited brain
power for remembering certain operations. A case in point, I was
re-writing some documentation yesterday for the embedded metakit
database. Python uses a slice notation for list operations:

list[lo:hi] -> returns a list from index lo to index hi-1

The database had a function call select:

view.select(lo, hi) -> returns a list from index lo to index hi

While it seems minor, this caused me major grief in usage and I wish it
had been uniform with the way python selects ranges. Now I have two
things to remember. I can objectively measure the difference in this
case. The difference is two hours of debugging because of a lack of
uniformity. Now, I brought this on myself by not reading the
documentation closely enough and missing the word "(inclusive)" so I
can't gripe to much. I will just say that the documentation now clearly
shows this lack of uniformity from the standard pythonic way. Of course
we could talk about the "should indexed arrays start with 0 or 1?" but I
respect that there are different desired levels of uniformity. Mine is
probably a little higher than yours :)
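The off-by-one Brian describes fits in a few lines; select below is a stand-in for the metakit call, written the way his documentation describes it (inclusive at both ends):

```python
data = [10, 20, 30, 40, 50]

# Python slices are half-open: indices lo..hi-1, hi excluded.
print(data[1:3])           # [20, 30]

def select(seq, lo, hi):
    """Metakit-style inclusive range: indices lo..hi, hi included."""
    return seq[lo:hi + 1]

print(select(data, 1, 3))  # [20, 30, 40]
```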

Note, that I have had similar experiences in lisp where macros that I
expected to work a certain way, as they were based on common CLHS
macros, didn't. For example, you wouldn't expect (with-open-file ...)
to behave fundamentally different from (with-open-stream ...) and would
probably be annoyed if they did.

In case anyone made it this far, I'm not dissing lisp or trying to
promote python. Both languages are remarkably similar. Macros are one
of the constructs that make lisp lisp, and indentation is one of the
things that make python python. Macros could be extremely useful in
python, and perhaps to someone who uses them regularly their omission is
a huge wart. Having used macros in the past, all I can say is that for
*MY* programming style, I can't say that I miss them that much and have
given a couple of examples of why not.
Brian Kelley
Whitehead Institute for Biomedical Research
 

Jock Cooper

Pascal Costanza said:
Andrew Dalke wrote:

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f=open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying
to solve? Do you really think it does not distract from the problem
when you first encounter that code and try to see the forest from the
trees?
snip

Can you implement with-open-file as a function? If you could how would
it compare to the macro version? It would look something like:

(defun with-open-file (the-func &rest open-args)
  (let ((stream (apply #'open open-args)))
    (unwind-protect
        (funcall the-func stream)
      (close stream))))

(defun my-func (stream)
  ... operate on stream ...
  )

(defun do-stuff ()
  (with-open-file #'my-func "somefile" :direction :input))

One of the important differences is that MY-FUNC is lexically isolated
from the environment where WITH-OPEN-FILE appears. The macro version
does not suffer this; and it is often convenient for the code block
in the WITH-OPEN-FILE to access that environment.
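For what it's worth, the same higher-order-function shape in Python does not suffer the isolation problem Jock describes, because a nested function closes over its enclosing scope; call_with_open_file is an invented name for the HOF version:

```python
import os
import tempfile

def call_with_open_file(func, filename, mode="r"):
    """HOF version of the macro: open, call, and always close."""
    stream = open(filename, mode)
    try:
        return func(stream)      # like FUNCALL on the passed-in function
    finally:
        stream.close()           # like the UNWIND-PROTECT cleanup

path = os.path.join(tempfile.mkdtemp(), "somefile")
greeting = "hello"               # part of the surrounding environment

def write_it(stream):
    stream.write(greeting)       # a closure: it still sees 'greeting'

call_with_open_file(write_it, path, "w")
print(call_with_open_file(lambda s: s.read(), path))  # hello
```

In Lisp the same trick works with an inline lambda, as Matthew Danish points out below in this thread's own terms.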
 

Kaz Kylheku

Ville Vainio said:
Some people (academics) are paid for being clever. Others (engineers)
are paid for creating systems that work (in the wide meaning of the
word), in a timeframe that the company/client can afford.

[ snip ]
^^^^^^^^

Tee hee! :)
 

Kaz Kylheku

Andrew Dalke said:
Kaz Kylheku:

I think you're arguing naming preferences here. That's fine;
Alex's point remains unchanged. In a dynamic language like
Python, parsing a domain specific language can be done at
run-time, parsed, converted to a Python parse tree, and
compiled to Python byte codes, just like what you want and
in the form you want it.

Ah, but in Lisp, this is commonly done at *compile* time. Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers. Everything is translated and
compiled together. Some expression in macro language B can appear in
an utterance of macro language A. Lexical references across these
nestings are transparent:

(language-a
  ... establish some local variable foo ...
  (language-b
    ... reference to local variable foo set up in language-a!
    ))

The Lisp parse tree is actually just normal Lisp code. There is no
special target language for the compiler; it understands normal Lisp,
and that Lisp is very conveniently manipulated by the large library of
list processing gadgets. No special representation or API is required.

Do people write any significant amount of code in the Python parse
tree syntax? Can you use that syntax in a Python source file and have
it processed together with normal code?

What is Python's equivalent to the backquote syntax, if I want to put
some variant pieces into a parse tree template?
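There is no backquote in Python; the closest workaround is to parse a template string and splice AST nodes in where placeholder names appear. fill_template below is invented for illustration, not a standard facility:

```python
import ast

def fill_template(template, **parts):
    """Parse 'template' as an expression and replace any Name node whose
    identifier matches a keyword argument with the supplied AST node."""
    tree = ast.parse(template, mode="eval")
    class Splice(ast.NodeTransformer):
        def visit_Name(self, node):
            return parts.get(node.id, node)   # splice in the variant piece
    tree = Splice().visit(tree)
    ast.fix_missing_locations(tree)
    return tree

# The "variant piece", analogous to what backquote's comma would insert.
piece = ast.parse("2 * 21", mode="eval").body
tree = fill_template("100 + HOLE", HOLE=piece)
print(eval(compile(tree, "<template>", "eval")))  # 142
```

It works, but compared with Lisp's `(+ 100 ,piece) it is clearly a detour through strings and visitor classes rather than a native notation.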
 

Matthew Danish

Your two examples do completely different things and the second is
written rather poorly as f might not exist in the finally block.
It certainly won't if the file doesn't exist.

A better comparison would (*might*, I haven't used lisp in a while) be:

(with-open-file (f filename :direction :output :if-exists :supersede)
  (format f "Here are a couple~%of test data lines~%")) => NIL

if os.path.exists(filename):
    f = open(filename, 'w')
    print >> f, "Here are a couple of test data lines"

How are these in any way equivalent? Pascal posted his example with
try...finally and f.close () for a specific reason. In your Python
example, the file is not closed until presumably the GC collects the
descriptor and runs some finalizer (if that is even the case). This is
much different from the Lisp example which guarantees the closing of the
file when the body of the WITH-OPEN-FILE is exited. In addition:

``When control leaves the body, either normally or abnormally (such as
by use of throw), the file is automatically closed. If a new output file
is being written, and control leaves abnormally, the file is aborted and
the file system is left, so far as possible, as if the file had never
been opened.''

http://www.lispworks.com/reference/HyperSpec/Body/m_w_open.htm#with-open-file

So in fact, you pointed out a bug in Pascal's Python example, and one
that is easy to make. All this error-prone code is abstracted away by
WITH-OPEN-FILE in Lisp.
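The bug-free shape of the try/finally idiom Matthew alludes to opens the file before the try, so the finally clause never touches a name that was never bound; a small self-contained sketch (the file and its contents are invented here):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as out:      # create a file to read back
    out.write("line\n")

f = open(path)   # if open() raises, there is no 'f' to close yet
try:
    text = f.read()
finally:
    f.close()    # runs on any exit, echoing WITH-OPEN-FILE's guarantee
print(text.strip())  # line
```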
 

Matthew Danish

One of the important differences is that MY-FUNC is lexically isolated
from the environment where WITH-OPEN-FILE appears. The macro version
does not suffer this; and it is often convenient for the code block
in the WITH-OPEN-FILE to access that environment.

(call-with-open-file
  #'(lambda (stream)
      ...)
  "somefile"
  :direction :input)

WITH-OPEN-FILE happens to be one of those macros which doesn't require
compile-time computation, but rather provides a convenient interface to
the same functionality as above.
 

Andrew Dalke

Kaz Kylheku:
Ah, but in Lisp, this is commonly done at *compile* time.

Compile vs. runtime is an implementation issue. Doesn't
change expressive power, only performance. Type inferencing
suggests that there are other ways to get speed-ups from
dynamic languages.
Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers.

We have decidedly different definitions of what a "domain-specific
language" means. To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)

Yes, one can support Python in Lisp as a reader macro -- but
it isn't done because Lispers would just write the Python out
as an S-exp. But then it wouldn't be Python, because the domain
language *includes*domain*syntax*.

In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.
Do people write any significant amount of code in the
Python parse tree syntax?

No. First, it isn't handled as a syntax, it's handled as
operations on a tree data structure. Second -- and
this point has been made several times -- that style of
programming isn't often needed, so there of course isn't
a "significant amount."
Can you use that syntax in a Python source file and have
it processed together with normal code?

Did you look at my example doing just that? I built
an AST for Python and converted it into a normal function.
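That kind of AST-to-function conversion can be sketched with the modern `ast` module (the thread predates it and used the old `compiler` module, since removed; the function names and the particular transform below are my own illustration):

```python
import ast

# Parse a template function, edit the tree (swap the constant 2 for
# 10), then compile the modified tree into an ordinary function.
tree = ast.parse("def scale(x):\n    return x * 2")

class SwapConstant(ast.NodeTransformer):
    def visit_Constant(self, node):
        if node.value == 2:
            return ast.copy_location(ast.Constant(value=10), node)
        return node

tree = SwapConstant().visit(tree)
ast.fix_missing_locations(tree)

namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
scale = namespace["scale"]   # a normal function built from the tree
```

After the transform, `scale(3)` returns 30, not 6.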
What is Python's equivalent to the backquote syntax, if I
want to put some variant pieces into a parse tree template?

There isn't. But then there isn't a need. The question isn't
"how do I do this construct that I expect in Lisp?" it's "how
do I solve this problem?" There are other ways to solve
that problem than creating a "parse tree template" and to
date there have been few cases where the alternatives were
significantly worse -- even in the case of translating a domain
language into local syntax, which is a Lisp specialty, it's only
about twice as long for Python as for Lisp and definitely
not "impossible" like you claimed. Python is definitely worse
for doing research in new programming styles, but then
again that's a small part of what most programmers need,
and an even smaller part of what most non-professional
programmers need. (Eg, my science work, from a computer
science viewpoint, is dead boring.)

There's very little evidence that Lisp is significantly better
than Python (or vice versa) for solving most problems.
It's close enough that it's a judgement call to decide which
is more appropriate.

But that's a Pythonic answer, which acknowledges that
various languages are better for a given domain and that it's
relatively easy to use C/Java bindings, shared memory, sockets,
etc to make them work together, and not a Lispish answer,
which insists that Lisps are the best and only languages
people should consider. (Broad brush, I know.)

Andrew
 

Paul Rubin

Andrew Dalke said:
In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.

Actually in my experience that hasn't been a problem. For example, I
wrote a program that crunched EDI documents. There were hundreds of
different document types each with its own rudimentary syntax. We had
a big 3-ring binder containing a printed copy of the ANSI standard
that had a semi-formal English description of the syntax of each of these
documents. My program had an embedded Lisp interpreter and worked by
letting you give it the syntax of each document type as a Lisp
S-expression. The stuff in the S-expression followed the document
description in the printed EDI standard pretty closely. I typed in
the first few syntax specs and then was able to hand off the rest to a
non-programmer, who was able to see pretty quickly how the
S-expressions worked and code the rest of them. I think that the
version of the system we actually shipped to customers still had the
S-expression syntax specs buried in its guts somewhere, but my memory
about that is hazy.

The instructions about how to process specific documents were also
entered as Lisp programs at first. That let us very quickly determine
the semantic features we wanted in processing scripts, even though we
knew that our customers wouldn't tolerate Lisp. Once we had the
semantics figured out, we were able to design a language that
superficially looked like an unholy marriage of Basic and Cobol. We
wrote a Yacc script that parsed that language and built up
S-expressions in memory, and then eval'd them with the Lisp
interpreter.

Peter Norvig talks about this some in his Python/Lisp comparison page:

http://www.norvig.com/python-lisp.html

Basically the Python AST structure is awful, but you arrange your life
so you don't have to deal with it very much.
There's very little evidence that Lisp is significantly better
than Python (or vice versa) for solving most problems.
It's close enough that it's a judgement call to decide which
is more appropriate.

There's one area where Lisp absolutely rules, which is providing a way
to write down complicated data structures without much fuss. These
days, XML is used as a Bizarro cousin of Lisp S-expressions in all
kinds of applications for similar purposes. The EDI program I
mentioned earlier was not originally intended to have an embedded
interpreter. I typed in some EDI syntax specs as S-expressions just
to have something to work with. I then wrote something like a Lisp
reader to read the S-expressions. I found myself then writing a Lisp
printer to debug the Lisp reader. Having a reader and printer it was
then entirely natural to add an eval and gc. The result became a
fairly important product in the EDI world for a time.
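The reader Rubin describes is small to build in any language; a toy sketch in Python (entirely my own, handling only atoms, integers, and nested lists, with no strings, quoting, or error recovery):

```python
import re

# A toy S-expression reader: tokenize, then recursively build nested
# Python lists -- the same shape a Lisp reader would hand back.
def tokenize(text):
    return re.findall(r'\(|\)|[^\s()]+', text)

def read(tokens):
    tok = tokens.pop(0)
    if tok == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(read(tokens))
        tokens.pop(0)  # discard the closing ')'
        return lst
    return int(tok) if tok.lstrip('-').isdigit() else tok

def parse_sexp(text):
    return read(tokenize(text))
```

A made-up spec like `(doc (segment ISA) (loop 5))` comes back as `['doc', ['segment', 'ISA'], ['loop', 5]]`, which is why the printer, and eventually the eval, follow so naturally.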
 

Ville Vainio


Yes, I have an account at a university. I prefer to use it instead of
that of my ISP (because ISP's come and go) or my work account (to
avoid associating my company with any of my opinions, which I think
should be a standard policy.. also, they don't provide web publishing
space for obvious reasons).
 

Andrew Dalke

Paul Rubin:
Actually in my experience that hasn't been a problem. For example, I
wrote a program that crunched EDI documents.

Ahh, you're right. I tend to omit business-specific languages
when I think about programming.

I conjecture it would have been about the same amount of work to
do it in Python, based solely on your description, but I defer to
you on it.
superficially looked like an unholy marriage of Basic and Cobol.

heh-heh :)
Peter Norvig talks about this some in his Python/Lisp comparison page:

http://www.norvig.com/python-lisp.html

Basically the Python AST structure is awful, but you arrange your life
so you don't have to deal with it very much.

His description starts

] Python does not have macros. Python does have access to the
] abstract syntax tree of programs, but this is not for the faint of
] heart. On the plus side, the modules are easy to understand,
] and with five minutes and five lines of code I was able to get this:
] >>> parse("2 + 2")
] ['eval_input', ['testlist', ['test', ['and_test', ['not_test', ['comparison',
] ['expr', ['xor_expr', ['and_expr', ['shift_expr', ['arith_expr', ['term',
] ['factor', ['power', ['atom', [2, '2']]]]], [14, '+'], ['term', ['factor',
] ['power', ['atom', [2, '2']]]]]]]]]]]]]]], [4, ''], [0, '']]

I completely agree. Manipulating the Python AST and the parse tree
is not for the faint of heart. However, he's not working with
the AST there; that looks like:

>>> import compiler
>>> compiler.parse("2+2")
Module(None, Stmt([Discard(Add((Const(2), Const(2))))]))

The code I wrote uses the AST and, while clunky, isn't as bad
as his example suggests.
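For comparison, here is what the same expression looks like with today's `ast` module (the `parser` and `compiler` modules from this era were later removed from Python; this sketch is mine, not part of the thread):

```python
import ast

# The abstract syntax tree for "2 + 2" is compact compared with the
# deeply nested concrete parse tree quoted above: a single BinOp node
# with an Add operator and two Constant operands.
print(ast.dump(ast.parse("2 + 2", mode="eval")))
```

The dump shows one `BinOp(... op=Add() ...)` with two `Constant` children, much closer to the `compiler.parse` output than to the `parse("2 + 2")` tower.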
There's one area where Lisp absolutely rules, which is providing a way
to write down complicated data structures without much fuss. These
days, XML is used as a Bizarro cousin of Lisp S-expressions in all
kinds of applications for similar purposes.

That's an old debate. Here's a counter-response
http://www.prescod.net/xml/sexprs.html

Andrew
 
