merits of Lisp vs Python


Raffael Cavallaro

After all,
Haskell and OCaml are more popular than any given Lisp variant with similar
features (e.g. pattern matching), AFAIK.

What doublespeak!

Haskell and OCaml are more popular than any Lisp library that tries to
imitate Haskell and OCaml. So what! This only speaks to the relative
unpopularity of imitating these features of Haskell and OCaml when you
already have Lisp.
 

Jon Harrop

Raffael said:
That's the whole point which you keep missing - that a programming
language is expressive precisely to the extent that it allows you to
express the solution in the *programmer's* chosen form, not the
paradigm imposed by the language.

That is the ideal, yes.

In practice, different languages encourage you to use different solutions.
For example, when faced with a problem best solved using pattern matching
in Lisp, most Lisp programmers would reinvent an ad-hoc, informally
specified and bug-ridden pattern matcher of their own. Do you not think
that Lispers typically "compile" their high-level algorithms into low-level
Lisp constructs like COND or IF?
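
Concretely, I mean something like this toy sketch (EVAL-EXPR and the
expression shapes are made up for illustration): structural dispatch
hand-compiled into COND tests and accessors, where a language with
pattern matching would state the shapes declaratively.

;; Ad-hoc "pattern matching" over a tagged list, written directly
;; in terms of COND, CONSP, and list accessors.
(defun eval-expr (e)
  (cond ((numberp e) e)
        ((and (consp e) (eq (first e) '+))
         (+ (eval-expr (second e)) (eval-expr (third e))))
        ((and (consp e) (eq (first e) '*))
         (* (eval-expr (second e)) (eval-expr (third e))))
        (t (error "unknown expression: ~S" e))))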
You look down your nose at Cells, but if that's the way Kenny conceived
of the problem - as a graph of changing state - why should he be forced
to reconceptualize it according to someone else's notion of programming
correctness (be that pure functional or any other paradigm)?

Kenny isn't being forced to do anything.
By asking this question you've implicitly admitted that to solve it *as
he thought of it* in a pure functional language would require
reconceptualizing it (i.e., the aforementioned "jumping through
hoops").

You are saying that solving it as he solved it requires a different
solution. How does that make Lisp any different to the next language?
We don't want to reconceptualize everything according to a
particular paradigm; we want the flexibility to write the solution to
the problem in the terms in which we think and talk about it, not on the
procrustean bed of pure functional semantics.

Of the programming paradigms that can be implemented in Lisp, Lisp doesn't
exactly make any of them easy. Moreover, every time you pick a random Lisp
library off the wall to implement some feature already found in most other
languages, you fragment the already tiny user base into even smaller groups.
 

Jon Harrop

Raffael said:
Haskell and OCaml are more popular than any Lisp library that tries to
imitate Haskell and OCaml.

Implementing pattern matching does not mean imitating Haskell or OCaml.
This only speaks to the relative
unpopularity of imitating these features of Haskell and OCaml when you
already have Lisp.

On the contrary, Lisp predates the features and, since their inception, most
Lisp programmers have moved on to newer languages.
 

Raffael Cavallaro

For example, when faced with a problem best solved using pattern matching
in Lisp, most Lisp programmers would reinvent an ad-hoc, informally
specified and bug-ridden pattern matcher of their own.

No, I think most of us would use an existing Lisp pattern matcher, like
cl-unification.
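
Something along these lines - a rough sketch, and I'm assuming
cl-unification's UNIFY:UNIFY and UNIFY:FIND-VARIABLE-VALUE here, with
its ?-prefixed logic variables:

;; Match a shape and pull out the bindings with an existing
;; library instead of hand-rolling a matcher.
(let ((env (unify:unify '(move ?from ?to) '(move a b))))
  (list (unify:find-variable-value '?from env)
        (unify:find-variable-value '?to env)))
;; => (A B)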

You are saying that solving it as he solved it requires a different
solution. How does that make Lisp any different to the next language?

Give Kenny some credit for not being a complete idiot. Cells was
originally designed to keep UI elements in sync with an internal
application model. The UI domain is I/O, i.e., a side effect. To do
this lazily invites situations where an inherently unpredictable user
action forces a complex series of constraints to be computed before
anything can be displayed to the user, so the user must wait while the
lazy system catches up. To do this eagerly means that at any time, any
unpredictable user action will cause already computed state to be
displayed, because everything has been kept up to date automatically
all along.

I'm saying that he conceived of the problem in the most natural way -
state with mutations - and implemented it that way. He was not forced
by his language to reconceive it purely functionally, let Haskell
impose its default lazy semantics, and then hunt for an escape hatch
from that default laziness, since what he really wants is eager
evaluation of side effects (i.e., I/O - syncing of model state with the
GUI display).
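
A toy version of the idea, just to fix intuitions (this is a sketch of
eager dataflow in general, not Cells' actual API):

;; An eager "cell": every observer runs as soon as the value
;; changes, so displayed state is always already computed.
(defstruct cell value observers)

(defun cell-set (cell new)
  (setf (cell-value cell) new)
  (dolist (fn (cell-observers cell))   ; eager propagation
    (funcall fn new)))

;; Keep a "display" in sync with a model value:
(let ((model (make-cell :value 0)))
  (push (lambda (v) (format t "display: ~A~%" v))
        (cell-observers model))
  (cell-set model 42))   ; prints immediately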

People habitually think of the world as state and mutations of state.
We've been doing so for a minimum of tens of thousands of years, quite
possibly a million or more. People are good at thinking about mutation.
Maybe this should tell us something about the desirability of certain
programming language semantics and that referential transparency is a
bit overrated.
Of the programming paradigms that can be implemented in Lisp, Lisp doesn't
exactly make any of them easy. Moreover, every time you pick a random Lisp
library off the wall to implement some feature already found in most other
languages, you fragment the already tiny user base into even smaller groups.

Not all Lisp programmers will be solving the same sorts of problems as
each other, so naturally they'll be using different sets of libraries.
This use of different sets of libraries for different tasks doesn't
constitute language fragmentation.
 

Raffael Cavallaro

Implementing pattern matching does not mean imitating Haskell or OCaml.

We were explicitly comparing Lisp with Haskell and OCaml. Adding
features built into Haskell and OCaml but not present in ANSI Common
Lisp would therefore constitute imitating Haskell and OCaml in the
context of this discussion. Surely you don't think I'm unaware of the
fact that Haskell and OCaml weren't the first languages to use some
form of pattern matching. I actually used SNOBOL once upon a time.
 

Slawomir Nowaczyk

On Sat, 16 Dec 2006 14:05:06 -0500

#> And there is something that is missing here in arguing about computer
#> language notations in relationship to human language readability, or
#> correspondence to spoken language. I'm not writing code for another
#> human, I'm writing code for a machine. Often, the optimum expression
#> of a mathematical concept for a machine is relatively baroque
#> compared to the optimum expression for a human being.

Well, the memorable quote from "Structure and Interpretation of Computer
Programs" states that "Programs should be written for people to read,
and only incidentally for machines to execute."

--
Best wishes,
Slawomir Nowaczyk
( (e-mail address removed) )

If at first you do succeed, try not to look astonished.
 

Jon Harrop

Kaz said:
The removal of the need for manual object lifetime computation does
not cause a whole class of useful programs to be rejected.

Sometimes you must be able to guarantee that a resource has been disposed
before you can continue. That is why we have finalisers in OCaml and
dispose in .NET (to manage external resources).
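
In Lisp terms, the deterministic idiom for this is UNWIND-PROTECT
(which WITH-OPEN-FILE wraps) - a sketch, with a made-up file name:

;; Guarantee the stream is closed before control moves on,
;; without waiting for the GC to finalise it.
(let ((stream (open "/tmp/scratch.txt" :direction :output
                    :if-exists :supersede)))
  (unwind-protect
       (write-line "flushed and closed deterministically" stream)
    (close stream)))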
In fact, all previously correct programs continue to work as before,

Removing explicit (external) resource deallocation will break some programs.
and in addition, some hitherto incorrect programs become correct.
That's an increase in power: new programs are possible without losing
the old ones.

It is a trade-off. Aside from managing external resources, GC has
performance implications that affect the design decisions of real-time
applications.
Whereas programs can't be made to conform to the pure functional paradigm
by adjusting the semantics of some API function. Programs which don't
conform have to be rejected,

You can rephrase any impure program as a pure program (Turing).
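
For example, an impure counter can be rephrased by threading the state
explicitly (a toy sketch of my own, not from anyone's code):

;; Impure: hidden mutable state.
(defvar *counter* 0)
(defun bump () (incf *counter*))

;; Pure rephrasing: state in, new state out.
(defun bump-pure (counter) (1+ counter))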
Right. GC gets rid of /program errors/. Pure functional programming
gets rid of /programs/.

Pure functional programming does not get rid of any programs.

Pure functional programming does get rid of program errors, e.g. concurrency
issues.
Note that changing the storage liberation request from an imperative to
a hint isn't the removal of a feature. It's the /addition/ of a
feature. The new feature is that objects can still be reliably used
after the implementation was advised by the program that they are no
longer needed. Programs which do this are no longer buggy. Another new
feature is that programs can fail to advise the implementation that
some objects are no longer needed, without causing a leak, so these
programs are no longer buggy. The pool of non-buggy programs has
increased without anything being rejected.

You have rejected all programs that rely upon a resource being disposed
before a certain point.
 

Paul Rubin

Kaz Kylheku said:
Incorrect, I believe. The above is like saying Lisp's lack of
optional manual storage allocation and machine pointers makes Lisp
less powerful.

That is true. By itself, that feature makes Lisp less powerful for
real-world software dev, which is why we have implementation-defined
escape hatches for that sort of thing.
...
This is a bad analogy to the bondage-and-discipline of purely
functional languages. [/] The removal of the need for manual object
lifetime computation does not cause a whole class of useful programs
to be rejected.

Did you just say two conflicting things? It's that very rejection
that necessitates the escape hatches.
In fact, all previously correct programs continue to work as before,
and in addition, some hitherto incorrect programs become correct.
That's an increase in power: new programs are possible without losing
the old ones.

There's more to power than making more programs possible. We also
want to be able to distinguish correct programs from incorrect ones.
Lisp has the power to eliminate a large class of pointer-related
errors that are common in C programs, so Lisp is more powerful than C
in that regard. Increasing the number of programs one can write in
the unfounded hope that they might be correct is just one way to
increase power. You can sometimes do that by adding features to the
language. Increasing the number of programs you can write that are
demonstrably free of large classes of errors is another way to
increase power. You can sometimes do that by REMOVING features.
That's what the Lisp holdouts don't seem to understand.
Right. GC gets rid of /program errors/. Pure functional programming
gets rid of /programs/.

GC also gets rid of programs. There are programs you can write in C
but not in Lisp, like device drivers that poke specific machine
addresses.
/Pure/ functional programming isn't about adding the feature of
functional programming. It's about eliminating other features which
are not functional programming.

It seems to me that Haskell does some mumbo-jumbo to be purely
functional only in some theoretical sense. In practice, its purity is
enforced only in functions with a certain class of type signatures
(i.e. most functions). You can write impure code when needed, by
adding certain monads to the type signature of your function. The
point is Haskell's static type system can then tell exactly which
parts of your code are pure and which are impure. If you do something
impure in a function with the wrong signature, you get a compile time
error. You can choose to write in a style that separates pure from
impure functional code in a Lisp program, but the compiler can't check
it for you. If you get it wrong and call an impure function from your
"pure" code (say in a lock-free concurrent program), all kinds of
obscure bugs can result, like pointer errors in a C program.
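
For instance (a contrived sketch of mine), nothing in Lisp stops the
"pure" function here from quietly calling the impure one, and no
compiler will flag it:

(defvar *hits* 0)

(defun record-hit ()        ; impure: mutates a global
  (incf *hits*))

(defun score (xs)           ; intended to be pure...
  (record-hit)              ; ...but nothing enforces it
  (reduce #'+ xs))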
 

xscottg

Paul said:
[...] There are programs you can write in C
but not in Lisp, like device drivers that poke specific machine
addresses.

I assume you meant Common Lisp, but there isn't really any
reason you couldn't

(poke destination (peek source))

in some version of Lisp that was meant for writing device drivers
(perhaps under a Lisp machine or something).

SIOD actually has (%%% memref address) for peek.
 

Bill Atkins

Paul Rubin said:
There's more to power than making more programs possible. We also
want to be able to distinguish correct programs from incorrect ones.
Lisp has the power to eliminate a large class of pointer-related
errors that are common in C programs, so Lisp is more powerful than C
in that regard. Increasing the number of programs one can write in
the unfounded hope that they might be correct is just one way to
increase power. You can sometimes do that by adding features to the
language. Increasing the number of programs you can write that are
demonstrably free of large classes of errors is another way to
increase power. You can sometimes do that by REMOVING features.
That's what the Lisp holdouts don't seem to understand.


GC also gets rid of programs. There are programs you can write in C
but not in Lisp, like device drivers that poke specific machine
addresses.

I'm sure this would be news to the people who wrote the operating
system for the Lisp machine.

What makes you think that a Lisp implementation couldn't provide this?
 

Paul Rubin

I assume you meant Common Lisp, but there isn't really any
reason you couldn't

(poke destination (peek source))

That breaks the reliability of GC. I'd say you're no longer writing
in Lisp if you use something like that. Writing in this "augmented
Lisp" can be ok if well-localized and done carefully, but you no
longer have the guarantees that you get from unaugmented Lisp. By
adding one feature you've removed another.
 

Paul Rubin

Bill Atkins said:
I'm sure this would be news to the people who wrote the operating
system for the Lisp machine.

That stuff is written in a special dialect of Lisp that doesn't have
regular Lisp semantics and doesn't have the usual Lisp functions,
IIRC. I think maybe you can't even use "cons". But my Orangenual is
currently in storage so I can't easily check.

Anyway I'm not willing to use "Lisp" to describe every language whose
surface syntax is S-expressions. That, as JWZ put it in another
context, is like trying to build a bookcase out of mashed potatoes.
Lisp means Common Lisp as defined by the ANSI standard. Otherwise all
languages are equally powerful as long as they have a way to inline
assembly code.
 

Bill Atkins

Paul Rubin said:
That breaks the reliability of GC. I'd say you're no longer writing
in Lisp if you use something like that. Writing in this "augmented
Lisp" can be ok if well-localized and done carefully, but you no
longer have the guarantees that you get from unaugmented Lisp. By
adding one feature you've removed another.

Whatever do you mean? The portion of memory used for memory-mapped
registers is simply excluded from GC; everything else works as normal.
All modern Lisps (yes, *Common* Lisps) support a foreign-function
interface to talk to C libraries. Data involved with these kinds of
interface is ignored by the GC, for obvious reasons. Do you claim
that these implementations are not truly Lisps?

--
There are three doors. Behind one is a tiger. Behind another: the
Truth. The third is a closet... choose wisely.

E-mail me at:
(remove-if (lambda (c) (find c ";:-")) "a;t:k-;n-w@r;p:i-.:e-d:u;")
 

xscottg

Paul said:
That breaks the reliability of GC. I'd say you're no longer writing
in Lisp if you use something like that. Writing in this "augmented
Lisp" can be ok if well-localized and done carefully, but you no
longer have the guarantees that you get from unaugmented Lisp. By
adding one feature you've removed another.

I don't agree. The addresses (mapped registers or DMA or whatever) you
peek and poke in a device driver aren't going to be managed by the
garbage collector.

Even regarding interrupts, I don't see a problem without a solution:

(with-interrupts-and-garbage-collection-disabled
  (poke destination (peek source)))

You could even allocate memory (cons cells for instance) with
interrupts disabled, but it would be considered the same bad practice
that it would be if you did that in C.
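
The guard itself could be a two-line macro over whatever the
implementation provides (a sketch; WITHOUT-INTERRUPTS and WITHOUT-GCING
are stand-ins for implementation-specific primitives, e.g. SBCL's
SB-SYS:WITHOUT-INTERRUPTS and SB-SYS:WITHOUT-GCING):

;; Run BODY with neither interrupts nor GC able to intervene.
(defmacro with-interrupts-and-garbage-collection-disabled (&body body)
  `(without-interrupts
     (without-gcing ,@body)))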

Cheers,
-Scott
 

Paul Rubin

Bill Atkins said:
Whatever do you mean? The portion of memory used for memory-mapped
registers is simply excluded from GC; everything else works as normal.

Well ok, if the peek and poke functions validate the addresses.
All modern Lisps (yes, *Common* Lisps) support a foreign-function
interface to talk to C libraries. Data involved with these kinds of
interface is ignored by the GC, for obvious reasons. Do you claim
that these implementations are not truly Lisps?

I think usually those objects are simply treated as opaque by the GC
and the contents are inaccessible except through FFI calls. You can't
have the Lisp code running amuck trampling things through naked
pointers. Obviously the foreign function can trample things but it's
not written in Lisp.
 

Paul Rubin

Even regarding interrupts, I don't see a problem without a solution:
(with-interrupts-and-garbage-collection-disabled
  (poke destination (peek source)))

It's not just GC or interrupts, it's the possibility of clobbering the
Lisp heap. If the peek/poke addresses are restricted to some i/o
region as Bill suggests, then the above approach may be reasonable.
 

xscottg

Paul said:
It's not just GC or interrupts, it's the possibility of clobbering the
Lisp heap. If the peek/poke addresses are restricted to some i/o
region as Bill suggests, then the above approach may be reasonable.

So don't (poke (random) value). That would be obvious to anyone
capable of writing a device driver in C or Lisp or Oberon or ....

Cheers,
-Scott
 

Paul Rubin

So don't (poke (random) value). That would be obvious to anyone
capable of writing a device driver in C or Lisp or Oberon or ....

Similarly in C programs, don't do

*random = 0;

Avoiding that is easier said than done. C programs suffer endless
bugs of that type.
 

John Thingstad

Similarly in C programs, don't do

*random = 0;

Avoiding that is easier said than done. C programs suffer endless
bugs of that type.

Don't know where you get that idea.
I have used a bounds checker in C++ since 1990.
If I set a pointer to a space on the heap that isn't allocated,
I get an error. If I forget to deallocate it, it warns me when the
program terminates, and I can jump straight to the place in the file
where the memory was allocated. Admittedly it would be a great problem
if such tools weren't available, but that hasn't been the case in years.
Much worse are buffer overflow errors - functions that don't check
buffer constraints. But most modern compilers can check for that too.
Want to overwrite the return stack? A cookie at the end catches that
as well.
As for old libraries like string(s), modern replacements are available
that check constraints. For that matter, C++'s type checks are much
stricter than C's.
In fact it hasn't been much of a problem for a long time.

(Debugging template errors, now that is a pain..)

Incremental linkers and fast compile times also make incremental
development possible.
 
