BIG successes of Lisp (was ...)


mike420

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post).

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while; see the sketch after point c)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.
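
To make the full-collection guess in (a) concrete, here is a minimal
sketch using Python's own generational collector - just to show that a
full pass costs more than a young-generation pass; a Lisp runtime
differs in the details:

    import gc
    import time

    # Python's collector has three generations; objects that survive a
    # young collection get promoted to an older one.
    print(gc.get_count())      # pending objects per generation
    print(gc.get_threshold())  # collection thresholds, e.g. (700, 10, 10)

    # Build lots of long-lived cyclic garbage so a full sweep has work to do.
    junk = []
    for _ in range(200_000):
        a, b = [], []
        a.append(b)
        b.append(a)        # reference cycle: only the cycle collector frees it
        junk.append(a)
    junk = None            # the cycles are now garbage, mostly in the old generation

    start = time.perf_counter()
    gc.collect(0)          # young-generation pass: cheap, frees little here
    print('gen0:', time.perf_counter() - start)

    start = time.perf_counter()
    gc.collect(2)          # full collection: walks every tracked object
    print('full:', time.perf_counter() - start)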

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting a Yahoo
Store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
 

Paul Rubin

> a. AFAIK Orbitz frequently has to be shut down for maintenance
> (read "full garbage collection" - I'm just guessing: with
> generational garbage collection, you still have to do full
> garbage collection once in a while, and on a system like that
> it can take a while)

I'm skeptical that's the reason for the shutdowns, if they're using a
reasonable Lisp implementation.

> b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
> Why? I'd tell you, but then I'd have to kill you :)

The Yahoo Store software was written by some small company that sold
the business to some other company that didn't want to develop in
Lisp, I thought. I'd be interested to know more.

> c. Emacs has a reputation for being slow and bloated. But then
> it's not written in Common Lisp.

Actually, Hemlock is much more bloated. Emacs's reputation for bloat
came from the 1-MIPS VAX days, when it was bigger than less capable
editors such as vi. But compared with the editors people run all the
time on PCs nowadays (viz. Microsoft Word), Emacs is tiny and fast. In
fact, if I want to look in a big mail archive for (say) mentions of
Python, it's faster for me to read the file into Emacs and search for
"python" than it is for me to pipe the file through "more" and use
"more"'s search command.

> Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
> have more users? It depends. Does viewing a PDF file made
> with LaTeX make you a user of LaTeX? Does visiting a Yahoo
> Store make you a user of ViaWeb?

I missed the earlier messages in this thread, but LaTeX wasn't written
in Lisp. There were some half-baked attempts to Lispify TeX, but
afaik none went anywhere.

> For the sake of being balanced: there were also some *big*
> failures, such as Lisp Machines. They failed because
> they could not compete with UNIX (SUN, SGI) in a time when
> performance, multi-userism and uptime were of prime importance.

Well, they were too bloody expensive too.

> Another big failure that is often _attributed_ to Lisp is AI,
> of course. But I don't think one should blame a language
> for AI not happening. Marvin Minsky, for example,
> blames Robotics and Neural Networks for that.

Actually, there are many AI success stories, but the AI field doesn't
get credit for them, because as soon as some method developed by AI
researchers becomes useful or practical, it stops being AI. Examples
include neural networks, alpha-beta search, natural language
processing to the extent that it's practical so far, optical character
recognition, and so forth.
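
To make one of those examples concrete, here is a minimal alpha-beta
search over a toy game tree - a generic Python sketch, not taken from
any particular program:

    # Minimal alpha-beta pruning over a toy game tree.
    # A node is either a number (leaf score) or a list of child nodes.

    def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
        if isinstance(node, (int, float)):   # leaf: return its static score
            return node
        if maximizing:
            value = float('-inf')
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:            # remaining siblings can't matter,
                    break                    # so prune them
            return value
        else:
            value = float('inf')
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    # The classic textbook example tree; the answer is 6.
    tree = [[[5, 6], [7, 4, 5]], [[3]], [[6, 6], [9]]]
    print(alphabeta(tree))   # -> 6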
 

Erann Gat

Paul Rubin said:
> The Yahoo Store software was written by some small company

Viaweb.

> that sold the business to some other company

Yahoo (obviously).

> that didn't want to develop in Lisp, I thought.

That's right. Yahoo ultimately reimplemented Yahoo Store in C++.

E.
 

Stephen Horne

(e-mail address removed) writes:

> The Yahoo Store software was written by some small company that sold
> the business to some other company that didn't want to develop in
> Lisp, I thought. I'd be interested to know more.

In a context like that, my first guess would be "the other company
didn't have any Lisp programmers". The second would be that the
programmers they did have didn't like (their idea of) Lisp.

Assuming there is some truth in that, it would probably have been
rationalised in other terms of course.
> Actually, there are many AI success stories, but the AI field doesn't
> get credit for them, because as soon as some method developed by AI
> researchers becomes useful or practical, it stops being AI. Examples
> include neural networks, alpha-beta search, natural language
> processing to the extent that it's practical so far, optical character
> recognition, and so forth.

Absolutely true - and many more.

I have come to believe that cognitive psychologists (of whatever field
- there seems to have been a 'cognitive revolution' in most variants
of psychology) should have some experience of programming - or at
least of some field where their conception of what is possible in
terms of information processing would get some hard testing.

Brains are not computers, of course - and even more important, they
were evolved rather than designed. But then a lot of cognitive
psychologists don't seem to take any notice of the underlying
'hardware' at all. In the field I'm most interested in - autism - the
main cognitive 'theories' are so sloppy that it is very difficult to
pick out the 5% of meaningfulness from the 95% of crud.

Take 'theory of mind', for instance - the theory that autistic people
have no conception that other people have minds of their own. This is
nice and simple at 'level 1' - very young autistics seem not to keep
track of other people's states of mind when other children of the same
age do, just as if they don't realise that other people have minds of
their own. But the vast majority of autistics pass level 1 theory of
mind tests, at least after the age of about 5.

The cognitive psychologists' answer to this is to apply level 2 and
higher theory of mind tests. They acknowledge that these tests are
more complex, but they don't acknowledge that they are testing
anything different. But take a close look and it rapidly becomes
apparent that the difference between these tests and the level 1 tests
is simply the amount of working memory that is demanded.

Actually, you don't have to study the tests to realise that - just
read some of the autistic people's reactions to the tests. They
*understand* the test, but once the level is ramped up high enough
they can't keep track of everything they are expected to keep track
of. It's rather like claiming that if you can't simultaneously
remember 5000 distinct numbers you must lack a 'theory of number'!

But when autistic people have described things that suggest they have
'theory of mind' (e.g. Temple Grandin), the experts' response has
typically been to suggest that either the author or her editor is a
liar (e.g. Francesca Happé, 'the autobiographical writings of three
Asperger syndrome adults: problems of interpretation and implications
for theory', section in 'Autism and Asperger Syndrome' edited by Uta
Frith).

It's not even as if the 'theory of mind' idea has any particular
predictive power. The symptoms of autism vary dramatically from one
person to another, and the distinct symptoms vary mostly independently
of one another. Many symptoms of autism (e.g. sensory problems) have
nothing to do with awareness of other people.

The basic problem, of course, is that psychologists often lean more
towards the subjective than the objective, and towards intuition
rather than logic. It is perhaps slightly ironic that part of my own
theory of autism (integrating evolutionary, neurological and cognitive
issues) has as a key component an IMO credible answer to 'what is
intuition, how does it work, and why is (particularly social)
intuition disrupted in autism?'

But despite all this, it is interesting to note that cognitive
psychology and AI have very much the same roots (even down to mainly
the same researchers in the early days) and that if you search hard
enough for the 'good stuff' in psychology, it doesn't take long before
you start finding the same terms and ideas that used to be strongly
associated with AI.

Creating a machine that thinks like a person was never a realistic
goal for AI (and probably won't be for a very long time yet), but that
was always the dreaming and science fiction rather than the fact. Even
so, it is hard to think of an AI technique that hasn't been adopted by
real applications.

I remember the context where I first encountered Bayes theorem. It was
in AI - expert systems, to be precise - along with my first encounter
with information theory. The value of Bayes theorem is that it allows
an assessment of the probability of a hypothesis based only on the
kinds of information that can be reasonably easily assessed in
studies, approximated by a human expert, or even learned by the expert
system in some contexts as it runs. The probability of a hypothesis
given a single fact can be hard to assess, but a good approximation of
the probability of that fact given the hypothesis is usually easy to
assess given a simple set of examples.

Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more
intelligently these days, but the methods used to achieve that are
mostly hidden.
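
To show how little machinery that takes, here is a toy Bayesian spam
scorer in Python - a hypothetical sketch, not any real filter's code:
the hard-to-assess P(spam | words) is built from the easy-to-assess
P(word | spam) frequencies, exactly as described above.

    import math

    # Toy training corpus: (words, is_spam). These counts stand in for
    # the "simple set of examples" mentioned above.
    training = [
        (['cheap', 'pills', 'now'], True),
        (['win', 'cash', 'now'],    True),
        (['meeting', 'agenda'],     False),
        (['lunch', 'tomorrow'],     False),
    ]

    spam = [w for words, s in training if s for w in words]
    ham  = [w for words, s in training if not s for w in words]
    vocab = set(spam) | set(ham)

    def word_prob(word, bag):
        # P(word | class), with add-one smoothing so unseen words don't zero out.
        return (bag.count(word) + 1) / (len(bag) + len(vocab))

    def spam_score(words):
        # Bayes: P(S|W) is proportional to P(W|S) P(S); work in log space.
        p_spam = sum(s for _, s in training) / len(training)
        log_spam = math.log(p_spam)     + sum(math.log(word_prob(w, spam)) for w in words)
        log_ham  = math.log(1 - p_spam) + sum(math.log(word_prob(w, ham))  for w in words)
        return 1 / (1 + math.exp(log_ham - log_spam))   # normalise to a probability

    print(spam_score(['cheap', 'cash', 'now']))   # high: looks spammy
    print(spam_score(['meeting', 'tomorrow']))    # low: looks legitimate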
 

Peter Seibel

> Yahoo (obviously).
>
> That's right. Yahoo ultimately reimplemented Yahoo Store in C++.

Of course to do so, they had to--according to Paul Graham--implement a
Lisp interpreter in C++! And they had to leave out some features that
depended on closures. So folks who are running the old Lisp version
may never "upgrade" to the new version since it would mean a
functional regression. Graham's messages on the topic to the ll1 list
are at:

<http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02380.html>
<http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02367.html>
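
To see why the closures mattered, here is a hypothetical sketch (my
own invention, not Viaweb's actual code) of the kind of feature that
depends on them - each store's customizations captured as a function
that closes over that store's settings:

    # Hypothetical illustration of a closure-dependent feature:
    # per-store pricing rules generated on the fly as closures.

    def make_pricer(discount, vip_names):
        # The returned function closes over discount and vip_names;
        # no class, struct, or registry is needed to carry the state.
        def price(customer, base):
            rate = discount * 2 if customer in vip_names else discount
            return base * (1 - rate)
        return price

    store_a = make_pricer(0.05, {'alice'})
    store_b = make_pricer(0.10, set())

    print(store_a('alice', 100.0))  # 90.0: VIP gets double discount
    print(store_b('bob', 100.0))    # 90.0: flat 10% off

In C++ (pre-lambda, circa 2003) each such rule becomes a hand-written
function object, so generating them per store on the fly is exactly the
sort of thing that gets dropped or hard-coded.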

-Peter
 

Kenny Tilton

Peter said:
> Of course to do so, they had to--according to Paul Graham--implement a
> Lisp interpreter in C++!

I hope he made another $40m off them in consulting fees helping them
with the port. :)
 

prunesquallor

> In the context of LaTeX, some Pythonista asked what the big
> successes of Lisp were. I think there were at least three *big*
> successes.
>
> a. orbitz.com web site uses Lisp for algorithms, etc.
> b. Yahoo store was originally written in Lisp.
> c. Emacs
>
> The issues with these will probably come up, so I might as well
> mention them myself (which will also make this a more balanced
> post)
>
> a. AFAIK Orbitz frequently has to be shut down for maintenance
> (read "full garbage collection" - I'm just guessing: with
> generational garbage collection, you still have to do full
> garbage collection once in a while, and on a system like that
> it can take a while)

You are misinformed. Orbitz runs a `server farm' of hundreds of
computers, each running the ITA fare engine. Should any of these
machines need to GC, there are hundreds of others waiting to service
users.
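
The idea in miniature (a hypothetical Python sketch, not ITA's code;
all names invented): the front end simply skips any back end that is
busy collecting.

    import itertools

    class Backend:
        def __init__(self, name):
            self.name = name
            self.in_gc = False        # set True for the duration of a full GC

        def handle(self, query):
            return f'{self.name} priced {query}'

    def dispatch(query, backends, rotation):
        # Round-robin over the farm, skipping any machine that is collecting.
        for _ in range(len(backends)):
            backend = next(rotation)
            if not backend.in_gc:
                return backend.handle(query)
        raise RuntimeError('entire farm is collecting (should never happen)')

    farm = [Backend(f'fare-engine-{i}') for i in range(4)]
    rotation = itertools.cycle(farm)

    farm[1].in_gc = True              # one machine pauses for GC ...
    for q in ['BOS-SFO', 'JFK-LAX', 'ORD-MIA']:
        print(dispatch(q, farm, rotation))   # ... and users never notice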
 

Alex Martelli

Stephen Horne wrote:

> ...
> I remember the context where I first encountered Bayes theorem. It was
> in AI - expert systems, to be precise - along with my first encounter
> with information theory.

OK, but Reverend Bayes developed it well before "AI" was even conceived,
around the middle 18th century; considering Bayes' theorem to be part
of AI makes just about as much sense as considering addition in the
same light, if "expert systems" had been the first context in which
you had ever seen numbers being summed. In the '80s, when at IBM
Research we developed the first large-vocabulary real-time dictation-taking
systems, I remember continuous attacks coming from the Artificial
Intelligentsia due to the fact that we were using NO "AI" techniques --
rather, stuff named after Bayes, Markov and Viterbi, all dead white
mathematicians (it sure didn't help that our languages were PL/I, Rexx,
Fortran, and the like -- nor, particularly, that our system _worked_,
the most unforgivable of sins:). I recall T-shirts boldly emblazoned
with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
conferences as a deliberately inflammatory gesture, too:).

Personally, I first met Rev. Bayes in high school, together with the
rest of the foundations of elementary probability theory, but then I
did admittedly go to a particularly good high school; neither of my
kids got decent probability theory in high school, though both of
them met it in their first college year (in totally different fields,
neither of them connected with "AI" -- financial economics for my
son, telecom engineering for my daughter).

> Funny how a current popular application of this approach (spam
> filtering) is not considered to be an expert system, or even to be AI
> at all. But AI was never meant to be in your face. Software acts more
> intelligently these days, but the methods used to achieve that are
> mostly hidden.

I don't see how using Bayes' Theorem, or any other fundamental tool
of probability and statistics, connects a program to "AI", any more
than using fundamental techniques of arithmetic or geometry would.


Alex
 

Christian Lynbech

mike420> c. Emacs has a reputation for being slow and bloated.

People making that claim most often do not understand what Emacs
really is or how to use it effectively. Try checking what other
popular software uses up on such people's machines - stuff like KDE or
GNOME or Mozilla or any Java-based application.

This just isn't a very relevant issue on modern equipment.

mike420> For the sake of being balanced: there were also some *big*
mike420> failures, such as Lisp Machines. They failed because
mike420> they could not compete with UNIX (SUN, SGI) in a time when
mike420> performance, multi-userism and uptime were of prime importance.

It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

In fact the combined bundle of a Symbolics machine is so good that
there still is a viable market for those 20-30 year old machines
(been there, done that, still need to get it to run :) I challenge
you to get a good price for a Sun 2 with UNIX SYSIII or whatever they
were equipped with at the time.

As far as I know, Symbolics was trying to address the price issue, but
the new generation of the CPU was delayed, which greatly contributed to
their demise and to the subsequent success of what we now know as
stock hardware. Do not forget that when the Sun was introduced, it was
by no means obvious who was going to win the war for the graphical
desktop.


------------------------+-----------------------------------------------------
Christian Lynbech | christian #\@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- (e-mail address removed) (Michael A. Petonic)
 

Stephen Horne

Alex Martelli wrote:

> Stephen Horne wrote:
> ...
>
> OK, but Reverend Bayes developed it well before "AI" was even conceived,
> around the middle 18th century; considering Bayes' theorem to be part
> of AI makes just about as much sense as considering addition in the
> same light, if "expert systems" had been the first context in which
> you had ever seen numbers being summed.

OK, but you can't say that a system isn't artificial intelligence just
because it uses Bayes theorem or any other method either - it isn't
about who first described the formula or algorithm or whatever.

> I recall T-shirts boldly emblazoned
> with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
> conferences as a deliberately inflammatory gesture, too:).

;-)


> I don't see how using Bayes' Theorem, or any other fundamental tool
> of probability and statistics, connects a program to "AI", any more
> than using fundamental techniques of arithmetic or geometry would.

Well, I don't see how a neural net can be considered intelligent
whether trained using back propagation, forward propagation or a
genetic algorithm. Or a search algorithm, whether breadth first, depth
first, prioritised, using heuristics, or applying backtracking - I don't
care. Or any of the usual parsing algorithms that get applied in
natural language and linguistics (Earley etc.). I know how all of these
work, so therefore they cannot be considered intelligent ;-)

Certainly the trivial rule-based expert systems consisting of a huge
list of if statements are, IMO, about as unintelligent as you can get.

It's a matter of the problem it is trying to solve rather than simply
saying 'algorithm x is intelligent, algorithm y is not'. An
intelligent, knowledge-based judgement of whether an e-mail is or is
not spam is to me the work of an expert system. The problem being that
once people know how it is done, they stop thinking of it as
intelligent.

Perhaps AI should be defined as 'any means of solving a problem which
the observer does not understand' ;-)

Actually, I remember an article once with the tongue-in-cheek claim
that 'artificial stupidity' and IIRC 'artificial bloody mindedness'
would be the natural successors to AI. And that paperclip thing in
Word did not exist until about 10 years later!
 

Stephen Horne

> Personally, I first met Rev. Bayes in high school, together with the
> rest of the foundations of elementary probability theory, but then I
> did admittedly go to a particularly good high school; neither of my
> kids got decent probability theory in high school, though both of
> them met it in their first college year (in totally different fields,
> neither of them connected with "AI" -- financial economics for my
> son, telecom engineering for my daughter).

Sorry - missed this bit on the first read.

I never limited my education to what the school was willing to tell
me, partly because having Asperger syndrome myself meant that the
library was the best refuge from bullies during break times.

I figure I first encountered Bayes in the context of expert systems
when I was about 14 or 15. I imagine that fits roughly into the high
school junior category, but I'm not American so I don't know for sure.
 

Paolo Amoroso

mike420 wrote:

> In the context of LaTeX, some Pythonista asked what the big
> successes of Lisp were. I think there were at least three *big*
> successes.

See:

http://alu.cliki.net/Success Stories

> a. AFAIK Orbitz frequently has to be shut down for maintenance
> (read "full garbage collection" - I'm just guessing: with

They don't use garbage collection, they do explicit memory allocation
from pools. More details were given in the ILC 2002 talk "ITA Software
and Orbitz: Lisp in the Online Travel World" by Rodney Daughtrey:

http://www.international-lisp-conference.org/ILC2002/Speakers/People/Rodney-Daughtrey.html

The talk's slides are included in the ILC 2002 proceedings available
from Franz, Inc. As for shutdown for maintenance, the slides seem to
suggest that they use online patching.
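
Roughly, pool allocation means preallocating every object you will
need up front and recycling them by hand, so the collector never has
work to do. A hypothetical Python sketch of the idea (not ITA's actual
code; all names invented):

    # All objects are preallocated, then recycled explicitly -
    # no garbage is created, so the GC never has to run.

    class FareRecord:
        __slots__ = ('origin', 'dest', 'price')

    class Pool:
        def __init__(self, size):
            self._free = [FareRecord() for _ in range(size)]  # allocate once

        def acquire(self, origin, dest, price):
            rec = self._free.pop()          # reuse an existing object ...
            rec.origin, rec.dest, rec.price = origin, dest, price
            return rec

        def release(self, rec):
            self._free.append(rec)          # ... and hand it back explicitly

    pool = Pool(100_000)                    # sized for the worst-case query
    rec = pool.acquire('BOS', 'SFO', 129.0)
    print(rec.origin, rec.dest, rec.price)
    pool.release(rec)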


Paolo
 

Edi Weitz

> a. AFAIK Orbitz frequently has to be shut down for maintenance

Where do you "know" that from? Have you any quotes or numbers to back
up your claims or are you just trying to spread FUD?

> (read "full garbage collection" -

Others have debunked this already.

> I'm just guessing:

Please leave the guessing to people who are better at it.

Edi.
 

Alex Martelli

Stephen said:

> Sorry - missed this bit on the first read.
>
> I never limited my education to what the school was willing to tell
> me,

Heh, me neither, of course.

> partly because having Asperger syndrome myself meant that the
> library was the best refuge from bullies during break times.

Not for me, as it was non-smoking and I started smoking very young;-).

But my house was always cluttered with books, anyway. However,
interestingly enough, I had not met Bayes' theorem _by that name_,
only in the somewhat confusing presentation known as "restricted
choice" in bridge theory -- problem is, Borel et Cheron's "Theorie
Mathematique du Bridge" was out of print for years, until (I think)
Mona Lisa Press finally printed it again (in English translation --
the French original came out again a while later as a part of the
reprint of all of Borel's works, but always was much costlier), so
my high school got there first (when I was 16). My kids' exposure
to probability theory was much earlier of course (since I taught
them bridge when they were toddlers, and Bayes' Theorem pretty
obviously goes with it).

> I figure I first encountered Bayes in the context of expert systems
> when I was about 14 or 15. I imagine that fits roughly into the high
> school junior category, but I'm not American so I don't know for sure.

I'm not American either -- I say "high school" to mean what in Italy
is known as a "Liceo" (roughly the same age range, 14-18).


Alex
 

Alex Martelli

Stephen Horne wrote:

> ...
> Perhaps AI should be defined as 'any means of solving a problem which
> the observer does not understand' ;-)

Clarke's law...?-)

The AAAI defines AI as:

"the scientific understanding of the mechanisms underlying thought and
intelligent behavior and their embodiment in machines."

But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
computation (or, for that matter, memorizing information and fetching it
back again, also simplified versions of "mechanisms underlying thought"),
this had better be a TAD more nuanced. I think a key quote on the AAAI
pages is from Grosz and Davis: "computer systems must have more than
processing power -- they must have intelligence". This should rule out
from their definition of "intelligence" any "brute-force" mechanism
that IS just processing power. Chess playing machines such as Deep
Blue, bridge playing programs such as GIB and Jack (between the two
of them, winners of the world computer bridge championship's last 5 or
6 editions, regularly grinding into the dust programs described by their
creators as "based on artificial intelligence techniques" such as
expert systems), dictation-taking programs such as those made by IBM
and Dragon Systems in the '80s (I don't know if the technology has
changed drastically since I left the field then, though I doubt it),
are based on brute-force techniques, and their excellent performance
comes strictly from processing power. For example, IBM's speech
recognition technology descended directly from the field of signal
processing -- hidden Markov models, Viterbi algorithms, Bayes all over
the battlefield, and so on. No "AI heritage" anywhere in sight...
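
Since several posts mention it, here is what that "brute force"
actually looks like - a generic toy Viterbi decoder in Python, nothing
like our production code:

    # Viterbi decoding over a hidden Markov model: find the most
    # probable hidden state sequence for an observation sequence.

    def viterbi(observations, states, start_p, trans_p, emit_p):
        # best[t][s] = probability of the best path ending in state s at time t
        best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
        back = [{}]
        for t in range(1, len(observations)):
            best.append({})
            back.append({})
            for s in states:
                prob, prev = max(
                    (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                    for p in states)
                best[t][s] = prob
                back[t][s] = prev
        # Walk the backpointers to recover the best path.
        last = max(states, key=lambda s: best[-1][s])
        path = [last]
        for t in range(len(observations) - 1, 0, -1):
            path.append(back[t][path[-1]])
        return list(reversed(path))

    # Tiny toy model: is each acoustic frame 'noise' or 'speech'?
    states  = ['noise', 'speech']
    start_p = {'noise': 0.6, 'speech': 0.4}
    trans_p = {'noise':  {'noise': 0.7, 'speech': 0.3},
               'speech': {'noise': 0.4, 'speech': 0.6}}
    emit_p  = {'noise':  {'hiss': 0.8, 'word': 0.2},
               'speech': {'hiss': 0.1, 'word': 0.9}}
    print(viterbi(['hiss', 'word', 'word'], states, start_p, trans_p, emit_p))
    # -> ['noise', 'speech', 'speech']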


Alex
 

Ivan Toshkov

> You are misinformed. Orbitz runs a `server farm' of hundreds of
> computers, each running the ITA fare engine. Should any of these
> machines need to GC, there are hundreds of others waiting to service
> users.

Besides, when I read the description of Orbitz, I was under the
impression that they preallocated the memory for just that reason: to
remove the need for garbage collection.
 

Joe Marshall

Christian Lynbech said:
> It is still a question of heated debate what actually killed the lisp
> machine industry.
>
> I have so far not seen anybody disputing that they were a marvel of
> technical excellence, sporting stuff like colour displays, graphical
> user interfaces and laser printers way ahead of anybody else.

It's clear to me that LMI killed itself by an attempt to rush the
LMI-Lambda to market before it was reasonably debugged. A lot of LMI
machines were DOA. It's amazing how fast you can lose customers that
way.

As far as Symbolics goes... I *think* they just saturated the market.
 

robert

Alex Martelli said:
> Stephen Horne wrote:
> ...
>
> OK, but Reverend Bayes developed it well before "AI" was even conceived,
> around the middle 18th century; considering Bayes' theorem to be part
> of AI makes just about as much sense as considering addition in the
> same light, if "expert systems" had been the first context in which
> you had ever seen numbers being summed. In the '80s, when at IBM
> Research we developed the first large-vocabulary real-time dictation-taking
> systems, I remember continuous attacks coming from the Artificial
> Intelligentsia due to the fact that we were using NO "AI" techniques --
> rather, stuff named after Bayes, Markov and Viterbi, all dead white
> mathematicians (it sure didn't help that our languages were PL/I, Rexx,
> Fortran, and the like -- nor, particularly, that our system _worked_,
> the most unforgivable of sins:). I recall T-shirts boldly emblazoned
> with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
> conferences as a deliberately inflammatory gesture, too:).
>
> Personally, I first met Rev. Bayes in high school, together with the
> rest of the foundations of elementary probability theory, but then I
> did admittedly go to a particularly good high school; neither of my
> kids got decent probability theory in high school, though both of
> them met it in their first college year (in totally different fields,
> neither of them connected with "AI" -- financial economics for my
> son, telecom engineering for my daughter).
>
> I don't see how using Bayes' Theorem, or any other fundamental tool
> of probability and statistics, connects a program to "AI", any more
> than using fundamental techniques of arithmetic or geometry would.

i boldly disagree. back when i first heard about AI (the '70s, i'd say),
the term had a very specific meaning: probabilistic decision making with
feedback. a medical diagnosis system would be the archetypal example.
my recollection of why few ever got made was: feedback collection was not
always easy (did the patient die because the diagnosis was wrong? and
what is the correct diagnosis? and did we get all the symptoms right?, etc.),
and humans were unwilling to accept the notion of machine-determined
decision making. the machine, like humans before it, would learn from its
mistakes. this was socially unacceptable.

everything else is just rule processing. whether done with declarative
typeless languages like Lisp or Prolog, or the more familiar imperative
typed languages like Java/C++ is a matter of preference. i'm currently
working with a Prolog derivative, and don't find it a better way. fact
is, i find typeless languages (declarative or imperative) a bad thing for
large system building.

robert
 

Isaac To

Alex> Chess playing machines such as Deep Blue, bridge playing programs
Alex> such as GIB and Jack ..., dictation-taking programs such as those
Alex> made by IBM and Dragon Systems in the '80s (I don't know if the
Alex> technology has changed drastically since I left the field then,
Alex> though I doubt it), are based on brute-force techniques, and their
Alex> excellent performance comes strictly from processing power.

Nearly no program would rely only on non-brute-force techniques. On the
other hand, all the machines that you have named use some non-brute-force
techniques to improve performance. How you can say that they are using
"only" brute-force techniques is something I don't quite understand. But
even then, I can't see why this has anything to do with whether the machines
are intelligent or not. We cannot judge whether a machine is intelligent
just by looking at the method used to solve a problem. A computer is best at
number crunching, and it is simply natural for any program to put a lot more
weight than most human beings do on number crunching. You can't say a machine
is unintelligent just because much of its power comes from there. Of course,
you might say that the problem does not require a lot of intelligence.

Whether a system is intelligent must be determined by the result. When you
feed Deep Blue a chess position in which any average player would make a
move that guarantees checkmate, but Deep Blue gives you a move that leads
to stalemate, you know that it is not very intelligent (it did happen).

Regards,
Isaac.
 
