BIG successes of Lisp (was ...)


Bengt Richter

larry said:
Here's a more realistic example that illustrates exactly the opposite
point. Take the task of finding the sum of the squares of a bunch of
numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable, etc.

No, they think: sum up the squares of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).
Well..
sum (map (\x -> x ** 2) [1..10])
in Haskell, or
sum (map ((lambda x: x ** 2), range(1,11)))

Too much line noise ;-)

sum([x*x for x in range(1,11)])
in Python. But I'm not really sure what point you're trying to make, oh well :)

Regards,
Bengt Richter
 

Rmagere

Bengt said:
larry said:
Here's a more realistic example that illustrates exactly the
opposite point. Take the task of finding the sum of the squares of a
bunch of numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable, etc.

No, they think: sum up the squares of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).
Well..
sum (map (\x -> x ** 2) [1..10])
in Haskell, or
sum (map ((lambda x: x ** 2), range(1,11)))

Too much line noise ;-)

sum([x*x for x in range(1,11)])

Then I would say that a clearer way to express what a person thinks is:
(loop for number in numberlist sum (expt number power))
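
For the concrete sum-of-squares task, a minimal Common Lisp sketch of that idea, wrapped in a function so the intent has a name (the function name and the fixed exponent of 2 are my own illustrative choices, not part of the original post):

;; Illustrative sketch only: name the intent and let LOOP do the summing.
(defun sum-of-squares (numbers)
  (loop for n in numbers sum (* n n)))

(sum-of-squares '(1 2 3 4 5))  ; => 55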

rmagere
--
I saw a better version of this signature type; can anyone give any
indication?
((lambda (a b)
   (if (= (length a) (length b))
       (loop with string for x across a and y across b
             do (format t "~a~a" x y))
       (progn
         (loop with string for x across a and y across b
               do (format t "~a~a" x y))
         (format t "~a" (char a (- (length a) 1))))))
 "raeehtalcm" "(e-mail address removed)")
 

Frank A. Adrian

larry said:
No, they think: sum up the squares of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).

Well, they do until they get an exception because the list is too long.
Then they think WTF!?! until a helpful Lisp person tells them to use reduce
instead of apply :).
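
The point, as a hedged Common Lisp sketch (the variable name *big-list* is just illustrative): APPLY spreads the whole list out as an argument list, so it is bounded by CALL-ARGUMENTS-LIMIT, while REDUCE folds the list pairwise and has no such limit.

;; CALL-ARGUMENTS-LIMIT may be as small as 50 in a conforming
;; implementation, so APPLY over a long list can signal an error.
(defparameter *big-list* (make-list 100000 :initial-element 1))

;; (apply #'+ *big-list*)   ; may fail: too many arguments
(reduce #'+ *big-list*)     ; => 100000, works for a list of any length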

faa
 

Alex Martelli

Andrew said:
Alex:

Since most bridge playing is not in tournament play or other places which
use randomized shuffles, wouldn't people who want an edge in "friendly"
play like ways to maximize the odds of winning?

I'm not sure people in such non-intensely-competitive settings would
care enough, but I guess if there was money at stake they might.

That link I mentioned,
http://www.dartmouth.edu/~chance/course/topics/winning_number.html
suggests that doing so is strongly frowned upon

The laws and proprieties of (non-duplicate) contract bridge:

http://www.math.auc.dk/~nwp/bridge/laws/rlaws.html

specify that the deck should be thoroughly shuffled, and a new
shuffle may be demanded by any player (presumably if he or she
does not consider the previous shuffle thorough enough). Taking
any advantage from an infraction by one's own side is condemned,
so if an insufficient shuffle had been performed by the player
thereafter proceeding to take advantage, or his/her partner, that
would indeed be improper. Taking advantage of violations by the
opponents, however, is acceptable: there is nothing in the laws
and proprieties (which tend to be rather exhaustive) that I can see
as condemning a player for noticing and taking advantage of an
imperfect shuffle by an opponent.
Perhaps there's underground literature describing how to take
advantage of this? :)

There's an old superstition that "the Queen always lies over the
Jack" and many have hypothesized that this belief may come from
imperfect shuffles after, on the previous deal, the Q has been
played right over the J, a common occurrence in play (but not
really all that much more common than, say, the K being played
right over the Q, IMHO). I am not aware of any scientific study
of such issues ever having been published (and there are quite a
few non-mainstream bridge books among the 1000 or so I own).


Alex
 

Kenny Tilton

Francis Avila said:
who are used to english, it's just plain different and harder, even if it's
better. (Ok, that last bit was a value judgement. :)

Fortunately, misinformation does not "stick" very well. ("Oh, it's
/not/ interpreted?" and then they are OK). The trick is getting folks
to the point where they are even looking in Lisp's direction. Which
you already said:
... it's hard to convince someone of that if they
don't actually _use_ it first, and in the end some will probably still think
it isn't worth the trouble.

Yep. Old habits die hard, tho for many just looking at a page of Lisp
is enough to get them excited. The same syntax that looks like just a
lot of parentheses is ...well, they do not even see the
parenthe-trees, they see in a flash the forest of the beautiful tree
structure. (How's that for a fucked-up metaphor?)

I cannot say I have that ability (I saw the parentheses) but I always
suspend disbelief when trying something new just to give it a chance.
Parentheses disappeared /real/ fast.
... It will take very significant and deep cultural
and intellectual changes before lisp is ever an overwhelmingly dominant
language paradigm. That is, when it becomes more natural to think of
cake-making as....

Wrong. For two reasons. The first is social. Developers move in herds.
So Lisp just has to reach a certain very low threshold, and then it
will catch fire and be used in Tall Buildings, and then developers
will dutifully march down to the bookstore and buy a new set of
O'Reilly books.

The second is that you are just plain wrong about paradigms. You (and
I!) have used the procedural paradigm so long that we think we think
that way.

Follow my URL to info on my Cells hack. It lets you build normal
applications with a spreadsheet paradigm. I just gave a talk on it at
ILC 2003. Sitting in the audience was the guy who gave a talk on
something similar at LUGM 99, when I was sitting in the audience.
Afterwards two people came up to me to tell me about their similar
hacks. They are the future presenters. The original prior art on all
this was Sketchpad in 1962. Now my point:

It is a great paradigm. An order of magnitude better than the
imperative. I kid you not. Nothing brain-bending about it. But we (as
in "almost all of you") still program as if we were writing machine
language.

Why? Because we think that way? Nope. Look at VisiCalc, Lotus 1-2-3
and other real spreadsheet applications used by non-programmers to
build sophisticated models. They are non-programmers and they are
having a ball and it all makes perfect sense to them. So much for the
claim that people cannot think functionally / declaratively.
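
To make the paradigm concrete, here is a toy closure-based sketch of the spreadsheet idea. It is not Cells itself (Cells tracks dependencies and propagates changes); in this sketch a formula cell simply recomputes from its inputs on every read, and all the names are invented for illustration.

;; Toy sketch of the spreadsheet/dataflow idea -- NOT Kenny Tilton's Cells.
(defun input-cell (value)
  "A settable input cell: call with no argument to read, with one to set."
  (lambda (&optional (new-value nil supplied-p))
    (if supplied-p
        (setf value new-value)
        value)))

(defun formula-cell (fn &rest inputs)
  "A cell whose value is FN applied to the current values of INPUTS."
  (lambda () (apply fn (mapcar #'funcall inputs))))

;; Usage: C depends on A and B, like =A1+B1 in a spreadsheet.
(let* ((a (input-cell 1))
       (b (input-cell 2))
       (c (formula-cell #'+ a b)))
  (print (funcall c))    ; => 3
  (funcall a 10)         ; change an input...
  (print (funcall c)))   ; => 12 ...and the formula sees the new value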

The developer community is not ignoring Cells or COSI or KR or LISA or
Kaleidoscope or Screamer because they do not think that way, but
simply out of habit. Let's not dress up habit as innate-way-we-think.

kenny

http://www.tilton-technology.com/
 

Alex Martelli

Stephen Horne wrote:
...
...
What is your definition of intuition?

I can accept something like, e.g.:
2. insight without conscious reasoning
3. knowing without knowing how you know

but they require 'insight' or 'knowing', which are neither claimed
nor disclaimed in the above. Suppose I observe a paramecium moves
more often than not, within a liquid, in the direction of increasing
sugar concentration. Given my model of paramecium behavior, I would
verbalize that as "no understanding" etc on the paramecium's part,
but I would find it quaint to hear it claimed that such behaviour
shows the paramecium has "intuition" about the location of food --
I might accept as slightly less quaint mentions of "algorithms" or
even "heuristics", though I would take them as similes rather than
as rigorous claims that the definitions do in fact apply.

If we went more deeply about this, I would claim that these
definitions match some, but not all, of what I would consider
reasonable uses of "intuition". There are many things I know,
without knowing HOW I do know -- did I hear it from some teacher,
did I see it on the web, did I read it in some book? Yet I would
find it ridiculous to claim I have such knowledge "by intuition":
I have simply retained the memory of some fact and forgotten the
details of how I came upon that information. Perhaps one could
salvage [3], a bit weakened, by saying that I do know that piece
of knowledge "came to me from the outside", even though I do not
recall whether it was specifically from a book (and which book),
a Discovery Channel program, a teacher, etc, in any case it was
an "outside source of information" of some kind or other -- I do
know it's not something I have "deduced" or dreamed up by myself.

The third definition will tend to follow from the second (if the
insight didn't come from conscious reasoning, you won't know how you
know the reasoning behind it).

This seems to ignore knowledge that comes, not from insight nor
reasoning, but from outside sources of information (sources which one
may remember, or may have forgotten, without the forgetting justifying
the use of the word "intuition", in my opinion).

Basically, the second definition is the core of what I intend and
nothing you said above contradicts what I claimed. Specifically...

I do not claim the characteristics I listed:

_contradict_ the possibility of "intuition". I claim they're very
far from _implying_ it.
...sounds like "knowing without knowing how you know".

In particular, there is no implication of "knowing" in the above.

Intuitive understanding must be supplied by some algorithm in the
brain even when that algorithm is applied subconsciously. I can well

A "petition of principle" which I may accept (taking 'algorithm' as
a simile, as it may well not rigorously meet the definition), at least
as a working hypothesis. Whether the locus of the mechanisms may in
some cases be elsewhere than in the brain is hardly of the essence,
anyway.
believe that (as you say, after long practice) a learned algorithm may
be applied entirely unconsciously, in much the same way that (after
long practice) drivers don't have to think about how to drive.

Or, more directly, people don't (normally) have to think about how
to walk (after frequent trouble learning how to, early on).
Besides, just because a long multiplication tends to be consciously
worked out by humans, that doesn't mean it can't be an innate ability
in either hardware or software.

Of course, it could conceivably be "hard-wired" as simpler operations
such as "one more than" (for small numbers) seem to be for many
animals (us included, I believe).

Take your voice recognition example. If the method is Markov chains,
then I don't understand it as I don't much remember what Markov chains
are - if I was to approach the task I'd probably use a Morlet wavelet
to get frequency domain information and feature detection to pick out
key features in the frequency domain (and to some degree the original
unprocessed waveform) - though I really have no idea how well that
would work.

You can process the voice waveform into a string of discrete symbols
from a finite alphabet in just about any old way -- we used a model of
the ear, as I recall -- but I wasn't involved with that part directly,
so my memories are imprecise in the matter: I do recall it did not
make all that much of a difference to plug in completely different
front-end filters, as long as they spit out strings of discrete
symbols from a finite alphabet representing the incoming sound (and
the rest of the system was trained, e.g. by Viterbi algorithm, with
the specific front-end filter in use).

So, we have our (processed input) sequence of 'phones' (arbitrary name
for those discrete symbols from finite alphabet), S. We want to
determine the sequence of words W which with the highest probability
may account for the 'sequence of sounds' (represented by) S: W
such that P(W|S) is maximum. But P(W|S)=P(S|W)P(W) / P(S) -- and
we don't care about P(S), the a priori probability of a sequence of
sounds, for the purpose of finding W that maximizes P(W|S): we only
care about P(S|W), the probability that a certain sequence of
words W would have produced those sounds S rather than others (a
probabilistic "model of pronunciation"), and P(W), the a priori
probability that somebody would have spoken that sequence of words
rather than different sequences of words (a probabilistic "model
of natural language"). HMM's are ways in which we can model the
probabilistic process of "producing output symbols based on non-
observable internal state" -- not the only way, of course, but quite
simple, powerful, and amenable to solid mathematical treatment.

You can read a good introduction to HMM's at:
http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html
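
For concreteness, here is a hedged textbook sketch of Viterbi decoding in Common Lisp. It is not the recognizer described above (real systems work over huge models and continuous speech); it only shows how the "most probable hidden sequence for an observed sequence" computation goes. All names and the tiny two-state weather model below are invented for illustration.

;; Textbook Viterbi sketch (illustrative only).  INIT-P, TRANS-P and EMIT-P
;; are functions giving P(state), P(state->state) and P(symbol|state).
;; All probabilities are assumed strictly positive; we work in log space.
(defun viterbi (observations states init-p trans-p emit-p)
  (let ((trellis
          (list (mapcar (lambda (s)
                          (list s
                                (+ (log (funcall init-p s))
                                   (log (funcall emit-p s (first observations))))
                                nil))
                        states))))
    ;; Forward pass: each column entry is (state score best-previous-state).
    (dolist (obs (rest observations))
      (push (mapcar (lambda (s)
                      (let (best best-score)
                        (dolist (prev (first trellis))
                          (let ((score (+ (second prev)
                                          (log (funcall trans-p (first prev) s)))))
                            (when (or (null best-score) (> score best-score))
                              (setf best-score score
                                    best (first prev)))))
                        (list s (+ best-score (log (funcall emit-p s obs))) best)))
                    states)
            trellis))
    ;; Backtrack from the best entry in the final column.
    (let* ((final (reduce (lambda (a b) (if (> (second a) (second b)) a b))
                          (first trellis)))
           (path (list (first final)))
           (back (third final)))
      (dolist (column (rest trellis))
        (push back path)
        (setf back (third (assoc back column))))
      path)))

;; Invented toy model: two hidden states, three observable symbols.
(flet ((init-p (s)    (ecase s (rainy 0.6) (sunny 0.4)))
       (trans-p (a b) (ecase a
                        (rainy (ecase b (rainy 0.7) (sunny 0.3)))
                        (sunny (ecase b (rainy 0.4) (sunny 0.6)))))
       (emit-p (s o)  (ecase s
                        (rainy (ecase o (walk 0.1) (shop 0.4) (clean 0.5)))
                        (sunny (ecase o (walk 0.6) (shop 0.3) (clean 0.1))))))
  (viterbi '(walk shop clean) '(rainy sunny) #'init-p #'trans-p #'emit-p))
;; => (SUNNY RAINY RAINY)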

However, I don't need any details to make the following
observations...

The software was not 'aware' of the method being used - the Markov
chain stuff was simply programmed in and thus 'innate'. Any 'learning'
related to 'parameters' of the method - not the method itself. And the
software was not 'aware' of the meaning of those parameters - it
simply had the 'innate' ability to collect them in training.

The software was not built to be "aware" of anything, right. We did
not care about software to build sophisticated models of what was
going on, but rather about working software giving good recognition
rates. As much estimation of parameters as we could do in advance,
off-line, we did, so the run-time software only got those as large
tables of numbers somehow computed by other means; at the time, we
separated out the on-line software into a "learning" part that would
run only once per speaker at the start, and the non-learning (fixed
parameters) rest of the code (extending the concept to allow some
variation of parameters based on errors made and corrected during
normal operation is of course possible, but at the time we kept things
simple and didn't do anything like that). Any such "learning" would
in any case only affect the numerical parameters; there was no
provision at all for changing any other aspect of the code.
This reminds me a lot of human speech recognition - I can't tell you
how I know what people are saying to me without going and reading up
on it in a cognitive psychology textbook (and even then, the available
knowledge is very limited). I am not 'aware' of the method my brain
uses, and I am not 'aware' of the meaning of the learned 'parameters'
that let me understand a new strong accent after a bit of experience.

People do have internal models of how people understand speech -- not
necessarily accurate ones, but they're there. When somebody has trouble
understanding you, you may repeat your sentences louder and more slowly,
perhaps articulating each word rather than slurring them as usual: this
clearly reflects a model of auditory performance which may have certain
specific problems with noise and speed. Whether you can become conscious
of that model by introspection is secondary, when we're comparing this
with software that HAS no routines for introspection at all -- _your_
software has some, which may happen to be not very effective in any
given particular case but in certain situations may help reach better
understanding; "from their fruits shall ye know them" is more reliable.

And the models are adjustable to a surprising degree -- again, rather
than relying on introspection, we can observe this in the effects. When
a speaker becomes aware that a certain listener is deaf and relying on
lip-reading, after some experience, the "louder" aspect of repetition
in case of trouble fades, while ensuring that the 'listener' can see
one's lips clearly, and the "clearer articulation, no slurring" get
important (so does hand-gesturing supporting speech). When devices
are invented that human beings can have no possible "biologically innate"
direct adaptation to, such as writing, human beings are flexible enough
in adapting their model of "how people understand speech" to enrich
the "clarification if needed" process to include spelling words out in
some cases. Indeed, the concept of "understanding a word" is clearly
there -- sometime, in case of trouble communicating (particularly when
the purely auditory aspects are out of the picture), you may try using
another perhaps "simpler" synonym word in lieu of one which appears
to puzzle the listener; if the social situation warrants the time and
energy investment you might even provide the definition of a useful
term which you may expect to use repeatedly yet appears misunderstood.

In all of these cases, if the model is consciously available to you via
introspection, that may be useful because it may let you "play with it"
and make decisions about how to compensate for miscommunication more
effectively. Of course, that will work better if the model has SOME
aspects that are "accurate". Is your reaction to problems getting
across going to be the same whether [a] you're talking in a very noisy
room with a person you know you've talked to many times in the past,
in different situations, without trouble, [b] you're talking to a two
year old child who appears not to grasp the first principles of what
you're talking about, [c] you're talking to an octogenarian who keeps
cupping his hand to his ear and saying "eh?", [d] you're talking to
a foreigner whom you have noticed speaks broken, barely understandable
English, ... ? If so, then it's credible that you have no model of
speech understanding by people. If your attempts to compensate for
the communication problems differ in the various cases, some kind of
model is at work; the more aware you become of those models, the better
chance you stand of applying them effectively and managing to communicate
(as opposed to, e.g., the proverbial "ugly American" whose caricatural
reaction to foreigners having trouble understanding English would be
to repeat exactly the same sentences, but much louder:).

In all of these cases we're modeling how OTHERS understand speech --
a model of how WE understand speech is helpful only secondarily, i.e.
by helping us "project" our model of ourselves outwards into a "model
of people". There may be some marginal cases in which the model would
be directly useful, e.g. trying to understand words in a badly botched
tape recording we might use some of the above strategies in "replaying"
it, but the choices are generally rather limited (volume; what else?-);
or perhaps explicitly asking a speaker we have trouble understanding
to speak slowly / loudly / using simpler words / etc appropriately. I
think this situation is quite general: models of ourselves with direct
usefulness appear to me to be a rarer case than useful models of _others_
which may partly be derived by extroflecting our self-models. In part
this is because we're rarely un-lazy enough to go to deliberate efforts
to change ourselves even if our self-models should suggest such changes
might be useful -- better (easier) to rationalize why our defects aren't
really defects or cannot be remedied anyway (and such rationalizations
_diminish_ the direct usefulness of models-of-self...).

The precise algorithms for speech recognition used by IBM/Dragon's
dictation systems and by the brain are probably different, but to me
Probably.

fussing about that is pure anthropocentricity. Maybe one day we'll

Actually it isn't -- if you're aware of certain drastic differences
in the process of speech understanding in the two cases, this may be
directly useful to your attempts of enhancing communication that is
not working as you desire. E.g., if a human being with which you're
very interested in discussing Kant keeps misunderstanding each time
you mention Weltanschauung, it may be worth the trouble to EXPLAIN
to your interlocutor exactly what you mean by it and why the term is
important; but if you have trouble dictating that word to a speech
recognizer you had better realize that there is no "meaning" at all
connected to words in the recognizer -- you may or may not be able
to "teach" spelling and pronunciation of specific new words to the
machine, but "usage in context" (for machines of the kind we've been
discussing) is a lost cause and you might as well save your time.

But, you keep using "anthropocentric" and its derivatives as if they
were acknowledged "defects" of thought or behavior. They aren't.

meet some alien race that uses a very different method of speech
recognition to the one used in our brains.

Maybe -- the likely resulting troubles will be quite interesting.

In principle, to me, the two systems (human brain and dictation
program) have similar claims to being intelligent. Though the human

I disagree, because I disagree about the importance of models (even
not fully accurate ones).
mind wins out in terms of being more successful (more reliable, more
flexible, better integrated with other abilities).

Much (far from all) of this is about those models (of self, of others,
of the world). The programs who try to be intelligent, IMHO, must
use some kind of semantic model -- both for its usefulness' sake,
and in order to help us understand human intelligence at all. This,
btw, appears to me to be essentially the same stance as implied on
the AAAI site which I have previously referenced.

But giving non-answers such as "I don't know" or "I just felt like it"
tends not to be socially acceptable. It creates the appearance of
evasiveness. Therefore, giving accurate 'self-aware' answers can be a
bad idea in a social context.

Indeed, from a perfectly rational viewpoint it may be a bad idea in
general. Why give others a help in understanding you, if they're
liable to use that understanding for _their_ benefit and possibly to
your detriment? For a wonderful treatment of this idea from a POV
that's mostly from economics and a little from political science,
see Timur Kuran's "Private Truths, Public Lies", IMHO a masterpiece
(but then, I _do_ read economics for fun:).

But of course you'd want _others_ to supply you with information about
_their_ motivations (to refine your model of them) -- and reciprocity
is important -- so you must SEEM to be cooperating in the matter.
(Ridley's "Origins of Virtue" is what I would suggest as background
reading for such issues).

Because 'the type of intelligence humans have' is not, to me, a valid
limitation on 'what is intelligence?'.

But if there are many types, the one humans have is surely the most
important to us -- others being important mostly for the contrast
they can provide. Turing's Test also operationally defines it that
way, in the end, and I'm not alone in considering Turing's paper
THE start and foundation of AI.

Studying humanity is important. But AI is not (or at least should not
be) a study of people - if it aims to provide practical results then
it is a study of intelligence.

But when we can't agree whether e.g. a termite colony is collectively
"intelligent" or not, how would it be "AI" to accurately model such a
colony's behavior? The only occurrences of "intelligence" which a
vast majority of people will accept to be worthy of the term are those
displayed by humans -- because then "model extroflecting", such an
appreciated mechanism, works fairly well; we can model the other
person's behavior by "putting ourselves in his/her place" and feel
its "intelligence" or otherwise indirectly that way. For non-humans
it only "works" (so to speak) by antroporphisation, and as the well
known saying goes, "you shouldn't antropomorphise computers: they
don't like it one bit when you do".

A human -- or anything that can reliably pass as a human -- can surely
be said to exhibit intelligence in certain conditions; for anything
else, you'll get unbounded amount of controversy. "Artificial life",
where non-necessarily-intelligent behavior of various lifeforms is
modeled and simulated, is a separate subject from AI. I'm not dissing
the ability to abstract characteristics _from human "intelligent"
behavior_ to reach a useful operating definition of intelligence that
is not limited by humanity: I and the AAAI appear to agree that the
ability to build, adapt, evolve and generally modify _semantic models_
is a reasonable discriminant to use.

If what you want is to understand intelligence, that's one thing. But
if what you want is a program that takes dictation, or one that plays
good bridge, then an AI approach -- a semantic model etc -- is not
necessarily going to be the most productive in the short run (and
"in the long run we're all dead" anyway:). Calling program that use
completely different approaches "AI" is as sterile as similarly naming,
e.g., Microsoft Word because it can do spell-checking for you: you can
then say that ANY program is "AI" and draw the curtains, because the
term has then become totally useless. That's clearly not what the AAAI
may want, and I tend to agree with them on this point.

Absolutely - this is IMO almost certainly both why we have a
specialised social intelligence and why we have an excess (in terms of
our ancestors' apparent requirements) of general intelligence.
Referring back to the Baldwin effect, an ability to learn social stuff
is an essential precursor to it becoming innate.

But note we don't need accurate self-awareness to handle this. It
could even be counter productive. What we need is the ability to give
convincing excuses.

What we most need is a model of _others_ that gives better results
in social interactions than a lack of such a model would. If natural
selection has not wiped out Asperger's syndrome (assuming it has some
genetic component, which seems to be an accepted theory these days),
there must be some compensating adaptive advantage to the disadvantages
it may bring (again, I'm sure you're aware of the theories about that).
Much as for, e.g., sickle-cell anemia (better malaria resistance), say.

I suspect, as previously detailed, that the main role of self-awareness
is as a proxy for a model of _other_ people. There would then appear
to be a correlation between the accuracy of the model as regards your
own motivations, and the useful insights it might provide on the various
motivations of others (adaptively speaking, such insights may be useful
by enhancing your ability to cooperate with them, convince them, resist
their selfish attempts at convincing you, etc, etc -- or directly, e.g.
by providing you with a good chance of seducing usefully fertile members
of the opposite sex; in the end, "adaptive" means "leading to enhanced
rates of reproduction and/or survival of offspring").

Absolutely true. Though it seems to me that people are far too good at
empathising with their pets for a claim that human innate mental
models are completely distinct from other animals. I figure there is a

Lots of anthropomorphisation and not-necessarily-accurate projection
is obviously going on. Historically, we bonded as pets mostly with
animals for which such behavior on our part led to reasonably useful
results -- dogs first and foremost (all the rest came far more recently,
after we had developed agriculture, but dogs have been our partners
for a far longer time) -- for obvious reasons of selection.

The only problem is that if you apply social psychology principles to
understand people, you may predict their behaviour quite well but you
absolutely cannot explain your understanding that way - unless, of
course, you like being lynched :-(

Develop a reputation as a quirky, idiosyncratic poet, and you'll be
surprised at how much you CAN get away with -- "present company
excepted" being the main proviso to generally keep in mind there;-).

In contexts where this has worked for me, I would say the final
intuition goes beyond what the original rules are capable of. ie it
isn't just a matter of the steps being held in procedural memory.
Presumably heuristics are learned through experience which are much
better than the ones verbally stated in the original rules.

Yes, this experience does show one limit of _exclusively_ conscious /
verbalized models, of course. I particularly like the way this is
presented in Cockburn's "Agile Software Development" (Addison-Wesley
2002), by the way. Of course, SW development IS a social activity,
but a rather special one.

How sure are you of that? After all, the brain is a massively parallel
machine.

If I did, I would play a far better game of bridge than in fact I
do. Therefore, I don't -- QED;-).
My guess is that even then, there would be more dependence on
sophisticated heuristics than on brute force searching - but I suspect
that there is much more brute force searching going on in people's
minds than they are consciously aware of.

I tend to disagree, because it's easy to show that the biases and
widespread errors with which you can easily catch people are ones
that would not occur with brute force searching but would with
heuristics. As you're familiar with the literature in the field
more than I am, I may just suggest the names of a few researchers
who have accumulated plenty of empirical evidence in this field:
Tversky, Gigerenzer, Krueger, Kahneman... I'm only peripherally
familiar with their work, but in the whole it seems quite indicative.

It IS interesting how often an effective way to understand how
something works is to examine cases where it stops working or
misfires -- "how it BREAKS" can teach us more about "how it WORKS"
than studying it under normal operating conditions would. Much
like our unit tests should particularly ensure they test all the
boundary conditions of operation...;-).


Alex
 

Hans Nowak

This is a great opportunity to turn this into a thread where people swap yummy
recipes. :) In any case, that's more constructive than all that "language X
is better than language Y" drivel. And it tastes better too. :) So I'll start
off with my mom's potato salad:

Ingredients:
potatoes (2 kilos)
meat (1 pound; this should be the kind of beef that comes in little threads)
little silver onions
small pickles
pepper, salt, mayonnaise, mustard

Boil potatoes. Cut meat, potatoes, onions and pickles in little pieces, and
mix everything. Add pepper, salt. Mix with mayonnaise. Add a tiny bit of
mustard. Put in freezer for one day.

There are many variants of this; some people add apple, vegetables, etc.

So, what is YOUR favorite recipe? (Parentheses soup doesn't count.)

Hungrily y'rs,
 

Anton Vredegoor

Stephen Horne said:
Now consider the "Baldwin Effect", described in Steven Pinkers "How
the mind works"...

A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.

Sometimes I wonder whether Neandertals (a now extinct human
subspecies) relied more on innate knowledge than modern Homo
Sapiens. Maybe individual Neandertals were slightly more intelligent
than modern Sapiens, but since they were not so well organized (they
would rather make their own fires than use one common fire for the whole
clan), evolutionary pressure gradually favored modern Sapiens.

Looking at it from this perspective the complete modern human
population suffers from Asperger's syndrome on a massive scale, while a
more natural human species is now sadly extinct. A great loss, and
something to grieve about instead of using them in unfavorable
comparisons to modern society, which I find a despicable practice.

However, let the Aspergers, and maybe later the robots (no innate knowledge
at all?), inherit the Earth, whether programmed with Python's Neandertal-like
bag of unchangeable tricks or with Lisp-like adaptable Asperger's
programming.

Anton
 

Stephen Horne

Sometimes I wonder whether Neandertals (a now extinct human
subspecies) relied more on innate knowledge than modern Homo
Sapiens. Maybe individual Neandertals were slightly more intelligent
than modern Sapiens, but since they were not so well organized (they
would rather make their own fires than use one common fire for the whole
clan), evolutionary pressure gradually favored modern Sapiens.

As I understand it, Neanderthals had a very stable lifestyle and level
of technology for quite a long time while we Cro-Magnons were still
evolving somewhere around the North East African coast. In accordance
with the Baldwin effect, I would therefore expect much more of their
behaviour to be innate.

While innate abilities are efficient and available early in life, they
are also relatively inflexible. When modern humans arrived in Europe,
the climate was also changing. Neanderthals had a lifestyle suited for
woodlands, but the trees were rapidly disappearing. Modern humans
preferred open space, but more importantly hadn't had a long period
with a constant lifestyle and were therefore more flexible.

Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going on,
thus having to figure out everything consciously), I expect you would
probably cling too much to what you know as well.
 

Stephen Horne

Ever noticed how upgrading a PC tends to lead to disaster?

Upgraded to 768MB yesterday, which I need for Windows 2000, but now
Windows 98 (which I also need, sadly) has died. Even with that
system.ini setting to stop it using the extra memory. I'm pretty sure
it's a driver issue, but I haven't worked out which driver(s) are to
blame.

Anyway, I haven't had a chance to read your post properly yet because
I've been farting around with old backups and reinstalls trying to
figure this out, but I will get around to it in the next couple of
days.
 

Anton Vredegoor

Stephen Horne said:
Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going on,
thus having to figure out everything consciously), I expect you would
probably cling too much to what you know as well.

It is tempting to claim that my post was implying Asperger's to be
potentially *more* flexible because of a lack of innateness. However,
knowing next to nothing about Asperger's and making wild claims,
linking it to anthropology and computer science, is a bit prone to
making oneself misunderstood and to possibly hurting people
inadvertently in the process of formulating some consistent theory. So
I'd rather apologize for any inconvenience and confusion produced
so far, and humbly ask that my post be ignored.

Anton
 

Terry Reedy

The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.

and continued:
Never having seen one, or read an independent description, I cannot
confirm or 'dispute' this. But taking this as given, there is the
overlooked matter of price. How many people, Lispers included, are
going to buy, for instance, an advanced, technically excellent,
hydrogen fuel cell car, complete with in-garage hydrogen generator
unit, for, say, $200,000?

I believe these are disputable. The American broadcast industry
switched to color displays in the 50s-60s. Around 1980 there were
game consoles (specialized computers) and small 'general purpose'
computers that piggybacked on color televisions. TV game consoles
thrive today while general PC color computing switched (mid80s) to
computer monitors with the higher resolution needed for text. It was
their use with PCs that brought the price down to where anyone could
buy one.

Did Lisp machines really have GUIs before Xerox and Apple?

Did lisp machine companies make laser printers before other companies
like HP made them for anyone to use? If so, what did they price them
at?

Pascal Costanza said:
Wrong in two ways:

In relation to the question about the would-be Lisp machine industry,
this answer, even if correct, is beside the point. People buy on the
basis of what they think. One answer may be that the LMI failed to
enlighten enough people as to the error of their thoughts.

I wonder about another technical issue: intercompatibility. I
strongly believe that media incompatibility helped kill the once
thriving 8080/Z80/CPM industry. (In spite of binary compatibility,
there were perhaps 30 different formats for 5 1/4-inch floppies.) I
believe the same about the nascent Motorola 68000 desktop Unix
industry of the early 80s. (My work unit has some, and I loved them.)
So I can't help wondering if the LMI made the same blunder.

Did all the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all? Or did they
each adopt proprietary versions so they could monopolize what turned
out to be dried-up ponds? Again, did they all adopt uniform formats
for distribution media, such as floppy disks, so that developers could
easily distribute to all? Or did they differentiate to monopolize?

Terry J. Reedy
 

Dave Kuhlman

Stephen Horne wrote:

[snip]
While innate abilities are efficient and available early in life,
they are also relatively inflexible. When modern humans arrived in
Europe, the climate was also changing. Neanderthals had a lifestyle
suited for woodlands, but the trees were rapidly disappearing.
Modern humans preferred open space, but more importantly hadn't
had a long period with a constant lifestyle and were therefore
more flexible.

Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going
on, thus having to figure out everything consciously), I expect
you would probably cling too much to what you know as well.

Asperger's syndrome? -- I did a search and read about it. And,
all this time I thought I was a *programmer*. If I had only known
that I had Asperger's disorder, I could have saved myself all
those many years of debugging code. It's been fun though,
especially with Python, even if the DSM IV does authoritatively say
that I'm just crazy.

Is there any way that I can pass this off as an efficient
adaptation to my environment (Linux, Python, XML, text processing,
the Web, etc.)? Not likely I suppose.

I look forward to DSM V's definition of PPD (Python programmer's
disorder): persistent and obsessive attention to invisible
artifacts (called variously "whitespace" and "indentation").

Here is a link. Remember, the first step toward recovery is
understanding what you've got:

http://www.udel.edu/bkirby/asperger/aswhatisit.html

From now on, I will not have to worry about how to spell
"programmer". When the form says occupation, I'll just write down
299.80.

From the DSM IV:

=====================================================================
Diagnostic Criteria For 299.80 Asperger's Disorder

A. Qualitative impairment in social interaction, as manifested by at
least two of the following:
1. marked impairments in the use of multiple nonverbal behaviors such
as eye-to-eye gaze, facial expression, body postures, and gestures
to regulate social interaction
2. failure to develop peer relationships appropriate to developmental
level
3. a lack of spontaneous seeking to share enjoyment, interests, or
achievements with other people (e.g. by a lack of showing, bringing,
or pointing out objects of interest to other people)
4. lack of social or emotional reciprocity

B. Restricted repetitive and stereotyped patterns of behavior,
interests, and activities, as manifested by at least one of the
following:
1. encompassing preoccupation with one or more stereotyped and
restricted patterns of interest that is abnormal either in intensity
or focus
2. apparently inflexible adherence to specific, nonfunctional routines
or rituals
3. stereotyped and repetitive motor mannerisms (e.g., hand or finger
flapping or twisting, or complex whole-body movements)
4. persistent preoccupation with parts of objects

C. The disturbance causes clinically significant impairment in social,
occupational, or other important areas of functioning

D. There is no clinically significant general delay in language (e.g.,
single words used by age 2 years, communicative phrases used by age 3
years)

E. There is no clinically significant delay in cognitive development or
in the development of age-appropriate self-help skills, adaptive
behavior (other than social interaction), and curiosity about the
environment in childhood

F. Criteria are not met for another specific Pervasive Developmental
Disorder or Schizophrenia

=====================================================================

Dave
 

Joe Marshall

[some opinions and questions about Lisp Machines]

I'm going to take the questions out of order. I'm also leaving the
crosspost in because this is an immediate response to a Pythonista.
I wonder about another technical issue: intercompatibility. Did all
the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all?

As a first-order approximation, there really was only one Lisp machine
company: Symbolics. Xerox, Lisp Machine Inc. and TI were minor players
in the market.

Nevertheless, the effort to create a common Lisp specification that
would be portable across all lisp implementations produced ....
Common Lisp. This was in the early 80's while the industry was still
growing.
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.

The first principle of marketing is this: the minimum number of
customers needed is one. The Lisp machine customers were typically
large, wealthy companies with a significant investment in research
like petrochemical companies and defense contractors. The Lisp
machine was originally developed as an alternative to the `heavy
metal' of a mainframe, and thus was quite attractive to these
companies. They were quite convinced of the value. The problem was
that they didn't *stay* convinced.
How many people, Lispers included, are going to buy, for instance,
an advanced, technically excellent, hydrogen fuel cell car, complete
with in-garage hydrogen generator unit, for, say, $200,000?

Very few. But consider the Ford Motor Company. They have spent
millions of dollars to `buy' exactly that. There are successful
companies whose entire customer base is Ford.

The Lisp industry was small, no doubt about it, but there was (for
a while) enough of an industry to support a company.
 

Paolo Amoroso

[followup set to comp.lang.lisp only]

Terry said:
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and

LMI may be a confusing acronym for that. LMI (Lisp Machines, Inc.) was
just one of the vendors, together with Symbolics, Texas Instruments,
Xerox and a few minor ones (e.g. Siemens).

As for your point, you may check the book:

"The Brain Makers - Genius, Ego, and Greed in the Quest for Machines
that Think"
H.P. Newquist
SAMS Publishing, 1994
ISBN 0-672-30412-0

that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.

From page 2 of the above mentioned book:

[...] Symbolics [...] had been able to accomplish in those four
years what billion-dollar computer behemoths like IBM, Hewlett
Packard, and Digital Equipment could not: It had brought an
intelligent machine to market.

Never having seen one, or read an independent description, I cannot
confirm or 'dispute' this. But taking this as given, there is the

Here are a few relevant Google queries (I am offline and I don't have
the URLs handy):

lisp machines symbolics ralf moeller museum
minicomputer orphanage [see these PDF documentation sections: mit,
symbolics, ti, xerox]

You may also search for "lisp machine video" at this weblog:

http://lemonodor.com

Did Lisp machines really have GUIs before Xerox and Apple?

Xerox was also a Lisp Machine vendor. If I recall correctly, the first
Lisp Machine was developed at MIT in the mid 1970s, and it had a GUI.

Did all the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all? Or did they

At least early products of major Lisp Machines vendors were
descendants of the CADR machine developed at MIT.


Paolo
 

Rainer Joswig

Terry Reedy said:
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.

Revenues of Symbolics in 1986 were in the range of about 100 million
dollars. This was probably the peak time.
Never having seen one, or read an independent description, I cannot
confirm or 'dispute' this. But taking this as given, there is the
overlooked matter of price. How many people, Lispers included, are
going to buy, for instance, an advanced, technically excellent,
hydrogen fuel cell car, complete with in-garage hydrogen generator
unit, for, say, $200,000?

Customers were defence industries, research labs, animation companies,
etc.

A machine usable for 3d animation from Symbolics was well in the
$100,000 range. Each 3d software module might have been around
$25,000 - remember that was in years 1985 - 1990.

From then prices went down. The mainstream workstation business model
switched rapidly (to Unix workstations) and Symbolics could not
adapt (and succeed) fast enough. They tried by:
- selling a half-assed PC-based solution
- selling VME cards for SUNs
- selling NuBUS cards for Mac II
- and finally selling an emulator for their OS running on DEC Alphas
I believe these are disputable. The American broadcast industry
switched to color displays in the 50s-60s. Around 1980 there were
game consoles (specialized computers) and small 'general purpose'
computers that piggybacked on color televisions. TV game consoles
thrive today while general PC color computing switched (mid80s) to
computer monitors with the higher resolution needed for text. It was
their use with PCs that brought the price down to where anyone could
buy one.

Sure, but Symbolics could do 3d animations in full HDTV quality
in 1987 (or earlier?). I've seen animations by Sony or Smarties
done on Lisp machines. Several TV stations did their broadcast
quality logo animations on Lisp machines. The animations
for the ground breaking TRON movie were done on Lisp machines.
Etc.
Did Lisp machines really have GUIs before Xerox and Apple?

Xerox was producing Lisp machines, too. Of course they
had graphical user interfaces - they were developed at
about the same time as the Smalltalk machines of Xerox.
So, they did not have it before Xerox - they were Xerox. ;-)

MIT Lisp machines had megabit b&w displays with
mouse-driven GUIs before the 80s, IIRC. In the mid-1980s
they switched to a new revolutionary object-oriented graphics
system (Dynamic Windows).
Did lisp machine companies make laser printers before other companies
like HP made them for anyone to use? If so, what did they price them
at?

Symbolics was just reselling laser printers. The Symbolics OS
could output to PostScript sometime in the mid-1980s - the Concordia
system was software for book/manual production and could
produce large-scale hypertext documents (the Symbolics manual set
had almost 10,000 pages), printing to PostScript.

Xerox had of course connected their Lisp machines to their
laser printers.

More stuff on: http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics.html

Remember, most of that is HISTORY.
 

Kenny Tilton

Joe said:
... The Lisp machine customers were typically
large, wealthy companies with a significant investment in research
like petrochemical companies and defense contractors. The Lisp
machine was originally developed as an alternative to the `heavy
metal' of a mainframe, and thus was quite attractive to these
companies. They were quite convinced of the value. The problem was
that they didn't *stay* convinced.

Maybe that was part of the problem: all of the Lisp installed base lived
on an expensive platform (unless compared with big iron). When the AI
projects did not deliver, there was no grass-roots safety net to fall
back on and Lisp disappeared from radar in a wink.

This time Lisp is growing slowly, with an early adopter here and an
early adopter there. And this time Lisp requires /no/ special hardware.
And there is a standard so there is no fragmentation. Except of course
that the first thing anyone does after learning Lisp is start a project
to create a new version of Lisp. :)

--

clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"[If anyone really has healing powers,] I would like to call
them about my knees."
-- Tenzin Gyatso, the Fourteenth Dalai Lama
 

A. Lloyd Flanagan

Francis Avila said:
"Christian Lynbech" <[email protected]> wrote in message
<snip an enormous long bit of post and re-post>

I believe people don't like Lisp because the Lisp community keeps
writing long, whiny, threads about why people don't like Lisp -- and
posting them in groups that concern entirely different languages.
 

Rainer Joswig

"Terry Reedy said:
Did all the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all? Or did they
each adopt proprietary versions so they could monopolize what turned
out to be dried-up ponds? Again, did they all adopt uniform formats
for distribution media, such as floppy disks, so that developers could
easily distribute to all? Or did they differentiate to monopolize?

(LMI is a name of a Lisp machine company)

Well, all did adopt Common Lisp. The reason for Common Lisp
was to come up with a common Lisp and work against the
fragmentation of the Lisp language. Some very large
software packages (like KEE) were able to run on all
of those. But much of the software had been developed
before Common Lisp (mid-80s), back in the 70s.
Btw., floppies were not used in the early times - the
machines had tape drives instead.
 

Pascal Costanza

A. Lloyd Flanagan said:
<snip an enormous long bit of post and re-post>

I believe people don't like Lisp because the Lisp community keeps
writing long, whiny, threads about why people don't like Lisp -- and
posting them in groups that concern entirely different languages.

To recap: Someone asked whether it would make sense to add macros to
Python. He specifically cross-posted this to c.l.l in order to get an
opinion from users of a language that has a long history of supporting
macros.

Some strong claims were made about why macros might have been the reason
that Lisp failed. Lispers dispute both that macros are bad in this way and
that Lisp has failed at all. That's just the natural progression of such
a discussion.


Pascal
 
