BIG successes of Lisp (was ...)


Alex Martelli

Stephen Horne wrote:
...
Sounds a bit like intuition to me.

Not to me. E.g., an algorithm (say one of those for long multiplication)
fits the list of no's (it may have been _invented_ or _discovered_ by some
human using any of those -- or suggested to him or her in a drunken stupor
by an alien visitor -- or developed in other ways yet; but that's not
relevant as to what characteristics the algorithm itself exhibits) but
it's nothing like intuition (humans may learn algorithms too, e.g. the
simple one for long multiplication -- they're learned and applied by
rote, "intuition" may sometimes eventually develop after long practice
but the algorithm when correctly followed gives the right numbers anyway).
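
For concreteness, here is the sort of rote procedure I mean, sketched in
Python (just a minimal sketch of the schoolbook method; the helper name
and every detail are my own illustration, nothing canonical):

def long_multiply(a, b):
    # Schoolbook long multiplication on decimal digit strings: exactly
    # the digit-by-digit, carry-the-rest procedure learned by rote.
    xs = [int(d) for d in reversed(a)]
    ys = [int(d) for d in reversed(b)]
    result = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        carry = 0
        for j, y in enumerate(ys):
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(ys)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return ''.join(str(d) for d in reversed(result))

assert long_multiply("128", "34") == str(128 * 34)
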
Of course it would be nice if
computers could invent a rationalisation, the way that human brains
do.

What? I hear you say...

You must be mis-hearing (can't be me saying it) because I'm quite aware
of the role of rationalization, and even share your hypothesis that it's
adaptive for reasons connected to modeling human minds. If I have a
halfway decent model of how human minds work, my socialization is thereby
enhanced compared to operating without such a model; models that are
verbalized are easier to play what-if's with, to combine with each other,
etc -- they're generally handier in all of these ways than nonverbal,
"intuitional" models. (You focus on the models OTHER people have of
me and our "need for excuses to give to other people", but I think that
is the weaker half of it; our OWN models of people's minds, in my own
working hypothesis, are the key motivator for rationalizing -- and our
modeling people's minds starts with modeling OUR OWN mind).
1. This suggests that the only intelligence is human
intelligence. A very anthropocentric viewpoint.

Of course, by definition of "anthropocentric". And why not? "The
proper study of mankind is man" is after all the central credo
of "humanism" (from "homo", meaning "human being") and the reason
the "humanities" (ditto) used to play a central role in education.

Humanism (besides being the core philosophy that engendered the
Renaissance -- properly accounting for it, since its still-murkish
beginnings in 13th-century Italy) also has an interesting biological
connection: that "proper study of man" may well be the key reason
that made "runaway" brain development adaptive in our far forebears --
at some point, the competitive development of abilities to convince
others, to not be too easily convinced by others, etc, etc, became
an evolutionary pressure for brain development (to the point where
head size became a very serious issue during birth). Of course it
then had interesting side effects outside the area of socialization,
and I understand (the child of a very dear friend of mine having
Asperger syndrome) that Asperger syndrome is very connected to this
area -- what kind of mental approach / modeling is used for purposes
of socialization vs for other purposes.

But the point remains that we don't have "innate" mental models
of e.g. the way the mind of a dolphin may work, nor any way to
build such models by effectively extroflecting a mental model of
ourselves as we may do for other humans. In a way that gives us
more objectivity in the matter (except where we end up projecting
our mental models of humans on non-humans anyway, inappropriately
and ineffectively), and I do see the interest of that in science
or philosophy (perhaps in the very long run with practical returns,
but intrinsically motivated by joy of understanding things _for
the mere sake of understanding_, without thought of returns). Being
an engineer, myself, rather than a scientist or philosopher, I do keep
an eye out on "returns", though (Plato would doubtlessly be utterly
appalled...). I cherish all my living brethren, sure, but my name,
which happens to be the same as that of the author of that "proper
study" quote, DOES after all mean "protector of men" (ok, ok, it's
"men" in the masculine sense, as opposed to "anthropos" being "men"
in the "human beings" sense -- that's why I call myself "Alex",
just the "protector" part of the name, and drop the qualification;-);
marine mammals, silicon circuits, and vortices of pure energy, take
a secondary role in my interests.

2. Read some cognitive neuroscience, some social psychology,
basically whatever you can get your hands on that has cognitive
leanings (decent textbooks - not just pop psychology) - and

Done, a little (I clearly don't have quite as much interest in the
issue as you do -- I spread my reading interests very widely).
then tell me that the human mind doesn't use what you call brute
force methods.

Of course we do -- we can and do learn (by rote -- brute force) such
algorithms as long multiplication, Bayes', etc, and can apply them,
just for example. But I can for example observe the way people play
bridge, and their rationalizations about why they've done X or Y;
and the way GIB plays bridge, and _its_ "rationalizations" (fully
deterministic cause->effect steps, except for the montecarlo sample --
a mere matter of time: there are only C(26,13)=10,400,600 ways two
unseen hands can be dealt -- just a few more turns of Moore's Law
and complete brute-force analysis of the whole range of them will
become quite feasible); and (with the help of some introspection on
how _I_ play bridge and rationalize about it) come to my conclusions
about the mechanisms involved. Bridge has the advantage of having
attracted some very deep thinkers from different areas -- I have
already mentioned Borel, and how important his "Theorie Mathematique
du Bridge" was, but the one I had in mind now was Geza Ottlik --
not very well known in the West, perhaps, but an important figure in
Hungarian literature... with maths and physics as his university
background. A famous quote of his is "the writer should not talk,
should never explain his work" -- but he did not extend this theory
to bridge play. In "Adventures in Card Play", a masterpiece without
equals, he develops, through a series of hands, many with multiple
alternative lines of play and defense which he analyzes carefully,
the full theory of a series of card-play strategies never previously
systematically developed in the vast literature of the game. The
process of "rationalization", the building of a reasonably abstract
but usable theory to account for a vast set of concrete possibilities,
is one fascinating area to study; the play of those hands at the table,
before AND after studying the theory Ottlik developed, are two more
areas of even more fascination in the endless quest to understand
the actual workings of the human mind. Should I ever manage to
recognize, at the table (or virtual table -- these days online bridge
is often more convenient than actual pasteboard-pushing:), the
possibility of e.g. a "backwash squeeze" (one of Ottlik's
inventions/discoveries) -- and how to perform such recognition is a
fourth fascinating subject -- my play of the hand will use completely
different mechanisms from those it would have used before I had read
Ottlik's work (in his articles for Bridge World, before it was
collected into a book).
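
For scale, a minimal Python sketch of the numbers involved and of the
kind of montecarlo deal GIB samples (the dealing scheme below is my own
toy illustration, not GIB's actual code):

import math, random

# Exact count of ways the 26 unseen cards can split into two 13-card hands:
print(math.comb(26, 13))   # 10400600

def sample_layout(unseen_cards, rng=random):
    # One montecarlo deal: shuffle the 26 unseen cards and give 13 to
    # each hidden hand; a full brute-force analysis would enumerate
    # all 10,400,600 such splits instead of sampling.
    cards = list(unseen_cards)
    rng.shuffle(cards)
    return cards[:13], cards[13:]

west, east = sample_layout(range(26))
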

The "brute force" bit would come into play if (ha, fat chance) I
ever recognized that the hand has e.g. two possible lines of play:
I could play for a backwash squeeze which will succeed in case
A, B or C, or I could play for (e.g.) a reverse-dummy which will
succeed in case D, E or F. Now, if I am to compute what is the
best line, I _can_ (if I have enough ability for mental computation:
paper and pen not allowed, much less calculators -- hard to police
in online play, of course) do some mental combinatorial arithmetic
(often including a spot of Bayes theorem if I can infer the chance
that an opponent would have followed a certain line of play if he
or she had holding X, to go back from the line actually followed
to the probability of holdings) and determine the probabilities.
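
A toy instance of that spot of Bayes, with invented numbers: say holding
X is a priori a 40% chance, and an opponent holding X would have chosen
the observed line 90% of the time, against 20% without X:

# P(X | line) = P(line | X)P(X) / (P(line | X)P(X) + P(line | ~X)P(~X))
p_x = 0.40                 # a priori chance of holding X (invented)
p_line_given_x = 0.90      # chance of the observed line with X (invented)
p_line_given_not_x = 0.20  # chance of the observed line without X (invented)
p_x_given_line = (p_line_given_x * p_x) / (
    p_line_given_x * p_x + p_line_given_not_x * (1 - p_x))
print(p_x_given_line)      # 0.75: the observed line makes X much more likely
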

But that's the least of issues in practical play. I may well be
in the upper tenth of a centile among bridge players in terms of
my ability to do such mental computations at the table -- but that
has very little effect in terms of success in at-the-table results.
My published theoretical results (showing how to _brute-force_ do
the probabilistic estimation of "hand strength", a key issue in
the bidding phase that comes before the actual play) are one thing,
what I achieve at the table quite another:).

And I sure DO know I don't mentally deal a few tens of thousands
of possibilities in a montecarlo sample to apply the method I had
my computer use in the research leading to said published results...;-)


Alex
 

Terry Reedy

Scott McKay said:
Not true. The servers do get restarted a couple of times
a day to get new data loads, which are huge.

Does Orbitz have anything to do with Python?
If not, please leave c.l.py off further discussion thereof.
 

John J. Lee

Alex Martelli said:
It's a problem of terminology, no more, no less. We are dissenting on

OK, fair enough.


[...large quantity of text snipped...]

phew, no time to read that! The first line might have been sufficient...


John
 

Paolo Amoroso

Given that this thread is Lisp related, and that Python users have had
even too much exposure to Lisp in other threads, I have set the
followup of this article to comp.lang.lisp only. Please consider
moving this, and all other Lisp-related discussions, to
comp.lang.lisp.

Francis said:
I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that

Why did it take Lisp Machines users over a decade to realize they
didn't like Lisp after all? It looks like Lisp generates pretty strong
feelings pretty fast.

By the way, there were at least Ada, C, Fortran and Prolog compilers
available for Genera, the operating system of Symbolics Lisp
Machines. The C compiler was good enough to port X Window to Genera.


Paolo
 

larry

Here's a more realistic example that illustrates exactly the opposite
point. Take the task of finding the sum of the square of a bunch of
numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable,etc....

No, they think: sum up the square of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).


The reason why the imperative style of programming seems natural is
because that's what people are used to programming in. It isn't
necessarily the way they formulate the problem to themselves. There is
always a translation step.
 

Andrew Dalke

Alex:
But I can for example observe the way people play
bridge, and their rationalizations about why they've done X or Y;

I recall reading that computer generated shuffles, based on PRNGs,
gave different hand distributions than expected by expert bridge players.
That's because at least seven shuffles are needed to remove local
order from the deck (order introduced by the way tricks are gathered,
or by how people arrange the cards in their hands):
http://www.dartmouth.edu/~chance/course/topics/winning_number.html
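
A rough way to see the effect in code -- the riffle below is the
standard Gilbert-Shannon-Reeds model, and "rising sequences" is the
usual statistic for residual order (the seven-shuffle result and the
statistic come from the shuffle literature; the code itself is just my
sketch):

import random

def riffle(deck, rng=random):
    # Gilbert-Shannon-Reeds riffle: cut binomially, then drop cards from
    # each packet with probability proportional to its remaining size.
    cut = sum(rng.random() < 0.5 for _ in deck)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def rising_sequences(deck):
    # Maximal sets of consecutive card values still in relative order;
    # n riffles can produce at most 2**n of them, while a genuinely
    # random 52-card permutation averages about 26.5.
    position = {card: i for i, card in enumerate(deck)}
    return 1 + sum(position[v] > position[v + 1] for v in range(len(deck) - 1))

deck = list(range(52))
for n in range(1, 8):
    deck = riffle(deck)
    print(n, rising_sequences(deck))   # local order persists for several shuffles
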

How much does that skew affect the sorts of bridge play you are
talking about? On the one hand, it seems significant given that
articles like the above said "many bridge players take advantage of the
non-randomness of seemingly shuffled cards". On the other hand,
it doesn't seem like it makes that much difference, since you haven't
mentioned it ("except for the montecarlo sample"). Do bridge players
these days follow the 7 shuffle guideline, or use computer-based
shuffling? Or do the analyses ignore the non-random influence (perhaps
on the belief that considering such is improper)? Or are there
indeed strategies based on good Bayesian or other sound methods
which take that into account? They would of course be invalid
when the cards are properly randomized, but I suspect they give an
edge in most games.

Andrew
(e-mail address removed)
 

E.N. Fan

Does Orbitz have anything to do with Python?
If not, please leave c.l.py off further discussion thereof.

Well, for a start Orbitz could never have been realistically coded
with Python.

The fact that it could be programmed in Lisp is a testament to the
greater power and flexibility of Lisp. No amount of arguing could
speak louder than a proof like that.

jeffrey
 

Andrew Dalke

E.N. Fan:
Well, for a start Orbitz could never have been realistically coded
with Python.

Is your word "realistically" based on hardware limitations or language
limitations? That is, is it a limitation of the language or of
the implementation? Suppose there was a type inferencing Python
implementation with the ability to generate native assembly -- would
it then be realistic? Suppose we wait 5 years (so that hardware
is less of a concern). Would it be realistic then?

Or is it based on social aspects as well? For example, the backers
of the project needed some certainty that it could be done, giving
a good reason to use proven technology over relative newcomers.
The fact that it could be programmed in Lisp is a testament to the
greater power and flexibility of Lisp. No amount of arguing could
speak louder than a proof like that.

What do Travelocity, Expedia, and the other on-line travel systems
use? If it's anything drastically different from a Lisp then your argument
also suggests that other languages (C++? COBOL?, or combinations
like PHP + an Oracle cartridge written in C?) have equal power and
flexibility.

Andrew
(e-mail address removed)
 

Kaz Kylheku

Francis Avila said:
language paradigm. That is, when it becomes more natural to think of
cake-making as

UD: things
Gxyz: x is baked at y degrees for z minutes.
Hx: x is a cake.
Ix: x is batter.

For all x, ( (Ix & Gx(350)(45)) > Hx )

(i.e. "Everything that's a batter and put into a 350 degree oven for 45
minutes is a cake")

[ snip ]
...then lisp will take over the universe.

Lisp does not support that kind of programming out of the box, but
there do exist libraries of reasoning code for Lisp which extend the
language to do this type of thing, e.g. http://lisa.sourceforge.net

When you took that dreaded AI course decades ago in school, the
instructor must have given you a little Prolog-in-Lisp library in
which you did the work.

Hazy recollections of things and events long ago are no substitute for
up-to-date, accurate facts.

Lisp is not a purely functional language; in fact it has excellent
support for imperative as well as object-oriented programming. It
supports assignment to variables, and all kinds of objects that can be
destructively manipulated: arrays, structures, objects, streams, hash
tables, packages and so forth. Nested expressions in Lisp have a
well-defined order of evaluation, so it is easy to get the order right
when side effects are involved, whereas in purely functional
languages, the order is typically unspecified. Lisp has
short-circuiting logical operators, and many imperative statement-like
forms for conditional selection, iteration and the like. Yes, Lisp
also has a form of GOTO. Folks, it doesn't get much more imperative
than that.
 

Alex Martelli

Andrew said:
Alex:

I recall reading that computer generated shuffles, based on PRNGs,
gave different hand distributions than expected by expert bridge players.

Yes, it did, back when computer shuffling was a novelty, say in the
early 70's, right when I was starting. These days, all major championships
have computer shuffling, so all competitive players are long used to it; and
with online bridge also using it, more and more average players have
similarly adjusted. "Wilder" (less balanced) hands are more common these
days (as they follow the theoretical probabilities) than they were in the
'50s and '60s. Studies of the hands from world championships in those
decades could neither confirm nor deny this -- there just were not enough --
and apart from world championships other hands were only recorded
selectively, with a bias towards "interesting" [thus, less balanced] ones,
so, no chance to double check there; but that is the consensus. It seems
to go with the change in top experts' style, from the soundness and prudence
of a Goren or Roth or the Italian Blue Team in the '50s/60s, to the
recklessness of Meckstroth-Rodwell, Zia-Rockwell, or the _current_ Great
Italians these days.

So, imperfect shuffling is not simulated in any computer dealer or analysis
program that I know of (including mine in Python, which I've repeatedly used
at the request of my friend Marty Bergen, a once famous player and now
famous writer, to help me answer his theoretical and probability questions
for his books). People _do_ perform chi square tests and the like on all
kinds of aspects of deals collected from OKbridge and other major online
bridge services, as well as on deals played in all major championships, and
no deviation from theoretical expectations has arisen there.
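
The sort of check involved, sketched with invented counts for the
splits of five outstanding cards (the hypergeometric expectations are
the standard ones; only the observed numbers are made up):

import math

def split_prob(missing, with_west):
    # Hypergeometric: chance that West holds exactly `with_west` of the
    # `missing` outstanding cards, 13 of the 26 unseen cards being his.
    return (math.comb(missing, with_west)
            * math.comb(26 - missing, 13 - with_west)
            / math.comb(26, 13))

# Observed split counts from some batch of 2000 deals (invented numbers):
observed = {0: 41, 1: 280, 2: 679, 3: 672, 4: 289, 5: 39}
n = sum(observed.values())
chi2 = sum((obs - n * split_prob(5, k)) ** 2 / (n * split_prob(5, k))
           for k, obs in observed.items())
print(chi2)   # compare with the chi square table at 5 degrees of freedom
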


Alex
 

Andrew Dalke

Alex:
So, imperfect shuffling is not simulated in any computer dealer or analysis
program that I know of

Since most bridge playing is not in tournament play or other places which
use randomized shuffles, wouldn't people who want an edge in "friendly"
play like ways to maximize the odds of winning?

That link I mentioned,
http://www.dartmouth.edu/~chance/course/topics/winning_number.html
suggests that doing so is strongly frowned upon:

] He said a bridge club in New York State once consulted him, as a
] magician, to find out whether several players were cheating. After
] watching play ''and doing a little thinking in between,'' Dr. Diaconis
] knew what was going on. These players had figured out that the cards
] were not being randomly shuffled, and that they could predict the
] distributions of cards by knowing what the deck looked like at the
] end of the previous hand.

Perhaps there's underground literature describing how to take
advantage of this? :)

Andrew
(e-mail address removed)
 

Paul Rubin

Andrew Dalke said:
I recall reading that computer generated shuffles, based on PRNGs,
gave different hand distributions than expected by expert bridge players.

I don't know about bridge, but I've played enough Freecell to be
convinced that the PRNG it uses is no good and it gets ridiculously
non-random distributions.
 

Michael Walter

larry said:
Here's a more realistic example that illustrates exactly the opposite
point. Take the task of finding the sum of the square of a bunch of
numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable,etc....

No, they think: sum up the square of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).
Well..
sum (map (\x -> x ** 2) [1..10])
in Haskell, or
sum (map ((lambda x: x ** 2), range(1,11)))
in Python. But not really sure what point you're trying to make, oh well :)
 

Greg Ewing (using news.cis.dfn.de)

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI)

I think it's more likely that the specialised hardware
they used couldn't keep up in performance with developments
in the widely-used CPUs from Motorola, Intel, etc. who,
because of their large markets, could afford to pour
lots of effort into making them fast. So there came
a time when it was faster to run a Lisp program on
an off-the-shelf CPU than on a Lisp machine.
 

Kenny Tilton

Scott said:
Not true. The servers do get restarted a couple of times
a day to get new data loads, which are huge.

Because the data chompers are C++ (IIRC). :)
 

frr

I'm currently working on a Common Lisp typesetting system based on cl-pdf.
One example of what it can already do is here:
http://www.fractalconcept.com/ex.pdf

For now I'm working on the layout and low level stuff. But it's a little bit
soon to think that it will go nowhere, as I hope a lot of people will add TeX
compatibility packages for it ;-)

BTW note that I'm not rewriting TeX. It's another design with other
priorities.


What are your priorities then? O:) What kind of license are you using for it?
 

Adam Warner

Hi larry,
Here's a more realistic example that illustrates exactly the opposite
point. Take the task of finding the sum of the square of a bunch of
numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable,etc....

No, they think: sum up the square of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).

Here's my direct translation:
(loop for x in numlist sum (* x x))

Regards,
Adam
 

Robert Klemme

Michael Walter said:
larry said:
Here's a more realistic example that illustrates exactly the opposite
point. Take the task of finding the sum of the square of a bunch of
numbers.
Do people really think to themselves when they do this task:
Umm first make a variable called sum
then set that variable sum to zero.
Then get the next number in the list,
square it,
add it to the old value of sum,
store the resulting value into sum,
then get the next variable,etc....

No, they think: sum up the square of a bunch of numbers.
This has an almost direct translation to the lisp style:
(apply '+ (mapcar #'(lambda(x)(* x x)) numlist)).
Well..
sum (map (\x -> x ** 2) [1..10])
in Haskell, or
sum (map ((lambda x: x ** 2), range(1,11)))
in Python. But not really sure what point you're trying to make, oh well
:)

Is this really a question of the language? In Ruby I'd do:

(0..10).inject(0){|sum,e| sum + e*e}

or

(0..10).map{|i|i*i}.inject(0){|sum,i| sum + i}

Not very intuitive, maybe. But if I extend a standard module slightly

module Enumerable
  def sum(&b)
    inject(0){|sum,i| sum + ( b ? b.call( i ) : i)}
  end
end

I can do

(0..10).sum{|i|i*i}

or

(0..10).map{|i|i*i}.sum

That's better. Of course I could then do

module Kernel
  def sum(e,&b)
    e.sum(&b)
  end
end

And thus write

sum(1..10){|i|i*i}

which is untypical for Ruby but similar to the Lisp form.

IMHO familiarity with a language determines how quickly one can come up
with a solution. It's not the similarity of the code and the problem
statement. So *if* people dislike Lisp, I think there must be another
reason.

Kind regards

robert
 

Marc Battyani

[I've set the Followup-to to comp.lang.lisp only]

frr said:
What are your priorities then? O:) What kind of license are you using
for it?

Easily programmable with a real programming language
Easily embeddable in other software like web servers
Easily extensible
Understandable internals
Good typographic quality

Marc
 

Stephen Horne

Stephen Horne wrote:
...

Not to me. E.g., an algorithm (say one of those for long multiplication)
fits the list of no's (it may have been _invented_ or _discovered_ by some
human using any of those -- or suggested to him or her in a drunken stupor
by an alien visitor -- or developed in other ways yet; but that's not
relevant as to what characteristics the algorithm itself exhibits) but
it's nothing like intuition (humans may learn algorithms too, e.g. the
simple one for long multiplication -- they're learned and applied by
rote, "intuition" may sometimes eventually develop after long practice
but the algorithm when correctly followed gives the right numbers anyway).

What is your definition of intuition?

My dictionary has two definitions, and I'll add a third which seems
fairly common in psychology...

1. instinctive knowledge
2. insight without conscious reasoning
3. knowing without knowing how you know

The first definition isn't required by my typical use of the word - it
may or may not be the case. 'Instinctive' is in any case a poorly
defined word these days - it may or may not refer to innate
characteristics - so I prefer to avoid it and state 'innate'
explicitly if that is what I mean.

The third definition will tend to follow from the second (if the
insight didn't come from conscious reasoning, you won't know how you
know the reasoning behind it).

Basically, the second definition is the core of what I intend and
nothing you said above contradicts what I claimed. Specifically...

....sounds like "knowing without knowing how you know".

Intuitive understanding must be supplied by some algorithm in the
brain even when that algorithm is applied subconsciously. I can well
believe that (as you say, after long practice) a learned algorithm may
be applied entirely unconsciously, in much the same way that (after
long practice) drivers don't have to think about how to drive.

Besides, just because long multiplication tends to be consciously
worked out by humans, that doesn't mean it can't be an innate ability
in either hardware or software.

Take your voice recognition example. If the method is Markov chains,
then I don't understand it as I don't much remember what Markov chains
are - if I was to approach the task I'd probably use a Morlet wavelet
to get frequency domain information and feature detection to pick out
key features in the frequency domain (and to some degree the original
unprocessed waveform) - though I really have no idea how well that
would work.
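
To be a bit more concrete about the approach I'd try (a minimal numpy
sketch of a complex Morlet filter at a single frequency -- all the
parameters are invented, and again, I have no idea how well it would
work):

import numpy as np

def morlet(t, freq, cycles=5.0):
    # Complex Morlet wavelet centred on `freq` Hz: a complex sinusoid
    # under a Gaussian envelope roughly `cycles` cycles wide.
    sigma = cycles / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma ** 2))

fs = 8000.0                                # sample rate, Hz (invented)
t = np.arange(-0.05, 0.05, 1 / fs)         # wavelet support
signal = np.sin(2 * np.pi * 440 * np.arange(0, 0.2, 1 / fs))
# Power of the signal near 440 Hz over time -- one row of a scalogram.
# Repeating this across many frequencies gives the frequency-domain
# picture to run feature detection on.
power = np.abs(np.convolve(signal, morlet(t, 440.0), mode='same')) ** 2
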

However, I don't need any details to make the following
observations...

The software was not 'aware' of the method being used - the Markov
chain stuff was simply programmed in and thus 'innate'. Any 'learning'
related to 'parameters' of the method - not the method itself. And the
software was not 'aware' of the meaning of those parameters - it
simply had the 'innate' ability to collect them in training.

This reminds me a lot of human speech recognition - I can't tell you
how I know what people are saying to me without going and reading up
on it in a cognitive psychology textbook (and even then, the available
knowledge is very limited). I am not 'aware' of the method my brain
uses, and I am not 'aware' of the meaning of the learned 'parameters'
that let me understand a new strong accent after a bit of experience.

The precise algorithms for speech recognition used by IBM's/Dragon's
dictation systems and by the brain are probably different, but to me
fussing about that is pure anthropocentricity. Maybe one day we'll
meet some alien race that uses a very different method of speech
recognition to the one used in our brains.

In principle, to me, the two systems (human brain and dictation
program) have similar claims to being intelligent. Though the human
mind wins out in terms of being more successful (more reliable, more
flexible, better integrated with other abilities).

You must be mis-hearing (can't be me saying it) because I'm quite aware
of the role of rationalization, and even share your hypothesis that it's
adaptive for reasons connected to modeling human minds. If I have a
halfway decent model of how human minds work, my socialization is thereby
enhanced compared to operating without such a model; models that are
verbalized are easier to play what-if's with, to combine with each other,
etc -- they're generally handier in all of these ways than nonverbal,
"intuitional" models. (You focus on the models OTHER people have of
me and our "need for excuses to give to other people", but I think that
is the weaker half of it; our OWN models of people's minds, in my own
working hypothesis, are the key motivator for rationalizing -- and our
modeling people's minds starts with modeling OUR OWN mind).

That might work if people were aware of the rationalisation process -
but remember the cut corpus callosum example. When asked to explain
'why?' the person didn't say 'I don't know' or 'I just felt like it' -
a logical, self-aware-sounding explanation was given even though the
genuine explanation was not available to the side of the brain doing
the explaining.

The side of the brain doing the explaining was aware that the person
had started moving towards the goal (the coffee) but was not aware of
feeling thirsty (unless the experimenters were extremely sloppy in
setting up the test conditions) so the explanation of "I felt thirsty"
shows a *lack* of self awareness.

But giving non-answers such as "I don't know" or "I just felt like it"
tends not to be socially acceptable. It creates the appearance of
evasiveness. Therefore, giving accurate 'self-aware' answers can be a
bad idea in a social context.

This is actually particularly pertinent to autism and Asperger
syndrome. People with these disorders tend to be too literal, and that
can include giving literally true explanations of behaviour. This is
directly comparable with this level of automatic rationalisation (the
autistic may well not have an automatic rationalisation of behaviour)
though of course in most situations it is more about a lack of
anticipation of the consequences of what we say and a lack of ability
to find a better way to say it (or better thing to say).
Of course, by definition of "anthropocentric". And why not?

Because 'the type of intelligence humans have' is not, to me, a valid
limitation on 'what is intelligence?'.

Studying humanity is important. But AI is not (or at least should not
be) a study of people - if it aims to provide practical results then
it is a study of intelligence.
connection: that "proper study of man" may well be the key reason
that made "runaway" brain development adaptive in our far forebears --

Absolutely - this is IMO almost certainly both why we have a
specialised social intelligence and why we have an excess (in terms of
our ancestors' apparent requirements) of general intelligence.
Referring back to the Baldwin effect, an ability to learn social stuff
is an essential precursor to it becoming innate.

But note we don't need accurate self-awareness to handle this. It
could even be counter productive. What we need is the ability to give
convincing excuses.
(to the point where
head size became a very serious issue during birth).

This is also possibly relevant to autism. Take a look here...

http://news.bbc.co.uk/1/hi/health/3067149.stm

I don't really agree with the stuff about experience and learning as
an explanation in this. I would simply point out that the neurological
development of the brain continues after birth (mainly because we hit
that size-of-birth-canal issue some hundreds of thousands of years
ago), and the processes that build detailed innate neural structures
may well be disrupted by rapid brain growth (because evolution has yet
to fix the problems that have arisen at the extreme end of the
postnatal-brain-growth curve).
area -- what kind of mental approach / modeling is used for purposes
of socialization vs for other purposes.

Yes - very much so.
But the point remains that we don't have "innate" mental models
of e.g. the way the mind of a dolphin may work, nor any way to
build such models by effectively extroflecting a mental model of
ourselves as we may do for other humans.

Absolutely true. Though it seems to me that people are far too good at
empathising with their pets for a claim that human innate mental
models are completely distinct from other animals. I figure there is a
small core of innate empathy which is quite widespread (certainly
among mammals) - and which is 'primitive' enough that even those of
us with Asperger syndrome are quite comfortable with it. And on top
of that,
there is a learned (but still largely intuitive) sense about pets that
comes with familiarity.

And as I tried to express elsewhere in different terms, I don't even
think 'innate mental models' is fair - perhaps 'innate mental
metamodels' would be closer.
Done, a little (I clearly don't have quite as much interest in the
issue as you do -- I spread my reading interests very widely).

I have the dubious benefit of a long-running 'special interest' (read
'obsession') in this particular field. The practical benefit is
supposed to be finding solutions to problems, but it tends not to work
that way. Though social psychology obviously turned out to be a real
goldmine - actual objective (rather than self-serving) explanations of
the things going on in people's minds in social situations!!!

The only problem is that if you apply social psychology principles to
understand people, you may predict their behaviour quite well but you
absolutely cannot explain your understanding that way - unless, of
course, you like being lynched :-(
But that's the least of issues in practical play. I may well be
in the upper tenth of a centile among bridge players in terms of
my ability to do such mental computations at the table -- but that
has very little effect in terms of success in at-the-table results.
My published theoretical results (showing how to _brute-force_ do
the probabilistic estimation of "hand strength", a key issue in
the bidding phase that comes before the actual play) are one thing,
what I achieve at the table quite another:).

Yes, but I never said all learned algorithms are intuitive.

This reminds me of something about learning with mnemonics and
methods. I don't remember the source, but the idea was basically that
the mnemonics, step-by-step instructions etc were there to allow a
stage of learning - the stage where you have to think things through.
In time, the explanation goes, you don't need the mnemonics and
step-by-step methods as handling the problem simply becomes natural.

In contexts where this has worked for me, I would say the final
intuition goes beyond what the original rules are capable of, i.e. it
isn't just a matter of the steps being held in procedural memory.
Presumably heuristics are learned through experience which are much
better than the ones verbally stated in the original rules.
And I sure DO know I don't mentally deal a few tens of thousands
of possibilities in a montecarlo sample to apply the method I had
my computer use in the research leading to said published results...;-)

How sure are you of that? After all, the brain is a massively parallel
machine.

Look at what is known about the prefrontal cortex and you may end up
with the impression that you are looking at a blackboard-based expert
system - a set of specialist 'intelligences' which observe a common
working memory and which make additions to that working memory when
they spot problems they are able to solve. The ideas going into that
working memory seem to be the things that we are consciously aware of.
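
In software terms, a blackboard system is just this -- a minimal Python
sketch with toy 'specialists' (my own illustration, nothing to do with
any real cognitive model):

class Blackboard:
    def __init__(self):
        self.facts = set()

    def run(self, specialists):
        # Let every specialist watch the shared working memory and post
        # new facts, until no specialist has anything left to add.
        changed = True
        while changed:
            changed = False
            for specialist in specialists:
                new = specialist(self.facts) - self.facts
                if new:
                    self.facts |= new
                    changed = True

# Toy specialists: each fires only when it spots something it can solve.
def thirst_detector(facts):
    return {"want drink"} if "dry mouth" in facts else set()

def planner(facts):
    return {"walk to the coffee"} if "want drink" in facts else set()

bb = Blackboard()
bb.facts.add("dry mouth")
bb.run([thirst_detector, planner])
print(bb.facts)   # {'dry mouth', 'want drink', 'walk to the coffee'}
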

When information has to make round trips to the working memory, I
would expect it to be pretty slow - just like most conscious thought
processes.

But what if something is being handled entirely within a single
specialist intelligence unit? That unit may well act rather like a
fast 'inner loop' - testing possibilities much quicker than could be
done consciously. That doesn't mean it has to be innate - the brain
does develop (or at least adapt) specialist structures through
learning.

My guess is that even then, there would be more dependence on
sophisticated heuristics than on brute force searching - but I suspect
that there is much more brute force searching going on in people's
minds than they are consciously aware of.
 
