BIG successes of Lisp (was ...)

james anderson

one of the ironic, telling things about that exposition is that the
syntax-error contraposition is not quite right.

Andrew said:
That's an old debate. Here's a counter-response
http://www.prescod.net/xml/sexprs.html

it also misses the point that "abstract" is not "concrete", and that the
surface syntax in the input stream is not everything: an issue which "xml"
will still be fighting long after the last so-encoded bits have faded into
the aether.

analogies apply.

....
 
Duane Rettig

Andrew Dalke said:
Kaz Kylheku:

We have decidedly different definitions of what a "domain-specific
language" means.

Probably not; more likely it is just a different emphasis.
To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)

Your whole article leans heavily toward raising the importance of
syntax. Lispers tend to see it differently. For Common Lispers,
and many other lispers who tend to minimize syntax, if the domain-
specific language already has or is able to have similar syntax
as Lisp, then parsing is an already-solved problem, and one can
just use the CL parser (i.e. read) and move on to other more
important problems in the domain language. But if the syntax
doesn't match, then it really still isn't a big deal; a parser
is needed just as it is in any other language, and one must
(and can) solve that problem as well as the rest of the
domain-specifics. The real question is how quickly you can
finally leave the issue of syntax behind you in your new problem
domain and move on to the problem to solve.
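The same move is available in any language whose reader is exposed to the programmer: if the domain syntax is close enough to the host syntax, the host parser *is* the parser. A hedged sketch of the analogous trick in Python, reusing the standard ast module (the rule text here is purely illustrative, not from the post):

```python
import ast

# A "domain-specific" pricing rule expressed in host-language syntax.
# Instead of writing a parser, reuse the host parser (ast.parse),
# exactly as the Lisp reader is reused for s-exp-shaped domain languages.
source = "discount(total) if total > 100 else total"

tree = ast.parse(source, mode="eval")

# The syntax problem is already solved; we can go straight to work
# on the parsed tree, e.g. collect the names the rule refers to.
names = sorted({n.id for n in ast.walk(tree) if isinstance(n, ast.Name)})
print(names)  # → ['discount', 'total']
```

With parsing out of the way, the remaining effort goes into the domain semantics, which is Duane's point.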

In the early '80s, I did an experiment, using Franz Lisp and Naomi
Sager's (NYU) English Grammar parser from her early 1981 book
"Natural Language Information Processing". I wrote a parser
for English out of her BNF and Franz Lisp's character macros and
the lisp reader. When I joined Franz Inc, I was able to port the
parser to Common Lisp with a little cheating (CL doesn't define
infix-macros like Franz Lisp does, so I had to redefine read-list
in order to make the ' macro work well with the lexicon).

[Unfortunately, I can't release this parser, because in my
correspondences with her, Dr. Sager made it clear that although
the sentences and descriptions in the book are not copyrighted,
the BNF and restrictions are. So although you can see the BNF
nodes represented in a tree below, I won't define the terms;
you'll have to get the book for that...]

The whole point of my experiment was not to write a parser or
to try to parse English, but to show how powerful Lisp's parser
already is. With an input of

"John's going to Mary's house."

for example, with neither John nor Mary being present in the
lexicon, the parser is able to provide the following analysis
(note that the parser was written as a stand-alone program used
in a fashion similar to a scripting language, with no actual
interactive input from the user except via pipe from stdin):

Franz Lisp, Opus 38.89+ plus English Grammar Parser version 0
-> nil
-> nil
-> t
-> t
-> sentence =
(|John's| going to |Mary's| house |.|)
form = <sentence>
revised sentence =
(|John's| going to Mary is house |.|)
form = <sentence>
revised sentence =
(|John's| going to Mary has house |.|)
form = <sentence>
revised sentence =
(John is going to |Mary's| house |.|)
form = <sentence>
found parse 1
1 1 <sentence>
2 2 <introducer>
4 2 <center>
5 3 <assertion>
6 4 <sa>
8 4 <subject>
9 5 <nstg>
10 6 <lnr>
11 7 <ln>
12 8 <tpos>
18 8 <qpos>
20 8 <apos>
22 8 <nspos>
24 8 <npos>
26 7 <nvar>
27 8 <namestg>
28 9 <lnamer>
29 10 <lname>
34 10 n --------------> John
35 10 <rname>
37 7 <rn>
39 4 <sa>
41 4 <tense>
42 5 <null>
43 4 <sa>
45 4 <verb>
47 5 <lv>
49 5 <vvar>
50 6 tv --------------> is
53 4 <sa>
55 4 <object>
56 5 <objectbe>
57 6 <vingo>
58 7 <lvsa>
60 7 <lvingr>
61 8 <lv>
63 8 ving --------------> going
64 8 <rv>
66 9 <rv1>
67 10 <pn>
69 11 <lp>
71 11 p --------------> to
72 11 <nstgo>
73 12 <nstg>
74 13 <lnr>
75 14 <ln>
76 15 <tpos>
77 16 <lnamesr>
78 17 <lname>
83 17 ns --------------> |Mary's|
84 15 <qpos>
86 15 <apos>
88 15 <nspos>
90 15 <npos>
92 14 <nvar>
93 15 n --------------> house
94 14 <rn>
98 7 <sa>
100 7 <object>
101 8 <nullobj>
102 7 <rv>
104 7 <sa>
106 4 <rv>
108 4 <sa>
110 2 <endmark>
111 3 |-.-| --------------> |.|
revised sentence =
(John is going to Mary is house |.|)
form = <sentence>
revised sentence =
(John is going to Mary has house |.|)
form = <sentence>
revised sentence =
(John has going to |Mary's| house |.|)
form = <sentence>
revised sentence =
(John has going to Mary is house |.|)
form = <sentence>
revised sentence =
(John has going to Mary has house |.|)
form = <sentence>
(no more parses count= 1)
->


A more complex structured sentence, which Sager gave in her book,
was "the force with which an isolated heart beats depends on
the concentration of calcium in the medium which surrounds it."
which also was parsed correctly, though I won't show the parse
tree for it here because it is long. I did have trouble with
conjunctions, because they were not covered fully in her book
and involve splicing copies of parts of the grammar together,
and there are a number of "restrictions" (pruning and
well-formedness tests specific to some of the BNF nodes that help
to find the correct parse) which she did not describe in her book.

Again, the point is that syntax and parsing are already-solved
problems, and even problems that don't on the surface look like
problems naturally solved with the lisp reader can come fairly
close to being solved with very little effort. Perhaps we can
thus move a little deeper into the problem space a little faster.
 
james anderson

Andrew said:
Kaz Kylheku:

Compile vs. runtime is an implementation issue. Doesn't
change expressive power, only performance. Type inferencing
suggests that there are other ways to get speed-ups from
dynamic languages.


We have decidedly different definitions of what a "domain-specific
language" means. To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)

that is an inaccurate projection of what "domain-specific" means in a
programming environment like lisp. perhaps it says more about what it would
mean in a programming environment like python? if the author would take the
example of one of the recent discussions which flew by here, e.weitz's
cl-interpol, it would be interesting to read how
Yes, one can support Python in Lisp as a reader macro -- but
it isn't done because Lispers would just write the Python out
as an S-exp. But then it wouldn't be Python, because the domain
language *includes*domain*syntax*.

it exemplifies "writing the [ domain-specific language ] out as an s-exp."
In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.

the more likely approach to "python-in-lisp" would be a
reader-macro/tokenizer/parser/translator which compiled the
"python-domain-specific-language" into s-expressions.

....
 
Brian Kelley

Matthew said:
How are these in any way equivalent? Pascal posted his example with
try...finally and f.close () for a specific reason. In your Python
example, the file is not closed until presumably the GC collects the
descriptor and runs some finalizer (if that is even the case).

The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?

The difference here, as I see it, is that if an exception happens then
the system has to wait for the garbage collector to close the file. In
both examples there was no exception handling after the fact (after the
file is closed). The macro, then, allows execution to continue with the
closed file while the python version stops execution in which case the
file is closed anyway. (unless it is in another thread of execution
in which the file is closed when it goes out of scope)

In either case one still needs to write handling code to support the
failure as this is most likely application specific. Using macros as a
default handler seems very appropriate in lisp. In python, as I
mentioned in the model-centric view I would create a new file object to
support better handling of file-i/o and failures and hence abstract away
the "error-prone" styles you mentioned.

Now, macros really shine when they also use the local scope. However,
unless a macro is actually doing this I see no real difference between
creating a wrapper for an object and a macro:

f = FileSafeWrapper(open(...))

(with-file-open (f ...)

Except that the macros you are describing are part of the common
distribution. I have to write my own FileSafeWrapper for now...
http://www.lispworks.com/reference/HyperSpec/Body/m_w_open.htm#with-open-file

So in fact, you pointed out a bug in Pascal's Python example, and one
that is easy to make. All this error-prone code is abstracted away by
WITH-OPEN-FILE in Lisp.

This is a good thing, naturally. But the examples you have given are
completely do-able (in one form or another) in python as it currently
stands, either by creating a wrapper around a file object that can
properly close down on errors or what not. In fact, this might be a
better abstraction in some cases. Consider:

(with-file-that-also-outputs-to-gui ... )
(with-file-that-also-outputs-to-console ...)

to

(with-open-file (f ...

in this case, f is supplied by some constructor that wraps the file to
output to the gui or standard i/o and is passed around to various
functions. Which is the better solution?

I'm not saying that lisp can't do this, it obviously can, but macros
might not be the appropriate solution to this problem ( they certainly
aren't in python ;) )
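The constructor-supplied version of that idea can be sketched as a small wrapper, in the spirit of FileSafeWrapper (ConsoleTeeFile and report are hypothetical names, not from the post):

```python
import sys

class ConsoleTeeFile:
    """Wrap a file-like object so every write also goes to the console.
    The wrapped object is then passed around like any other file,
    which is the "f is supplied by some constructor" approach."""
    def __init__(self, f, console=sys.stdout):
        self.f = f
        self.console = console

    def write(self, data):
        self.f.write(data)
        self.console.write(data)

    def close(self):
        self.f.close()

# Any function that takes a file-like object now also echoes to the
# console, with no change to the function itself:
def report(out):
    out.write("3 widgets processed\n")
```

The choice between this and a with-file-that-also-outputs-to-console macro is exactly the trade-off Brian is asking about.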

p.s. I really do enjoy programming in lisp, it was my second programming
language after fortran 77.
 
Matthew Danish

The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?

I would see this as a dependence on an implementation artifact. This
may not be regarded as an issue in the Python world, though. (Are you
sure refcounting is used in all Python implementations? It is not that
great of a memory management technique except in certain situations).
One of the arguments against using finalizers to deallocate resources is
that it is unpredictable: a stray reference can keep the resource (and
maybe lock) open indefinitely.

This is not to say that Python couldn't achieve a similar solution to
the Lisp one. In fact, it could get quite nearly there with a
functional solution, though I understand that it is not quite the same
since variable bindings caught in closures are immutable. And it would
probably be most awkward considering Python's lambda.
The difference here, as I see it, is that if an exception happens then
the system has to wait for the garbage collector to close the file. In
both examples there was no exception handling after the fact (after the
file is closed). The macro, then, allows execution to continue with the
closed file while the python version stops execution in which case the
file is closed anyway. (unless it is in another thread of execution
in which the file is closed when it goes out of scope)

I'm not sure I understand this paragraph. The macro only executes the
body code if the file is successfully opened. The failure mode can be
specified by a keyword argument. The usual HANDLER-CASE and
HANDLER-BIND can be used to handle conditions with or without unwinding
the stack (you could fix and continue from a disk full error, for
example). If the stack is unwound out of the macro, by a condition
(abnormally), then there is an attempt to restore the state of the
filesystem (method probably dependent on the :if-exists parameter). If
control exits normally, the file is closed normally.
In either case one still needs to write handling code to support the
failure as this is most likely application specific. Using macros as a
default handler seems very appropriate in lisp.

Not sure what `using macros as a default handler' means.
This is a good thing, naturally. But the examples you have given are
completely do-able (in one form or another) in python as it currently
stands, either by creating a wrapper around a file object that can
properly close down on errors or what not. In fact, this might be a
better abstraction in some cases. Consider:

(with-file-that-also-outputs-to-gui ... )
(with-file-that-also-outputs-to-console ...)

to

(with-open-file (f ...

in this case, f is supplied by some constructor that wraps the file to
output to the gui or standard i/o and is passed around to various
functions. Which is the better solution?

I'm not saying that lisp can't do this, it obviously can, but macros
might not be the appropriate solution to this problem ( they certainly
aren't in python ;) )

The WITH-OPEN-FILE macro is not really an example of a macro that
performs something unique. It is, I find, simply a handy syntactic
abstraction around something that is more complicated than it appears at
first. And, in fact, I find myself creating similar macros all the time
which guide the use of lower-level functions. However, that doesn't
mean, for example, that I would try to defeat polymorphism with macros
(WITH-OPEN-FILE happens to be a very often used special case). I would
write the macro to take advantage of that situation.
 
Jock Cooper

Matthew Danish said:
(call-with-open-file
  #'(lambda (stream)
      ...)
  "somefile"
  :direction :input)

WITH-OPEN-FILE happens to be one of those macros which doesn't require
compile-time computation, but rather provides a convenient interface to
the same functionality as above.

--

Right, if the function is specified as a lambda inside the call then it
*would* have access to the surrounding lexical space.
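That functional shape translates directly to Python as well; a sketch with a hypothetical call_with_open_file helper, where the callable passed in closes over its defining scope just as the lambda above does:

```python
def call_with_open_file(func, path, mode="r"):
    """Open the file, hand the stream to func, and guarantee the close.
    A sketch of the functional core that a WITH-OPEN-FILE-style macro
    would merely put a convenient syntax around."""
    stream = open(path, mode)
    try:
        return func(stream)
    finally:
        stream.close()

# The lambda sees 'prefix' from the surrounding lexical scope:
prefix = ">> "
# call_with_open_file(lambda s: prefix + s.readline(), "somefile")
```

The macro version buys only syntax; the close-on-unwind behavior lives in the function.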

The recent thread on macros has got me thinking more carefully about
when I use macros, and if I could use functions instead.
 
Tayss

Brian Kelley said:
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?

As I understand, you're arguing that it's ok to let Python's
refcounter automagically close the file for you.

Please read this:
http://groups.google.com/groups?hl=...8&safe=off&[email protected]
Erik Max Francis explains that expecting the system to close files
leads to brittle code. It's not safe or guaranteed.

After learning Python, people write this bug for months, until they
see some article or usenet post with the try/finally idiom. This
idiom isn't obvious from the docs; the tutorial doesn't say how
important closing a file is. (I was lucky enough to already know how
exceptions could bust out of code.)
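The idiom in question is short but easy to miss; the sketch below is just the standard try/finally form, nothing specific to the posts above:

```python
def copy_first_line(src_path, dst_path):
    # try/finally guarantees the close even if an exception escapes
    # the body -- no reliance on the refcounter or the GC.
    f = open(src_path)
    try:
        line = f.readline()
    finally:
        f.close()

    out = open(dst_path, "w")
    try:
        out.write(line)
    finally:
        out.close()
```

Without the finally clauses, an exception between open and close leaves the descriptor's fate to the implementation, which is exactly the bug Tayss describes.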

Now, I love Python, but this really is a case where lots of people
write lots of potentially hard-to-reproduce bugs because the language
suddenly stopped holding their hands. This is where it really hurts
Python to make the tradeoff against having macros. The tradeoff may
be worth it, but ouch!
 
Paul Rubin

james anderson said:
the more likely approach to "python-in-lisp" would be a
reader-macro/tokenizer/parser/translator which compiled the
"python-domain-specific-language" into s-expressions.

That's not really feasible because of semantic differences between
Python and Lisp. It should be possible, and may be worthwhile, to do
that with modified Python semantics.
 
Paul Rubin

Brian Kelley said:
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter.

There is nothing in the Python spec that says that. One particular
implementation (CPython) happens to work that way, but another one
(Jython) doesn't. A Lisp-based implementation might not either.
At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it
has worked in my experience 100% of the time and my code is structured
to use (abuse?) this. How is this more difficult?

Abuse is the correct term. If your code is relying on stuff being
gc'd as soon as it goes out of scope, it's depending on CPython
implementation details that are over and above what's spelled out in
the Python spec. If you run your code in Jython, it will fail.
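Even under CPython's refcounting, a single stray reference defeats the "closed when it leaves scope" assumption; a minimal sketch (cache and read_header are illustrative names):

```python
cache = []

def read_header(path):
    f = open(path)
    cache.append(f)        # a stray reference, e.g. kept for logging
    return f.readline()    # 'f' goes out of scope here...

# ...but the file is still open: its refcount never reached zero.
# Under Jython, or any tracing-GC implementation, even the
# no-stray-reference case gives no guarantee about *when* the
# close happens.
```

The file stays open indefinitely, or until someone remembers the cache, which is precisely the unpredictability argument against finalizer-based cleanup.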
 
Brian Kelley

Matthew said:
I would see this as a dependence on an implementation artifact. This
may not be regarded as an issue in the Python world, though.

As people have pointed out, I am abusing the C-implementation quite
roundly. That being said, I tend to write proxies/models (see
FileSafeWrapper) that do the appropriate action on failure modes and
don't leave it up to the garbage collector. Refer to the "Do as I Do, not
as I Say" line of reasoning.
This is not to say that Python couldn't achieve a similar solution to
the Lisp one. In fact, it could get quite nearly there with a
functional solution, though I understand that it is not quite the same
since variable bindings caught in closures are immutable. And it would
probably be most awkward considering Python's lambda.

I don't consider proxies as "functional" solutions, but that might just
be me. They are just another way of generating something other than the
default behavior.

Python's lambda is fairly awkward to start with; it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

foo = lambda x: object.insert(x)
object = OracleDatabase()
foo(x)
object = MySqlDatabase()
foo(x)

But in practice I never write lambdas this way. I always bind them to
a namespace (in this case a class).

class bar:
    def some_fun(x):
        foo = lambda self=self, x: self.object.insert(x)
        foo(x)

Now I could use foo on another object as well.
foo(object, x)
I'm not sure I understand this paragraph. The macro only executes the
body code if the file is successfully opened. The failure mode can be
specified by a keyword argument.

I can explain what I meant with an example: suppose you wanted to tell
the user what file failed to open/write and specify a new file to open.
You will have to write a handler for this and supply it to
(with-open-file ...) or catch the error some other way.
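In Python that interaction has to be written out by hand; a hedged sketch of the "ask for another file" behavior (open_with_retry and the handler argument are hypothetical names):

```python
def open_with_retry(path, ask_for_new_path):
    """Keep retrying the open, asking the caller-supplied handler for a
    replacement path on each failure. A sketch only: unlike Lisp's
    condition system, the stack is unwound on every IOError here."""
    while True:
        try:
            return open(path)
        except IOError as err:
            path = ask_for_new_path(path, err)
```

The handler plays the role that a HANDLER-BIND restart would play on the Lisp side, minus the ability to resume without unwinding.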
> Not sure what `using macros as a default handler' means.

Apologies, I meant to say that writing a macro to handle particular
exceptions in a default, application-wide way is a good thing and
appropriate.
The WITH-OPEN-FILE macro is not really an example of a macro that
performs something unique. It is, I find, simply a handy syntactic
abstraction around something that is more complicated than it appears at
first. And, in fact, I find myself creating similar macros all the time
which guide the use of lower-level functions.

Right. I was only trying to point out, rather lamely I might add,
that the macros I have been seeing I would solve in an object-oriented
manner. This might simply be because python doesn't have macros. But I
like the thought of "here is your fail-safe file object, use it however
you like". It is hard for me to say which is 'better' though; I tend to
use the language facilities available (and abuse them, as pointedly
stated). In fact it took me a while to realize that (with-open-file) was
indeed a macro; it simply was "the way it was done(TM)" for a while.
Certainly, new (and old) users can forget to use their file I/O in the
appropriate macro as easily as forgetting to use try: finally:

In python I use my FileSafeWrapper(...) that ensures that the file is
properly closed on errors and the like. As I stated, though, this wasn't
handed to me by default the way I remember (with-open-file ...) being
from my CLHS days.

So let me ask a lisp question. When is it appropriate to use a macro
and when is it appropriate to use a proxy or polymorphism? Perhaps
understanding this would break my macro stale-mate.

p.s. given some of the other posts, I am heartened by the civility of
this particular thread.
 
Brian Kelley

Tayss said:
http://groups.google.com/groups?hl=...8&safe=off&[email protected]
Erik Max Francis explains that expecting the system to close files
leads to brittle code. It's not safe or guaranteed.

After learning Python, people write this bug for months, until they
see some article or usenet post with the try/finally idiom.

Some more than most. Interestingly, I write proxies that close
resources on failure but tend to let files do what they want.
Now, I love Python, but this really is a case where lots of people
write lots of potentially hard-to-reproduce bugs because the language
suddenly stopped holding their hands. This is where it really hurts
Python to make the tradeoff against having macros. The tradeoff may
be worth it, but ouch!

I chose a bad example in abusing the C-implementation. The main thrust
of my argument is that you don't need macros in this case, i.e. there
can be situations with very little tradeoff.

class SafeFileWrapper:
    def __init__(self, f):
        self.f = f

    def write(self, data):
        try:
            self.f.write(data)
        except:
            self.f.close()
            self.f = None
            raise

    def close(self):
        if self.f:
            self.f.close()
    ...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

So now I just grep through my code for open and replace it with
SafeFileWrapper(open...) and all is well again.

I still have to explicitly close the file though when I am done with it,
unless I don't care if it is open through the application run. But at
least I am guaranteed that the file is closed either when I tell it to
or on an error.

Brian Kelley
 
Brian Kelley

Brian said:
class bar:
    def some_fun(x):
        foo = lambda self=self, x: self.object.insert(x)
        foo(x)

oops, that should be

    foo = lambda x, self=self: self.object.insert(x)
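Spelled out end to end, the corrected default-argument trick looks like this (Inserter and store are illustrative names, not from the thread):

```python
class Inserter:
    def __init__(self, store):
        self.store = store

    def insert_all(self, items):
        # Pre-nested-scopes idiom: bind self at definition time via a
        # default argument, so the lambda can see it without relying
        # on lexical scoping. The defaulted parameter must come last.
        ins = lambda x, self=self: self.store.append(x)
        for x in items:
            ins(x)

db = Inserter([])
db.insert_all([1, 2])
print(db.store)  # → [1, 2]
```

As Marcin notes below, nested scopes (default since Python 2.2) make the self=self binding unnecessary.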
 
Marcin 'Qrczak' Kowalczyk

Python's lambda is fairly awkward to start with; it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.
 
JCM

They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.

Variables in lexically enclosing scopes will be visible, but not
variables in calling frames.
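The distinction can be shown in a few lines (the names here are illustrative):

```python
def make_adder(n):
    # 'n' is visible in the lambda because the lambda is *defined*
    # inside make_adder's scope: lexical scoping, on by default
    # since Python 2.2.
    return lambda x: x + n

add3 = make_adder(3)
print(add3(10))  # → 13

def caller():
    m = 100          # a variable in the *calling frame* only
    return helper()

def helper():
    return m         # NameError when called: 'm' is lexically
                     # enclosing nowhere, only in caller's frame
```

Nested scopes give the first case; nothing in Python gives the second.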
 
Stephen Horne

Sorry for the long delay. Turns out my solution was to upgrade to
Windows XP, which has better compatibility with Windows 98 stuff than
Windows 2000. So I've had some fun reinstalling everything. On the
plus side, no more dual booting.

Anyway...


Stephen Horne wrote:
...

I can accept something like, e.g.:


but they require 'insight' or 'knowing', which are neither claimed
nor disclaimed in the above.

You can take 'insight' and 'knowing' to mean more than I (or the
dictionary) intended, but in this context they purely mean having
access to information (the results of the intuition).

Logically, those words simply cannot mean anything more in this
context. If you have some higher level understanding and use this to
supply the answer, then that is 'conscious reasoning' and clearly
shows that you know something of how you know (i.e. how that
information was derived). Therefore it is not intuition anymore.

Of course the understanding could be a rationalisation - kind of a
reverse engineered explanation of why the intuition-supplied answer is
correct. That process basically adds the 'understanding' after the
fact, and is IMO an everyday fact of life (as I mentioned in an
earlier post, I believe most people only validate and select
consciously from an unconsciously suggested subset of likely solutions
to many problems). However, this rationalisation (even if backdated in
memory and transparent to the person) *is* after the fact - the
intuition in itself does not imply any 'insight' or 'knowing' at any
higher level than the simple availability of information.
There are many things I know,
without knowing HOW I do know -- did I hear it from some teacher,
did I see it on the web, did I read it in some book? Yet I would
find it ridiculous to claim I have such knowledge "by intuition":

Of course. The phrase I used is taken directly from literature, but
the 'not knowing how you know' is obviously intended to refer to a
lack of awareness of how the solution is derived from available
information. Memory is obviously not intuition, even if the context in
which the memory was laid down has been forgotten. I would even go so
far as to suggest that explicit memory is never a part of intuition.
Heuristics (learned or otherwise) are not explicit memories, and
neither is the kind of procedural memory which I suspect plays a
crucial role in intuition.

One thing that has become clear in neuroscience is that almost all
(perhaps literally all) parts and functions of the brain benefit from
learning. Explicit memory is quite distinct from other memory
processes - it serves the conscious mind in a way that other memory
processes do not.

For instance, when a person lives through a traumatic experience, a
very strong memory of that experience may be stored in explicit memory
- but not always. Whether remembered or not, however, that explicit
memory has virtually nothing to do with the way the person reacts to
cues that are linked to that traumatic experience. The kind of memory
that operates to trigger anxiety, anger etc has very weak links to the
conscious mind (well, actually it has very strong ones, but only so
that it can control the conscious mind - not the other way around). It
is located in the amygdala, it looks for signs of danger in sensory
cues, and when it finds any such cues it triggers the fight-or-flight
stress response.

Freudian repression is a myth. When people experience chronic stress
over a period of years (either due to ongoing traumatic experience or
due to PTSD) the hippocampus (crucial to explicit memory) is damaged.
The amygdala (the location of that stress-response triggering implicit
memory) however is not damaged. The explicit memory can be lost while
the implicit memory remains and continues to drive the PTSD symptoms.

It's no surprise, therefore, that recovered memories so often turn out
to simply be false - but still worth considering how this happens.
There are many levels. For instance, explicit memories seem to be
'lossy compressed' by basically factoring out the kinds of context
that can later be reconstructed from 'general knowledge'. Should your
general knowledge change between times, so does the reconstructed
memory.

At a more extreme level, entire memories can be fabricated. The harder
you search for memories, the more they are filled in by made up stuff.
And as mentioned elsewhere, the brain is quite willing to invent
rationalisations for things where it cannot provide a real reason. Add
a psychiatrist prompting and providing hints as to the expected form
of the 'memory' and hey presto!

So basically, the brain has many types of memory, and explicit memory
is different to the others. IMO intuition uses some subset of implicit
memory and has very little to do with explicit memory.
This seems to ignore knowledge that comes, not from insight nor
reasoning, but from outside sources of information (sources which one
may remember, or may have forgotten, without the forgetting justifying
the use of the word "intuition", in my opinion).

Yes, quite right - explicit memory was not the topic I was discussing
as it has nothing to do with intuition.
I do not claim the characteristics I listed:


_contradict_ the possibility of "intuition". I claim they're very
far from _implying_ it.

OK - and in the context of your linking AI to 'how the human brain
works' that makes sense.

But to me, the whole point of 'intuition' (whether in people or, by
extension, in any kind of intelligence) is that the answer is supplied
by some mechanism which is not understood by the individual
experiencing the intuition. Whether that is a built-in algorithm or an
innate neural circuit, or whether it is the product of an implicit
learning mechanism (whether electronic/algorithmic or
neural/cognitive).
In particular, there is no implication of "knowing" in the above.

Yes there is. An answer was provided. If the program 'understood' what
it was doing to derive that answer, then that wouldn't have been
intuition (unless the 'understanding' was a rationalisation after the
fact, of course).

I haven't read this yet, but your description has got my interest.
The software was not built to be "aware" of anything, right. We did
not care about software to build sophisticated models of what was
going on, but rather about working software giving good recognition
rates.

Evolution is just as much the pragmatist.

Many people seem to have an obsession with a kind of mystic view of
consciousness. Go through the list of things that people raise as
being part of consciousness, and judge it entirely by that list, and
it becomes just another set of cognitive functions - working memory,
primarily - combined with the rather obvious fact that you can't have
a useful understanding of the world unless you have a useful
understanding of your impact on it.

But there is this whole religious thing around consciousness that
really I don't understand, to the point that I sometimes wonder if
maybe Asperger syndrome has damaged that too.

Take, for instance, the whole fuss about mirror tests and the claim
that animals cannot be self-aware as they don't (with one or two
primate exceptions) pass the mirror test - they don't recognise
themselves in a mirror.

There is a particular species that has repeatedly failed the mirror
test that hardly anyone mentions. Homo sapiens sapiens. Humans. When
first presented with mirrors (or photographs of themselves), members
of tribes who have had no contact with modern cultures have
consistently reacted much the same way - they simply don't recognise
themselves in the images. Mirrors are pretty shiny things.
Photographs are colourful patterns, but nothing more.

The reason is simple - these people are not expecting to see images of
themselves and may never have seen clear reflected images of
themselves. It takes a while to pick up on the idea. It has nothing to
do with self-awareness.

To me, consciousness and self-awareness are nothing special. Our
perception of the world is a cognitive model constructed using
evidence from our senses using both innate and learned 'knowledge' of
how the world works. There is no such thing as 'yellow' in the real
world, for instance - 'colour' is just the brain's way of labelling
certain combinations of intensities of the three wavebands of light
that our vision is sensitive to.

While that model isn't the real world, however, it is necessarily
linked to the real world. It exists for a purpose - to allow us to
understand and react to the environment around us. And that model
would be virtually useless if it did not include ourselves, because
obviously the goal of much of what we do is to affect the environment
around us.

In my view, a simple chess program has a primitive kind of
self-awareness. It cannot decide its next move without considering how
its opponent will react to its move. It has a (very simple) world
model, and it is aware of its own presence and influence in that world
model.
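
What that paragraph describes is essentially minimax search. A minimal
sketch in Python (a toy number game rather than chess; the game and all
names here are illustrative, not from the discussion):

```python
def minimax(state, depth, my_move, moves, value):
    # The program's world model includes itself: on the opponent's turn
    # (my_move false) it predicts the best reply to its own candidate moves.
    if depth == 0:
        return value(state)
    outcomes = [minimax(s, depth - 1, not my_move, moves, value)
                for s in moves(state)]
    return max(outcomes) if my_move else min(outcomes)

# Toy game: the state is a number, each player adds or subtracts 1,
# and the first player wants the final number to be as large as possible.
best = minimax(0, 2, True, lambda s: [s + 1, s - 1], lambda s: s)
```

Even this tiny searcher cannot pick a move without modelling the
opponent's response to its own actions - which is the "primitive
self-awareness" being claimed.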

Of course human self-awareness is a massively more sophisticated
thing. But there is no magic.

Very likely your software was not 'aware' of anything, even in this
non-magical sense of awareness and consciousness. As you say - "We did
not care about software to build sophisticated models of what was
going on".

But that fits exactly my favorite definition of intuition - of knowing
without knowing how you know. If there were sophisticated models, and
particularly if the software had any 'understanding' of what it was
doing, it wouldn't be intuition - it would be conscious reasoning.
People do have internal models of how people understand speech -- not
necessarily accurate ones, but they're there. When somebody has trouble
understanding you, you may repeat your sentences louder and more slowly,
perhaps articulating each word rather than slurring them as usual: this
clearly reflects a model of auditory performance which may have certain
specific problems with noise and speed.

I disagree. To me, this could be one of two things...

1. A habitual, automatic response to not being heard with no
conscious thought at all - for most people, the most common
reasons for not being understood can be countered by speaking more
loudly and slowly.

2. It is possible that a mental model is used for this and the
decision made consciously, though I suspect the mental model comes
in more as the person takes on board the fact that there is a
novel communication barrier and tries to find solutions.

Neither case is relevant to what I meant, though. People don't
consciously work on recognising sounds nor on translating series of
such sounds into words and sentences - that information is provided
unconsciously. Only when understanding becomes difficult such that the
unconscious solutions are likely to be erroneous is there any
conscious analysis.

And the conscious analysis is not a conscious analysis of the process
by which the 'likely solutions subset' is determined. There is no
doubt 'introspection' in the sense that intermediate results in some
form (which phonemes were recognised, for instance) are
passed on to the conscious mind to aid that analysis, and at that
stage a conscious model obviously comes into play, but I don't see
that as particularly important to my original argument.

Of course people can use rational thought to solve communication
problems, at which point a mental model comes into play, but most of
the time our speech recognition is automatic and unconscious.

Even when we have communications difficulties, we are not free to
introspect the whole speech recognition process. Rather some plausible
solutions and key intermediate results (and a sense of where the
problem lies) are passed to the conscious mind for separate analysis.

The normal speech recognition process is basically a black box. It is
able to provide intermediate results and 'debugging information' in
difficult cases - but there is no conscious understanding of the
processes used to derive any of that. I couldn't tell anything much of
use about the patterns of sound that create each phoneme, for
instance. The awareness that one phoneme sounds rather similar to
another doesn't count, in itself.

BTW - I hope 'phoneme' is the right word. It is intended to refer to
the 'basic' sound components that build up words.
(as opposed to, e.g., the proverbial "ugly American" whose caricatural
reaction to foreigners having trouble understanding English would be
to repeat exactly the same sentences, but much louder:).

I believe the English can outdo any American in the loud-and-slow
shouting at foreigners thing ;-)
Actually it isn't -- if you're aware of certain drastic differences
in the process of speech understanding in the two cases, this may be
directly useful to your attempts of enhancing communication that is
not working as you desire.

Yes, but I was talking about what can or cannot be considered
intelligent. I was simply stating that in my view, a thing that
provides intelligent results may be considered intelligent even if it
doesn't use the same methods that humans would use to provide those
results.

I talk to my mother in a slightly different way to the way I talk to
my father. This is a practical issue necessitated by their different
conversational styles (and the kind of thing that seriously bugs
cognitive theorists who insist despite the facts that people with
Aspergers can never understand or react to such differences). That
doesn't mean that my mother and father can't both be considered
intelligent.
E.g., if a human being with which you're
very interested in discussing Kant keeps misunderstanding each time
you mention Weltanschauung, it may be worth the trouble to EXPLAIN
to your interlocutor exactly what you mean by it and why the term is
important; but if you have trouble dictating that word to a speech
recognizer you had better realize that there is no "meaning" at all
connected to words in the recognizer -- you may or may not be able
to "teach" spelling and pronunciation of specific new words to the
machine, but "usage in context" (for machines of the kind we've been
discussing) is a lost cause and you might as well save your time.

Of course, that level of intelligence in computer speech recognition
is a very long way off.
But, you keep using "anthropocentric" and its derivatives as if they
were acknowledged "defects" of thought or behavior. They aren't.

Not at all. I am simply refusing to apply an arbitrary restriction on
what can or cannot be considered intelligent. You have repeatedly
stated, in effect, that if it isn't the way that people work then it
isn't intelligent (or at least AI). To me that is an arbitrary
restriction. Especially as evolution is a pragmatist - the way the
human mind actually works is not necessarily the best way for it to
work and almost certainly is not the only way it could have worked. It
seems distinctly odd to me to observe the result of a particular roll
of the dice and say "this is the only result that we can consider
valid".
see Timur Kuran's "Private Truths, Public Lies", IMHO a masterpiece
(but then, I _do_ read economics for fun:).

I've not read that, though I suspect I'll be looking for it soon.
But of course you'd want _others_ to supply you with information about
_their_ motivations (to refine your model of them) -- and reciprocity
is important -- so you must SEEM to be cooperating in the matter.
(Ridley's "Origins of Virtue" is what I would suggest as background
reading for such issues).

I've read 'Origins of Virtue'. IMO it spends too much time on the
prisoner's dilemma. I have the impression that either Ridley has little
respect for his readers' intelligence or he had little to say and had
to do some padding. From what Ridley takes a whole book to say, Pinker
covers the key points in a couple of pages.
But if there are many types, the one humans have is surely the most
important to us

From a pragmatic standpoint of getting things done, that is clearly
not true in most cases. For instance, when faced with the problem of
writing a speech recognition program, you and your peers decided to
follow the pragmatic approach and do something different to what the
brain does.
Turing's Test also operationally defines it that
way, in the end, and I'm not alone in considering Turing's paper
THE start and foundation of AI.

Often, the founders of a field have certain ideas in mind which don't
pan out in the long term. When Kanner discovered autism, for instance,
he blamed 'refrigerator' mothers - but that belief is simply false.

Turing was no more omniscient than Kanner. Of course his contribution
to many fields in computing was beyond measure, but that doesn't mean
that AI shouldn't evolve beyond his conception of it.

Evolution is a pragmatist. I see no reason why AI designers shouldn't
also be pragmatists.

If we need a battle of the 'gods', however, then may I refer you to
George Boole who created what he called 'the Laws of Thought'. They
are a lot simpler than passing the Turing Test ;-)
But when we can't agree whether e.g. a termite colony is collectively
"intelligent" or not, how would it be "AI" to accurately model such a
colony's behavior?

When did I claim it would be?
The only occurrences of "intelligence" which a
vast majority of people will accept to be worthy of the term are those
displayed by humans

Of course - we have yet to find another intelligence at this point
that even registers on the same scale as human intelligence. But that
does not mean that such an intelligence cannot exist.
-- because then "model extroflecting", such an
appreciated mechanism, works fairly well; we can model the other
person's behavior by "putting ourselves in his/her place" and feel
its "intelligence" or otherwise indirectly that way.

Speaking as the frequent victim of a breakdown in that (my broken
non-verbal communication and other social difficulties frequently lead
to people jumping to the wrong conclusion - and persisting in that bad
conclusion, often for years, despite clear evidence to the contrary) I
can tell you that there is very little real intelligence involved in
that process. Of course even many quite profound autistics can "put
themselves in his/her place" and people who supposedly have no empathy
can frequently be seen crying about the suffering of others that
neurotypicals have become desensitised to. But my experience of trying
to explain Asperger syndrome to people (which is quite typical of what
many people with AS have experienced) is pretty much proof positive
that most people are too lazy to think about such things - they'd
rather keep on jumping to intuitive-but-wrong conclusions and they'd
rather carry on victimising people in supposed retaliation for
non-existent transgressions as a consequence.

'Intelligent' does not necessarily imply 'human' (though in practice
it does at this point in history), but certainly 'human' does not
imply 'intelligent'.
For non-humans
it only "works" (so to speak) by anthropomorphisation, and as the well
known saying goes, "you shouldn't anthropomorphise computers: they
don't like it one bit when you do".

Of course - but I'm not the one saying that computer intelligence and
human intelligence must be the same thing.
A human -- or anything that can reliably pass as a human -- can surely
be said to exhibit intelligence in certain conditions; for anything
else, you'll get unbounded amount of controversy. "Artificial life",
where non-necessarily-intelligent behavior of various lifeforms is
modeled and simulated, is a separate subject from AI. I'm not dissing
the ability to abstract characteristics _from human "intelligent"
behavior_ to reach a useful operating definition of intelligence that
is not limited by humanity: I and the AAAI appear to agree that the
ability to build, adapt, evolve and generally modify _semantic models_
is a reasonable discriminant to use.

Why should the meaning of the term 'intelligent' be derived from the
meaning of the term 'human' in the first place!

Things never used to be this way. Boole could equate thought with
algebra and no-one batted an eyelid. Only since the human throne of
specialness has been threatened (on the one hand by Darwin's assertion
that we are basically bald apes, and on the other by machines doing
tasks that were once considered impossible for anything but human
minds) did terms like 'intelligence', 'thought' and 'consciousness'
start taking on mystic overtones.

Once upon a time, "computer" was a job title. You would have to be
pretty intelligent to work as a computer. But such people were
replaced by pocket calculators.

People have been told for thousands of years that humanity is special,
created in God's image and similar garbage. Elephants would no doubt be
equally convinced of their superiority, if they thought of such
things. After all, no other animal has such a long and flexible nose,
so useful for spraying water around for instance.

Perhaps such arrogant elephants would find the concept of a hose pipe
quite worrying?

I think what is happening with people is similar. People now insist
that consciousness must be beyond understandability, for example, not
because there is any reason why it should be true but simply because
they need some way to differentiate themselves from machines and apes.
If what you want is to understand intelligence, that's one thing. But
if what you want is a program that takes dictation, or one that plays
good bridge, then an AI approach -- a semantic model etc -- is not
necessarily going to be the most productive in the short run (and
"in the long run we're all dead" anyway:).

I fully agree. And so does evolution. Which is why 99% or more of what
your brain does involves no semantic model whatsoever.
Calling programs that use
completely different approaches "AI" is as sterile as similarly naming,
e.g., Microsoft Word because it can do spell-checking for you: you can
then say that ANY program is "AI" and draw the curtains, because the
term has then become totally useless. That's clearly not what the AAAI
may want, and I tend to agree with them on this point.

Then you and they will be very unhappy when they discover just how
'sterile' 99% of the brain is.
What we most need is a model of _others_ that gives better results
in social interactions than a lack of such a model would. If natural
selection has not wiped out Asperger's syndrome (assuming it has some
genetic component, which seems to be an accepted theory these days),
there must be some compensating adaptive advantage to the disadvantages
it may bring (again, I'm sure you're aware of the theories about that).
Much as for, e.g., sickle-cell anemia (better malaria resistance), say.

There are theories of compensating advantages, but I tend to doubt
them. This is basically a misunderstanding of what 'genetic' means.

First off, to the extent that autism involves genetics (current
assessments claim autism is around 80% genetic IIRC) those genetics
are certainly not simple. There is no single autism gene. Several
'risk factor' genes have been identified, but all can occur in
non-autistic people and none is common to even more than a
'significant minority' of autistic people.

Most likely, in my view, there are two key ideas to think of in the
context of autism genetics. The first is recessive genes. The second
is what I call a 'bad mix' of genes. I am more convinced by the latter
(partly because I thought it up independently of others - yes, I know
that's not much of an argument) so I'll describe that in more detail.

In general, you can't just mutate one gene and get a single change in
the resulting organism. Genes interact in complex ways to determine
developmental processes, which in turn determine the end result.

People have recently, in evolutionary terms, evolved for much greater
mental ability. But while a new feature can evolve quite quickly, each
genetic change that contributes to that feature also has a certain
amount of 'fallout'. There are secondary consequences, unwanted
changes, that need to be compensated for - and the cleanup takes much
longer.

Genes are also continuously swapped around, generation by
generation, by recombination. And particular combinations can have
'unintended' side-effects. There can be incompatibilities between
genes. For evolution to progress to the point where there are no
incompatibilities (or immunities to the consequences of those
incompatibilities) can take a very long time, especially as each
problem combination may only occur rarely.

Based on this, I would expect autistic symptoms to suddenly appear in
a family line (when the bad mix genes are brought together by a fluke
of recombination). This could often be made worse by the general
principle that birds of a feather flock together, bringing more
incompatible bad mix genes together. But as reproductive success drops
(many autistics never find partners) some of the lines simply die out,
while other lines simply separate out those bad mix genes, so that
while the genes still exist most children no longer have an
incompatible mix.

Basically, the bad mix comes together by fluke, but after a few
generations that bad mix will be gone again.

Alternatively, people with autism and Asperger syndrome seem to
consistently have slightly overlarge heads, and there is considerable
evidence of an excessive growth in brain size at a very young age.
This growth spurt may well disrupt developmental processes in key
parts of the brain. The point being that this suggests to me that
autistic and AS people are basically pushing the limit in brain size.
We are the consequence of pushing too fast for too much more mental
ability. We have the combination of genes for slightly more brain
growth, and the genes to adapt developmental processes to cope with
that growth - but we don't have the genes to fix the unwanted
consequences of these new mixes of genes.

So basically, autism and AS are either the leading or trailing edge of
brain growth evolution - either we are the ones who suffer the
failings of 'prototype' brain designs so that future generations may
evolve larger non-autistic brains, or else we are the ones who suffer
the failings of bad mix 'fallout' while immunity to the bad gene
combinations gradually evolves.

In neither case do we have a particular compensating advantage, though
a few things have worked out relatively well for at least some people
with AS over the last few centuries. Basically, you get the prize
while I suffer for it. Of course I'm not bitter ;-)
Lots of anthropomorphisation and not-necessarily-accurate projection
is obviously going on.

Not necessarily. Most of the empathising I was talking about is pretty
basic. The stress response has a lot in common from one species to
another, for instance. This is about the level that body language
works in AS - we can spot a few extreme and/or stereotyped emotions
such as anger, fear, etc.

Beyond that level, I wouldn't be able to recognise empathising with
pets even if it were happening right in front of me ;-)
I tend to disagree, because it's easy to show that the biases and
widespread errors with which you can easily catch people are ones
that would not occur with brute force searching but would with
heuristics. As you're familiar with the literature in the field
more than I am, I may just suggest the names of a few researchers
who have accumulated plenty of empirical evidence in this field:
Tversky, Gigerenzer, Krueger, Kahneman... I'm only peripherally
familiar with their work, but in the whole it seems quite indicative.

I'm not immediately familiar with those names, but before I go look
them up I'll say one thing...

Heuristics are fallible by definition. They can prevent a search
algorithm from searching a certain line (or more likely, prioritise
other lines) when in fact that line is the real best solution.

With human players having learned their heuristics over long
experience, they should have a very different pattern of 'tunnel
vision' in the search to that which a computer has (where the
heuristics are inherently those that could be expressed 'verbally' in
terms of program code or whatever).

In particular, human players should have had more real experience of
having their tunnel vision exploited by other players, and should have
learned more sophisticated heuristics as a result.

I don't believe in pure brute force searching - for any real problem,
that would be an infinite search (and probably not even such a small
infinity as aleph-0). When I say 'brute force' I tend to mean that as
a relative thing - faster searching, less sophisticated heuristics. I
suspect that may not have been clear above.

But anyway, the point is that heuristics are rarely much good at
solving real problems unless there is some kind of search or closure
algorithm or whatever added.
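
A toy illustration of that fallibility (illustrative Python with a
made-up game tree; nothing here comes from the discussion above):

```python
# A tiny two-level game tree: moves from 'start' lead to leaf payoffs.
tree = {
    'start': ['safe', 'risky'],
    'safe':  [3, 4],
    'risky': [0, 10],
}

def exhaustive_best(node):
    # Brute force: follow every line to the end, return the best payoff.
    return max(exhaustive_best(c) if isinstance(c, str) else c
               for c in tree[node])

def heuristic_choice(node):
    # Heuristic: judge each move only by its worst immediate outcome.
    return max(tree[node], key=lambda c: min(tree[c]))

# The exhaustive search finds the 10 down the 'risky' line; the
# cautious worst-case heuristic prunes that line and picks 'safe'.
```

The heuristic's 'tunnel vision' is exactly the kind of systematic bias
an opponent - or an experimental psychologist - can exploit.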

I do remember reading that recognition of rotated shapes shows clear
signs that a search process is going on unconsciously in the mind.
This isn't conscious rotation (the times were IIRC in milliseconds)
but the greater the number of degrees of rotation of the shape, the
longer it takes to recognise - suggesting that subconsciously, the
shape is rotated until it matches the required template.
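
That finding can be caricatured as a search loop (purely illustrative
Python; orientations stand in for shapes, and the template angle is
assumed to be a multiple of the step):

```python
def steps_to_recognise(seen, template, step=15):
    # 'Rotate' the seen orientation in fixed increments until it lines up
    # with the stored template; the step count stands in for recognition
    # time, which grows linearly with the angle of rotation.
    angle, steps = 0, 0
    while (seen + angle) % 360 != template % 360:
        angle += step
        steps += 1
    return steps
```

The larger the rotation, the more iterations before a match - matching
the linear reaction-time curves reported in the experiments.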

So searches do seem to happen in the mind. Though you are quite right
to blame heuristics for a lot of the dodgy results. And while I doubt
that 'search loops' in the brain run through thousands of iterations
per second, with good heuristics maybe even a one iteration per second
(or even less) could be sufficient.

The real problem for someone with AS is that so much has to be handled
by the single-tasking conscious mind. The unconscious mind is, of
course, able to handle a number of tasks at once. If only I could
listen to someone's words and figure out their tone of voice and pay
attention to their facial expression at the same time I'd be a very
happy man. After all, I can walk and talk at the same time, so why not
all this other stuff too :-(
It IS interesting how often an effective way to understand how
something works is to examine cases where it stops working or
misfires -- "how it BREAKS" can teach us more about "how it WORKS"
than studying it under normal operating conditions would. Much
like our unit tests should particularly ensure they test all the
boundary conditions of operation...;-).

That is, I believe, one reason why some people are so keen to study
autism and AS. Not so much to help the victims as to find out more
about how social ability works in people who don't have these
problems.
 
J

Jon S. Anthony

Brian Kelley said:
I chose a bad example on abusing the C-implementation. The main
thrust of my argument is that you don't need macros in this case,
i.e. there can be situations with very little tradeoff.

class SafeFileWrapper:
    def __init__(self, f):
        self.f = f

    def write(self, data):
        try:
            self.f.write(data)
        except:
            self.f.close()
            self.f = None
            raise

    def close(self):
        if self.f:
            self.f.close()
    ...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close() ....
I still have to explicitly close the file though when I am done with it.

It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.
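
Roughly, what WITH-OPEN-FILE guarantees can be sketched in Python with
try/finally (with_open_file here is a hypothetical helper written for
this comparison, not part of any library):

```python
def with_open_file(path, mode, body):
    # Open the file, run the body, and guarantee the close happens
    # exactly once - even when the body raises an exception.
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# All the bookkeeping lives in the helper; the caller just says what to do.
with_open_file("out.txt", "w", lambda f: f.write("A couple of lines\n"))
```

The difference from the macro version is that Lisp lets the body appear
inline instead of being wrapped in a lambda.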


/Jon
 
R

Rainer Deyke

Jon said:
It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.

Personally I'd prefer guaranteed immediate destructors over with-open-file.
More flexibility, less syntax, and it matches what the CPython
implementation already does.
 
B

Brian Kelley

Jon said:
It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.

I can't say that I am completely won over but this is an important
point. Thanks for the discussion.

Brian
 
R

Raymond Wiker

Rainer Deyke said:
Personally I'd prefer guaranteed immediate destructors over with-open-file.
More flexibility, less syntax, and it matches what the CPython
implementation already does.

Right... at least until CPython introduces a more elaborate
gc scheme.

Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.
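
The cyclic-reference caveat is easy to demonstrate (illustrative
Python; the collect() return value is CPython-specific behaviour):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.collect()                 # start from a clean slate
a, b = Node(), Node()
a.other, b.other = b, a      # a reference cycle
del a, b                     # refcounts never reach zero...
unreachable = gc.collect()   # ...only the cycle collector reclaims them
```

If the Nodes held open files, the files would stay open until that
later, non-deterministic collection - which is why relying on prompt
destructors is fragile.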

--
Raymond Wiker Mail: (e-mail address removed)
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
 
R

Rob Warnock

+---------------
| Ah, but in Lisp, this is commonly done at *compile* time. Moreover,
| two or more domain-specific languages can be mixed together, nested in
| the same lexical scope, even if they were developed in complete
| isolation by different programmers. Everything is translated and
| compiled together. Some expression in macro language B can appear in
| an utterance of macro language A. Lexical references across these
| nestings are transparent:
|
| (language-a
|   ... establish some local variable foo ...
|   (language-b
|     ... reference to local variable foo set up in language-a!
|   ))
+---------------

Exactly so!

A concrete example of this is using a macro package such as Tim
Bradshaw's HTOUT together with (say) one's own application-specific
Lisp code. You can blithely nest macro-generated HTML inside Lisp
code inside macro-generated HTML, etc., and have *direct* access to
anything in an outer lexical scope! Big win. Consider the following
small excerpt from <URL:http://rpw3.org/hacks/lisp/appsrv-demo.lhp>
(which has been reformatted slightly to make the nesting more apparent).
Note that any form *not* starting with a keyword switches from the
HTML-generating language ("language-B") to normal Lisp ("language-A")
and that the MACROLET "htm" switches from normal Lisp ("A") back to
HTML generation ("B"):

(lhp-basic-page ()           ; Contains a call of WITH-HTML-OUTPUT.
  ...
  (:table ()
    (:tr ()
      (loop for e from 1 to pows
            and h in headings do
        (let ((p (cond ((= e 1) ":") ((= e pows) "") (t ","))))
          (htm (:th (:nowrap)
                 (fmt h e) p)))))
    (loop for i from 1 to nums do
      (htm
        (:tr (:align "right")
          (loop for e from 1 to pows do
            (htm
              (:td ()
                (princ (expt i e) s))))))))
  ... )

Note how the innermost reference of "i" [in "(expt i e)"] is nested
inside *four* language shifts from the binding of "i" [in the LOOP form],
that is:

(Lisp
  ;; "i" is bound here
  (HTML
    (Lisp
      (HTML
        (Lisp
          ;; "i" is used here
          )))))

Oh, and by the way, all of that HTML-generating code gets expanded
into Lisp code at macro-expansion time [roughly, at compile time].


-Rob
 
