Standards in Artificial Intelligence

Arthur T. Murray

Arthur T. Murray said:
DBH:
Human language, of course. Your system is
very Anglo-centric. I suppose that's because
you probably only know English,
ATM:
+German +Russian +Latin +Greek
[Saepe circumambulo cogitans Latine; ergo scio
me esse. -- "I often walk around thinking in Latin;
therefore I know that I am."]
DBH:
In that case, giving named concepts a numerical index is
an exercise in futility, since there is no 1-1 mapping
between words and concepts in different languages.
ATM:
Such reasoning about "futility" does not apply here,
where we make no attempt at a one-to-one (1-1) mapping.
Above a few yeastlike "starter" concepts in the English
http://mentifex.virtualentity.com/enboot.html bootstrap,
each AI Mind species dynamically assigns a numeric "nen"
(for English) number on the fly to each learned concept.
The proper inter-mind identifier for any concept is not
the English "nen" number or the German "nde" number or
the French "nfr" number or the Japanese "njp" number but --
(are you sitting down and ready to absorb a great shock?)
the natural-language *word* for the communicand concept!
For example, if the word "Fahrvergnügen" has an index of
93824 in the geVocab module, what index(es) does that
correspond to in the enVocab module?
ATM:
http://mentifex.virtualentity.com/standard.html#variable tells
why the German vocabulary module should be called "deVocab"
in accordance with the International Organization for Standardization's
"ISO 639:1988 Extract: Codes for names of language" online at
http://palimpsest.stanford.edu/lex/iso639.html (de = deutsch).
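(As a concrete illustration of the scheme just described, here is a
minimal C++ sketch of per-language vocabulary modules that assign a
concept number on the fly the first time a word is seen, while the
word itself remains the cross-module identifier. The class and member
names are illustrative assumptions, not actual Mentifex code.)

    // Hypothetical sketch: each language module numbers its own concepts.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    class Vocab {
        std::unordered_map<std::string, int> index_;  // word -> concept number
        int next_;                                    // next free number
    public:
        explicit Vocab(int firstFree) : next_(firstFree) {}

        // Return the concept number for a word, assigning one on first sight.
        int numberFor(const std::string& word) {
            auto it = index_.find(word);
            if (it != index_.end()) return it->second;
            return index_[word] = next_++;
        }
    };

    int main() {
        Vocab enVocab(100);  // numbers below 100 reserved for "starter" concepts
        Vocab deVocab(100);  // per ISO 639, "de" = German

        // The numbers are private to each module; the *word* is the
        // inter-module identifier, so no 1-to-1 numeric mapping is needed.
        std::cout << "en 'dog'  -> " << enVocab.numberFor("dog") << '\n';
        std::cout << "de 'Hund' -> " << deVocab.numberFor("Hund") << '\n';
    }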

Normally it would be cheeky and presumptuous for an AI
parvenu and arriviste like myself to make bold to proclaim
a white paper of "Standards in Artificial Intelligence,"
but people began porting my AI Mind code into various
programming languages several years ago, and I had no
choice but to draw up a "Standards" document just to
make things easier for all concerned -- lest people
start "reinventing the wheel" or diverging too far.

If you don't mind my giving you some more ammunition
for things "Mentifex" to sneer at, then please notice
how craftily the new "AI Standards" white paper at
http://mentifex.virtualentity.com/standard.html now
contains the exact same brain-mind ASCII diagram as
found on page 15 (HCI) and on the cover of the AI4U
http://www.amazon.com/exec/obidos/ASIN/0595654371/ book.

Sometimes I wonder if people, especially students,
who see the Mentifex AI Mind diagrams, simply assume
that the depicted layouts are part of the general
background of neuroscience and cognitive science,
without realizing how original the diagrams are.
There must eventually be some quite authoritative
diagrammatic depictions of how the brain-mind is
organized with respect to information-flow, so
we shall see if the 35 AI4U diagrams were right.
[...]
One reason the AI4U book may prove to be valuable
over time is its 35 diagrams:
http://www.amazon.com/exec/obidos/ASIN/0595654371/
DBH:
Ah yes. But then, you should have called it:
"Intelligence in 35 Diagrams".
ATM:
The AI textbook AI4U (AI For You) has 34 mind-module diagrams,
one for each of the 34 chapters -- each about a single module.
http://mentifex.virtualentity.com/ai4u_157.html is diagram #35.

Notice how crafty it is in all the XYZ AI Weblogs (such as
http://mentifex.virtualentity.com/cpp.html -- C++ AI Blog) to
link to "ai4u_157.html" as the "framework" on p. 157 of AI4U,
instead of to "mindloop.html" as the diagram was once called,
because now the (actually quite genuine) impression is given
that everything ties in with the wonderful new AI Mind design
that has metaoneirically been revealed to the world in AI4U.

ATM:
[...]
As you go on to elaborate below, of course the neuronal
fiber holds a concept by storing information in the
*connections*.
DBH:
You're the only person I know that calls them "fibers".
Everyone else calls them "neurons" or "units". When you
say "fiber", it makes me think you are talking about the
axon, and don't really understand how neurons work.
ATM:
Logic dictates that the most important thing about neurons is how
elongated they are. If neurons were all punctiform cells like an
amoeba, there would be no long-distance transmission of signals
and there would be no diachronic persistence of conceptual ideas.
That is to say, the "longness" of neuronal fibers, each tagged
with as many as ten thousand associative tags to other concepts,
allows a concept to be embodied in one or more (redundant) fibers.

http://mentifex.virtualentity.com/theory5.html -- Know Thyself --
the Concept-Fiber Theory of Mind includes speculation that minds
may have evolved when originally dedicated sensory fibers broke
free by genetic saltation from their sensory-only dedication and
stumbled felicitously into a role of holding long-time conceptual
information rather than instantaneous-time sensory information.
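(Read literally, the data structure implied above might look like the
following C++ sketch, in which one concept is embodied redundantly in
several fibers, each carrying associative tags to other concepts. The
type names are my own illustration, not actual Mentifex code.)

    #include <vector>

    struct Fiber {
        int conceptId;              // the single concept this fiber holds
        std::vector<int> tags;      // associative tags to other concept ids
    };

    struct Concept {
        std::vector<Fiber> fibers;  // redundant fibers embodying one concept
    };

    int main() {
        Concept dog;
        dog.fibers.push_back({17, {23, 42}});  // two redundant fibers,
        dog.fibers.push_back({17, {23, 42}});  // same concept, same tags
        dog.fibers.pop_back();  // losing one fiber leaves the concept intact
        return dog.fibers.empty() ? 1 : 0;
    }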

ATM:
DBH:
LOL!!! Well, that's all very poetic, but not very useful for
building an intelligent artifact. I would hardly describe a
separate semantic and syntactic system for natural language
processing a "novel, original contribution". Such a thing is
so obvious that even a rank novice in AI would probably *start*
with such a structure if designing a natural language system.
ATM:
Yes, but where would the novice get such a system if not from AI4U?
Right here and now I would humbly like to ask paper-writers and
book-authors to cite the AI4U mind-diagrams and germane ideas in
their forthcoming publications so as to spread the theory of mind,
with adjustments and even refutations where necessary. I give
blanket permission for anyone anywhere to reproduce the diagrams
and to re-fashion them into line-art far prettier than ASCII.

ATM:
[...]
We have heard a lot of talk about portions of the
Internet coming together to form one giant "Global
Brain" in distributed processing. Uneasiness ensues
among us "neurotheoreticians." How would you like
it if your own brain-mind were distributed across
several earthly continents, with perhaps one lunar lobe,
and old engrams stored four light years away at Alpha
Centauri?
DBH:
LOL!!! You cannot possibly be serious!!! If I thought
you were actually intelligent, I would chalk this up to
satire.
ATM:
Since a brain-mind needs to be local, I am quite serious.

As an aside, please consider the following line of thought.
By the way, this same thing happens to me whenever I try
to read a paragraph or two of Esperanto -- an artificial
language which I do not know myself. As I stare at a
sentence in Esperanto, a very strange thing happens.
[Now remember, I am fluent in Latin, Germanic and Slavic
languages -- the Esperanto guy was Polish -- plus Greek.]
The meanings of the Esperanto words slowly surface in my
polyglot mind as I stare at each succeeding word, and I
feel as if I have known Esperanto automagically for years.
But yet I could never try to read a book in Esperanto --
it would be cranially painful -- like the scientist
father-figure trying to mindmeld with the Krell in
http://us.imdb.com/Title?0049223 -- Forbidden Planet.

Suppose that Reichsfuehrer Ashcroft and his Parteibonzen
were to set up a Total-Information-Awareness AI Mind
that knew everything about every American citizen and
every teenage or older inmate at the Guantanamo K-Z
and the other now emerging American concentration camps,
such as Novo-Sobibor near Baghdad International Airport.

Such an all-seeing, all-knowing Big-Brother-Mind would
only need some sort of RF-ID tag or other chance input
to start thinking about you and your entire life-span:
"As I think about Citizen Held, suddenly I remember...
his fingerprints, uh, wait a minute, his genome... and,
uh, it's in there somewhere... his every Usenet post."

The distributed Held-data are not in the Mind until
they are fetched, but BBN (Big-Brother-Noesis) *feels*
as if it knew the data all along, albeit recalling slowly.

ATM:
DBH:
The fact that you sum up consciousness in 4 weak,
meaningless paragraphs is surpassed only by the fact
that you can't spell "conscius" [sic].
ATM:
I shorten all Wintel filenames to be eight characters or fewer.
The name "conscius.html" happens to be perfectly good Latin.
[...]
Let's do something new for a change. Let's code AI in
many different programming languages and let's let evolve
not only the AI Minds but communities of AI adepts to
tend to the AI Minds and nurture them into full maturity.
DBH:
I intend to do just that, but not using the Mentifex model,
ATM:
Why not? You could make whatever changes you saw as needed,
and then your resulting AI would be a brand new Mind species.
DBH:
and not out in the open. But until such time as I have a
working prototype, I'm not going to ramble on about
speculations and mindless musings. [...]

Arthur

Mark Browne

Arthur,

The first person to create a fully functional Artificial Intelligence will
have to solve all levels, from the simple low level calculation units
through the high level organization and information coding.

The neural network approach seems to be the best method for low level
functions. The work shown in the late David Marr's book "Vision" does an
excellent job of showing how mid level functions may be organized, explored,
and represented. I have not seen a similar method for organizing the high
level functionality. I have read much of what you have published on the
internet. While I disagree with much of what you are doing, your "top level
down" approach has given me much to think about in my own explorations of
cognition. Reading through your material has allowed me to gain experience
with the scope of what is needed, and given some insight into what will be
required. Perhaps you are the only person in the field working at this level
today.

You may wish to take a long hard look at the distributed information
representations inherent in neural networks. You will find that it goes a
long way towards solving some of the intractable problems you are having
with the fiber concept. It is my guess that once you understand how they
work, and what they can do, you will be doing some major rethinking of your
top level functions. Another useful area is the anatomical organization of
the human neural system. You will find that nature has anatomical divisions
that suggest high level organization. The division of function is very
different from what you are proposing. Since the natural system seems to
work, you may wish to spend more time looking at how the problem has been
solved in the human neurocomputer. I make these suggestions for my own
selfish reasons; you are the energizer bunny of the AI world. You have been
promoting your ideas for years; selflessly sharing your ideas with anyone
who would listen. I would like to see you correct some of the difficult
areas and move forward. Your successes would clearly be everyone's gain.

It is easy to criticize; quite another thing to create. It is very easy for
others to nit-pick the things you are doing. I can only assume that the
reason for the criticism is that they themselves have solved the problems
you are examining and are willing to show the better way. I am waiting for
them to post alternative methods for doing these high level functions, or to
suggest alternative ways of parsing and grouping the high level functions.

We are all working for much the same goals, with each coming at it from
their own directions, and with their own motivations. Thank you for making
the effort.

Mark Browne


srgrimm

As long as programming a machine to think remains as a final option for
artificial intelligence, then, although we may develop some very clever
machines, they will never aspire to anything more than the program itself.
Meaning that if real intelligence is to be "acquired", it will not be
through programming but rather, more likely, through a form of hardware
development that has the ability to "tap into" what is now considered to be
an ethereal universal consciousness. Only at this level will a machine begin
to compete with a human intelligence and truly have access to knowledge and
virtually unlimited memory.
We have that ability now, to an extent. However the powerful lure of the
digital domain to create at will, what we believe should be intelligent, is
nothing more than an elaborate tool that is actually holding us back; both
technically and philosophically.
Let's not get too excited about our so-called breakthroughs in
programming... right direction, wrong tool.
Steven R. Grimm
http://biobotics.150m.com

Arthur T. Murray said:
DBH:
Brittle.
ATM:
You are right. It is precariously brittle. That brittleness
is part of the "Grand Challenge" of building a viable AI Mind.
First we have to build a brittle one, then we must trust the
smarter-than-we-are crowd to incorporate fault-tolerance.
DBH:
Language-specific.
ATM:
Do you mean "human-language-specific" or "programming-language"?
With programming-language variables, we have to start somewhere,
and then we let adventitious AI coders change the beginnings.
With variables that lend themselves to polyglot human languages,
we achieve two aims: AI coders in non-English-speaking lands
will feel encouraged to code an AI speaking their own language;
and AI Minds will be engendered that speak polyglot languages.
Obiter dictum -- the Mentifex "Concept-Fiber Theory of Mind" --
http://mentifex.virtualentity.com/theory5.html -- features
a plausible explanation of how to implant multiple Chomskyan
syntaxes and multiple lexicons within one unitary AI Mind.
The AI textbook AI4U page 35 on the English language module --
http://mentifex.virtualentity.com/english.html -- and
the AI textbook AI4U page 77 on the Reify module --
http://mentifex.virtualentity.com/reify.html -- and the
AI textbook AI4U page 93 on the English bootstrap module --
http://mentifex.virtualentity.com/enboot.html -- all show
unique and original diagrams of an AI Mind that contains
the thinking apparatus for multiple human languages --
in other words, an AI capable of Machine Translation (MT).
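(One way to picture the multi-lexicon claim: a shared concept layer with
per-language lexical entries, so that "translation" is lookup through the
concept node. A hedged C++ sketch; the table contents and layout are my
assumptions, not the actual enVocab/deVocab design.)

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // concept id -> (ISO 639 language code -> word)
        std::map<int, std::map<std::string, std::string>> lexicon = {
            {17, {{"en", "dog"}, {"de", "Hund"}, {"fr", "chien"}}},
            {18, {{"en", "cat"}, {"de", "Katze"}, {"fr", "chat"}}},
        };

        // "Machine translation" as lookup through the shared concept layer:
        // find the concept whose English word matches, then emit the German.
        for (const auto& entry : lexicon)
            if (entry.second.at("en") == "dog")
                std::cout << "dog -> " << entry.second.at("de") << '\n';
    }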

DBH:
Non-scalable.
ATM:
Once again, we have to start somewhere. Once we attain
critical mass in freelance AI programmers, then we scale up.

DBH:
You are trying to build something "intelligent", aren't you?
ATM:
http://mentifex.virtualentity.com/mind4th.html -- Machine...
http://mentifex.virtualentity.com/jsaimind.html -- Intelligence.

DBH:
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. [...]
ATM:
http://mentifex.virtualentity.com/newcept.html#analysis
explains that Newconcept calls the English vocabulary
(enVocab) module to form an English lexical node for any
new word detected by the Audition module in the stream of
user input.
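(The control flow described there, reduced to a runnable C++ sketch:
Audition scans the input stream, and any unknown word is handed to
Newconcept, which asks enVocab for a fresh lexical node. The module
names follow the text; the bodies are placeholders of my own.)

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <unordered_map>

    std::unordered_map<std::string, int> enVocab;  // word -> lexical node id
    int nextNode = 1;

    int newConcept(const std::string& word) {      // Newconcept module
        return enVocab.emplace(word, nextNode++).first->second;
    }

    void audition(const std::string& input) {      // Audition module
        std::istringstream stream(input);
        std::string word;
        while (stream >> word)
            if (!enVocab.count(word))              // new word detected
                std::cout << word << " -> node " << newConcept(word) << '\n';
    }

    int main() { audition("hello world hello"); }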

DBH:
Besides the fact that the "enVocab" module is embarrassingly
underspecified, the notion of indexing words is just silly.
ATM:
Nevertheless, here at the dawn of AI (flames? "Bring 'em on.")
we need to simulate conceptual gangs of redundant nerve fibers,
and so we resort to numeric indexing just to start somewhere.

DBH:
If a dictionary were a database, it might be a reasonable idea.
But trying to simulate human speech with a database-like
dictionary is the way of symbolic AI, and the combinatorial
nature of language is going to rear its ugly head when you try
to scale your system to realistic proportions. Hence, why
programs like SHRDLU were good at their blocks worlds,

http://www.semaphorecorp.com/misc/shrdlu.html -- by T. Winograd?
but terrible at everything else. Again, a little history would
do you well. If you want to refer to your text, let's take a
quick look at something you wrote:

6.4. Introduce aspects of massively parallel ("maspar")
learning by letting many uniconceptual filaments on the
mindgrid coalesce into conceptual minigrids that
redundantly hold the same unitary concept as a massively
parallel aggregate with massively parallel associative tags,
so that the entire operation of the AI Mind is massively
parallel in all aspects except such bottleneck factors as
having only two eyes or two ears -- in the human tradition.
Umm...pardon me, but the emperor is wearing no clothes.
"uniconceptual filaments"?
ATM:
Yes. Each simulated nerve fiber holds one single concept.
"conceptual minigrids"?
ATM:
Yes. Conceptual fibers may coalesce into a "gang" or minigrid
distributed across the entire mindgrid, for massive redundancy --
which affords security or longevity of concepts, and which
also aids in massively parallel processing (MPP).
"massively parallel aggregate"?

Where is the glossary for your pig Latin?
How on earth is a programmer supposed to build a
computational model from this fluff? Read your mind?
She certainly can't read your text. This sounds more
like a motivational speech from a pointy-haired boss in a
Dilbert strip than instructions for how to build an "AI Mind".
I would parody it, but you've done a fine job yourself.

ATM:
Ha! You're funny there!
DBH:
Here's the real cheerleading right here:

Then go beyond human frailties and human limitations
by having any number ad libitum of local and remote
sensory input devices and any number of local and
remote robot embodiments and robotic motor
opportunities. Inform the robot of human bondage in
mortal bodies and of robot freedom in possibilities yet
to be imagined.
Wow. I have a warm fuzzy feeling inside. I think I'll stay
up another hour writing more of the Sensorium module.
[...] At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. [...]

ATM:
http://mentifex.virtualentity.com/standard.html#hardware
(q.v.) explains that not "the size of some programmer's
hardware" counts but rather the amount of memory
available to the artificial Mind.
DBH:
The amount of memory is completely irrelevant, since you
have not given enough detail to build a working model.
ATM:
If the AI coder has an opportunity to go beyond 32-bit and
use a 64-bit machine, then he/she/it ought to do it, because
once we arrive at 64-bits (for RAM), we may stop a while.
DBH:
It's like me saying: "If you have a tokamak transverse reactor,
then my spaceship plans will get you to Alpha Centauri in
8 years, but if you only have a nuclear fission drive, then it
will take 10. Oh and drop your carrots and onions in this
big black kettle I have here." Also, the memory space of a
single processor really isn't that important, since a serious
project would be designed to operate over clusters or grids
of processors. But I suppose it never occurred to you that
you might want an AI brain that takes advantage of more
than one processor, huh?
ATM:
The desired "unitariness of mind" (quotes for emphasis) may
preclude using "clusters or grids of processors."
DBH:
I suppose you think the Sony
"Emotion Engine" is what Lt. Cmdr. Data installed so he
could feel human?
There's no doubt it's ambitious. And I have no doubt that
you believe you have really designed an AI mind. However,
I also believe you hear voices in your head and when you
look in the mirror you see a halo. Frankly, your theory has
too much fibre for me to digest.
If I knew what "morking" was, I would probably agree.
However, your first example of someone "morking" on it in
C++ tells me that "morking" isn't really a good thing. At
least not as far as C++ goes. Namely, it more or less proves
that the "interest" in this project mainly consists of the blind
being (b)led by the blind.
This is the only sign of progress you have shown. Without
even looking at the link, I can believe that the "VB Mind"
already has a higher IQ than you.
Oh, I see...so if enough people report on it, then it's "serious"
and should be taken seriously? A lot of people reported on
cold fusion. But I'd take the cold fusion researchers over
you any day of the week.
And what, pray tell, is a "mind species"? Is it subject to
crossover, selection, and mutation?
ATM:
http://www.seedai.e-mind.org tries to track each new species
of AI Mind. We do _not_ want standard Minds; we only wish
to have some standards in how we go about coding AI Minds.
DBH:
LOL!!!! Wow! Whatever you're smoking, it has to be
illegal, because it's obviously great stuff!
Here is an example of "primitive, free AI source code":
10 PRINT "Hello, world!"
See? It's got a speech generation and emotion engine
built right in! And the AI is so reliable, it will never display
a bad attitude, even if you tell it to grab you a cold one
from the fridge. It always has a cheerful, positive
demeanor. It is clearly self-aware, because it addresses
others as being distinct from itself. And it has a theory of
mind, because it knows that others expect a greeting when
meeting for the first time. Unfortunately, it has no memory,
so every meeting is for the first time. However, its output
is entirely consistent, given this constraint. I guess I've
just proved that "AI has been solved in theory"!
I'm still waiting to see *your* mind germinate. I've watched
grass grow faster. While ad homs are usually frowned
upon, I don't see any harm when applied to someone who
cannot be reasoned with anyway. Since you seem to have
single-handedly "solved the AI problem", I'd like to ask
you a few questions I (and I'm sure many others) have.
1) How does consciousness work?
ATM:
Through a "searchlight of attention". When a mind is fooled
into a sensation of consciousness, then it _is_ conscious.
2) Does an AI have the same feeling when it sees red
that I do? How do we know?

ATM:
You've got me there. Qualia totally nonplus me :(
3) How are long-term memories formed?

ATM:
Probably by the lapse of time, so that STM *becomes* LTM.
4) How does an intelligent agent engage in abstract reasoning?

ATM:
Syllogistic reasoning is the next step, IFF we obtain funding.
http://www.kurzweilai.net/mindx/profile.php?id=26 - $send____.
5) How does language work?

ATM:
http://www.amazon.com/exec/obidos/ASIN/0595654371/ -- AI4U.
6) How do emotions work?

ATM:
By the influence of physiological "storms" upon ratiocination.
Please don't refer me to sections of your site. I've seen
enough of your writing to know that the answers to my
questions cannot be found there.
Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human

ATM:
IIRC the problem was with how you stated the question.
DBH:
brains are collections of specialized problem solvers
which must each be understood in considerable detail
in order to produce anything remotely intelligent. You
never gave a rebuttal, which tells me you don't have one.
Why don't you do yourself a favor and start out by
reading Society of Mind, by Minsky. After that, read
any good neurobiology or neuroscience text to see just
how "blank" your brain is when it starts out. Pinker
has several good texts you should read. There's a
reason why he's a professor at MIT, and you're a
crackpot trying to con programmers into fulfilling your
ridiculous fantasies.

Arthur
--
http://mentifex.virtualentity.com/cpp.html -- C++ AI Weblog
http://www.kurzweilai.net/mindx/profile.php?id=26 - Mind-eXchange;
http://www.sl4.org/archive/0205/3829.html -- Goertzel on Mentifex;
http://doi.acm.org/10.1145/307824.307853 -- ACM SIGPLAN: Mind.Forth

Mark Browne

srgrimm said:
As long as programming a machine to think remains as a final option for
artificial intelligence, then, although we may develop some very clever
machines, they will never aspire to anything more than the program itself.
Meaning that if real intelligence is to be "acquired", it will not be
through programming but rather, more likely, through a form of hardware
development that has the ability to "tap into" what is now considered to be
an ethereal universal consciousness. Only at this level will a machine begin
to compete with a human intelligence and truly have access to knowledge and
virtually unlimited memory.
We have that ability now, to an extent. However the powerful lure of the
digital domain to create at will, what we believe should be intelligent, is
nothing more than an elaborate tool that is actually holding us back; both
technically and philosophically.
Let's not get too excited about our so-called breakthroughs in
programming... right direction, wrong tool.
Steven R. Grimm
http://biobotics.150m.com
<snip>
This conjecture flies in the face of what is known about neurological
function. Much of the brain's functions have been mapped in the last 100
years. The subjects have come from a variety of sources - some are due to
strokes, some from horrendous war wounds, some from brain surgery. In each
case, a part of the brain is disabled. The researchers have the opportunity
to see what effect this has on cognition. When you tally up the bits and
pieces that have been described, there is little left of consciousness to be
explained.

It is my opinion that it is not necessary to look for "ethereal universal
consciousness" to explain a simple meat machine.

Mark Browne

Noah Roberts

Mark said:
This conjecture flies in the face of what is known about neurological
function. Much of the brain's functions have been mapped in the last 100
years. The subjects have come from a variety of sources - some are due to
strokes, some from horrendous war wounds, some from brain surgery. In each
case, a part of the brain is disabled. The researchers have the opportunity
to see what effect this has on cognition. When you tally up the bits and
pieces that have been described, there is little left of consciousness to be
explained.

It is my opinion that it is not necessary to look for "ethereal universal
consciousness" to explain a simple meat machine.

Meat machine, or for that matter any other purely biological
explanation for thought, fails to account for choice. If all thought
is simple chemical reactions or neural impulses then there is no way for
you to choose anything, the neural impulses do that for you. That being
the case then how can you be responsible for anything that you do? If,
on the other hand, we can make choices then what gives us that ability?
What allows us to manipulate the neural impulses to fire the way WE
want them to?

Mark Browne

Noah Roberts said:
Meat machine, or for that matter any other purely biological
explanation for thought, fails to account for choice. If all thought
is simple chemical reactions or neural impulses then there is no way for
you to choose anything, the neural impulses do that for you. That being
the case then how can you be responsible for anything that you do? If,
on the other hand, we can make choices then what gives us that ability?
What allows us to manipulate the neural impulses to fire the way WE
want them to?

This choice question is a common stumbling block for many people.

In your question, you state: "the way WE want them to" without making the
connection that this is how our programs run. We get our programming from
our culture, our learning, and our instincts. This forms our belief systems
and values. Without learned cultures, people are basically animals. There
have been enough cases of feral humans to validate this fact. Our
programming is combined with our short and long term experience to make
judgments and decisions as necessary. Since any two people have different
backgrounds they will not necessarily make the same decisions.

I have seen a goodly number of computer programs that are able to follow
their program(s) and captured data to make a choice. The data may be
complicated enough that an outsider is not able to predict the outcome
before the fact. Many games now have good "AI" players that have very rich
behavior, and are not very predictable. There is no theoretical reason that
an AI could not have rich enough programming to seem to have free will.

As far as the moral question about free will - I consider it to be as
relevant as "How many angels can dance on the head of a pin?" or "What will
an android consciousness feel like?" It has no practical bearing on the
creation of an artificial intelligence.

As far as responsibility and values - this is one of the functions of
culture. There is little that is intrinsic in human morals. Behavior that is
perfectly acceptable in one time and place is not in others. Culture teaches
acceptable behavior and conditions adherence to said behavior.

Mark Browne

David B. Held

Noah Roberts said:
[...]
Meat machine, or for that matter any other purely
biological explaination for thought, fails to account
for choice.

You mean "free will". Define a test to detect the presence
of free will in a decision-making agent.
If all thought is simple chemical reactions or neural impulses

Well, that's the rub, isn't it? Neither are "simple". If they
were, humans wouldn't appear to be so intelligent.
then there is no way for you to choose anything,

For a suitable definition of "choose", which in this case, means
"freely choose".
the neural impulses do that for you.

Who is this "you", other than the very neural impulses of which
you speak?
That being the case then how can you be responsible for
anything that you do?

In the sense that you might have chosen differently, you can't
be responsible, since you couldn't have chosen differently.
But that isn't a very useful definition of "responsibility", is it?
If, instead, we define responsibility as: "accepting corrective
action", then it doesn't matter if the decision-making agent
makes free choices or determined ones, does it? The end
result of modifying behaviour towards a more desirable end
is achieved, right? We punish people because we can't do
otherwise. Ponder that.
If, on the other hand, we can make choices then what gives
us that ability? What allows us to manipulate the neural
impulses to fire the way WE want them to?

Your Deus ex Machina(R) Neuro-Soul 3000 Control
Panel(TM), of course. Don't you have the latest model?

Dave

David B. Held

Mark Browne said:
The first person to create a fully functional Artificial Intelligence
will have to solve all levels, from the simple low level
calculation units through the high level organization and
information coding.

It depends on what people accept as being "intelligent". I
suspect it is possible to create something that is deemed
intelligent that does not deal with low-level issues like
sensory analysis.
[...]
I have not seen a similar method for organizing the high
level functionality.

That's because nobody knows what the high level
functionality looks like in enough detail to design a
complete model.
I have read much of what you have published on the
internet.

Wow. I must have a low pain tolerance.
While I disagree with much of what you are doing,
your "top level down" approach has given me much
to think about in my own explorations of cognition.

I don't think there's anything wrong with doing top-down
thinking about cognition. I *do* think there's something
wrong with claiming that "AI has been solved in theory".
Reading through your material has allowed me to gain
experience with the scope of what is needed, and given
some insight in what will be required.

I don't think Mr. Murray gives the half of it. And to that
end, I think his work does more harm than good, because
it possibly deceives people into thinking that intelligence
is much simpler than it really is. I would be quite
embarrassed to be a human if I found out my mind was
as simple as Mr. Murray's ASCII diagrams.
Perhaps you are the only person in the field working at
this level today.

I seriously doubt it. I strongly suspect that a lot of people
are working on AI and not talking about it. It's a little like
cryptography. There is much more to be gained by not
sharing than by sharing. The first person with a good,
working AI is going to introduce an immense imbalance
of power in the world the likes of which has not been
seen since the dawn of the nuclear age. In an age where
nuclear weapons are merely a posturing tool for nations
that have too much to lose by using them, it is information
that wields true power, and a general-purpose AI is the
ultimate tool for wielding that power. Given all that, it
is simply inconceivable that nobody else is thinking at the
high level, and altogether expected that they wouldn't be
talking about it.
[...]
You have been promoting your ideas for years; selflessly
sharing your ideas with anyone who would listen.

I'm not so sure what is so "selfless" about $30/book. Perhaps
you could enlighten me? Note that every post of his in this
group has contained multiple links to his book, or excerpts
of his book. Even the most ruthless Madison Avenue
marketer would blush with shame at this bald-faced ad
campaign.
I would like to see you correct some of the difficult
areas and move forward.

From where I'm sitting, the "difficult areas" cover 99% of his
"theory". Or did you not read the part where he said that he
thinks most of cognition is memory? Maybe you missed the
irony of the grandmother cell discussion compared to the
numerical indexing of words? Perhaps you were thinking
of contributing to the Perl version of the Sensorium module?
Your successes would clearly be everyone's gain.

No, his "successes" appear to drain $30 from people's wallets.
In the accounting classes I took, that isn't a "gain", although
by Arthur Andersen's "New Accounting" (aka "New Math"),
it could probably be contrived that way.
It is easy to criticize; quite another thing to create.

It is easy to play the piano. It is difficult to play the piano
well. It is easy to paint a picture. It is difficult to paint well.
It is easy to write a story. It is difficult to write a story well.
It is easy to program a computer. It is difficult to program a
computer well. The difference between doing something
and doing it well is knowing when you're *not* doing it well.
For many tasks, it is obvious when that occurs. When it
comes to designing an AI, apparently it is not so obvious.
If Mr. Murray truly is serious about designing an AI *well*,
he had better take heed of the thousands of man-hours of
research that is directly apropos to his work. That will tell
him when his designs are not going *well*. Since he refuses
to "pollute" his mind with other ideas, I am helping him out.
I don't think it's a stretch to say that I'm doing him a service
while at the same time doing him a disservice. Merely by
reading my posts enough to reply to them (to defend his
position and bolster his book sales), he is learning new
things about the state of cognitive science that he obviously
didn't spend the time to learn by buying *someone else's*
book for $30. Of course, the disservice is to try to
persuade other readers that AI4U is a complete waste of
money. But I'm mostly letting Arthur do the dissuading,
and I think he's doing a great job of it.
It is very easy for others to nit-pick the things you are
doing.

When one is doing so many things wrong, it is not only
easy, but almost compulsive to nitpick.
I can only assume that the reason for the criticism is that
they themselves have solved the problems you are
examining and are willing to show the better way.

If I had a better way, I would show it by controlling the
world, just like anyone who had a general-purpose AI.
The real reason for the criticism is to expose Arthur for
the crackpot that he is, in a way that is so painful and
humiliating that he stops spamming the net (including
Usenet) with his drivel. Oh, and it's really fun. I suppose
you think that one cannot criticize unless one has a solution?
Let me dispel that fallacy with a simple illustration. I
don't know the exact population of China, but I can
show that it is not 0. Therefore, if Arthur were to claim
that the population of China is 0, I could criticize the
claim *without knowing the solution*. That's because
it only takes one counterexample to falsify an hypothesis.
It doesn't require a solution.

Arthur himself appreciates the value of criticism. After all,
he has memorized Occam's Razor in the original Latin.
And what is the purpose of Occam's Razor other than to
criticize extravagant theories? Trim the fat. Unfortunately,
despite the high fiber content, when you trim the fat off the
Mentifex pig, you're only left with the ghost of Arthur.
I am waiting for them to post alternative methods for
doing these high level functions, or to suggest alternative
ways of parsing and grouping the high level functions.

Well, for one, I don't see any benefit to performing sensory
analysis serially, in a loop. And if one does so for practical
purposes, I don't see the benefit of specifying a particular
order. After all, I'm pretty sure there isn't an ordering
dependency in my brain that I can only smell things after
I've heard something. That is to say, I believe that Arthur
has overspecified operational ordering (wow, that is a
syllable-rich alliteration). Thus, my first recommended
change would be to switch to a stochastic design. Why?
You can find out all about the benefits of stochastic over
synchronous designs in the ANN literature. The digital
nature of synchronous updates can lead to weird
deadlocks and loops and other undesirable artifacts of
simulation. Stochastic update not only avoids such issues,
it also introduces an element of pseudo-randomness that
might be desirable in an entity that is purported to be
"intelligent". After all, we expect intelligent agents to be
original, and by "original", we really mean "random".
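(To make the synchronous-versus-stochastic point concrete: in the toy
Hopfield-style net below, the mutually inhibitory units would flip
forever under lockstep updates, but settle when units are visited one
at a time in random order. Weights and sizes are arbitrary illustrations.)

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        const int n = 4;
        std::vector<std::vector<double>> w(n, std::vector<double>(n, 0.0));
        w[0][1] = w[1][0] = 1.0;   // units 0 and 1 excite each other
        w[2][3] = w[3][2] = -1.0;  // units 2 and 3 inhibit each other
        std::vector<int> s = {1, -1, 1, 1};

        std::mt19937 rng(42);
        std::vector<int> order(n);
        std::iota(order.begin(), order.end(), 0);

        for (int sweep = 0; sweep < 10; ++sweep) {
            std::shuffle(order.begin(), order.end(), rng);  // stochastic order
            for (int i : order) {                           // one unit at a time
                double net = 0.0;
                for (int j = 0; j < n; ++j) net += w[i][j] * s[j];
                s[i] = net >= 0 ? 1 : -1;
            }
        }
        for (int v : s) std::cout << v << ' ';  // the net has settled
        std::cout << '\n';
    }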
We are all working for much the same goals, with each
coming at it from their own directions, and with their
own motivations. Thank you for making the effort.

If you appreciate his contribution so much, why don't you
thank him with your wallet? He even provided a link!!

Dave

David B. Held

----- Original Message -----
From: "Arthur T. Murray" <[email protected]>
Newsgroups:
bionet.neuroscience,comp.ai.genetic,comp.ai.nat-lang,comp.ai.neural-nets,comp.lang.c++
Sent: Tuesday, September 16, 2003 1:17 PM
Subject: Re: Standards in Artificial Intelligence

[...]
Above a few yeastlike "starter" concepts in the English
http://mentifex.virtualentity.com/enboot.html bootstrap,
each AI Mind species dynamically assigns a numeric
"nen" (for English) number on the fly to each learned
concept.

As far as I can tell, the indexes are an implementation
detail that has no relevance to AIs or minds. It is a lot
like specifying a high-level design for a 747 and throwing
in that the seats must be blue with a red stripe down the
middle.
[...]
Normally it would be cheeky and presumptuous for an
AI parvenu and arriviste like myself to make bold to
proclaim a white paper of "Standards in Artificial
Intelligence,"

Yes, considering that standards are usually drawn up for
entities which already exist.
[...]
Sometimes I wonder if people, especially students,
who see the Mentifex AI Mind diagrams, simply assume
that the depicted layouts are part of the general
background of neuroscience and cognitive science,

Don't worry. Students who are that stupid have to
wear helmets and ride the short bus to school.
without realizing how original the diagrams are.

Original != useful.
There must eventually be some quite authoritative
diagrammatic depictions of how the brain-mind is
organized with respect to information-flow, so
we shall see if the 35 AI4U diagrams were right.

Actually, there are. Try looking at any neurophysiology
text to see some from your very own...err...from the
brains of your fellow humans. Of all the ones I've
seen, none remotely resemble your ASCII diagrams.
[...]
Logic dictates that the most important thing about neurons
is how elongated they are. If neurons were all punctiform
cells like an amoeba, there would be no long-distance
transmission of signals and there would be no diachronic
persistence of conceptual ideas.

LOL!!! So I suppose the number of dendrites or synapses
or the configuration of local connections are all just
irrelevant details compared to "axon length". While signal
transmission is very important, the axon is, perhaps, the
least interesting part of the neuron. The soma performs
computation by summing the inputs. The dendrites store
information in the synapses. The axon is essentially a
connecting wire. It is very much like you looking at a
circuit diagram, completely ignoring the logic gates and
flip-flops, and pointing at the longest wire on the page
and exclaiming:

"There! Right there!! Do you see it? That's the
essential feature of this circuit! If that wire were
any shorter, none of this would be possible!!!"
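(The textbook abstraction being described -- a unit that sums weighted
inputs at the soma and fires past a threshold, with the axon as mere
wire -- fits in a few lines of C++. The numbers are arbitrary.)

    #include <iostream>
    #include <vector>

    int fire(const std::vector<double>& inputs,
             const std::vector<double>& weights, double threshold) {
        double sum = 0.0;                   // summation at the soma
        for (size_t i = 0; i < inputs.size(); ++i)
            sum += inputs[i] * weights[i];  // synaptic weights (dendrites)
        return sum >= threshold ? 1 : 0;    // the result sent down the axon
    }

    int main() {
        std::cout << fire({1, 0, 1}, {0.5, 0.9, 0.6}, 1.0) << '\n';  // fires: 1
        std::cout << fire({0, 1, 0}, {0.5, 0.9, 0.6}, 1.0) << '\n';  // quiet: 0
    }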
That is to say, the "longness" of neuronal fibers, each
tagged with as many as ten thousand associative tags
to other concepts, allows a concept to be embodied
in one or more (redundant) fibers.

Yes, but this is very brittle. How do you know that your
fibers will support an appropriate number of connections
for each concept? It doesn't scale. That's the problem
with local representations. They are brittle. If you used
a distributed representation, you could end up with
robustness and scalability. Check out Rumelhart and
McClellan for a primer on the essential features of a
distributed architecture, and learn what "parallel" and
"distributed processing" really mean.
[...]
Yes, but where would the novice get such a system if not
from AI4U?

LOL!!!! Wow! Move over Chris Rock! Move over
Dennis Miller! Our next act is the most clueless crackpot
on Usenet! Introducing the ever-funny Arthur T. Murray!!!
Right here and now I would humbly like to ask paper-
writers and book-authors to cite the AI4U mind-diagrams
and germane ideas in their forthcoming publications so as
to spread the theory of mind,

LOL!! You would have to PAY me to cite any of your
work, even to *disprove* it, in a publication! And for a
beggar, you don't seem to be in a position to do that.
with adjustments and even refutations where necessary.
I give blanket permission for anyone anywhere to
reproduce the diagrams and to re-fashion them into line-
art far prettier than ASCII.

"...because I'm too lazy and inept to do it myself." That
was foolish of you. You should have also given yourself
permission to use any such modification for future editions
of your book!
[...]
Why not? You could make whatever changes you saw
as needed, and then your resulting AI would be a brand
new Mind species.

Because the Mentifex model is horribly wrong on just
about every single feature. I don't know what you call
that, but I call it "useless". I intend to build a pure-play
neural network design that somehow magically solves
what I call the "binding problem", but could also be
thought of as the "instance problem". This is really the
major stumbling block that prevents neural networks
from performing general-purpose computation. I have
some ideas on how the issue can be addressed, but I'm
certainly not going to talk about them until I've had a
go at it myself. Unfortunately, I'm not clever enough
to write a junk science book, so I earn my money
the hard way, and don't have as much time to code
as I'd like.

However, if you manage to con a lot of people into
funding your work, and you want to know how I would build
an AI, you could certainly fund *my* project to find
out. ;) I actually have a pretty good plan for building
an AI that doesn't require radical new technology,
builds on lessons learned from symbolic AI and ANNs,
takes cues from the latest neuroscience research,
and makes no overarching assumptions about how
the brain really works (well maybe a few, but they
are all grounded in observation and experiment). Oh,
and it can all be implemented in C++. ;> The problem
is, a large number of people could actually implement
my plan themselves, which is exactly what I *don't*
want to happen, because I want to control the
technology *myself*. Maybe that sounds selfish,
but I'll just blame it on my genes. Or maybe it just
means I'm cynical, and if AI leads to Big Brother, I
want to be sure that *I'm* Big Brother. ;>

Dave

Mark Browne

David B. Held said:
I seriously doubt it. I strongly suspect that a lot of people
are working on AI and not talking about it. It's a little like
cryptography. There is much more to be gained by not
sharing than by sharing. The first person with a good,
working AI is going to introduce an immense imbalance
of power in the world the likes of which has not been
seen since the dawn of the nuclear age. In an age where
nuclear weapons are merely a posturing tool for nations
that have too much to lose by using them, it is information
that wields true power, and a general-purpose AI is the
ultimate tool for wielding that power. Given all that, it
is simply inconceivable that nobody else is thinking at the
high level, and altogether expected that they wouldn't be
talking about it.
<snip>

And I thought I was the only person in the world following the "frantic mad
scientist in the cellar" approach!
Sigh.

Assuming you are correct - there is a horde of tinkerers chasing the goal of
functional AI (well, now I know it is at least 2), how much faster it would
go if we worked together.

On the other hand, considering that I can't trust you to do the right thing,
I best keep my work to myself.

Mark Browne

Carl Burke

David B. Held said:
[...]
... I intend to build a pure-play
neural network design that somehow magically solves
what I call the "binding problem", but could also be
thought of as the "instance problem". This is really the
major stumbling block that prevents neural networks
from performing general-purpose computation. I have
some ideas on how the issue can be addressed, but I'm
certainly not going to talk about them until I've had a
go at it myself. ...

You may be interested in some of the work done at Berkeley
on binding through temporal synchrony (e.g., SHRUTI at
http://www.icsi.berkeley.edu/~shastri/shruti/ ). There are
probably some other sites that talk about this as well.
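(A toy rendering of binding-by-temporal-synchrony, the idea behind
SHRUTI: a role node and a filler node count as bound when they fire in
the same phase slot of a repeating cycle, so one set of units can
represent "John gives Mary a book" without dedicating a conjunction
unit to every role-filler pair. Values below are illustrative only.)

    #include <iostream>
    #include <string>
    #include <vector>

    struct Node { std::string name; int phase; };  // phase slot in a cycle

    int main() {
        std::vector<Node> roles   = {{"giver", 0}, {"recipient", 1}, {"object", 2}};
        std::vector<Node> fillers = {{"John", 0}, {"Mary", 1}, {"book", 2}};

        // Binding = firing in synchrony (same phase slot).
        for (const auto& r : roles)
            for (const auto& f : fillers)
                if (r.phase == f.phase)
                    std::cout << r.name << " = " << f.name << '\n';
    }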

Matthias

Arthur T. Murray said:
A webpage of proposed Standards in Artificial Intelligence is at
http://mentifex.virtualentity.com/standard.html -- updated today.

How about using a mailing list where everyone interested in your
website can subscribe and is informed about your frequent updates?

If everybody posted their update notifications through Usenet, the news
servers would immediately break down from overload. So please be
polite and use the appropriate channels to communicate with the
readers of your website.

KP_PC

I'm reading this thread only sporadically, but,
though I've not the entire context, David's
thoughts deserve comment.

|
| | <snip>
| > [...]
| > The first person with a good,
| > working AI is going to introduce
| > an immense imbalance of power
| > in the world the likes of which has
| > not been seen since the dawn of
| > the nuclear age.

And, unless folks come to comprehend
how nervous systems process information,
'ai' will [continue to] be ab-used.

The danger is that folks'll abdicate thought
to machines in a way that folks abdicated
thought to spreadsheets during the stock
market bubble [the result being 'Enron', etc.,
and misery all around].

This's the same mis-take that was made
with respect to so-called "nuclear" phenomena.

Everyone working on 'shaking the biggest
stick' they can come up with, rather than
expending less than the same quantity of
energy working to actually communicate
mutual understanding to one another, through
which actual progress can be accomplished.

It's the biggest short-coming of generalized
use of computational machines - this off-
loading of thought.

It's 'funny' - folks taking their good "wetware",
and abdicating its functionality to rocks [literally],
and 'wondering' why the general mess only gets
worse.

With respect to =real= AI, I'm not worried. Real
AI will, necessarily, 'recognize', and 'bow' to, '2nd
Thermo'.

It's the stuff engineered by folks who remain
'blind' to '2nd Thermo', in the same way that
they're 'blind' to it in themselves, that is
dangerous.

It's they who see it as the fount of 'power' that
you addressed in your post.

"Long bow defeats mounted armour".

Same-old, same-old, viciously-inwardly-
spiralling decrementing of Freedom to 'move',
ad terminus.

Rending Humanity apart.

'Just' the inverse of what's right-there, in
each person to hope for:

Freedom.

ken [k. p. collins]

| >
| <snip>
| [...]
 
P

Programmer Dude

Mark said:
When you tally up the bits and pieces that have been described,
there is little left of consciousness to be explained.

Actually, "What causes consciousness" is one of the great unanswered
questions in the field. A recent issue of SciAm had the human brain
and mind as its theme. The introductory article has this quote:

"Left largely untouched was one of science's grand challenges,
ranking in magnitude with cosmologists' dream of finding a
way to snap together all the fundamental physical forces:
WE ARE STILL NOWHERE NEAR AN UNDERSTANDING OF THE NATURE OF
CONSCIOUSNESS. Getting there might require another century,
and some neuroscientists and philosophers believe that
comprehension of what makes you you MAY ALWAYS REMAIN UNKNOWABLE.
Pictures abound showing yellow and orange splotches against a
background of gray matter--a snapshot of where the lightbulb
goes on when you move a finger, feel sad, or add two and two.
These pictures reveal which areas receive increased oxygen-rich
blood flow. But despite pretensions to latter-day phrenology,
they remain an abstraction, an imperfect bridge from brain to
mind."

(Emphasis mine)
 
D

David B. Held

Mark Browne said:
[...]
And I thought I was the only person in the world following
the "frantic mad scientist in the cellar" approach!
Sigh.

Assuming you are correct - there is a horde of tinkerers
chasing the goal of functional AI (well, now I know there
are at least two) - how much faster it would go if we
worked together.

Heh. There almost certainly is a horde of tinkerers, but they
aren't the real threat. The real threat is the corporations and
gov't agencies that actually have money to fund projects
privately. Look at AI in the 80's. Sure, there was a lot of
hype and a lot of broken promises. But there were also a lot
of projects with interesting and commercially useful results
that were not widely publicized, because even knowledge of
the results would constitute a competitive
advantage. Companies like Symbolics died at least partly
because their clients had a vested interest in *not* saying
how well their products worked. And now, AI has been
balkanized into a bunch of disorganized factions at the few
schools still willing to study it, and a bunch of "independent
scholars" trying whatever they can. Everyone has their own
idea on how to do AI, and since nobody has succeeded in
building one, nobody can say that anyone else's approach
is unreasonable (except outside some reasonable bounds,
which Mr. Murray has clearly crossed).

Of course, part of the problem is a matter of scale. People
who are actually publishing advances in cognitive science
are doing good work and addressing individual systems,
but since nobody knows how to put all those systems
together into a unified whole, it looks like AI isn't going
anywhere. But it is. It's just that there's a threshold of
knowledge and complexity that must be crossed before
all of this research becomes useful. And frankly, that's a
compliment to the human brain, which AI researchers can
sometimes take for granted.

But to be honest with you, I suspect that some of the best
researchers *aren't* publishing, and that's because they're
paid to keep their results to themselves.
On the other hand, considering that I can't trust you to
do the right thing, I'd best keep my work to myself.

That's probably the safest thing to do. Do you wonder
why violence still exists? It's not because of "society".
It's not even because of biology. Frankly, in a game-
theoretic way, violence exists because it's a successful
strategy for a self-replicating agent. What that means is
that as long as people attempt to reproduce in *some*
way, they will compete; and some of those techniques
will include harm to others. While cooperation brings
its own set of advantages, a few cheaters prosper most
in a cooperative environment. Look at Kenneth Lay.
Investors cooperate to raise the share price; one well-
placed cheater cashes in. I'm not going to be the one
left holding worthless stock certificates. Especially not
with so many known cheaters in the world.

Dave
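
Dave's claim that "a few cheaters prosper most in a cooperative
environment" is the textbook Prisoner's Dilemma result, and a few
lines of Python make it concrete. The payoff values below are the
conventional illustrative ones (T > R > P > S), not anything from
the posts; a minimal round-robin sketch:

    # Standard Prisoner's Dilemma payoffs: temptation, reward,
    # punishment, sucker's payoff.
    T, R, P, S = 5, 3, 1, 0

    def payoff(me, other):
        # One round: 'C' = cooperate, 'D' = defect.
        if me == 'C' and other == 'C':
            return R
        if me == 'D' and other == 'D':
            return P
        return T if me == 'D' else S

    agents = ['C'] * 9 + ['D']  # nine cooperators, one cheater

    # Everyone plays everyone else once; sum each agent's winnings.
    scores = []
    for i, me in enumerate(agents):
        scores.append(sum(payoff(me, other)
                          for j, other in enumerate(agents) if j != i))

    print("a cooperator's total:", scores[0])   # 8*R + 1*S = 24
    print("the cheater's total: ", scores[-1])  # 9*T = 45

The lone defector nearly doubles the cooperators' score, which is
why, absent enforcement, Dave's pessimism is hard to argue with.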
 
D

David B. Held

Carl Burke said:
[...]
You may be interested in some of the work done at
Berkeley on binding through temporal synchrony (e.g.,
SHRUTI at http://www.icsi.berkeley.edu/~shastri/shruti/ ).
There are probably some other sites that talk about this
as well.

That is indeed very interesting. I don't know how much to
infer from the demo diagrams, but it appears to me that
the SHRUTI team is using a local representation (which is
quite understandable for a first go). If so, that begins to
address the problem I refer to, but not really. Suppose we
scale up the SHRUTI problem to a distributed
representation, where all the objects are now stored in the
same network. How does reasoning proceed then? Mere
synchrony is not going to be useful, because some of the
patterns you wish to associate simply can't be activated
at the same time. Even if you argue against putting *all*
the object patterns in the same network, you are still going
to want *some* patterns to be distributed over the same
units, and whenever you need to reason about two or
more of those patterns at once, you have a problem:
namely, the obvious lack of variables in neural network
architectures. Nonetheless, that is an interesting link.
Thanks for bringing it up.

Dave
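
Dave's worry about distributed representations is sometimes
called the "superposition catastrophe," and a toy sketch shows it
directly. The four-unit pattern vectors below are invented for
illustration; the point is only that co-activating two concepts
over shared units yields a blend consistent with more than one
pair of originals:

    # Four concepts distributed over the same four units
    # (made-up feature vectors).
    dog  = [1, 1, 0, 0]
    cat  = [0, 0, 1, 1]
    fish = [1, 0, 1, 0]
    bird = [0, 1, 0, 1]

    def coactivate(p, q):
        # Turn on every unit belonging to either pattern -- the
        # network state when both concepts are active at once.
        return [max(a, b) for a, b in zip(p, q)]

    print(coactivate(dog, cat))    # [1, 1, 1, 1]
    print(coactivate(fish, bird))  # [1, 1, 1, 1] -- the same blend

The state alone no longer says which two concepts are active;
phase tags (or genuine variables) are needed to keep the bindings
apart, which is precisely the problem Dave is pointing at.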
 
D

David B. Held

Programmer Dude said:
Mark said:
When you tally up the bits and pieces that have been
described, there is little left of consciousness to be
explained.

Actually, "What causes consciousness" is one of the
great unanswered questions in the field. A recent issue
of SciAm had the human brain and mind as its theme.
[...]

While I feel Mr. Browne's prognosis is a bit optimistic, I
think the SciAm article goes a bit far in the other direction
(though that is entirely expected). If you are interested in
reading about some of the latest ideas in consciousness
research, I really recommend "The Feeling of What
Happens", by Antonio Damasio. It's not a sufficiently
detailed model to go out and build an AI right now, but
I think he really makes some important points that
serious AI researchers need to consider. And the idea
must be catching on, because Marvin Minsky is also
working on his own book on emotion.

Dave
 
K

Ketil Malde

Well, that's the rub, isn't it? Neither are "simple". If they
were, humans wouldn't appear to be so intelligent.

Maybe we don't? Perhaps we're just too dumb to see through our own
stupidity? :)

-kzm
 
J

John Ahlstrom

Programmer said:
Actually, "What causes consciousness" is one of the great unanswered
questions in the field. A recent issue of SciAm had the human brain
and mind as its theme. The introductory article has this quote:

"Left largely untouched was one of science's grand challenges,
ranking in magnitude with cosmologists' dream of finding a
way to snap together all the fundamental physical forces:
WE ARE STILL NOWHERE NEAR AN UNDERSTANDING OF THE NATURE OF
CONSCIOUSNESS. Getting there might require another century,
and some neuroscientists and philosophers believe that
comprehension of what makes you you MAY ALWAYS REMAIN UNKNOWABLE.
Pictures abound showing yellow and orange splotches against a
background of gray matter--a snapshot of where the lightbulb
goes on when you move a finger, feel sad, or add two and two.
These pictures reveal which areas receive increased oxygen-rich
blood flow. But despite pretensions to latter-day phrenology,
they remain an abstraction, an imperfect bridge from brain to
mind."

(Emphasis mine)

Or there is Kaekel's Conjecture:

"Any system of neural organization sufficiently complicated to
generate the axioms of arithmetic is too complex to understand
itself."
 
P

Programmer Dude

David B. Held said:
If you are interested in reading about some of the latest ideas
in consciousness research, I really recommend "The Feeling of
What Happens", by Antonio Damasio.

I *am* interested, so thanks for the reference!
 
