Standards in Artificial Intelligence

David B. Held

Arthur T. Murray said:
A webpage of proposed Standards in Artificial Intelligence
is at http://mentifex.virtualentity.com/standard.html --
updated today.

Besides the fact that your notices have nothing to do with
C++, you should stop posting them here, because you are a crank.
You claim to have a "theory of mind", but fail to recognize
two important criteria for a successful theory: explanation
and prediction. That is, a good theory should *explain
observed phenomena*, and *predict non-trivial
phenomena*. From what I have skimmed of your "theory",
it does neither (though I suppose you think that it does
well by way of explanation).

In one section, you define a core set of concepts (like
'true', 'false', etc.), and give them numerical indexes.
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. When I saw this, and many other things
on your site, I laughed. This is such a naive and simplistic
view of intelligence that you surely cannot expect
to be taken seriously.
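To make the criticism concrete: the scheme boils down to
something like the following C++ sketch (my own illustration;
the names and index values are invented here, not taken from
your site):

    // Hypothetical sketch of "numerically indexed concepts".
    // All names and values are illustrative only.
    #include <iostream>
    #include <map>
    #include <string>

    // Core concepts get fixed indexes, like well-known port numbers...
    enum CoreConcept { CONCEPT_TRUE = 1, CONCEPT_FALSE = 2, CONCEPT_SELF = 3 };

    // ...and contributors are invited to register theirs above a threshold.
    const int USER_CONCEPT_BASE = 1024;

    int main() {
        std::map<int, std::string> mind;              // the entire "ontology"
        mind[CONCEPT_TRUE] = "true";
        mind[CONCEPT_FALSE] = "false";
        mind[USER_CONCEPT_BASE + 0] = "grandmother";  // someone's extension

        // The brittleness: independent coders can collide on an index,
        // and the integer 1024 says nothing about what the concept means.
        std::cout << mind[CONCEPT_TRUE] << '\n';
    }

Well-known ports are a fine design for TCP services and a
terrible one for minds.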

I dare say one of the most advanced AI projects in
existence is Cog. The philosophy behind Cog is that
an AI needs a body. You say more or less the same
thing. However, the second part of the philosophy behind
Cog is that a simple working robot is infinitely better
than an imaginary non-working robot. That's the part
you've missed. Cog is designed by some of the field's
brightest engineers, and funded by one of the last
strongholds of AI research. And as far as success
goes, Cog is a child among children. You expect to
create a fully developed adult intelligence from scratch,
entirely in software, using nothing more than the
volunteer labor of gullible programmers and your own
musings. This is pure comedy.

At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. These kinds of musings are forgivable when
coming from an idealistic young high school student
who is just learning about AI for the first time. But the
prolific nature of the work implies that you have been
at this for quite some time.

Until such time as you can A) show that your theory
predicts an intelligence phenomenon that is both novel
and later confirmed by experiment or observation of
neurological patients, or B) produce an artifact that is
at least as intelligent as current projects, I must conclude
that your "fibre theory" is just so much wishful rambling.

The level of detail you provide clearly shows that you
have no real understanding of what it takes to build a
successful AI, let alone something that can even
compete with the state of the art. The parts that you
think are detailed, such as your cute ASCII diagrams,
gloss over circuits that researchers have spent their
entire lives studying, which you leave as "an exercise
for the programmer". This is not only ludicrous, but
insulting to the work being done by legitimate
researchers, not to mention it insults the intelligence
of anyone expected to buy your "theory".

Like many cranks and crackpots, you recognize that
you need to insert a few scholarly references here and
there to add an air of legitimacy to your flights of fancy.
However, a close inspection of your links shows that
you almost certainly have not read and understood
most of them; otherwise A) you would provide links *into* the
sites, rather than *to* the sites (proper bibliographies
don't say: "Joe mentioned this in the book he published
in '92" and leave it at that), and B) you wouldn't focus
on the irrelevant details you do.

A simple comparison of your model with something
a little more respectable, such as the ACT-R program
at Carnegie Mellon, shows stark contrasts. Whereas
your "model" is a big set of ASCII diagrams and some
aimless wanderings on whatever pops into your head
when you're at the keyboard, the "models" link (note
the plural) on the ACT-R page takes you to what...?
To a bibliography of papers, each of which addresses
some REAL PROBLEM and proposes a DETAILED
MODEL to explain the brain's solution for it. Your
model doesn't address any real problems, because
it's too vague to actually be realized.

And that brings us to the final point. Your model has
components, but the components are at the wrong
level of detail. You recognize the obvious fact that
the sensory modalities must be handled by
specialized hardware, but then you seem to think that
the rest of the brain is a "tabula rasa". To see why
that is utterly wrong, you should take a look at Pinker's
latest text by the same name (The Blank Slate).
The reason the ACT-R model is a *collection* of
models, rather than a single model, is very simple.
All of the best research indicates that the brain is
not a general-purpose computer, but rather a
collection of special-purpose devices, each of which
by itself probably cannot be called "intelligent".

Thus, to understand human cognition, it is necessary
to understand the processes whereby the brain
solves a *PARTICULAR* problem, and not how it
might operate on a global scale. The point being
that the byzantine nature of the brain might not make
analysis on a global scale a useful or fruitful avenue
of research. And indeed, trying to read someone's
mind by looking at an MRI or EEG is like trying to
predict the stock market by looking at the
arrangement of rocks on the beach.

Until you can provide a single model of the precision
and quality of current cognitive science models, for
a concrete problem which can be tested and
measured, I must conclude that you are a crackpot
of the highest order. Don't waste further bandwidth
in this newsgroup or others with your announcements
until you revise your model to something that can be
taken seriously (read: explains observed phenomena
and makes novel predictions).

Dave
 
White Wolf

Stop these off-topic postings to comp.lang.c++ or prepare to look for a new
service provider.
 
Arthur T. Murray

[...] In one section, you define a core set of concepts (like
'true', 'false', etc.), and give them numerical indexes.

http://mentifex.virtualentity.com/variable.html#nen -- yes.
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. [...]

http://mentifex.virtualentity.com/newcept.html#analysis explains
that Newconcept calls the English vocabulary (enVocab) module
to form an English lexical node for any new word detected
by the Audition module in the stream of user input.
[...] At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. [...]

http://mentifex.virtualentity.com/standard.html#hardware (q.v.)
explains that it is not "the size of some programmer's hardware"
that counts but rather the amount of memory available to the
artificial Mind.

The Mentifex AI Mind project is extremely serious and ambitious.
Free-lance coders are morking on it in C++ and other languages:

http://mentifex.virtualentity.com/cpp.html -- C++ with starter code;
http://mentifex.virtualentity.com/java.html -- see Mind.JAVA 1 and 2;
http://mentifex.virtualentity.com/lisp.html -- Lisp AI Weblog;
http://mentifex.virtualentity.com/perl.html -- first Perl module;
http://mentifex.virtualentity.com/prolog.html -- Prolog AI Weblog;
http://mentifex.virtualentity.com/python.html -- Python AI Weblog;
http://mentifex.virtualentity.com/ruby.html -- Ruby AI Blog (OO AI);
http://mentifex.virtualentity.com/scheme.html -- Scheme AI Weblog;
http://mentifex.virtualentity.com/vb.html -- see "Mind.VB #001" link.

AI Mind project news pervades the blogosphere, e.g. at
http://www.alpha-geek.com/2003/09/11/perl_ai.html -- etc.

The Mentifex Seed AI engenders a new species of mind at
http://sourceforge.net/projects/mindjava -- Mind2.Java --
and at other sites popping up _passim_ on the Web.

AI has been solved in theory and in primitive, free AI source code.
Please watch each new species of AI Mind germinate and proliferate.

A.T. Murray
 
Guest

In comp.lang.java.programmer White Wolf said:
Stop these off-topic postings to comp.lang.c++ or prepare to look for a new
service provider.

Seems on topic, to me, for every group it was posted to. Also an interesting
project. But I guess you had to actually read his post to figure
that out, Mr. Net-Cop.

--arne
 
David B. Held

Arthur T. Murray said:
on Wed, 10 Sep 2003:
[...]
In one section, you define a core set of concepts (like
'true', 'false', etc.), and give them numerical indexes.

http://mentifex.virtualentity.com/variable.html#nen -- yes.

Brittle. Language-specific. Non-scalable. You are trying
to build something "intelligent", aren't you?
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. [...]

http://mentifex.virtualentity.com/newcept.html#analysis
explains that Newconcept calls the English vocabulary
(enVocab) module to form an English lexical node for any
new word detected by the Audition module in the stream of
user input.

Besides the fact that the "enVocab" module is embarrassingly
underspecified, the notion of indexing words is just silly. If
a dictionary were a database, it might be a reasonable idea.
But trying to simulate human speech with a database-like
dictionary is the way of symbolic AI, and the combinatorial
nature of language is going to rear its ugly head when you try
to scale your system to realistic proportions. Hence, why
programs like SHRDLU were good at their blocks worlds,
but terrible at everything else. Again, a little history would
do you well. If you want to refer to your text, let's take a
quick look at something you wrote:

6.4. Introduce aspects of massively parallel ("maspar")
learning by letting many uniconceptual filaments on the
mindgrid coalesce into conceptual minigrids that
redundantly hold the same unitary concept as a massively
parallel aggregate with massively parallel associative tags,
so that the entire operation of the AI Mind is massively
parallel in all aspects except such bottleneck factors as
having only two eyes or two ears -- in the human tradition.

Umm...pardon me, but the emperor is wearing no clothes.
"uniconceptual filaments"? "comceptual minigrids"?
"massively parallel aggregate"? Where is the glossary for
your pig Latin? How on earth is a programmer supposed
to build a computational model from this fluff? Read your
mind? She certainly can't read your text. This sounds more
like a motivational speech from a pointy-haired boss in a
Dilbert strip than instructions for how to build an "AI Mind".
I would parody it, but you've done a fine job yourself. Here's
the real cheerleading right here:

Then go beyond human frailties and human limitations
by having any number ad libitum of local and remote
sensory input devices and any number of local and
remote robot embodiments and robotic motor
opportunities. Inform the robot of human bondage in
mortal bodies and of robot freedom in possibilities yet
to be imagined.

Wow. I have a warm fuzzy feeling inside. I think I'll stay
up another hour writing more of the Sensorium module.
[...] At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. [...]

http://mentifex.virtualentity.com/standard.html#hardware
(q.v.) explains that it is not "the size of some programmer's
hardware" that counts but rather the amount of memory
available to the artificial Mind.

The amount of memory is completely irrelevant, since you
have not given enough detail to build a working model. It's
like me saying: "If you have a tokamak transverse reactor,
then my spaceship plans will get you to Alpha Centauri in
8 years, but if you only have a nuclear fission drive, then it
will take 10. Oh and drop your carrots and onions in this
big black kettle I have here." Also, the memory space of a
single processor really isn't that important, since a serious
project would be designed to operate over clusters or grids
of processors. But I suppose it never occurred to you that
you might want an AI brain that takes advantage of more
than one processor, huh? I suppose you think the Sony
"Emotion Engine" is what Lt. Cmdr. Data installed so he
could feel human?
The Mentifex AI Mind project is extremely serious and
ambitious.

There's no doubt it's ambitious. And I have no doubt that
you believe you have really designed an AI mind. However,
I also believe you hear voices in your head and when you
look in the mirror you see a halo. Frankly, your theory has
too much fibre for me to digest.
Free-lance coders are morking on it in C++ and other
languages:

If I knew what "morking" was, I would probably agree.
However, your first example of someone "morking" on it in
C++ tells me that "morking" isn't really a good thing. At
least not as far as C++ goes. Namely, it more or less proves
that the "interest" in this project mainly consists of the blind
being (b)led by the blind.

This is the only sign of progress you have shown. Without
even looking at the link, I can believe that the "VB Mind"
already has a higher IQ than you.
AI Mind project news pervades the blogosphere, e.g. at
http://www.alpha-geek.com/2003/09/11/perl_ai.html -- etc.

Oh, I see...so if enough people report on it, then it's "serious"
and should be taken seriously? A lot of people reported on
cold fusion. But I'd take the cold fusion researchers over
you any day of the week.
The Mentifex Seed AI engenders a new species of mind at
http://sourceforge.net/projects/mindjava -- Mind2.Java --
and at other sites popping up _passim_ on the Web.

And what, pray tell, is a "mind species"? Is it subject to
crossover, selection, and mutation?
AI has been solved in theory

LOL!!!! Wow! Whatever you're smoking, it has to be
illegal, because it's obviously great stuff!
and in primitive, free AI source code.

Here is an example of "primitive, free AI source code":

10 PRINT "Hello, world!"

See? It's got a speech generation and emotion engine
built right in! And the AI is so reliable, it will never display
a bad attitude, even if you tell it to grab you a cold one
from the fridge. It always has a cheerful, positive
demeanor. It is clearly self-aware, because it addresses
others as being distinct from itself. And it has a theory of
mind, because it knows that others expect a greeting when
meeting for the first time. Unfortunately, it has no memory,
so every meeting is for the first time. However, its output
is entirely consistent, given this constraint. I guess I've
just proved that "AI has been solved in theory"!
Please watch each new species of AI Mind germinate
and proliferate.

I'm still waiting to see *your* mind germinate. I've watched
grass grow faster. While ad homs are usually frowned
upon, I don't see any harm when applied to someone who
cannot be reasoned with anyway. Since you seem to have
single-handedly "solved the AI problem", I'd like to ask
you a few questions I (and I'm sure many others) have.

1) How does consciousness work?
2) Does an AI have the same feeling when it sees red
that I do? How do we know?
3) How are long-term memories formed?
4) How does an intelligent agent engage in abstract
reasoning?
5) How does language work?
6) How do emotions work?

Please don't refer me to sections of your site. I've seen
enough of your writing to know that the answers to my
questions cannot be found there.

Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human
brains are collections of specialized problem solvers
which must each be understood in considerable detail
in order to produce anything remotely intelligent. You
never gave a rebuttal, which tells me you don't have one.
Why don't you do yourself a favor and start out by
reading Society of Mind, by Minsky. After that, read
any good neurobiology or neuroscience text to see just
how "blank" your brain is when it starts out. Pinker
has several good texts you should read. There's a
reason why he's a professor at MIT, and you're a
crackpot trying to con programmers into fulfilling your
ridiculous fantasies.

Dave
 
White Wolf

Seems on topic, to me, for every group it was posted to. Also an interesting
project. But I guess you had to actually read his post to figure
that out, Mr. Net-Cop.

Look at the subject. Look at the content of the posted site. Then look at
the charter of this newsgroup:

"First of all, please keep in mind that comp.lang.c++ is a group for
discussion
of general issues of the C++ programming language, as defined by the
ANSI/ISO
language standard. "

If all that is not enough, the list of newsgroups he cross-posted to
should indicate that the topicality is questionable.

This newsgroup (and I am afraid all language newsgroups are such) was not
created as a place for discussion of specific programming problems,
especially not when the post is cross-posted to unrelated newsgroups.

Discussion of specific C++ solutions would be topical, but not a general
discussion spanning several languages. For that, comp.programming etc.
should be used.
 
Buster

White Wolf said:
Look at the subject. Look at the content of the posted site. Then look at
the charter of this newsgroup:

"First of all, please keep in mind that comp.lang.c++ is a group for
discussion
of general issues of the C++ programming language, as defined by the
ANSI/ISO
language standard. "

Whoa, actually quoting the charter now. I didn't think you'd go that far.

Regards, Buster
 
White Wolf

Buster said:
Whoa, actually quoting the charter now. I didn't think you'd go that
far.

I did not go anywhere. I was here, in this newsgroup. The topic went far.
 
Kevin Goodsell

White said:
Look at the subject. Look at the content of the posted site. Then look at
the charter of this newsgroup:

"First of all, please keep in mind that comp.lang.c++ is a group for
discussion
of general issues of the C++ programming language, as defined by the
ANSI/ISO
language standard. "

[cross-posts removed]

Technically we don't have a charter (the group pre-dates newsgroup
charters). That's actually from the welcome message. But it's close enough.

-Kevin
 
White Wolf

Kevin said:
White said:
Look at the subject. Look at the content of the posted site. Then
look at the charter of this newsgroup:

"First of all, please keep in mind that comp.lang.c++ is a group for
discussion
of general issues of the C++ programming language, as defined by the
ANSI/ISO
language standard. "

[cross-posts removed]

Technically we don't have a charter (the group pre-dates newsgroup
charters). That's actually from the welcome message. But it's close
enough.

Well, it is the text which defines the newsgroup's purpose of existence.
While it may not be called a charter - IMHO - it is the same thing. ;-)
Anyway, my "problem" is with the excessive cross-posting and not necessarily
with the topic of AI with C++.
 
Steve Holden

White Wolf said:
I did not go anywhere. I was here, in this newsgroup. The topic went far.

People who live in glass houses shouldn't throw stones. Your lazy attitude
to editing the newsgroups these postings were sent to means that, as a
comp.lang.python reader, I have to listen to you bitching and moaning about
what's appropriate in comp.lang.c++?

By all means try to keep posters of that group to the charter, but DON'T
subject others to your diatribes ;-)

regards
 
White Wolf

Steve said:
People who live in glass houses shouldn't throw stones. Your lazy
attitude to editing the newsgroups these postings were sent to means
that, as a comp.lang.python reader, I have to listen to you bitching
and moaning about what's appropriate in comp.lang.c++?

By all means try to keep posters of that group to the charter, but
DON'T subject others to your diatribes ;-)

*PLONK*
 
Steve Holden

White Wolf said:
*PLONK*

Ooh, that *hurts*. Not.

Well, now that I won't annoy "White Wolf" (what a pretentious pseudonym,
BTW), aka Attila the Net-Cop, with this follow-up, may I say how sorry I am
for the rest of comp.lang.c++ that they have to listen to that kind of rubbish.
Fortunately, adults usually realise that these issues die down much more
quickly when ignored.

This is the only post that adding me to his kill-list will filter out, as I'm
not a habitual poster to c.l.c++, and don't intend to bore you any further
:)

Have a nice day.

regards
 
Arthur T. Murray

DBH:
Brittle.
ATM:
You are right. It is precariously brittle. That brittleness
is part of the "Grand Challenge" of building a viable AI Mind.
First we have to build a brittle one, then we must trust the
smarter-than-we-are crowd to incorporate fault-tolerance.
DBH:
Language-specific.
ATM:
Do you mean "human-language-specific" or "programming-language-specific"?
With programming-language variables, we have to start somewhere,
and then we let adventitious AI coders change the beginnings.
With variables that lend themselves to polyglot human languages,
we achieve two aims: AI coders in non-English-speaking lands
will feel encouraged to code an AI speaking their own language;
and AI Minds will be engendered that speak polyglot languages.
Obiter dictum -- the Mentifex "Concept-Fiber Theory of Mind" --
http://mentifex.virtualentity.com/theory5.html -- features
a plausible explanation of how to implant multiple Chomskyan
syntaxes and multiple lexicons within one unitary AI Mind.
The AI textbook AI4U page 35 on the English language module --
http://mentifex.virtualentity.com/english.html -- and
the AI textbook AI4U page 77 on the Reify module --
http://mentifex.virtualentity.com/reify.html -- and the
AI textbook AI4U page 93 on the English bootstrap module --
http://mentifex.virtualentity.com/enboot.html -- all show
unique and original diagrams of an AI Mind that contains
the thinking apparatus for multiple human languages --
in other words, an AI capable of Machine Translation (MT).

DBH:
Non-scalable.
ATM:
Once again, we have to start somewhere. Once we attain
critical mass in freelance AI programmers, then we scale up.

DBH:
You are trying to build something "intelligent", aren't you?
ATM:
http://mentifex.virtualentity.com/mind4th.html -- Machine...
http://mentifex.virtualentity.com/jsaimind.html -- Intelligence.

DBH:
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. [...]
ATM:
http://mentifex.virtualentity.com/newcept.html#analysis
explains that Newconcept calls the English vocabulary
(enVocab) module to form an English lexical node for any
new word detected by the Audition module in the stream of
user input.

DBH:
Besides the fact that the "enVocab" module is embarrassingly
underspecified, the notion of indexing words is just silly.
ATM:
Nevertheless, here at the dawn of AI (flames? "Bring 'em on.")
we need to simulate conceptual gangs of redundant nerve fibers,
and so we resort to numeric indexing just to start somewhere.

DBH:
If a dictionary were a database, it might be a reasonable idea.
But trying to simulate human speech with a database-like
dictionary is the way of symbolic AI, and the combinatorial
nature of language is going to rear its ugly head when you try
to scale your system to realistic proportions. Hence, why
programs like SHRDLU were good at their blocks worlds,

ATM:
http://www.semaphorecorp.com/misc/shrdlu.html -- by T. Winograd?
DBH:
but terrible at everything else. Again, a little history would
do you well. If you want to refer to your text, let's take a
quick look at something you wrote:

6.4. Introduce aspects of massively parallel ("maspar")
learning by letting many uniconceptual filaments on the
mindgrid coalesce into conceptual minigrids that
redundantly hold the same unitary concept as a massively
parallel aggregate with massively parallel associative tags,
so that the entire operation of the AI Mind is massively
parallel in all aspects except such bottleneck factors as
having only two eyes or two ears -- in the human tradition.
DBH:
Umm...pardon me, but the emperor is wearing no clothes.
"uniconceptual filaments"?
ATM:
Yes. Each simulated nerve fiber holds one single concept.
"conceptual minigrids"?
ATM:
Yes. Conceptual fibers may coalesce into a "gang" or minigrid
distributed across the entire mindgrid, for massive redundancy --
which affords security or longevity of concepts, and which
also aids in massively parallel processing (MPP).
"massively parallel aggregate"?

Where is the glossary for your pig Latin?
How on earth is a programmer supposed to build a
computational model from this fluff? Read your mind?
She certainly can't read your text. This sounds more
like a motivational speech from a pointy-haired boss in a
Dilbert strip than instructions for how to build an "AI Mind".
I would parody it, but you've done a fine job yourself.

ATM:
Ha! You're funny there!
DBH:
Here's the real cheerleading right here:

Then go beyond human frailties and human limitations
by having any number ad libitum of local and remote
sensory input devices and any number of local and
remote robot embodiments and robotic motor
opportunities. Inform the robot of human bondage in
mortal bodies and of robot freedom in possibilities yet
to be imagined.
Wow. I have a warm fuzzy feeling inside. I think I'll stay
up another hour writing more of the Sensorium module.
[...] At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. [...]

http://mentifex.virtualentity.com/standard.html#hardware
(q.v.) explains that it is not "the size of some programmer's
hardware" that counts but rather the amount of memory
available to the artificial Mind.
The amount of memory is completely irrelevant, since you
have not given enough detail to build a working model.
ATM:
If the AI coder has an opportunity to go beyond 32-bit and
use a 64-bit machine, then he/she/it ought to do it, because
once we arrive at 64 bits (for RAM), we may stop a while.
It's like me saying: "If you have a tokamak transverse reactor,
then my spaceship plans will get you to Alpha Centauri in
8 years, but if you only have a nuclear fission drive, then it
will take 10. Oh and drop your carrots and onions in this
big black kettle I have here." Also, the memory space of a
single processor really isn't that important, since a serious
project would be designed to operate over clusters or grids
of processors. But I suppose it never occurred to you that
you might want an AI brain that takes advantage of more
than one processor, huh?
ATM:
The desired "unitariness of mind" (quotes for emphasis) may
preclude using "clusters or grids of processors."
I suppose you think the Sony
"Emotion Engine" is what Lt. Cmdr. Data installed so he
could feel human?
There's no doubt it's ambitious. And I have no doubt that
you believe you have really designed an AI mind. However,
I also believe you hear voices in your head and when you
look in the mirror you see a halo. Frankly, your theory has
too much fibre for me to digest.
If I knew what "morking" was, I would probably agree.
However, your first example of someone "morking" on it in
C++ tells me that "morking" isn't really a good thing. At
least not as far as C++ goes. Namely, it more or less proves
that the "interest" in this project mainly consists of the blind
being (b)led by the blind.
This is the only sign of progress you have shown. Without
even looking at the link, I can believe that the "VB Mind"
already has a higher IQ than you.
Oh, I see...so if enough people report on it, then it's "serious"
and should be taken seriously? A lot of people reported on
cold fusion. But I'd take the cold fusion researchers over
you any day of the week.
And what, pray tell, is a "mind species"? Is it subject to
crossover, selection, and mutation?
ATM:
http://www.seedai.e-mind.org tries to track each new species
of AI Mind. We do _not_ want standard Minds; we only wish
to have some standards in how we go about coding AI Minds.
LOL!!!! Wow! Whatever you're smoking, it has to be
illegal, because it's obviously great stuff!
Here is an example of "primitive, free AI source code":
10 PRINT "Hello, world!"
See? It's got a speech generation and emotion engine
built right in! And the AI is so reliable, it will never display
a bad attitude, even if you tell it to grab you a cold one
from the fridge. It always has a cheerful, positive
demeanor. It is clearly self-aware, because it addresses
others as being distinct from itself. And it has a theory of
mind, because it knows that others expect a greeting when
meeting for the first time. Unfortunately, it has no memory,
so every meeting is for the first time. However, its output
is entirely consistent, given this constraint. I guess I've
just proved that "AI has been solved in theory"!
I'm still waiting to see *your* mind germinate. I've watched
grass grow faster. While ad homs are usually frowned
upon, I don't see any harm when applied to someone who
cannot be reasoned with anyway. Since you seem to have
single-handedly "solved the AI problem", I'd like to ask
you a few questions I (and I'm sure many others) have.
1) How does consciousness work?
ATM:
Through a "searchlight of attention". When a mind is fooled
into a sensation of consciousness, then it _is_ conscious.
2) Does an AI have the same feeling when it sees red
that I do? How do we know?

ATM:
You've got me there. Qualia totally nonplus me :(
3) How are long-term memories formed?

ATM:
Probably by the lapse of time, so that STM *becomes* LTM.
4) How does an intelligent agent engage in abstract reasoning?

ATM:
Syllogistic reasoning is the next step, IFF we obtain funding.
http://www.kurzweilai.net/mindx/profile.php?id=26 - $send____.
5) How does language work?

http://www.amazon.com/exec/obidos/ASIN/0595654371/ -- AI4U.
6) How do emotions work?

ATM:
By the influence of physiological "storms" upon ratiocination.
Please don't refer me to sections of your site. I've seen
enough of your writing to know that the answers to my
questions cannot be found there.
Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human

IIRC the problem was with how you stated the question.
brains are collections of specialized problem solvers
which must each be understood in considerable detail
in order to produce anything remotely intelligent. You
never gave a rebuttal, which tells me you don't have one.
Why don't you do yourself a favor and start out by
reading Society of Mind, by Minsky. After that, read
any good neurobiology or neuroscience text to see just
how "blank" your brain is when it starts out. Pinker
has several good texts you should read. There's a
reason why he's a professor at MIT, and you're a
crackpot trying to con programmers into fulfilling your
ridiculous fantasies.

Arthur
 
Terry Reedy

David B. Held said:
Arthur T. Murray said:
on Wed, 10 Sep 2003:
[...]

This AI thread has nothing to do with Python (or Java, and maybe not
much C++ either, that I can see). Please delete comp.lang.python (and
maybe the other languages) from any further followups. Note: googling
all newsgroups for 'Mentifex' gets over 4000 hits. I wonder if there
is really much new to say.

Terry J. Reedy
 
D

David B. Held

Arthur T. Murray said:
DBH:
ATM:
You are right. It is precariously brittle. That brittleness
is part of the "Grand Challenge" of building a viable AI
Mind. First we have to build a brittle one, then we must
trust the smarter-than-we-are crowd to incorporate fault-
tolerance.

You have cross-posted this to comp.ai.neural-nets, but
you obviously don't understand neural nets, or you would
know that a distributed representation is not brittle.
DBH:
ATM:
Do you mean "human-language-specific" or
"programming-language"?

Human language, of course. Your system is very Anglo-
centric. I suppose that's because you probably only
know English, but it seems ridiculous to me to define
a standard for intelligence in terms of one language.
With programming-language variables, we have to
start somewhere,

Why not start at the solution? You said that "AI had
been solved in theory". Why can't you apply that
theory to produce a sophisticated AI on the first go?
and then we let adventitious AI coders change the
beginnings.

"Change the beginnings"?
[...]
http://mentifex.virtualentity.com/enboot.html -- all show
unique and original diagrams of an AI Mind that contains
the thinking apparatus for multiple human languages --
in other words, an AI capable of Machine Translation
(MT).

LOL!!! Yes, the diagrams are certainly "unique" and
"original". And they're about as useful as my "diagram" to
design a human teleporter:

**<%-%>*** &$ @-++=! [human gets transported here]

If you feel there isn't enough detail to accomplish the task,
you know what I feel like when I look at your diagrams.
DBH:
ATM:
Once again, we have to start somewhere. Once we
attain critical mass in freelance AI programmers, then
we scale up.

LOL!! You don't know what "scalable" means, do you?
You think I meant: "has small scale". But I meant: "won't
work well when scaled up".
[...]
DBH:
Besides the fact that the "enVocab" module is
embarrassingly underspecified, the notion of indexing
words is just silly.
ATM:
Nevertheless, here at the dawn of AI (flames? "Bring 'em
on.")

Ok, I'll oblige you...I wish something would "dawn" on you!
we need to simulate conceptual gangs of redundant nerve
fibers, and so we resort to numeric indexing just to start
somewhere.

LOL!! Have you considered an artificial neural network
design? They have been studied for decades, have well-
known and desirable properties, and look nothing like
what you propose in your "theory". They have their own
set of problems, but it looks to me like you have no idea
what they are or how they work. One thing they do well
is content-addressable memory. That means that instead
of finding entities via numerical index, you find them via
their features.
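To make "content-addressable" concrete, here is a toy Hopfield
network in C++ (my own throwaway sketch, nothing taken from your
site): the memory lives entirely in the connection weights, and a
corrupted cue retrieves the whole stored pattern by its features,
with no numerical index anywhere.

    // Toy Hopfield net: a content-addressable memory in ~30 lines.
    // Patterns are vectors of +1/-1; recall starts from a noisy cue.
    #include <iostream>
    #include <vector>

    int main() {
        const int N = 8;
        std::vector<int> pattern = {1, -1, 1, 1, -1, -1, 1, -1};

        // Hebbian storage: the memory lives in the connection weights.
        std::vector<std::vector<int>> w(N, std::vector<int>(N, 0));
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                if (i != j) w[i][j] += pattern[i] * pattern[j];

        // Cue: the stored pattern with two bits corrupted.
        std::vector<int> state = pattern;
        state[0] = -state[0];
        state[3] = -state[3];

        // Recall: each unit repeatedly takes the sign of its weighted input.
        for (int sweep = 0; sweep < 5; ++sweep)
            for (int i = 0; i < N; ++i) {
                int h = 0;
                for (int j = 0; j < N; ++j) h += w[i][j] * state[j];
                state[i] = (h >= 0) ? 1 : -1;
            }

        for (int v : state) std::cout << v << ' ';  // the stored pattern again
        std::cout << '\n';
    }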

Using Google so far is your only demonstrated skill. I
respect that more than anything you've had to say.
[...]
"uniconceptual filaments"?
ATM:
Yes. Each simulated nerve fiber holds one single concept.

LOL!!! Wow. If you were smart enough to write comics
instead of theories, you could give Scott Adams a run for his
money. I can only assume you got this idea from your meager
understanding of how the human brain works. And I further
postulate that you thought to yourself: "Hey, if it works for the
brain it must work for my AI Mind!" Well, I'm afraid you're
about a century behind the times. There is nothing intrinsic
to a real nerve fiber to suggest that it can store information,
let alone something as nebulous and possibly complex and
structured as a "concept".

In fact, real neurons store information in the *connections*.
That is, the pattern of configuration and the synaptic weights.
But it's ridiculous that I should have to explain this to
someone who has "solved AI in theory". Whereas you could
not use your fiber theory to build one useful working model,
Rosenblatt already built a working model about half a century
ago showing that connection strengths *could* lead to useful
computation. How can you call this the "dawn of AI" when
you haven't even solved a single toy problem, and people
50 years ago who are long dead have solved numerous ones?
The proof of the pudding is in the eating, but you don't even
have pudding yet! (Or maybe that's all you have...)
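For reference, the essence of Rosenblatt's half-century-old result
fits in a few lines of C++ (a sketch of the perceptron learning
rule, obviously not his actual hardware): the program starts with
zero weights and learns logical AND from examples, with all of the
"knowledge" ending up in three connection strengths.

    // Perceptron learning rule (Rosenblatt, 1958), learning logical AND.
    // All knowledge ends up in the weights -- in the connections.
    #include <iostream>

    int main() {
        double w0 = 0, w1 = 0, bias = 0;        // connection strengths
        const int x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        const int target[4] = {0, 0, 0, 1};
        const double rate = 0.1;

        for (int epoch = 0; epoch < 20; ++epoch)
            for (int k = 0; k < 4; ++k) {
                int out = (w0 * x[k][0] + w1 * x[k][1] + bias > 0) ? 1 : 0;
                int err = target[k] - out;      // error drives weight change
                w0   += rate * err * x[k][0];
                w1   += rate * err * x[k][1];
                bias += rate * err;
            }

        for (int k = 0; k < 4; ++k)
            std::cout << x[k][0] << " AND " << x[k][1] << " -> "
                      << ((w0 * x[k][0] + w1 * x[k][1] + bias > 0) ? 1 : 0)
                      << '\n';
    }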
ATM:
Yes. Conceptual fibers may coalesce into a "gang" or
minigrid distributed across the entire mindgrid, for
massive redundancy -- which affords security or longevity
of concepts, and which also aids in massively parallel
processing (MPP).

This is all well and good, but until you define how fibers
"coalesce", it doesn't *mean* anything. You also don't
explain how MPP occurs, or why concepts would be
insecure or short-lived.
[...]
The amount of memory is completely irrelevant, since you
have not given enough detail to build a working model.
ATM:
If the AI coder has an opportunity to go beyond 32-bit and
use a 64-bit machine, then he/she/it ought to do it, because
once we arrive at 64-bits (for RAM), we may stop a while.
[...]
ATM:
The desired "unitariness of mind" (quotes for emphasis)
may preclude using "clusters or grids of processors."

LOL!!! Ok, let me get this straight...your theory maps
concepts onto "simulated nerve fibers", but "unitariness of
mind" precludes *distributed computing*??? Ok, this is
absolute hilarity! Umm...the whole point of neural network
architecture is that computation is distributed over a large
number of simple computational devices. You at least
understand this to some extent, because you see the value
of neurons, even if you don't really understand how they
work. But a "mind grid" of "nerve fibers" is the ultimate
distributed processing system!!! What does it matter if they
are simulated on one processor or one million? You
yourself said that mind != brain. That's about the only true
thing on your site, and the idea is at least hundreds of years
old. And now, you seem to think that "unitariness of mind"
demands "unitariness of brain". You thinking isn't just
muddled...it's completely off the mark.
[...]
1) How does consciousness work?
ATM:
Through a "searchlight of attention". When a mind is
fooled into a sensation of consciousness, then it _is_
conscious.

LOL!!! You just replaced one black box with another.
Namely, you replaced "consciousness" with "searchlight
of attention" and "sensation of consciousness". Barring
the obvious fact that the second part is embarrassingly
circular, it's interesting that you think consciousness is
so trivial that it can be described in two sentences.
Serious researchers have turned this question into an
entire field of inquiry, offering numerous theories which
attempt to explain various aspects of consciousness. But
you just demonstrated an overwhelming ignorance of both
the state of the art and the nature of consciousness itself
with your trite summary.

Attention is almost certainly a necessary condition for
consciousness, but it is hardly sufficient. Someone just
posted on this newsgroup a "Quine" program for C++.
That is a program which produces its own source as
the output. Is that sufficient "attention"? Is that program
"conscious"? What about a bacterium that is aware of
its chemical and luminous environment, and attends to
each in turn. Is such a bacterium conscious? Your
definition is woefully indiscriminate and incomplete.
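For the record, a quine in that style looks like this (my own
three-liner, not the one that was posted; it reproduces its own
unindented source exactly, and it is about as conscious as a rock):

    #include<cstdio>
    const char*s="#include<cstdio>%cconst char*s=%c%s%c;%cint main(){std::printf(s,10,34,s,34,10,10);}%c";
    int main(){std::printf(s,10,34,s,34,10,10);}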
ATM:
You've got me there. Qualia totally nonplus me :(

Really? They are exactly the reason that an AI needs a
body. And I guarantee that any theory of AI which does
not address qualia in some way will not be taken
seriously by anyone who studies cognitive science. Even
people who don't believe in qualia (like Dennett)
acknowledge that it must be addressed.
ATM:
Probably by the lapse of time, so that STM *becomes*
LTM.

"Probably"? So if you asked Newton: "What is the ballistic
trajectory of a cannonball absent air resistance", you would
be satisfied with: "Probably some polynomial curve"? If
Newton claimed to have a complete theory of mechanics,
*I*, for one, would not be satisfied with such a nebulous
answer. Neither am I satisfied with such a vague answer
from someone who claims to have "solved AI in theory".
The formation of long-term memories is such a fundamental
operation of intelligent agents that to treat it so nonchalantly
betrays nothing but a contempt for serious cog. sci. work.
After all, the formation of long-term memories is obviously
critical to *learning*, and an entity that couldn't learn could
barely be said to be intelligent.
ATM:
Syllogistic reasoning is the next step, IFF we obtain
funding. http://www.kurzweilai.net/mindx/profile.php?id=26
- $send____.

Ah, I see. So you reply to a question about the one feature
of intelligence that separates man from other creatures with
a single term, and then have the audacity to ask for money???
Humans engage in far more than syllogistic reasoning. You
should at least know what other types of reasoning exist.

You know what's really funny about that link? This part:

Customers who shopped for this item also shopped for these items:
a. Hidden Order by John H. Holland
b. The Career Programmer by Christopher Duncan
c. Lies and the Lying Liars Who Tell Them by Al Franken
I'm not sure why customers who shopped for your book
also shopped for Al Franken's book, but it seems to be a
rather humorous juxtaposition. Anyway, I certainly am not
going to pay $30 to have you give me some hare-brained
idea of how language works.
ATM:
By the influence of physiological "storms" upon
ratiocination.

Oh, the "storms". Of course...why didn't I think of that?
How's the weather in your brain today, Arthur? Is it
raining? Got a little water on the brain? I think your
answers speak volumes about your theory. I just hope
would-be programmers get to see your answers
to the deep questions of AI before spending any time
on your project. Unless they're bad programmers. In
which case, I hope they waste time on your project
instead of subjecting the rest of the world to their code.

If you want a much better explanation of how emotions
work, you should really check out Damasio's work. It
is at least as engaging as your "storms" explanation, and
is backed by actual neurological studies.
[...]
Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human

IIRC the problem was with how you stated the question.
[...]

Oh, so you had a hard time understanding: "You're a crank;
defend yourself"? Somehow, that doesn't suprise me. Now,
you say that you are serious. But, only serious researchers
can admit that they are wrong. So, let's put your theory to
the falsifiability test. What would it take to convince you
that your fiber theory is wrong? Be careful, because if you
say: "Nothing", then you expose yourself for the dogmatic
crackpot that you are. And if you give some implausible
test, then your theory will be viewed as being useless, since
it can't be tested.

So that leaves you with providing a legitimate test. But
this is where you are stuck, because real theories make
predictions, and it is those predictions which are tested.
Since your theory makes no predictions, you won't be
able to propose a reasonable test of it. So I will do it
for you. Why don't we pit your theory against all the
other theories which were put forth during the *real*
"dawn of AI". Why don't you use your theory to describe
a solution to chess playing, checkers playing, simple
letter recognition, theorem proving, or any problem for
which a historical symbolic AI solution exists. This seems
like a rather fair challenge, since I'm only asking you to
do as much as *WHAT HAS ALREADY BEEN
ACCOMPLISHED*. If you fail to solve even one of
these problems with your theory, then it should leave
you to wonder if your state of AI really isn't more than
50 years behind the times. Given the state of hardware
today, I expect you to be able to solve several of the
given problems in the same AI.

Note that a "solution" must be specific enough that a
programmer who doesn't know anything about your
theory could write it. That means you need to describe
algorithms all the way down to pseudocode, or use
standard algorithms. Anything like: "Add the sensorium
module here" will be considered an admission of failure.
Good luck, and I look forward to the validation of your
grand theory!

Dave
 
Arthur T. Murray

David B. Held said on Mon, 15 Sep 2003:
[...]
DBH:
Human language, of course. Your system is
very Anglo-centric. I suppose that's because
you probably only know English,
ATM:
+German +Russian +Latin +Greek
[Saepe circumambulo cogitans Latine; ergo scio me esse. --
"I often walk around thinking in Latin; therefore I know that I exist."]
but it seems ridiculous to me to define a
standard for intelligence in terms of one language.
ATM:
http://mentifex.virtualentity.com/standard.html
merely uses English to discuss polyglot AI Minds.
DBH:
Why not start at the solution? You said that "AI had
been solved in theory". Why can't you apply that
theory to produce a sophisticated AI on the first go?
ATM:
Implementations of the Concept-Fiber Theory of Mind at
http://mentifex.virtualentity.com/theory5.html are hard to do.

Having written AI in REXX, Forth and JavaScript, now I seek help in
http://mentifex.virtualentity.com/cpp.html -- C++ with new AI code;
http://mentifex.virtualentity.com/java.html -- see Mind.JAVA #1 & #2;
http://mentifex.virtualentity.com/lisp.html -- Lisp AI Weblog;
http://mentifex.virtualentity.com/perl.html -- first Perl module;
http://mentifex.virtualentity.com/prolog.html -- Prolog AI Weblog;
http://mentifex.virtualentity.com/python.html -- Python AI Weblog;
http://mentifex.virtualentity.com/ruby.html -- Ruby AI Blog (OO AI);
http://mentifex.virtualentity.com/scheme.html -- Scheme AI Weblog;
http://mentifex.virtualentity.com/vb.html -- see "Mind.VB #001" link.
DBH:
"Change the beginnings"?
ATM:
The whole idea of standards for coding non-standard AI diversity
is that, along each branch of AI Mind evolution, the implementers
are free to change any module, any choice of variables, and any
theoretical consideration, for the sake of survival of the fittest.
[...]
http://mentifex.virtualentity.com/enboot.html -- all show
unique and original diagrams of an AI Mind that contains
the thinking apparatus for multiple human languages --
in other words, an AI capable of Machine Translation
(MT).
DBH:
LOL!!! Yes, the diagrams are certainly "unique" and
"original". And they're about as useful as my "diagram" to
design a human teleporter:
**<%-%>*** &$ @-++=! [human gets transported here]
If you feel there isn't enough detail to accomplish the task,
you know what I feel like when I look at your diagrams.
ATM:
One reason the AI4U book may prove to be valuable over time is its
35 diagrams: http://www.amazon.com/exec/obidos/ASIN/0595654371/

DBH:
LOL!! You don't know what "scalable" means, do you?
You think I meant: "has small scale". But I meant: "won't
work well when scaled up".

[...]
DBH:
LOL!! Have you considered an artificial neural network
design? They have been studied for decades, have well-
known and desirable properties, and look nothing like
what you propose in your "theory". They have their own
set of problems, but it looks to me like you have no idea
what they are or how they work. One thing they do well
is content-addressable memory. That means that instead
of finding entities via numerical index, you find them via
their features.
ATM:
Over the years I have actually been afraid to learn too much
about artificial neural networks (ANN) because they might
subtly introduce an improper or flawed mindset into my thinking.
Meanwhile the Mentifex AI Minds in Forth and JavaScript
turned out to be their own form of associative neural net.

[...]
DBH:
LOL!!! Wow. If you were smart enough to write comics
instead of theories, you could give Scott Adams a run for his
money. I can only assume you got this idea from your meager
understanding of how the human brain works. And I further
postulate that you thought to yourself: "Hey, if it works for the
brain it must work for my AI Mind!" Well, I'm afraid you're
about a century behind the times. There is nothing intrinsic
to a real nerve fiber to suggest that it can store information,
let alone something as nebulous and possibly complex and
structured as a "concept".
ATM:
As you go on to elaborate below, of course the neuronal fiber
holds a concept by storing information in the *connections*.

The novel, original-contribution of the Mentifex AI theory at
http://mentifex.virtualentity.com/theory3.html is how to
organize conceptual fibers into two layers below the surface:
a deep-structure layer where thinking occurs by dint of
"spreading activation" from Psi concept to Psi concept; and
a shallow lexical area where lexical (vocabulary) fibers
control words stored in the surface area of auditory memory.
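In toy C++ terms, "spreading activation" could look like the
following sketch (illustrative only; the concept names, link
weights, and decay factor are invented for this post, not taken
from the actual Mentifex code):

    // Toy "spreading activation": each concept passes a decaying
    // fraction of its activation to its associative neighbors.
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Associative links between concepts, with connection weights.
        std::map<std::string,
                 std::vector<std::pair<std::string, double>>> links = {
            {"grandmother", {{"Anna", 0.9}, {"Czech", 0.6}}},
            {"Anna",        {{"Czech", 0.8}}},
        };

        std::map<std::string, double> activation = {{"grandmother", 1.0}};

        // Two waves of spreading, with decay at each hop.
        for (int wave = 0; wave < 2; ++wave) {
            std::map<std::string, double> next = activation;
            for (const auto& [node, act] : activation)
                for (const auto& [nbr, wgt] : links[node])
                    next[nbr] += act * wgt * 0.5;   // decay per hop
            activation = next;
        }

        for (const auto& [node, act] : activation)
            std::cout << node << ": " << act << '\n';
    }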
DBH:
In fact, real neurons store information in the *connections*.
That is, the pattern of configuration and the synaptic weights.
But it's ridiculous that I should have to explain this to
someone who has "solved AI in theory". Whereas you could
not use your fiber theory to build one useful working model,
Rosenblatt already built a working model about half a century
ago showing that connection strengths *could* lead to useful
computation. How can you call this the "dawn of AI" when
you haven't even solved a single toy problem, and people
50 years ago who are long dead have solved numerous ones?
The proof of the pudding is in the eating, but you don't even
have pudding yet! (Or maybe that's all you have...)
DBH:
This is all well and good, but until you define
how fibers "coalesce", it doesn't *mean* anything.
ATM:
If I had only one fiber or "grandmother cell" to hold
the concept of my Czech-speaking grandmother Anna,
the reliability of the concept would be impaired and
at risk, because the single fiber could easily die.
But if we imagine a new fiber being assigned to the
concept of "Anna" everytime we use the concept, so that
over time a redundancy of grandmother cells accrues,
then each concept becomes a minigrid within a mindgrid.
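A throwaway C++ sketch of that redundancy scheme (the names are
invented for this post, not actual project code):

    // Sketch of "redundant grandmother cells": every use of a concept
    // enlists one more fiber, so the concept survives the death of any
    // single fiber.
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    struct Mindgrid {
        std::map<std::string, std::set<int>> minigrid;  // concept -> fibers
        int nextFiber = 0;

        void useConcept(const std::string& name) {
            minigrid[name].insert(nextFiber++);  // one more redundant fiber
        }
        bool stillKnown(const std::string& name) const {
            auto it = minigrid.find(name);
            return it != minigrid.end() && !it->second.empty();
        }
    };

    int main() {
        Mindgrid mind;
        for (int i = 0; i < 3; ++i) mind.useConcept("Anna");  // three usages
        mind.minigrid["Anna"].erase(0);                       // one fiber dies
        std::cout << std::boolalpha << mind.stillKnown("Anna") << '\n'; // true
    }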

DBH:
You also don't explain how MPP occurs, or why
concepts would be insecure or short-lived.

ATM:
http://mentifex.virtualentity.com/standard.html#masspar
describes how massively redundant elements in the brain-mind
provide an unbroken chain of massively parallel processing.
It is really an elegant beauty in the theory -- how the
maspar fibergrids send maspar associative tags over to
acoustic engrams stored with massive redundancy in audition.

[...]
DBH:
LOL!!! Ok, let me get this straight...your theory maps
concepts onto "simulated nerve fibers", but "unitariness of
mind" precludes *distributed computing*??? Ok, this is
absolute hilarity! Umm...the whole point of neural network
architecture is that computation is distributed over a large
number of simple computational devices. You at least
understand this to some extent, because you see the value
of neurons, even if you don't really understand how they
work. But a "mind grid" of "nerve fibers" is the ultimate
distributed processing system!!! What does it matter if they
are simulated on one processor or one million? You
yourself said that mind != brain. That's about the only true
thing on your site, and the idea is at least hundreds of years
old. And now, you seem to think that "unitariness of mind"
demands "unitariness of brain". You thinking isn't just
muddled...it's completely off the mark.
ATM:
We have heard a lot of talk about portions of the Internet
coming together to form one giant "Global Brain" in distributed
processing. Uneasiness ensues among us "neurotheoreticians."
How would you like it if your own brain-mind were distributed
across several earthly continents, with perhaps one lunar lobe,
and old engrams stored four light years away at Alpha Centauri?

The problem is continuity and the proper mesh of associations
during the "spreading activation" process. On one local computer,
functioning as the "brain" element of the "brain-mind" duality,
we may expect a successful fine-tuning of the cascades and
meanderings of associations from local concept to local concept.
I don't mind if knowledge base (KB) data are stored remotely,
because then it just seems that it took a while to remember
something (say, the name and SSAN of any given American, TIA),
but I want the actual thinking to occur SNAP! ZAP! CRACKLE!
right here *locally* on the quasi-cranial local mindgrid.
[...]
1) How does consciousness work?
ATM:
Through a "searchlight of attention". When a mind is
fooled into a sensation of consciousness, then it _is_
conscious.
DBH:
LOL!!! You just replaced one black box with another.
Namely, you replaced "consciousness" with "searchlight
of attention" and "sensation of consciousness". Barring
the obvious fact that the second part is embarrassingly
circular, it's interesting that you think consciousness is
so trivial that it can be described in two sentences.
Serious researchers have turned this question into an
entire field of inquiry, offering numerous theories which
attempt to explain various aspects of consciousness. But
you just demonstrated an overwhelming ignorance of both
the state of the art and the nature of consciousness itself
with your trite summary.
ATM:
http://mind.sourceforge.net/conscius.html is my summary.
Attention is almost certainly a necessary condition for
consciousness, but it is hardly sufficient. Someone just
posted on this newsgroup a "Quine" program for C++.
That is a program which produces its own source as
the output. Is that sufficient "attention"? Is that program
"conscious"? What about a bacterium that is aware of
its chemical and luminous environment, and attends to
each in turn. Is such a bacterium conscious? Your
definition is woefully indiscriminate and incomplete.
ATM:
Consciousness is no longer the problem that it used to be.
When an AI Mind becomes aware of itself -- voila!
DBH:
"Probably"? So if you asked Newton: "What is the ballistic
trajectory of a cannonball absent air resistance", you would
be satisfied with: "Probably some polynomial curve"? If
Newton claimed to have a complete theory of mechanics,
*I*, for one, would not be satisfied with such a nebulous
answer. Neither am I satisfied with such a vague answer
from someone who claims to have "solved AI in theory".
The formation of long-term memories is such a fundamental
operation of intelligent agents that to treat it so nonchalantly
betrays nothing but a contempt for serious cog. sci. work.
After all, the formation of long-term memories is obviously
critical to *learning*, and an entity that couldn't learn could
barely be said to be intelligent.
ATM:
Since I use Occam's Razor (and I know it by heart in the original:
"Entia non sunt multiplicanda praeter necessitatem" -- "Entities are
not to be multiplied beyond necessity") in the
theory and design of AI minds, I shy away from distinguishing
too much between "short-term memory" (STM) and long-term (LTM).
I strongly suspect that memory engrams are laid down seriatim
without regard to STM and LTM distinctions made by humans,
and that the STM-LTM distinctions are artificial distinctions
connected more with *access* to memory than engrammation of memory.
(I could be proved wrong, but it won't really matter much.)
DBH:
Ah, I see. So you reply to a question about the one feature
of intelligence that separates man from other creatures with
a single term, and then have the audacity to ask for money???
Humans engage in far more than syllogistic reasoning. You
should at least know what other types of reasoning exist.
ATM:
http://mentifex.virtualentity.com/ai4udex.html#reasoning shows
imagistic, logical and syllogistic as mentioned in the AI4U book.
[...]
ATM:
By the influence of physiological "storms" upon
ratiocination.
DBH:
Oh, the "storms". Of course...why didn't I think of that?
How's the weather in your brain today, Arthur? Is it
raining? Got a little water on the brain? I think your
answers speak volumes about your theory. I just hope
would-be programmers get to see your answers
to the deep questions of AI before spending any time
on your project. Unless they're bad programmers. In
which case, I hope they waste time on your project
instead of subjecting the rest of the world to their code.
If you want a much better explanation of how emotions
work, you should really check out Damasio's work. It
is at least as engaging as your "storms" explanation, and
is backed by actual neurological studies.
ATM:
http://mentifex.virtualentity.com/emotion.html is my take
(AI4U page 80, Chapter 19) on the side-issue of emotions.
[...]
Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human
brains come with a great deal of innate, special-purpose structure.
ATM:
IIRC the problem was with how you stated the question.
[...]
DBH:
Oh, so you had a hard time understanding: "You're a crank;
defend yourself"? Somehow, that doesn't surprise me.
DBH said:
You recognize the obvious fact that
the sensory modalities must be handled by
specialized hardware, but then you seem to think that
the rest of the brain is a "tabula rasa".
ATM:
I seem to myself to think that any brain-mind,
natural or artificial, will turn out to be
vast amounts of memory storage supervised and
controlled by relatively small intellectual structures.
DBH:
To see why
that is utterly wrong, you should take a look at Pinker's
latest text by the same name (The Blank Slate).
The reason the ACT-R model is a *collection* of
models, rather than a single model, is very simple.
All of the best research indicates that the brain is
not a general-purpose computer, but rather a
collection of special-purpose devices, each of which
by itself probably cannot be called "intelligent".

DBH said:
Now, you say that you are serious. But, only serious researchers
can admit that they are wrong. So, let's put your theory to
the falsifiability test. What would it take to convince you
that your fiber theory is wrong? Be careful, because if you
say: "Nothing", then you expose yourself for the dogmatic
crackpot that you are. And if you give some implausible
test, then your theory will be viewed as being useless, since
it can't be tested.
So that leaves you with providing a legitimate test. But
this is where you are stuck, because real theories make
predictions, and it is those predictions which are tested.
Since your theory makes no predictions, you won't be
able to propose a reasonable test of it.
ATM:
My theory predicts prescriptively how to make artificial minds.
DBH:
So I will do it
for you. Why don't we pit your theory against all the
other theories which were put forth during the *real*
"dawn of AI". Why don't you use your theory to describe
a solution to chess playing, checkers playing, simple
letter recognition, theorem proving, or any problem for
which a historical symbolic AI solution exists.
ATM:
Brute force techniques in chess are not genuine AI.
Checkers is a deterministic game. For pattern recognition,
http://mentifex.virtualentity.com/audrecog.html is my work.
Theorem proving, I'll grant you, I missed out on.
DBH:
This seems
like a rather fair challenge, since I'm only asking you
to do as much as *WHAT HAS ALREADY BEEN ACCOMPLISHED*.
ATM:
Let's do something new for a change. Let's code AI in
many different programming languages and let's let evolve
not only the AI Minds but communities of AI adepts to
tend to the AI Minds and nurture them into full maturity.
DBH:
If you fail to solve even one of these problems with
your theory, then it should leave you wondering whether
your AI really isn't more than 50 years behind
the times. Given the state of hardware today,
I expect you to be able to solve several of the
given problems in the same AI.
ATM:
OK, we'll take on consciousness; emotion; reasoning;
dreaming (but not yet hypnosis); and superintelligence.
DBH:
Note that a "solution" must be specific enough that a
programmer who doesn't know anything about your
theory could write it. That means you need to describe
algorithms all the way down to pseudocode, or use
standard algorithms. Anything like: "Add the sensorium
module here" will be considered an admission of failure.
ATM:
http://mentifex.virtualentity.com/perl.html is at Sensorium.
DBH:
Good luck, and I look forward to the validation of your
grand theory!
ATM:
Thank you. Your contributions here will go down in history,
and future graduate students (hum/rob) will study your mind.

Arthur
 

David B. Held

Arthur T. Murray said:
Human language, of course. Your system is
very Anglo-centric. I suppose that's because
you probably only know English,

+German +Russian +Latin +Greek
[Saepe circumambulo cogitans Latine; ergo scio
me esse. -- I often walk around thinking in Latin;
therefore I know that I exist.]

In that case, giving named concepts a numerical index is
an exercise in futility, since there is no 1-1 mapping
between words and concepts in different languages. For
example, if the word "Fahrvergnügen" has an index of
93824 in the geVocab module, what index(es) does that
correspond to in the enVocab module?
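To make the problem concrete, here is a sketch in C++. The index
numbers and the exact shape of your geVocab/enVocab modules are my
guesses, since your pages never pin them down:

#include <map>
#include <string>
#include <vector>

// Hypothetical per-language vocabularies in the numbered-concept
// style. The German word gets one index; English has no single
// word for it, so there is no single index to map to.
int main() {
    std::map<std::string, int> geVocab;
    geVocab["Fahrvergnuegen"] = 93824;       // "driving pleasure"

    std::map<std::string, std::vector<int> > enVocab;
    enVocab["driving"].push_back(40117);     // indexes invented
    enVocab["pleasure"].push_back(40552);    // for illustration

    // Which enVocab index *is* concept 93824? None -- the assumed
    // 1-1 mapping between words and concepts across languages
    // simply does not exist.
    return 0;
}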
[...]
Implementations of the Concept-Fiber Theory of Mind
at http://mentifex.virtualentity.com/theory5.html are hard
to do.

Yes indeed, but not for the reasons you think. They are
hard to do because they are underspecified, not because
the model is complex. In fact, the problem is that the
model is too *simple*. *That* is what makes an
implementation "hard to do".
Having written AI in REXX, Forth and JavaScript,

But not a mentifax mind-whole wheat celery AI?
now I seek help

It's about time!

Your list repeats a few lines. Maybe the help wasn't so good.
I guess that's what you consider "advertising".
[...]
One reason the AI4U book may prove to be valuable
over time is
http://www.amazon.com/exec/obidos/ASIN/0595654371/
35 diagrams.

Ah yes. But then, you should have called it: "Intelligence in
35 Diagrams". I underestimated you. I thought you were
just a random crackpot, but in reality, you are a dirty
capitalist. Not that all capitalists are dirty. Just that some
have less than honorable ways of obtaining capital. I see
now that there *is* financial gain for you involved, which
is why it is so important for me to help you make a fool of
yourself in public. After all, we wouldn't want any naive
people wasting their money on your book, would we?

You've reduced intelligence to 35 diagrams, and you were
the first one to do it. Simply amazing. Your own intelligence must
be truly dizzying.
[...]
Over the years I have actually been afraid to learn too
much about artificial neural networks (ANN) because they
might subtly introduce an improper or flawed mindset into
my thinking.

LOL!!! If you are that weak-minded, perhaps you shouldn't
be reading anything at all. That is a very convenient excuse
for being wholly ignorant of the state-of-the-art.
Meanwhile the Mentifex AI Minds in Forth and
JavaScript turned out to be their own form of associative
neural net.

You mean the "non-functioning" form?
[...]
As you go on to elaborate below, of course the neuronal
fiber holds a concept by storing information in the
*connections*.

You're the only person I know that calls them "fibers".
Everyone else calls them "neurons" or "units". When you
say "fiber", it makes me think you are talking about the
axon, and don't really understand how neurons work.
The novel, original contribution of the Mentifex AI theory
at http://mentifex.virtualentity.com/theory3.html is how to
organize conceptual fibers into two layers below the
surface: a deep-structure layer where thinking occurs by
dint of "spreading activation" from Psi concept to Psi
concept; and a shallow lexical area where lexical
(vocabulary) fibers control words stored in the surface
area of auditory memory.

LOL!!! Well, that's all very poetic, but not very useful for
building an intelligent artifact. I would hardly describe a
separate semantic and syntactic system for natural language
processing a "novel, original contribution". Such a thing is
so obvious that even a rank novice in AI would probably
*start* with such a structure if designing a natural language
system.
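Here is roughly what that day-one starting point looks like; all of
the names below are mine, not yours:

#include <string>
#include <utility>
#include <vector>

// The obvious two-part design: a semantic net of concepts linked
// by weighted associations, plus a lexicon mapping each concept to
// a surface word. Purely illustrative.
struct Concept {
    int id;
    std::vector<std::pair<int, double> > links;  // (concept id, weight)
};

struct LexEntry {
    int conceptId;     // which concept this word expresses
    std::string word;  // surface form in one language
};

// "Thinking" would be walking Concept::links (spreading activation);
// "speaking" would be looking up a LexEntry for the active concept.

A rank novice can write that in five minutes. The hard part, which
your theory skips, is everything that happens in between.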
[...]
This is all well and good, but until you define
how fibers "coalesce", it doesn't *mean* anything.

If I had only one fiber or "grandmother cell" to hold
the concept of my Czech-speaking grandmother Anna,
the reliability of the concept would be impaired and
at risk, because the single fiber could easily die.
But if we imagine a new fiber being assigned to the
concept of "Anna" every time we use the concept, so
that over time a redundancy of grandmother cells
accrues, then each concept becomes a minigrid within
a mindgrid.

LOL!!! This just gets better and better!!! Somehow,
you have managed to take an argument for distributed
representation, and provide the most ridiculous,
simplistic solution imaginable! You are the first person
I have encountered that believes simple redundancy of
local information has any merit whatsoever. Most
people follow up the grandmother cell argument with a
description of distributed representations in neural
networks, but it is obvious that you don't know how
those work.
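Since you apparently have not seen one, here is the standard
contrast, sketched in C++ (names and numbers mine):

#include <numeric>
#include <vector>

// Your scheme: k identical copies of an "Anna" cell. Destroy those
// k copies and the concept is simply gone; no other concept shares
// the hardware. A distributed code instead stores "Anna" as a
// pattern over *all* units; each unit helps encode many concepts,
// and losing units degrades every concept a little instead of
// deleting one outright. The usual readout is a dot product:
double match(const std::vector<double>& state,
             const std::vector<double>& annaPattern) {
    // how strongly the current activation pattern resembles the
    // stored "Anna" pattern
    return std::inner_product(state.begin(), state.end(),
                              annaPattern.begin(), 0.0);
}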
[...]
http://mentifex.virtualentity.com/standard.html#masspar
describes how massively redundant elements in the
brain-mind provide an unbroken chain of massively
parallel processing.

No it doesn't. It does nothing of the sort.

It is really an elegant beauty in the theory -- how the
maspar fibergrids send maspar associative tags over
to acoustic engrams stored with massive redundancy
in audition.

If it is a beauty, then you should spend some time
describing it in depth...with examples.
[...]
We have heard a lot of talk about portions of the
Internet coming together to form one giant "Global
Brain" in distributed processing. Uneasiness ensues
among us "neurotheoreticians." How would you like
it if your own brain-mind were distributed across
several earthly continents, with perhaps one lunar lobe,
and old engrams stored four light years away at Alpha
Centauri?

LOL!!! You cannot possibly be serious!!! If I thought
you were actually intelligent, I would chalk this up to
satire.
The problem is continuity and the proper mesh of
associations during the "spreading activation" process.
On one local computer, functioning as the "brain" element
of the "brain-mind" duality, we may expect a successful
fine-tuning of the cascades and meanderings of
associations from local concept to local concept.

Umm...unless you simulate the fibers in hardware, you are
going to have to use a serialization strategy for simulating
your network. In that case, adding processors *increases*
the responsiveness of the network.
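In case that is not obvious, here is what any software simulation of
your "fibergrid" has to look like (my sketch, not your code):

#include <cstddef>
#include <utility>
#include <vector>

// One tick of a synchronous network update: every unit's next
// activation is computed from the old activations. The sweep is an
// ordinary loop, and it can be sliced across as many processors as
// you like -- which makes the network *faster*, not discontinuous.
struct Unit {
    double activation;                        // set by the caller
    std::vector<std::pair<int, double> > in;  // (source index, weight)
};

void step(std::vector<Unit>& net) {
    std::vector<double> next(net.size(), 0.0);
    for (std::size_t i = 0; i < net.size(); ++i)
        for (std::size_t j = 0; j < net[i].in.size(); ++j)
            next[i] += net[net[i].in[j].first].activation
                       * net[i].in[j].second;
    for (std::size_t i = 0; i < net.size(); ++i)
        net[i].activation = next[i];
}
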
I don't mind if knowledge base (KB) data are stored
remotely, because then it just seems that it took a while
to remember something (say, the name and SSAN of
any given American, TIA), but I want the actual thinking
to occur SNAP! ZAP! CRACKLE! right here *locally*
on the quasi-cranial local mindgrid.

I think your mindgrid has already suffered too many SNAP!
ZAP! CRACKLE!s.

The fact that you sum up consciousness in 4 weak,
meaningless paragraphs is surpassed only by the fact
that you can't spell "conscius" [sic].
[...]
Consciousness is no longer the problem that it used to
be. When an AI Mind becomes aware of itself -- voila!

What does it mean for an AI to be aware of itself? What
does it mean for an AI to be aware? If I point a video
camera at a t.v. which displays the input to the video
camera, is the camera aware of itself? Is it conscious?
Consciousness is only a non-problem for people who
live in very small worlds.
[...]
I strongly suspect that memory engrams are laid down
seriatim without regard to STM and LTM distinctions
made by humans, and that the STM-LTM distinctions
are artificial distinctions connected more with *access*
to memory than engrammation of memory. (I could be
proved wrong, but it won't really matter much.)

Since you don't even know how memory works in the
first place, I would say that you're already wrong. I
suppose you think that brains remember everything they
experience.
[...]
http://mentifex.virtualentity.com/ai4udex.html#reasoning
shows imagistic, logical and syllogistic reasoning as mentioned in the
AI4U book.

Guess what? I tried out your Javascript AI, and it was at
least as smart as you. You must have uploaded your own
reasoning skills, because I typed in "Arthur is dumb", and the
AI spat out a stream of babbling nonsense like: "I know
Arthur I am dumb Arthur am dumb I am" I thought it was
beautiful and profound, and that the AI was trying to tell me
something important. Oh, I didn't detect any use of syllogistic
reasoning in its output, which leads me to conclude that
you don't know how to implement syllogistic reasoning,
let alone any other form of reasoning. And I would be
extremely impressed if you could describe how to
implement syllogistic reasoning of a general nature in a
neural network.
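For the record, the purely symbolic version is a toy; that is
exactly why your AI's failure at it is so damning. A sketch (the
names and the tiny fact table are mine):

#include <iostream>
#include <map>
#include <string>

// Barbara, the classic syllogism, over a toy "is-a" table:
// if X is-a Y and Y is-a Z, then X is-a Z.
int main() {
    std::map<std::string, std::string> isa;
    isa["Socrates"] = "man";     // minor premise
    isa["man"]      = "mortal";  // major premise

    // Conclusion by transitivity: chase the chain from "Socrates".
    std::string x = "Socrates";
    while (isa.count(x)) x = isa[x];
    std::cout << "Socrates is " << x << std::endl;  // Socrates is mortal
    return 0;
}

Doing the same thing in a neural substrate, with novel premises and
proper quantifiers, is an open research problem -- which is rather
the point.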
[...]
http://mentifex.virtualentity.com/emotion.html is my take
(AI4U page 80, Chapter 19) on the side-issue of emotions.

LOL@"side issue". Once again, we see a childlike
simplicity in your understanding. Emotion is fundamental
to consciousness, but this notion from the latest research
has completely eluded your grasp.
[...]
I seem to myself to think that any brain-mind, natural
or artificial, will turn out to be vast amounts of memory
storage supervised and controlled by relatively small
intellectual structures.

Already you are proven wrong. Virtually any cognitive
study using fMRI can show that much of the brain is *not*
engaged in *memory*, but in *computation*.
[...]
My theory predicts prescriptively how to make artificial
minds.

No it doesn't. If it actually described an artificial mind in
enough detail to build one, then that might be the case.
But your theory is too vague to create so much as a
modern art sculpture (which requires very little input at
all). Whatever you try to pass off as "AI" on your web
site is hilarious. Don't even attempt to refer to that as a
validation of your "theory".
[...]
Brute force techniques in chess are not genuine AI.

Really? Do you know how humans play chess? Novices
spend most of their time checking the legality of moves.
Does that sound familiar? It should. Their strategy more
or less amounts to a shallow brute force search of the
game tree. Only at the most advanced levels does the
play involve complex game-specific reasoning.
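That novice strategy fits in a dozen lines of C++. Position,
legalMoves() and evaluate() below are dummy stand-ins so the sketch
compiles; plug in any concrete game:

#include <algorithm>
#include <cstddef>
#include <vector>

// Shallow brute force over the game tree, novice style.
struct Position { int score; };
std::vector<Position> legalMoves(const Position&) {
    return std::vector<Position>();  // a real game generates moves here
}
double evaluate(const Position& p) { return p.score; }

// Negamax: my best result is the best of the negations of my
// opponent's best results one ply down.
double negamax(const Position& p, int depth) {
    std::vector<Position> moves = legalMoves(p);
    if (depth == 0 || moves.empty()) return evaluate(p);
    double best = -1e9;
    for (std::size_t i = 0; i < moves.size(); ++i)
        best = std::max(best, -negamax(moves[i], depth - 1));
    return best;
}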
[...]
Checkers is a deterministic game.

So is Go, but nobody has written an AI that can beat the
best humans.

Umm...I see a link to source code, but I don't know Forth,
so I can't tell if the code does anything interesting. Perhaps
you can post a session with input and the program output.
I need a good laugh, though you've been quite generous
and provided me with many already.
[...]
Let's do something new for a change. Let's code AI in
many different programming languages and let's let evolve
not only the AI Minds but communities of AI adepts to
tend to the AI Minds and nurture them into full maturity.

I intend to do just that, but not using the Mentifex model,
and not out in the open. But until such time as I have a
working prototype, I'm not going to ramble on about
speculations and mindless musings. Maybe there is a
lesson for you to be learned there. Oh, wait...this is all
a marketing gimmick for your book. While you are getting
in your free amazon.com links, I hope that anyone who
would click on them reads the surrounding text to see
what a crackpot you are. Just for fun, I'll throw in a
link of my own, just to confuse the folks that are just
scanning through looking for links to click on:

www.arthur-t-murray-is-a-crackpot.com/mentifax/mentos
[...]
OK, we'll take on consciousness; emotion; reasoning;
dreaming (but not yet hypnosis); and superintelligence.

Will you have a working prototype before I'm dead and
decomposed?

LOL!!! There's nothing there worth looking at!
[...]
Thank you. Your contributions here will go down in history,
and future graduate students (hum/rob) will study your mind.

If my contributions go down in history, it will be for something
meaningful and important...not a mere argument with a crazy
author.

Dave
 
