Arthur T. Murray said:
DBH:
ATM:
You are right. It is precariously brittle. That brittleness
is part of the "Grand Challenge" of building a viable AI
Mind. First we have to build a brittle one, then we must
trust the smarter-than-we-are crowd to incorporate fault-
tolerance.
You have cross-posted this to comp.ai.neural-nets, but
you obviously don't understand neural nets, or you would
know that a distributed representation is not brittle.
DBH:
ATM:
Do you mean "human-language-specific" or
"programming-language"?
Human language, of course. Your system is very Anglo-
centric. I suppose that's because you probably only
know English, but it seems ridiculous to me to define
a standard for intelligence in terms of one language.
With programming-language variables, we have to
start somewhere,
Why not start at the solution? You said that "AI had
been solved in theory". Why can't you apply that
theory to produce a sophisticated AI on the first go?
and then we let adventitious AI coders change the
beginnings.
"Change the beginnings"?
[...]
http://mentifex.virtualentity.com/enboot.html -- all show
unique and original diagrams of an AI Mind that contains
the thinking apparatus for multiple human languages --
in other words, an AI capable of Machine Translation
(MT).
LOL!!! Yes, the diagrams are certainly "unique" and
"original". And they're about as useful as my "diagram" to
design a human teleporter:
**<%-%>*** &$ @-++=! [human gets transported here]
If you feel there isn't enough detail to accomplish the task,
then you know how I feel when I look at your diagrams.
DBH:
ATM:
Once again, we have to start somewhere. Once we
attain critical mass in freelance AI programmers, then
we scale up.
LOL!! You don't know what "scalable" means, do you?
You think I meant: "has small scale". But I meant: "won't
work well when scaled up".
[...]
DBH:
Besides the fact that the "enVocab" module is
embarrassingly underspecified, the notion of indexing
words is just silly.
ATM:
Nevertheless, here at the dawn of AI (flames? "Bring 'em
on.")
Ok, I'll oblige you...I wish something would "dawn" on you!
we need to simulate conceptual gangs of redundant nerve
fibers, and so we resort to numeric indexing just to start
somewhere.
LOL!! Have you considered an artificial neural network
design? They have been studied for decades, have well-
known and desirable properties, and look nothing like
what you propose in your "theory". They have their own
set of problems, but it looks to me like you have no idea
what they are or how they work. One thing they do well
is content-addressable memory. That means that instead
of finding entities via numerical index, you find them via
their features.
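
To make that concrete, here is a minimal sketch of a
Hopfield-style content-addressable memory. (This is my
own toy illustration; the patterns and sizes are
arbitrary.) Two patterns are stored in a weight matrix
with the Hebbian outer-product rule, and a corrupted cue
settles back into the nearest stored pattern. Retrieval
is by features, not by any numeric index.

// Minimal Hopfield-style content-addressable memory (toy sketch).
// Patterns are vectors of +1/-1. Storage: Hebbian outer products.
// Recall: iterate the units until the state settles.
#include <cstdio>
#include <vector>

int main() {
    const int N = 8;
    // Two stored "memories" (arbitrary, mutually orthogonal patterns).
    std::vector<std::vector<int>> mems = {
        {+1,+1,+1,+1,-1,-1,-1,-1},
        {+1,-1,+1,-1,+1,-1,+1,-1}
    };
    // Hebbian learning: w[i][j] += x[i]*x[j], no self-connections.
    double w[N][N] = {};
    for (const auto& m : mems)
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                if (i != j) w[i][j] += m[i] * m[j];

    // A corrupted cue: the first memory with two bits flipped.
    std::vector<int> x = {-1,+1,+1,+1,-1,-1,+1,-1};

    // Synchronous updates until stable: x[i] = sign(sum_j w[i][j]*x[j]).
    for (int step = 0; step < 10; ++step) {
        std::vector<int> nxt(N);
        for (int i = 0; i < N; ++i) {
            double h = 0;
            for (int j = 0; j < N; ++j) h += w[i][j] * x[j];
            nxt[i] = (h >= 0) ? +1 : -1;
        }
        if (nxt == x) break;   // settled into an attractor
        x = nxt;
    }
    for (int v : x) std::printf("%+d ", v);   // prints the restored memory
    std::printf("\n");
    return 0;
}

Run it and the two flipped bits get repaired: the network
recalls the whole pattern from a damaged fragment of it.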
So far, using Google is your only demonstrated skill. I
respect that more than anything you've had to say.
[...]
"uniconceptual filaments"?
ATM:
Yes. Each simulated nerve fiber holds one single concept.
LOL!!! Wow. If you were smart enough to write comics
instead of theories, you could give Scott Adams a run for his
money. I can only assume you got this idea from your meager
understanding of how the human brain works. And I further
postulate that you thought to yourself: "Hey, if it works for the
brain it must work for my AI Mind!" Well, I'm afraid you're
about a century behind the times. There is nothing intrinsic
to a real nerve fiber to suggest that it can store information,
let alone something as nebulous and possibly complex and
structured as a "concept".
In fact, real neurons store information in the *connections*.
That is, in the pattern of connectivity and in the synaptic
weights. But it's ridiculous that I should have to explain
this to someone who has "solved AI in theory". Whereas you
have never used your fiber theory to build a single useful
working model, Rosenblatt built a working model about half
a century ago showing that connection strengths *could*
lead to useful computation. How can you call this the
"dawn of AI" when you haven't solved even a single toy
problem, while people fifty years ago, long dead now,
solved numerous ones?
The proof of the pudding is in the eating, but you don't even
have pudding yet! (Or maybe that's all you have...)
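
If you doubt even that, here is a minimal sketch of
Rosenblatt-style perceptron learning on a toy problem.
(Logical AND is my choice of example, not Rosenblatt's
original photocell setup.) Every bit of the learned
"knowledge" ends up in the connection weights.

// Minimal Rosenblatt-style perceptron (toy sketch).
// All of the learned "knowledge" lives in the connection
// weights, which are adjusted from labeled examples.
#include <cstdio>

int main() {
    // Toy problem: learn logical AND from its truth table.
    const int in[4][2]  = {{0,0},{0,1},{1,0},{1,1}};
    const int target[4] = { 0,   0,   0,   1 };

    double w[2] = {0, 0}, bias = 0;
    const double rate = 0.1;

    // Perceptron rule: w += rate * (target - output) * input.
    for (int epoch = 0; epoch < 20; ++epoch)
        for (int k = 0; k < 4; ++k) {
            double sum = bias + w[0]*in[k][0] + w[1]*in[k][1];
            int out = (sum > 0) ? 1 : 0;
            int err = target[k] - out;
            w[0] += rate * err * in[k][0];
            w[1] += rate * err * in[k][1];
            bias += rate * err;
        }

    for (int k = 0; k < 4; ++k) {
        double sum = bias + w[0]*in[k][0] + w[1]*in[k][1];
        std::printf("%d AND %d -> %d\n", in[k][0], in[k][1],
                    (sum > 0) ? 1 : 0);
    }
    return 0;
}

Twenty epochs of weight adjustment and the net computes
AND correctly: a solved toy problem, circa 1958.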
ATM:
Yes. Conceptual fibers may coalesce into a "gang" or
minigrid distributed across the entire mindgrid, for
massive redundancy -- which affords security or longevity
of concepts, and which also aids in massively parallel
processing (MPP).
This is all well and good, but until you define how fibers
"coalesce", it doesn't *mean* anything. You also don't
explain how MPP occurs, or why concepts would otherwise
be insecure or short-lived.
[...]
The amount of memory is completely irrelevant, since you
have not given enough detail to build a working model.
ATM:
If the AI coder has an opportunity to go beyond 32-bit and
use a 64-bit machine, then he/she/it ought to do it, because
once we arrive at 64-bits (for RAM), we may stop a while.
[...]
ATM:
The desired "unitariness of mind" (quotes for emphasis)
may preclude using "clusters or grids of processors."
LOL!!! Ok, let me get this straight...your theory maps
concepts onto "simulated nerve fibers", but "unitariness of
mind" precludes *distributed computing*??? Ok, this is
absolute hilarity! Umm...the whole point of neural network
architecture is that computation is distributed over a large
number of simple computational devices. You at least
understand this to some extent, because you see the value
of neurons, even if you don't really understand how they
work. But a "mind grid" of "nerve fibers" is the ultimate
distributed processing system!!! What does it matter if they
are simulated on one processor or one million? You
yourself said that mind != brain. That's about the only true
thing on your site, and the idea is at least hundreds of years
old. And now, you seem to think that "unitariness of mind"
demands "unitariness of brain". You thinking isn't just
muddled...it's completely off the mark.
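
To drive the point home, here is a toy sketch (entirely
my own illustration, nothing from your site) that runs
one update step of a simulated grid of units first on a
single thread and then split across four threads. The
results are bit-for-bit identical: the number of
processors changes how fast the math runs, not what it
computes.

// Toy sketch: the same "mindgrid" update computed serially
// and split across threads. The substrate doesn't change the math.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

int main() {
    const int N = 1000;
    std::vector<double> state(N), serial(N), parallel(N);
    for (int i = 0; i < N; ++i) state[i] = i * 0.001;

    // A stand-in "unit update": each unit averages its two neighbors.
    auto update = [&](std::vector<double>& out, int lo, int hi) {
        for (int i = lo; i < hi; ++i)
            out[i] = 0.5 * (state[(i + N - 1) % N] + state[(i + 1) % N]);
    };

    update(serial, 0, N);              // "one processor"

    std::vector<std::thread> pool;     // "one million" (well, four)
    for (int t = 0; t < 4; ++t)
        pool.emplace_back(update, std::ref(parallel),
                          t * N / 4, (t + 1) * N / 4);
    for (auto& th : pool) th.join();

    std::printf(serial == parallel ? "identical\n" : "different\n");
    return 0;
}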
[...]
1) How does consciousness work?
ATM:
Through a "searchlight of attention". When a mind is
fooled into a sensation of consciousness, then it _is_
conscious.
LOL!!! You just replaced one black box with another.
Namely, you replaced "consciousness" with "searchlight
of attention" and "sensation of consciousness". Barring
the obvious fact that the second part is embarrassingly
circular, it's interesting that you think consciousness is
so trivial that it can be described in two sentences.
Serious researchers have turned this question into an
entire field of inquiry, offering numerous theories which
attempt to explain various aspects of consciousness. But
you just demonstrated an overwhelming ignorance of both
the state of the art and the nature of consciousness itself
with your trite summary.
Attention is almost certainly a necessary condition for
consciousness, but it is hardly sufficient. Someone just
posted a "quine" program in C++ on this newsgroup.
That is, a program that produces its own source code as
its output (a minimal example appears below). Is that
sufficient "attention"? Is that program "conscious"?
What about a bacterium that is aware of its chemical
and luminous environment and attends to each in turn?
Is such a bacterium conscious? Your
definition is woefully indiscriminate and incomplete.
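
(For anyone who missed the quine post, a minimal C++ one
can look like the two lines below. It carries no comments
because a quine must reproduce its own source exactly,
comments included: compile it, run it, and the output is
the program itself.)

#include <cstdio>
int main(){const char*s="#include <cstdio>%cint main(){const char*s=%c%s%c;std::printf(s,10,34,s,34,10);return 0;}%c";std::printf(s,10,34,s,34,10);return 0;}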
ATM:
You've got me there. Qualia totally nonplus me.
Really? They are exactly the reason that an AI needs a
body. And I guarantee that any theory of AI which does
not address qualia in some way will not be taken
seriously by anyone who studies cognitive science. Even
people who don't believe in qualia (like Dennett)
acknowledge that they must be addressed.
ATM:
Probably by the lapse of time, so that STM *becomes*
LTM.
"Probably"? So if you asked Newton: "What is the ballistic
trajectory of a cannonball absent air resistance?", you would
be satisfied with: "Probably some polynomial curve"? If
Newton claimed to have a complete theory of mechanics,
*I*, for one, would not be satisfied with such a nebulous
answer. Neither am I satisfied with such a vague answer
from someone who claims to have "solved AI in theory".
The formation of long-term memories is such a fundamental
operation of intelligent agents that to treat it so nonchalantly
betrays nothing but a contempt for serious cog. sci. work.
After all, the formation of long-term memories is obviously
critical to *learning*, and an entity that couldn't learn could
barely be said to be intelligent.
ATM:
Syllogistic reasoning is the next step, IFF we obtain
funding.
http://www.kurzweilai.net/mindx/profile.php?id=26
- $send____.
Ah, I see. So you reply to a question about the one feature
of intelligence that separates man from other creatures with
a single term, and then have the audacity to ask for money???
Humans engage in far more than syllogistic reasoning. You
should at least know what other types of reasoning exist.
You know what's really funny about that link? This part:
Customers who shopped for this item also shopped for these items:
  - Hidden Order by John H. Holland
  - The Career Programmer by Christopher Duncan
  - Lies and the Lying Liars Who Tell Them by Al Franken
I'm not sure why customers who shopped for your book
also shopped for Al Franken's book, but it seems to be a
rather humorous juxtaposition. Anyway, I certainly am not
going to pay $30 to have you give me some hare-brained
idea of how language works.
ATM:
By the influence of physiological "storms" upon
ratiocination.
Oh, the "storms". Of course...why didn't I think of that?
How's the weather in your brain today, Arthur? Is it
raining? Got a little water on the brain? I think your
answers speak volumes about your theory. I just hope
would-be programmers get to see your answers
to the deep questions of AI before spending any time
on your project. Unless they're bad programmers. In
which case, I hope they waste time on your project
instead of subjecting the rest of the world to their code.
If you want a much better explanation of how emotions
work, you should really check out Damasio's work. It
is at least as engaging as your "storms" explanation, and
is backed by actual neurological studies.
[...]
Like a typical crackpot (or charlatan), you deceive via
misdirection. You attempt to draw attention to all the
alleged hype surrounding your ideas without addressing
the central issues. I challenged your entire scheme by
claiming that minds are not blank slates, and that human
IIRC the problem was with how you stated the question.
[...]
Oh, so you had a hard time understanding: "You're a crank;
defend yourself"? Somehow, that doesn't surprise me. Now,
you say that you are serious. But only serious researchers
can admit that they are wrong. So, let's put your theory to
the falsifiability test. What would it take to convince you
that your fiber theory is wrong? Be careful, because if you
say: "Nothing", then you expose yourself for the dogmatic
crackpot that you are. And if you give some implausible
test, then your theory will be viewed as being useless, since
it can't be tested.
So that leaves you with providing a legitimate test. But
this is where you are stuck, because real theories make
predictions, and it is those predictions which are tested.
Since your theory makes no predictions, you won't be
able to propose a reasonable test of it. So I will do it
for you. Why don't we pit your theory against all the
other theories which were put forth during the *real*
"dawn of AI"? Why don't you use your theory to describe
a solution to chess playing, checkers playing, simple
letter recognition, theorem proving, or any problem for
which a historical symbolic AI solution exists? This seems
like a rather fair challenge, since I'm only asking you to
do as much as *WHAT HAS ALREADY BEEN
ACCOMPLISHED*. If you fail to solve even one of
these problems with your theory, then you should
wonder whether your state of AI really is more than
50 years behind the times. Given the state of hardware
today, I expect you to be able to solve several of the
given problems in the same AI.
Note that a "solution" must be specific enough that a
programmer who doesn't know anything about your
theory could write it. That means you need to describe
algorithms all the way down to pseudocode, or use
standard algorithms. Anything like: "Add the sensorium
module here" will be considered an admission of failure.
Good luck, and I look forward to the validation of your
grand theory!
Dave