Standards in Artificial Intelligence

Discussion in 'Perl Misc' started by John J. Trammell, Sep 10, 2003.

  1. On 10 Sep 2003 10:22:11 -0800, Arthur T. Murray mega-crossposted:
    > A webpage of proposed Standards in Artificial Intelligence is at
    > http://mentifex.virtualentity.com/standard.html -- updated today.
    >


    A killfile for excessive crossposters and other undesirables is
    at /home/trammell/.slrn/score -- updated about 15 seconds ago.
     
    John J. Trammell, Sep 10, 2003
    #1

  2. Arthur T. Murray, Sep 10, 2003
    #2

  3. "Arthur T. Murray" <> wrote in message
    news:...
    > A webpage of proposed Standards in Artificial Intelligence
    > is at http://mentifex.virtualentity.com/standard.html --
    > updated today.


    Besides the fact that your notices have nothing to do with C++,
    you should stop posting them here because you are a crank.
    You claim to have a "theory of mind", but fail to recognize
    two important criteria for a successful theory: explanation
    and prediction. That is, a good theory should *explain
    observed phenomena*, and *predict non-trivial
    phenomena*. From what I have skimmed of your "theory",
    it does neither (though I suppose you think that it does
    well by way of explanation).

    In one section, you define a core set of concepts (like
    'true', 'false', etc.), and give them numerical indexes.
    Then you invite programmers to add to this core by using
    indexes above a suitable threshold, as if we were defining
    ports on a server. When I saw this, and many other things
    on your site, I laughed. This is such a naive and simplistic
    view of intelligence that you surely cannot expect
    to be taken seriously.
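
    To make the objection concrete, here is a minimal sketch, in C++,
    of the kind of fixed concept-index registry the page proposes.
    Every identifier and the threshold value below are invented for
    illustration; nothing is taken from the actual site. The
    resemblance to reserving well-known port numbers on a server
    should be obvious, and so should how little any of it has to do
    with intelligence:

        // Hypothetical sketch of the "numbered concepts" scheme
        // criticized above; names, indexes, and the threshold are
        // made up for illustration only.
        #include <iostream>
        #include <map>
        #include <stdexcept>
        #include <string>

        class ConceptRegistry {
            std::map<int, std::string> concepts_;
            // Indexes below this are "reserved", like well-known ports.
            static const int kUserThreshold = 1000;
        public:
            ConceptRegistry() {
                concepts_[64] = "true";   // fixed "core" concepts
                concepts_[65] = "false";
            }
            void add(int index, const std::string& name) {
                if (index < kUserThreshold)
                    throw std::invalid_argument("core index is reserved");
                concepts_[index] = name;
            }
            const std::string& lookup(int index) const {
                return concepts_.at(index);
            }
        };

        int main() {
            ConceptRegistry r;
            r.add(1001, "banana");              // a volunteer's "new" concept
            std::cout << r.lookup(64) << "\n";  // prints "true"
        }

    Any competent programmer could write that in an afternoon, and it
    would still be nothing more than a lookup table. Numbering words
    is not a theory of mind.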

    I dare say one of the most advanced AI projects in
    existence is Cog. The philosophy behind Cog is that
    an AI needs a body. You say more or less the same
    thing. However, the second part of the philosophy behind
    Cog is that a simple working robot is infinitely better
    than an imaginary non-working robot. That's the part
    you've missed. Cog is designed by some of the field's
    brightest engineers, and funded by one of the last
    strongholds of AI research. And as far as success
    goes, Cog is a child among children. You expect to
    create a fully developed adult intelligence from scratch,
    entirely in software, using nothing more than the
    volunteer labor of gullible programmers and your own
    musings. This is pure comedy.

    At one point, you address programmers who might
    have access to a 64-bit architecture. Pardon me, but
    given things like the "Hard Problem of Consciousness",
    the size of some programmer's hardware is completely
    irrelevant. These kinds of musings are forgivable when
    coming from an idealistic young high school student
    who is just learning about AI for the first time. But the
    prolific nature of the work implies that you have been
    at this for quite some time.

    Until such time as you can A) show that your theory
    predicts an intelligence phenomenon that is both novel
    and later confirmed by experiment or observation of
    neurological patients, or B) produce an artifact that is
    at least as intelligent as current projects, I must conclude
    that your "fibre theory" is just so much wishful rambling.

    The level of detail you provide clearly shows that you
    have no real understanding of what it takes to build a
    successful AI, let alone something that can even
    compete with the state of the art. The parts that you
    think are detailed, such as your cute ASCII diagrams,
    gloss over circuits that researchers have spent their
    entire lives studying, which you leave as "an exercise
    for the programmer". This is not only ludicrous, but
    insulting to the work being done by legitimate
    researchers, not to mention it insults the intelligence
    of anyone expected to buy your "theory".

    Like many cranks and crackpots, you recognize that
    you need to insert a few scholarly references here and
    there to add an air of legitimacy to your flights of fancy.
    However, a close inspection of your links shows that
    you almost certainly have not read and understood
    most of them; otherwise A) you would provide links *into* the
    sites, rather than *to* the sites (proper bibliographies
    don't say: "Joe mentioned this in the book he published
    in '92" and leave it at that), and B) you wouldn't focus
    on the irrelevant details you do.

    A simple comparison of your model with something
    a little more respectable, such as the ACT-R program
    at Carnegie-Mellon, shows stark contrasts. Whereas
    your "model" is a big set of ASCII diagrams and some
    aimless wanderings on whatever pops into your head
    when you're at the keyboard, the "models" link (note
    the plural) on the ACT-R page takes you to what...?
    To a bibliography of papers, each of which addresses
    some REAL PROBLEM and proposes a DETAILED
    MODEL to explain the brain's solution for it. Your
    model doesn't address any real problems, because
    it's too vague to actually be realized.

    And that brings us to the final point. Your model has
    components, but the components are at the wrong
    level of detail. You recognize the obvious fact that
    the sensory modalities must be handled by
    specialized hardware, but then you seem to think that
    the rest of the brain is a "tabula rasa". To see why
    that is utterly wrong, you should take a look at Pinker's
    latest text by the same name (The Blank Slate).
    The reason the ACT-R model is a *collection* of
    models, rather than a single model, is very simple.
    All of the best research indicates that the brain is
    not a general-purpose computer, but rather a
    collection of special-purpose devices, each of which
    by itself probably cannot be called "intelligent".

    Thus, to understand human cognition, it is necessary
    to understand the processes whereby the brain
    solves a *PARTICULAR* problem, and not how it
    might operate on a global scale. The point is
    that the byzantine nature of the brain might not make
    analysis on a global scale a useful or fruitful avenue
    of research. And indeed, trying to read someone's
    mind by looking at an MRI or EEG is like trying to
    predict the stock market by looking at the
    arrangement of rocks on the beach.

    Until you can provide a single model of the precision
    and quality of current cognitive science models, for
    a concrete problem which can be tested and
    measured, I must conclude that you are a crackpot
    of the highest order. Don't waste further bandwidth
    in this newsgroup or others with your announcements
    until you revise your model to something that can be
    taken seriously (read: explains observed phenomena
    and makes novel predictions).

    Dave
     
    David B. Held, Sep 10, 2003
    #3
  4. Matthias (Guest)

    (Arthur T. Murray) writes:

    > A webpage of proposed Standards in Artificial Intelligence is at
    > http://mentifex.virtualentity.com/standard.html -- updated today.


    How about using a mailing list where everyone interested in your
    website can subscribe and is informed about your frequent updates?

    If everybody posted their update notifications through Usenet, the news
    servers would immediately break down from overload. So please be
    polite and use the appropriate channels to communicate with the
    readers of your website.
     
    Matthias, Sep 17, 2003
    #4
