The direct result of the faculty of consciousness is the production of
abstract information. Abstract information is abstract in the sense
that it is abstracted from a being's own being.
The original poster of the thread is frustrated with our discussion. From
our talk he couldn't grasp what intelligence is. Why has the theme turned to
the "nature of consciousness"? I think that we can differentiate some closely
related words that bear on intelligence. In my view, consciousness actually
helps us make decisions, but what operates with abstract information is the
intellect, not consciousness.
I agree with Roedy Green that consciousness is feeling. Sentient means
conscious; sentience is the basic and simplest sign of consciousness.
The notion of "intellect" doesn't refer to the capacity for feeling. Intellect
works by rules, uses language, and draws logical corollaries from statements.
A conscious being, such as a simple animal, may possess very weak or zero
intellect. Analogously, an apparently intellectual agent, such as CYC or
another expert system, may have tiny or zero consciousness.
At first, like other animals, human beings had the capacity to feel. When we
started to use language and make machines, we started to accumulate
"abstract" knowledge and intellect, that is, the capacity to manipulate
symbols. Symbols don't occur in nature. Consciousness wishes for something;
intellect thinks how to behave in order to realize the wish. Mind
(consciousness plus intellect plus experience) invents new symbols (words)
so that the intellect can think better.
Consciousness operates with immediate feelings or images of feelings. These
are not abstract. I see (feel) the symbols "sigma" and "integral"; they are
patterns on paper, not abstract information. Intellect understands their
meaning, the knowledge embodied in these pictures. So it is intellect that
operates with "abstract information".
Intellect logically compares possible decisions and chooses the better one.
If the decisions are "intellectually" equal, then consciousness (free will,
wanting) works instead of intellect.
Suppose a robot can operate quite well in its environment. In many
circumstances its programmed "artificial intellect" behaves properly: the
battery is low, so recharge; it is dark, so turn the light on; and so on.
Some actions are difficult to foresee, so the designer implemented a
"self-learning engine" in the robot. This engine is meant to increase the
value of a goal function by finding the right behavior when circumstances
are logically equal for the robot's intellectual subsystem. The intellectual
part of the robot's behavior can be explained by the knowledge the designer
implemented. The newly learnt part of its behavior is the result of random
search; there is no "logical" explanation of the new behavior (without
special investigation).
After a while the robot has learnt to behave better (to choose better
decisions) in logically indistinguishable situations. It makes a decision
because of a "wish" of its self-learning subsystem. The robot wants to do
such and such; intellectually, it doesn't grasp the reasons why such
behavior is better. While it is learning, this robot has a chance to be a
sentient being with respect to the senses that are involved in the learning.
When the new behavior is included in the "intellectual engine" as a new
rule, the behavior becomes automatic. If the robot is learning constantly,
then it may be sentient always.
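The loop described above (rule-based intellect first; random-search learning
as a tie-breaker; learnt preferences eventually frozen into automatic rules)
might be sketched roughly like this. This is only a toy illustration under my
own assumptions: the class, the method names, the exploration rate, and the
promotion threshold are all hypothetical, not anything from the discussion.

```python
import random

class Robot:
    """Toy sketch: a rule table (the 'intellectual engine') plus a
    random-search 'self-learning engine' that acts when the rules
    give no answer, i.e. when situations are logically equal."""

    def __init__(self, actions, promote_after=10):
        self.actions = actions
        self.rules = {}           # situation -> action (automatic behavior)
        self.scores = {}          # (situation, action) -> accumulated reward
        self.trials = {}          # situation -> number of learning trials
        self.promote_after = promote_after

    def decide(self, situation):
        # Intellect first: an explicit rule decides deterministically.
        if situation in self.rules:
            return self.rules[situation]
        # Otherwise the intellect sees the options as equal, and the
        # self-learning engine explores, biased by past reward.
        if random.random() < 0.3:          # keep exploring at random
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.scores.get((situation, a), 0.0))

    def reward(self, situation, action, value):
        # Feedback from the goal function strengthens the learnt "wish".
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0.0) + value
        self.trials[situation] = self.trials.get(situation, 0) + 1
        # After enough trials, freeze the best action as a new rule:
        # the behavior becomes automatic, as described above.
        if self.trials[situation] >= self.promote_after:
            best = max(self.actions,
                       key=lambda a: self.scores.get((situation, a), 0.0))
            self.rules[situation] = best
```

Note that once a situation is promoted into `self.rules`, `decide` no longer
consults the learnt scores at all, which mirrors the point that the learnt
behavior stops being a "wish" and becomes an automatic rule.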
EK