
From: "Sergio Navega" <snavega@ibm.net>
Subject: On McCarthy, Logic and Cognition
Date: 28 May 1999 00:00:00 GMT
Message-ID: <7ijma2$duu$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7id7iu$c23$1@mulga.cs.mu.OZ.AU> <7iebdr$9kf$1@mulga.cs.mu.OZ.AU> <7iek0u$emi$1@mulga.cs.mu.OZ.AU> <7ii0m3$rdk$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Anders N Weinstein wrote in message <7ii0m3$rdk$1@mulga.cs.mu.OZ.AU>...
>In article <7iek0u$emi$1@mulga.cs.mu.OZ.AU>,
>Matthew L. Ginsberg <ginsberg@once.cirl.uoregon.edu> wrote:
>
>>As a scientist, I often wish it were otherwise.  AI would be a lot
>>easier if introspection counted for something.  But as a human, I'm
>>glad things are this way: our different areas of competence will make
>>it more likely that men and machines collaborate than that they
>>compete.
>
>Note I am not appealing to introspection. I believe a creature with a body
>and senses can get into mental states that a disembodied inference
>engine cannot, including the ones that are important for intelligence
>as commonly understood. Such a creature can use public language terms
>like the ones above in accordance with "logic" in the ordinary everyday
>sense of the word, but may not have logic *inside* its implementing
>mechanisms; and there seems to me to be no reason to believe that an
>artificial robot with logic as its internal implementing mechanism can
>achieve human competence at intelligent engagment with the world.
>

I'm mostly on your side, agreeing with all of your previous
text. But the same thoughts that led me to discuss some of
Matt's original ponderings also lead me to rethink some of the
points you touch on in the paragraph above.

I have no doubt that a creature with a body and senses will
develop mental states that cannot be emulated in a disembodied
computer. I may even agree that the former's understanding of
public language may have no direct counterpart in the latter.
I suppose this goes directly against some of Matt's ponderings.
But I have some issues to discuss about this, later in this text.

I also agree that a purely logical robot, even one with a full
sensory apparatus, will have problems dealing with its external
world (although I can't see how the mapping between senses and
logic would possibly be done).

But I'd like to reason about the question of disembodied
machines as regards the presence of *intelligence*. I believe
this may be a new way of looking at the problem, precisely
because of the subtle absence of the concept of *human-like*
intelligence.

A pertinent set of considerations about these questions is
addressed in some of John McCarthy's works. When McCarthy
proposes human-level intelligence in one of his papers, he is
apparently thinking of achieving it in terms of logic, with
which I disagree, but his framing of the problem is wise.

I have a special admiration for McCarthy's work, in particular
his ideas about nonmonotonic reasoning, the ascription of mental
states (beliefs, intentions, desires, etc.) to machines, his
notion of the "common sense informatic situation", his writings
about approximate concepts and approximate theories, and his
witty insights about elaboration tolerance.

McCarthy seems to have detected a great many of the problems
that an AI must solve, perhaps failing only in the proposed way
of solving them: through logic. However, when one analyzes the
problems McCarthy raises under a cognitivist approach, one sees
that most of them, if not all, are *naturally* solved by
approaches that emphasize progressive concept formation and
perceptual refinement.

When one starts thinking seriously about these cognitive
approaches, one very often finds an essential point: these
structures emerge naturally as a result of the perceptual
activity of the agent within its world.

Then it is easy to conclude that a solid, human-like vision of
the world (and so a comparable form of intelligence) can *only*
emerge if the agent acquires all its concepts from direct
interaction with and perception of the world. I guess this is
what Anders evoked in his text, with which I mostly agree.

However, I *also* challenge this position. But I'll leave the
details of that challenge for another time. Suffice it to bring
to the surface some ideas about general aspects of this problem.
Let's consider a simple diagram of an agent and its world:

                    +-------------+
                    |  inference  |
                    |    level    |
                    +-------------+
+--------+          |   sensory   |
| world  +----------+    level    |
+--------+          +-------------+

I don't see a way to question this naive diagram. The agent
interacts with the world: through its sensory level (SL) it
obtains information and exerts actions on it. The information
captured by the sensory level somehow ends up feeding the
inference level (IL), which is where the logicists think the
greatest part of the work is done.
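
If it helps to make this naive diagram concrete, here is a
minimal sketch in Python (every class, method and feature name
below is mine, purely illustrative, not anybody's actual
system): the agent's only contact with the world is through the
SL, and the IL works on whatever the SL delivers.

class World:
    """A toy world the agent can observe and act upon."""
    def __init__(self):
        self.state = ["light", "noise"]

    def observe(self):
        return list(self.state)

    def apply(self, action):
        print("world received:", action)

class SensoryLevel:
    def perceive(self, raw):
        # Reduce the raw world state to the features this agent can sense.
        return {"features": raw}

    def act(self, world, action):
        # Actions on the world also pass through the sensory/motor boundary.
        world.apply(action)

class InferenceLevel:
    def infer(self, percept):
        # The logicist bet: symbolic inference over the percept
        # yields the action.
        return "orient" if "noise" in percept["features"] else "wait"

class Agent:
    def __init__(self):
        self.sl, self.il = SensoryLevel(), InferenceLevel()

    def step(self, world):
        percept = self.sl.perceive(world.observe())   # world -> SL
        action = self.il.infer(percept)               # SL -> IL
        self.sl.act(world, action)                    # IL -> SL -> world

Agent().step(World())    # prints: world received: orient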

The logicists apparently say that intelligence is the
result of the processing of the IL. The empiricists and
connectionists say that intelligence is mostly perceptual and
cognitive, and is located predominantly in the SL.

The logicists *suppress* the SL and make computers run *only*
the IL, hoping that inferences done at this level will *follow*
the idiosyncrasies of the world. The result is the frame problem
and the inability of such an agent to reason in tandem with the
world. That hope is obviously not fulfilled, and this is the
source of some of the problems faced by logic.

The connectionist approach to AI forgets about the IL and
concentrates its attention on the SL, hoping that high-level,
logical reasoning will somehow *emerge* from the low-level
interaction with the world. So far, this hope also appears to
be unfruitful.

What I propose (please, note, this is a hypothesis, not a claim)
is that we're missing something here:

                    +-------------+
                    |  inference  |
                    |    level    |
                    +-------------+
                    |   mystery   |
                    |    level    |
                    +-------------+
+--------+          |   sensory   |
| world  +----------+    level    |
+--------+          +-------------+

This "mystery level" is where the important action can be unrolling.
It is the piece that allows the low level, connectionist-based
architecture to emerge rule-like, logical relations. It is
an interface, an area where concepts are grouped together in
order to assemble higher level structures, that can then be
modeled by some forms of logic. It is the level where most of
the perceptual puzzles are solved, in the case of humans in an
automatic and unconscious way. This is the level where highly
parallel processing of different threads of thought may be
battling with each other, in order for (usually) just one to
surface to the higher, conscious level.
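
Here is the same kind of toy sketch, extended with the
hypothetical mystery level (again, every name and threshold is
mine, purely illustrative): recurring low-level features are
grouped into concepts, and only concepts stable enough to
behave like symbols reach the inference level.

from collections import Counter

feature_counts = Counter()   # perceptual regularities seen so far

def sensory_level(raw_observation):
    # Reduce a raw observation to the features this agent can sense.
    return list(raw_observation)

def mystery_level(features, stability_threshold=3):
    # Automatic, unconscious grouping: features that recur often
    # enough are promoted to concepts the inference level can
    # treat as symbols.
    feature_counts.update(features)
    return {f for f in features if feature_counts[f] >= stability_threshold}

def inference_level(concepts):
    # The part that can be modeled by some form of logic:
    # rule-like reasoning over stable concepts only.
    return "approach" if "food" in concepts else "explore"

# Four perception-to-action cycles: "food" only becomes a usable
# concept after it has been perceived three times.
for observation in (["food", "wall"], ["food"], ["food", "wall"], ["food"]):
    print(inference_level(mystery_level(sensory_level(observation))))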

Do I have any evidence to support this "summer night's dream"?
That is my primary concern today. Here are some items from my list:

a) Implicit learning of artificial grammars (see the small
sketch after this list)

b) Shadmehr et al., the gradual displacement of sensorimotor
learning to other parts of the brain, with increased efficacy

c) The problem of the Expert versus Novice, as Dreyfus suggests

d) Blindsight, prosopagnosia

e) Language aphasias

f) Language acquisition by children, in particular its rule-like
patterns

g) Cognitive tests of word priming

and others.
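
For item (a), a tiny sketch of what an "artificial grammar" is
in those implicit-learning experiments (the transition table
below is a made-up finite-state example, not Reber's original):
strings are generated by a small finite-state machine, and
subjects who merely memorize such strings later classify new
strings as grammatical above chance, without being able to
state the rules.

import random

# Each state maps to (letter to emit, next state); None ends the string.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("T", 1)],
    3: [("V", None), ("P", None)],
}

def generate_string(max_len=8):
    state, letters = 0, []
    while state is not None and len(letters) < max_len:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

print([generate_string() for _ in range(5)])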

Regards,
Sergio Navega.


