Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: purely reactive intelligences
Date: 14 Apr 1999 00:00:00 GMT
Message-ID: <3714cad9@news3.us.ibm.net>
References: <3714B9DC.4517@esumail.emporia.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 14 Apr 1999 17:05:29 GMT, 129.37.182.51
Organization: Intelliwise Research and Training
Newsgroups: comp.ai

R Jones wrote in message <3714B9DC.4517@esumail.emporia.edu>...
>One purely reactive system could watch the world and also its "hands"
>and modify the positions of its "fingers".  A second purely reactive
>system could watch the other's hands and the world and act on the world.
>In this way all state is transferred into the world.
>So purely reactive systems can be universal machines.  But is it
>practical?  The amount of state that can be represented in this way
>is very small.  (Chemical trails are like this for insects.)
>Many parallel modules would help but only if tasks are sufficiently
>independent.  A simple, benign world would allow such creatures to
>exist but not a complex and dangerous one.
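
The externalized-state scheme described in the quote can be sketched as a toy: two stateless ("purely reactive") agents whose policies are pure functions of what they observe, with all state held in the world. Everything here (the names `writer`, `actor`, the counter world) is an illustrative assumption, not from the original post.

```python
# Hypothetical sketch: two stateless reactive agents cooperating on a
# 3-step task. Neither agent has memory; ALL state lives in the world,
# like a chemical trail that insects read and extend.

def writer(observation):
    # Reacts only to what it currently sees; leaves a mark in the world.
    return observation + 1          # extend the "trail" by one mark

def actor(observation):
    # A second reactive agent acts once the world shows enough marks.
    return "act" if observation >= 3 else "wait"

world = 0                            # the only state anywhere
for _ in range(3):
    world = writer(world)            # state is written into the world
print(actor(world))
```

The point of the sketch is the quote's claim: the pair computes something stateful even though each agent is a memoryless stimulus/response mapping, because the world itself carries the state between steps.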

I agree with some of your points and disagree with others.
Purely reactive creatures may develop and live well even in a complex
and uncertain world such as ours, provided that evolutionary pressures
are able to adapt the organism to different situations. A cockroach
is a typical example. In this case, the "intelligence" lies in nature
itself.

However, purely reactive creatures are not what one could call
"intelligent" under a broader definition of the term. When behaviorists
tried to fit us humans into that category, they committed the sin
of assuming that stimulus/response architectures could be intelligent.
They can't (although they can survive in the real world).

Intelligent organisms *must* represent their world through some kind
of mental model (I can see some behaviorists jumping out of their
chairs now!). You can't have intelligence without some kind of world
representation to allow for predictive "thoughts". Creativity, for
instance, appears only if one is able to "run" virtual worlds in
one's mind. Without that ability, what looks like creativity is pure
chance.
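
The "running virtual worlds" idea can be sketched as a minimal lookahead agent: before acting, it tries each action inside an internal model and picks the one with the best predicted outcome. The model, the value function, and the numbers below are all illustrative assumptions, not anything from the post.

```python
# Hypothetical sketch: a model-based agent that "thinks" by simulating
# actions in an internal world model instead of trying them for real.

def world_model(state, action):
    # The agent's internal prediction of what the world would do.
    return state + action

def predicted_value(state):
    # The agent's estimate of how good a state is (closer to 10 is better).
    return -abs(10 - state)

def plan(state, actions):
    # "Thinking": run every action in the mental model, pick the best.
    return max(actions, key=lambda a: predicted_value(world_model(state, a)))

print(plan(7, [-1, 1, 2, 5]))        # picks 2, since 7 + 2 = 9 is closest to 10
```

A purely reactive agent has no `world_model` to consult, so it can only map the current stimulus to a response; the lookahead step is exactly what internal representation buys.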

So when AI researchers started thinking about representationalist
ways of doing AI, they were right. The problem is that several of them
chose logic as a starting point, and that is as big an error as
behaviorism. But this is a story for another time...

Regards,
Sergio Navega.

