Selected Newsgroup Message

Dejanews Thread

From: Sergio Navega <>
Subject: Re: Designing intelligent embedded agents (was Re: AI hardware)
Date: 30 May 1999 00:00:00 GMT
Message-ID: <7iovej$kqn$>
Approved: (Moderator)
References: <7io057$7d3$> <7iom9b$gqq$>
Organization: Intelliwise Research and Training

David Kinny gave us an insightful and thought-provoking post.
I'd like to comment on a few topics.

David Kinny wrote in message <7iom9b$gqq$>...

>While I'm comfortable with a behavioural approach to defining
>intelligence, it does seem that to have the capability to behave in
>the complex, purposeful, flexible and adaptive ways to which we apply
>that label does say something (at least at an abstract level) about
>how an agent is structured and how it operates.  You can't solve the
>problem with a HLUT, because of resource bounds and because behaviour
>needs to be history sensitive (at least stochastic) to avoid rigidity.
>You can't solve the problem by declaratively embedding all the
>knowledge you'll ever need in some form and then doing sound and
>correct reasoning, because it's intractable and unfocussed, because you
>need knowledge about the current and past and predictions about future
>states of the world, and because you have to survive and continue even
>when your knowledge is incomplete or just plain wrong.

This is an important point, which seems to be subsumed by the
"confounding cause and effect" fallacy: the idea that knowledge
"causes" intelligence, when it seems to be the opposite. Turning
the problem upside down, it becomes clear that only embodied
agents are able to obtain knowledge from their worlds through
the use of "a mechanism" (which is up to us to design), and that
the speed and depth of this process is a good candidate for the
measure of the intelligence of that agent. As a matter of
fact, I have a proposition that challenges this need for
embodiment, but that's way beyond our subject here.

>To behave intelligently there's all sorts of different types of
>knowledge you need to have, including knowledge of what to do and how
>and when (and perhaps why) to do it, knowledge of what you are capable
>of doing, knowledge of how your environment (including other agents)
>will most likely behave, what has worked before and what hasn't, and
>knowledge about how to acquire and assimilate new knowledge.  Much of
>this may be hardwired, but some of it must be learned.  Some will be
>explicit, so that it can be reasoned about, but much will be implicit.
>I think you end up needing an underlying computational architecture
>that is a layered or structured assembly of different elements which
>employ different knowledge representation and "reasoning" techniques
>and operate in parallel over different timescales.  It must somehow
>combine the ability to react rapidly to sensory and internal stimuli
>in predefined or learned ways and the ability to perform activities
>automatically with the ability to develop/discover appropriate ways of
>responding to problems and situations it's never encountered before.

Nicely put. I will insist again that the best way for us to get
the necessary understanding of this process is through a careful
examination of human development, as investigated by Cognitive
Science.

>This seems to imply a system which can in a real sense make decisions,
>i.e. choose what sensory inputs to notice and which goals to pursue,
>choose whether and how to reason about them and between alternative
>courses of action that may be appropriate, learn from the outcomes of
>such choices how to achieve a desired effect, and rapidly optimize the
>performance of learned routines so they are done smoothly with minimal
>expenditure of "mental" effort.  It's dramatically different from the
>"oracle" model of an intelligent system: the box you can put a
>question to that eventually pops out an intelligent answer.

Again, nicely put. I find especially relevant here the concept
of "choosing what sensory inputs to notice", which I can
reduce to the word "attention". Lots of insights can be obtained
by thinking about what it means to "direct our attention" to a
particular feature under study. What does an agent obtain by
doing this? What *drives* the agent to choose that special feature
and put aside all the rest? I find this question to be at the
root of the problem we're trying to solve. The answer seems
to be lurking in the concept of relevance, and this appears to
point us to the idea of "judgment of value", which leads us to
the notion of "comparison".

Comparison means noticing differences, which leads me to propose
that the most basic operations that an agent *must* be able to
accomplish are to notice *similarity* and to notice *anomaly*.
I find these principles important, and they seem to pervade all
levels of our cognition. From these principles we can derive
a lot of other important aspects, without being restricted to
any particular approach or architecture (symbolic, connectionist,
hybrid). These are, in my humble opinion, the starting points of
the "Maxwell equations of thought" that, I believe, can be created.
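To make that proposal concrete, here is a minimal sketch (my own
illustration, not anything from the post; the feature vectors, the
Euclidean distance, and the threshold are arbitrary choices): an agent
whose only primitive operations are noticing what is similar to past
experience and flagging what is not.

```python
# Toy model of an agent built on two primitives: noticing similarity
# and noticing anomaly. Anything anomalous is remembered as a new prototype.
import math

class Noticer:
    def __init__(self, threshold=1.0):
        self.prototypes = []      # previously seen feature vectors
        self.threshold = threshold

    def observe(self, features):
        """Return ('similar', nearest prototype) or ('anomaly', None)."""
        if self.prototypes:
            nearest = min(self.prototypes,
                          key=lambda p: math.dist(p, features))
            if math.dist(nearest, features) <= self.threshold:
                return ("similar", nearest)
        # Nothing close enough: an anomaly, worth attention and remembering.
        self.prototypes.append(features)
        return ("anomaly", None)

n = Noticer(threshold=1.0)
print(n.observe((0.0, 0.0)))   # first observation: always an anomaly
print(n.observe((0.1, 0.1)))   # close to a known prototype: similar
print(n.observe((5.0, 5.0)))   # far from everything seen so far: anomaly
```

The point of the sketch is only that "attention" falls out of the
comparison: the anomalous observations are exactly the ones the agent
acts on and stores.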

>We are a long way from understanding how to design such systems,
>indeed it's questionable whether we will ever design them explicitly.
>I think it's more likely we'll find techniques that allow them to be
>developed, but we'll then be faced with a host of difficulties in
>understanding exactly how they work.[snip]

Fantastic. That's it. This is probably the first big thing
that we have to change *within ourselves*. In order to build the
AI we dream about, we will probably have to dismiss the desire
to know *exactly how it works*, down to the bit. I'm not
proposing connectionist systems here, where this is a natural
characteristic. I'm proposing that the way the AI agent
understands its world will be so "personal" (which means, so
full of intertwined relations among abstractly created concepts
and specific sensory experiences) that the only way of
understanding it "to the bone" would be to be *subject to
the very same set of experiences*. That is possible only in
the most elementary systems or initial levels (where we will
have opportunities for debugging), but as the complexity of the
system grows, our hope of maintaining that level of
understanding will fade.

Sergio Navega.

[ is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <>
Subject: Re: Designing intelligent embedded agents (was Re: AI hardware)
Date: 01 Jun 1999 00:00:00 GMT
Message-ID: <7iu755$g2f$>
Approved: (Moderator)
References: <7io057$7d3$> <7iom9b$gqq$> <7iovej$kqn$> <7iq4km$ipe$> <7iq8ln$l0t$>
Organization: Intelliwise Research and Training

Joshua Scholar wrote in message <7iq8ln$l0t$>...
>As for the primacy of knowledge over intelligence or the reverse: we
>have a long way to go before we have expert systems that can say "The
>theory you taught me about X is clearly wrong, I have better one."
>But that fact doesn't, in itself, prove anything except that we've
>never been as close to perfecting AI as anyone's marketing
>department [snip]

I agree with your comments here, except for the "long way to go".
This should occur from the beginning. It reminds me of an
experiment that Jean Piaget did with children, one that fits
the concepts involved in this discussion like a glove.

Piaget approached a child about 5 or 6 years old and asked her
what causes the wind, what generates wind. The child thought
for a while and then she answered that it was the trees, the
leaves of the trees wave like this (and waved her hand in
front of Piaget's face, showing how wind occurs from movement).

Piaget did not say that the child was wrong, but commended
her for her reasoning. It would be easier to say that the
child was wrong and then say that wind occurs because of
temperature differences and so on. But that "rote learning"
would not be useful to the child. What Piaget proposed was
to make the child reach this conclusion by showing her
situations in which this concept was clearly observable,
detectable, *perceivable* by the child herself.

This is, in my opinion, incredibly relevant to AI. What
Piaget emphasized for the children was not the right
or wrong answer, but the process of elaborating
theories to explain things. I equate this process with
intelligence itself.

So when an Expert System is being "loaded" with knowledge,
it is not being loaded with intelligence. It will not
learn how to *ask* important questions, but only to answer
them. Add to that capacity of asking the ability
to judge evidence and perceive relevance, and we should
get a pretty good system that learns things by itself.

The importance of this learning process is great. No theory we
have today is definitive, and none will ever be. Everything we
know today may change if something new is discovered (as
Einstein proved in his time). That means we are constantly
*revising* our theories and beliefs, and so should our systems.
Which leads me to suggest that the process of revising a theory
(a model) of the world is one of the important things
that an AI system must be able to do. From the beginning.
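As a sketch of what that revision process could look like (again my
own toy illustration, not Navega's design; the candidate rules and the
number-sequence domain are invented for the example): the system holds
every theory as provisional and discards whichever ones the next
observation contradicts.

```python
# Toy theory revision: a "theory" is a candidate rule relating x to y.
# Each new observation prunes the theories it contradicts.

def revise(theories, observations):
    """Keep only the theories consistent with every observation so far."""
    return [t for t in theories
            if all(t(x) == y for x, y in observations)]

# Three candidate theories about how y relates to x.
theories = [
    lambda x: x + 2,      # "y is x plus 2"
    lambda x: 2 * x,      # "y is double x"
    lambda x: x * x,      # "y is x squared"
]

observations = []
for x, y in [(2, 4), (3, 6)]:
    observations.append((x, y))
    theories = revise(theories, observations)
    print(f"after seeing ({x}, {y}): {len(theories)} theories survive")
```

Seeing (2, 4) rules nothing out, since all three rules map 2 to 4; the
second observation is what forces the revision, which is the point:
the system never "knows" its surviving theory is final, only that it
has not yet been contradicted.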

Sergio Navega.

