Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Just a thought....
Date: 28 Dec 1998 00:00:00 GMT
Message-ID: <3687f0b1.0@news3.ibm.net>
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net> <36813CB0.4F90AF46@sandpiper.net> <RSig2.120$qn.4208@nsw.nnrp.telstra.net> <75u4ls$jei$1@bertrand.ccs.carleton.ca> <OQLVb2EM#GA.406@nih2naaa.prod2.compuserve.com> <36876a7e.0@news3.ibm.net> <O9XLUQpM#GA.309@ntdwwaaw.compuserve.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 28 Dec 1998 20:57:21 GMT, 166.72.29.207
Organization: SilWis
Newsgroups: comp.ai.philosophy

Josef <71351.356@compuserve.com> wrote in message ...
>Sergio Navega wrote in message <36876a7e.0@news3.ibm.net>...
>
>>In my vision, practically everything (except internal automatic
>>organ control and the initial level of sensory processing) is
>>learned in humans. This has profound consequences for AI, because
>>one can find few reasons to start with a system with lots of
>>"innate" knowledge. The problem is not that it's bad to have
>>prebuilt knowledge. The problem is that by focusing one's effort
>>on the preparation of this initial knowledge, it is often easy
>>to forget that the entity must be able to come up with this
>>knowledge *by itself*. Without this capacity, all one could get
>>from such an "AI" system is the electronic equivalent of an
>>encyclopedia (which means, a way to store the intelligence of
>>its designers).
>>
>So, you think it should be a bottom-up approach.

From what I've said, you could correctly draw that conclusion. But that
was not my intention, because bottom-up approaches are generally taken
to mean connectionist, neural network approaches, and that's not what
I have in mind. It is a pity that today one can be put into only
two categories: either you are a symbolicist or you are a connectionist.
I think that there's an "in between" class that must be explored.

>Would I be correct in
>thinking that traditionally, AI theorists were going top down, for example,
>finding 'rules' and structures to do intelligence?
>

Again, you are correct in your inference, but that's not the way
I think. Traditional symbolicist AI doesn't go from top to bottom
(even if its practitioners think they're doing so). It goes from top
to top, meaning it goes *horizontally*. Let me try to explain this.

When symbolicist AI researchers develop knowledge representation
using logic, for instance, they are starting at the top: a bunch
of symbols with no meaning to the system. But instead of going
to the bottom, that is, instead of making the system more
aware of the elemental knowledge that supports
a specific logical inference, they go sideways, supporting
a logic expression with *other* logic expressions, also unsupported.

Thus, they are not building intelligent systems, but only
representational systems. One example could clarify this.
(I will use logic here, but you can do it with semantic networks,
propositional calculus, description logics, whatever; unfortunately,
due to space and time restrictions, my examples are somewhat simplified).

Suppose you tell the system this:

V x, Bird(x) -> Fly(x)
For all x, if x is a Bird, then x can fly

To allow the system to understand that there are exceptions
to this rule, you tell it that abnormal birds do not fly.

V x, Bird(x) & Abnormal(x) -> ~Fly(x)
For all x, if x is a Bird and x is Abnormal, then
x cannot fly.

So far so good. This is enough to allow the system to
reason about Tweety, the ostrich (lines starting with -> are
system responses; the others are user entries):

Bird(Tweety)
-> Bird(Tweety).

Fly(Tweety)?
-> Yes.

Abnormal(Tweety)
-> Abnormal(Tweety).

Fly(Tweety)?
-> No.
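
(For concreteness, here is a rough sketch, in Python, of the kind of
default reasoning that little dialogue relies on. The encoding, the
"tell"/"flies" names and the fact tuples are invented for illustration;
they are not taken from any real knowledge representation system.)

# Illustrative sketch only: a tiny fact base that treats Abnormal(x)
# as a default exception, i.e. negation as failure.
facts = set()

def tell(fact):
    facts.add(fact)

def flies(x):
    # Bird(x) -> Fly(x), unless Bird(x) & Abnormal(x) -> ~Fly(x).
    # "Unless" is read as "Abnormal(x) is not currently known", which
    # is what makes the conclusion nonmonotonic: a later fact retracts it.
    if ("Bird", x) in facts:
        return ("Abnormal", x) not in facts
    return None  # nothing known either way

tell(("Bird", "Tweety"))
print(flies("Tweety"))        # True  ("Yes")
tell(("Abnormal", "Tweety"))
print(flies("Tweety"))        # False ("No")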

This is a successful case of logical reasoning and nonmonotonic
reasoning. But I could go on and define this:

V x, Put-On-Plane(x) -> Fly(x)
For all x, if I put x on a plane, then x can fly

What should happen when I then tell the system this and ask:

Put-On-Plane(Tweety)
-> Put-On-Plane(Tweety).

Fly(Tweety)?
-> ????

What should the system answer? Which wins here: the fact
that Tweety is abnormal, or the fact that it is inside a plane?
Now I go on with another expression:

V x, Tie-On-The-Floor(x) -> ~Fly(x)
For all x, if I tie x to the floor, then x cannot fly

Tie-On-The-Floor(Tweety)
-> Tie-On-The-Floor(Tweety).

Fly(Tweety)?
-> ????

And then I define:

V x, Kick-On-The-Butt(x) -> Fly(x)

... you know how to proceed.
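
(Again only a sketch, with the same invented encoding as above: once
all of these rules are asserted, a naive reader of them can merely
report that they disagree about Tweety; nothing in the expressions
themselves says which one should win.)

# Each new rule is just another unsupported if-then about flying.
facts = {("Bird", "Tweety"), ("Abnormal", "Tweety"),
         ("Put-On-Plane", "Tweety"), ("Tie-On-The-Floor", "Tweety")}

rules = [  # (test, conclusion about Fly)
    (lambda x: ("Bird", x) in facts and ("Abnormal", x) not in facts, True),
    (lambda x: ("Bird", x) in facts and ("Abnormal", x) in facts,     False),
    (lambda x: ("Put-On-Plane", x) in facts,                          True),
    (lambda x: ("Tie-On-The-Floor", x) in facts,                      False),
]

# Prints a set containing both True and False -- the "????" above.
print({conclusion for test, conclusion in rules if test("Tweety")})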

How is the system supposed to reason with this kind of thing?
Besides intractability (an explosion of logic expressions to
evaluate), this kind of reasoning seems very far from what
we do in our heads. What we seem to do involves knowledge of
causal models and patterns of previous experiences. We know,
by perceptual experience, that you can tie something to
the floor such that a kick on the butt will not make it
fly, but that even tied, an airplane is usually able to break the
rope and fly.

So instead of supporting logic expressions with other
logic expressions, what we should be doing is supporting
them with *lower level knowledge*, down to the level of
perceptual concepts such as movement, up, down, over,
around, join, break, fill, etc. When a system learns all
about Tweety *and* those lower level concepts, it is not just
learning about birds. It is learning about *things of
our world* that it can analogize to *different* situations
and use to solve different problems. This is exactly the opposite
of intractability: it is the acquisition of fundamental
concepts with a high degree of universality that allows
the system to *reduce* its future processing tasks.
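
(To make that contrast concrete, here is a deliberately crude toy of
what answering from lower level, causal knowledge might look like.
The two quantities and all the numbers are invented for this
illustration; a real system, let alone a brain, would work with far
richer perceptual models.)

# One tiny "causal" model -- how strong is the thing's own propulsion,
# and how strong is whatever holds it down -- answers every variation
# of the Tweety questions without a new surface rule per situation.
propulsion = {"sparrow": 5, "ostrich": 0, "airplane": 1000}
restraint = {None: 0, "rope": 50}

def can_fly(thing, carried_by=None, tied_with=None):
    # A carried thing flies if its carrier can (Tweety put on a plane).
    power = propulsion[carried_by] if carried_by else propulsion[thing]
    return power > restraint[tied_with]

print(can_fly("ostrich"))                         # False
print(can_fly("ostrich", carried_by="airplane"))  # True
print(can_fly("sparrow", tied_with="rope"))       # False
print(can_fly("airplane", tied_with="rope"))      # True: it breaks the rope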

Regards,
Sergio Navega.

From: "Josef" <71351.356@compuserve.com>
Subject: Re: Just a thought....
Date: 28 Dec 1998 00:00:00 GMT
Message-ID: <Owin5yqM#GA.305@ntawwabp.compuserve.com>
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net> <36813CB0.4F90AF46@sandpiper.net> <RSig2.120$qn.4208@nsw.nnrp.telstra.net> <75u4ls$jei$1@bertrand.ccs.carleton.ca> <OQLVb2EM#GA.406@nih2naaa.prod2.compuserve.com> <36876a7e.0@news3.ibm.net> <O9XLUQpM#GA.309@ntdwwaaw.compuserve.com> <3687f0b1.0@news3.ibm.net>
Newsgroups: comp.ai.philosophy
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0

Sergio Navega wrote in message <3687f0b1.0@news3.ibm.net>...
>Josef <71351.356@compuserve.com> wrote in message ...
>>Would I be correct in
>>thinking that traditionally, AI theorists were going top down, for example,
>>finding 'rules' and structures to do intelligence?
>>
>Again, you are correct in your inference, but that's not the way
>I think. Traditional symbolicist AI doesn't go from top to bottom
>(even if its practitioners think they're doing so). It goes from top
>to top, meaning it goes *horizontally*. Let me try to explain this.
>
>When symbolicist AI researchers develop knowledge representation
>using logic, for instance, they are starting at the top: a bunch
>of symbols with no meaning to the system. But instead of going
>to the bottom, that is, instead of making the system more
>aware of the elemental knowledge that supports
>a specific logical inference, they go sideways, supporting
>a logic expression with *other* logic expressions, also unsupported.
>
>Thus, they are not building intelligent systems, but only
>representational systems.
snip
>How is the system supposed to reason with this kind of thing?
>Besides intractability (an explosion of logic expressions to
>evaluate), this kind of reasoning seems very far from what
>we do in our heads. What we seem to do involves knowledge of
>causal models and patterns of previous experiences. We know,
>by perceptual experience, that you can tie something to
>the floor such that a kick on the butt will not make it
>fly, but that even tied, an airplane is usually able to break the
>rope and fly.
>
We have a "theater" but instead of a Shakespearian drama, we are replaying
Bill Nuy the science guy! (joke)

>So instead of supporting logic expressions with other
>logic expressions, what we should be doing is supporting
>them with *lower level knowledge*, down to the level of
>perceptual concepts such as movement, up, down, over,
>around, join, break, fill, etc. When a system learns all
>about Tweety *and* those lower level concepts, it is not just
>learning about birds. It is learning about *things of
>our world* that it can analogize to *different* situations
>and use to solve different problems. This is exactly the opposite
>of intractability: it is the acquisition of fundamental
>concepts with a high degree of universality that allows
>the system to *reduce* its future processing tasks.
>

But couldn't representation, supported by a large enough knowledge base,
like Cyc, behave correctly? And thus have no need to go vertical toward
supporting lower level concepts? And couldn't lower level concepts also
be modeled with representational means?

Of course, how would such a system evolve or handle novel situations?

Josef.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Just a thought....
Date: 29 Dec 1998 00:00:00 GMT
Message-ID: <3688c6ab.0@news3.ibm.net>
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net> <36813CB0.4F90AF46@sandpiper.net> <RSig2.120$qn.4208@nsw.nnrp.telstra.net> <75u4ls$jei$1@bertrand.ccs.carleton.ca> <OQLVb2EM#GA.406@nih2naaa.prod2.compuserve.com> <36876a7e.0@news3.ibm.net> <O9XLUQpM#GA.309@ntdwwaaw.compuserve.com> <3687f0b1.0@news3.ibm.net> <Owin5yqM#GA.305@ntawwabp.compuserve.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 29 Dec 1998 12:10:19 GMT, 129.37.182.237
Organization: SilWis
Newsgroups: comp.ai.philosophy

Josef <71351.356@compuserve.com> wrote in message ...
>
>Sergio Navega wrote in message <3687f0b1.0@news3.ibm.net>...
>
>>So instead of supporting logic expressions with other
>>logic expressions, what we should be doing is supporting
>>them with *lower level knowledge*, down to the level of
>>perceptual concepts such as movement, up, down, over,
>>around, join, break, fill, etc. When a system learns all
>>about Tweety *and* those lower level concepts, it is not just
>>learning about birds. It is learning about *things of
>>our world* that it can analogize to *different* situations
>>and use to solve different problems. This is exactly the opposite
>>of intractability: it is the acquisition of fundamental
>>concepts with a high degree of universality that allows
>>the system to *reduce* its future processing tasks.
>>
>
>But couldn't representation, supported by a large enough knowledge base,
>like Cyc, behave correctly?
>And thus have no need to go vertical toward
>supporting lower level concepts?

It depends on what you mean by "behave". If you want something that
is able to reason following the knowledge that was introduced into
it, then CYC will probably do that. But if you want something that
can help you solve unseen problems, then that demands real
intelligence. I like Jean Piaget's definition of intelligence:
"Intelligence is what you use to solve a problem that you
don't know how to solve".

> And couldn't lower level concepts also
>be modeled with representational means?
>

Ah! This is exactly my line of research. I often post messages with
criticism of traditional approaches to AI, as if I were a critic
of any approach to AI. That's not the case: I'm firmly convinced
that AI is possible on conventional computers, if we can only
"do the right thing". My main short-term task is to discover what
this "right thing" is. And that involves knowing what's
happening inside biological brains, because they are the only
reference for intelligence we have so far.

Regards,
Sergio Navega.

