Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 05 May 1999 00:00:00 GMT
Message-ID: <37303a17@news3.us.ibm.net>
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 May 1999 12:31:19 GMT, 129.37.183.205
Organization: Intelliwise Research and Training
Newsgroups: comp.ai

I'm sympathetic to both Luke's and Matthew's viewpoints. With Matthew
I agree that CYC is much like watching a David Copperfield show:
an awesome performance with smoke and mirrors, but you know it's just
a trick. However, Luke's viewpoint is also valid: CYC is indeed the
most interesting thing done in knowledge engineering, with a lot of
potential and exciting practical applications.

The essential point here seems to be just this: knowledge
engineering *is not* artificial intelligence. So what's wrong
with CYC, IMHO, is not its performance, but its claims. To claim
that CYC is (or will be) intelligent is putting the cart before
the horse because of a simple confusion.

One of the primary starting points of the CYC project is to consider
that, to be intelligent, one has to have common sense (which means
to have a lot of previously inserted world knowledge). That's the
problem, because it should be the exact opposite.

It is the intelligent system that is able to acquire, by itself,
all the knowledge of its world, which then allows it to
reason with common sense. Common sense reasoning is something
*acquired*, and the ability to acquire it is one of the essential
points behind intelligent performance.

Sergio Navega.

Luke Kaven wrote in message <7gomkt$d8a$1@newsmonger.rutgers.edu>...
>But Matthew, have you ever had access to the full system?  Putting aside the
>question of its scientific value for a moment, it is by far the most
>impressive ontology existing anywhere.  Doug's claims concern its
>instrumental value by and large.  And those claims don't seem unreasonable.
>It is certainly not many of the things that people imagine to be claimed of
>it.  But I can imagine any number of knowledge engineering efforts that
>could not get off the ground without such a thing.  Perhaps the use of
>invective here is a bit hasty, even for casual discourse.
>
>Luke Kaven
>
>Matthew L. Ginsberg wrote in message <7gnv5t$ggf$1@once.cirl.uoregon.edu>...
>
>>As far as I can tell, CYC stands for "smoke and mirrors."  The project
>>has been around for a long time (1984?), and Lenat has always said
>>that amazing things would come out of CYC in "about ten years."  He
>>said it in 1984, again in 1994, and, as far as I know, he's still
>>saying it.  Nothing amazing has happened yet.
>>
>>My own impression, based on my knowledge of CYC and of other projects,
>>is that the real AI is elsewhere, and that CYC is a scientific dead
>>end.  Of course, I've been wrong before ...
>
>

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 12 May 1999 00:00:00 GMT
Message-ID: <7h9r9n$6fa$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7h8r85$j4l@cs.vu.nl> <7h9ftl$k2$1@mulga.cs.mu.OZ.AU> <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU> <7h9nvs$4s3$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7h9nvs$4s3$1@mulga.cs.mu.OZ.AU>...
>In article <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU>,
>Randy Crawford  <rlc@ncsa.uiuc.edu> wrote:
>>
>>Perhaps then, it would be more accurate to say, "behaving intelligently
>>implies the presence of intelligence", and leave it at that.
>
>No.  I mean what I say (and I say what I mean).
>
>Intelligence implies intelligent behavior.  Intelligent behavior can
>involve doing dumb things; indeed, it can be argued that intelligent
>behavior *requires* doing the occasional dumb thing because the
>approximations needed to get by in the real world don't always work.
>

I have mixed opinions about this way of seeing the question. On the
one hand, when you say that intelligent behavior must take into account
even the "silly" things performed by the agent, I happen to agree with
you, as this seems important to the process of discovery.

On the other hand, when I imagine a system with a lot of preloaded
knowledge, even if it occasionally does silly things, I am reluctant to
ascribe intelligence to it. If the system does not augment its
"knowledge", relying only on what has been previously inserted, then
I prefer to ascribe intelligence not to the system, but to its
"designers".

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, try posting, but expect some delay before ]
[ your article appears: if that fails mail it to <comp-ai@moderators.isc.org> ]
[ and ask your news administrator to fix the problems with your news system.  ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 14 May 1999 00:00:00 GMT
Message-ID: <7hh3v6$qdn$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7h9ftl$k2$1@mulga.cs.mu.OZ.AU> <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU> <7h9nvs$4s3$1@mulga.cs.mu.OZ.AU> <7hg4ku$gau$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Anders N Weinstein wrote in message <7hg4ku$gau$1@mulga.cs.mu.OZ.AU>...
>In article <7h9nvs$4s3$1@mulga.cs.mu.OZ.AU>,
>Matthew L. Ginsberg <ginsberg@cirl.uoregon.edu> wrote:
>>Turing's fundamental insight was to *define* intelligence as "behaving
>>intelligently."  He then went on to define intelligent behavior in a
>>particular way, using his imitation game.  But the fundamental insight
>>remains, and I would argue that no other definition of intelligence
>>makes any sense at all.
>
>Sure, but in a very ordinary sense of behavior, a purely symbolic
>machine such as Turing envisioned does not behave at all. Dancing is a
>form of behavior in this ordinary sense. Rooting around in a drawer
>looking for your pen is a form of behavior in this ordinary sense.  And
>so on. But Turing's machine has no body to move. It can't do anything
>at all in the world. If it had to maintain itself in existence as an
>animal does it would fail quickly and die, perhaps proving theorems in
>pure number theory along the way. If it were a remotely realistic model
>of a human mind it would go insane from sensory deprivation.
>
>If one prefers, one could say that Turing was relying on an
>extraordinarily restricted notion of "behavior", limited purely to
>formal symbolic interactions. But there is no reason our criteria for
>intelligent (i.e. human-like) behavior ought to be so limited.
>
>It never ceases to amaze me that Turing thought there was nothing
>outrageous in the idea of an entity that has never had body nor sense
>experience discoursing quite calmly on the meaning of "shall I compare
>thee to a summer's day". One should not confuse the possibilities of
>'behavior" fin Turing's basically Cartesian vision of a disembodied,
>purely symbolic intellect with the everyday behavior in the world
>that we can observe in a normal, flesh and blood, human being.
>I'm all for behavioral tests, but Turing's test is way too limited.
>

I agree entirely with what you said. But there's a way in which
Turing's test may be useful, and that seems to be related to
a subset of intelligent activities which excludes physically
related behaviors. This could happen, for instance, when one
queries the machine about mathematics. But let's be careful about
the subtle difference between proving theorems and thinking
mathematically, the way humans do.

Newell and Simon's Logic Theorist, for instance, could be the
first example to come to mind related to these aspects: it was
able to go through part of Russell and Whitehead's Principia
Mathematica, successfully duplicating some proofs of theorems.
Then we have GPS, later Lenat's AM and Eurisko, and then
Ken Haase's Cyrano.

My question would then be this one: in what way could all these
programs be "turingistically" compared to a human operator?
In another experiment, how would such systems compare not with
a knowledgeable mathematician, but with a child who learns
things almost without prior knowledge?

In what ways could these programs be said to be learning
from experience and, from those experiences, deriving intelligent
conclusions? (I insist again on learning from experience, as
I see it as a fundamental aspect behind intelligence.)

It is easy to dismiss Turing's test based on the problems of
comparing sensory beings such as us with "blind" computers. But
the question, in my view, remains unsolved even if we limit
the Turing test to purely mathematical reasoning. In my view,
we're still not done with a competent "child-like" mathematician,
let alone with a machine able to reason about our world.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, try posting, but expect some delay before ]
[ your article appears: if that fails mail it to <comp-ai@moderators.isc.org> ]
[ and ask your news administrator to fix the problems with your news system.  ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 22 May 1999 00:00:00 GMT
Message-ID: <7i46e2$l80$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <7h9ftl$k2$1@mulga.cs.mu.OZ.AU> <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: poster
Newsgroups: comp.ai

Randy Crawford wrote in message <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU>...
>"Matthew L. Ginsberg" wrote:
>>
>> In article <7h8r85$j4l@cs.vu.nl>, J1 <jbroeks@not4mail.cs.vu.nl> wrote:
>>
>> >There ofcourse (?) is a difference between behaving intelligently and
>> >being intelligent.
>>
>> No.  This is, at some level, the whole point of the Turing test: there
>> is *NO* difference between behaving intelligently and being
>> intelligent.  AI is a performance discipline.
>>
>>                                                  Matt Ginsberg
>
>OK.  You're a logic advocate.  To state there's no difference is to state:
>
>    behaving intelligently <==> being intelligent
>
>This has the consequents:
>
>1) If one is behaving intelligently, then one is intelligent.
>   (Your intent, I think, and statistically defensible.)
>
>2) If one is intelligent, then one behaves intelligently.
>   (Unintended, I think, and statistically indefensible.  Intelligent
>   entities non-randomly belie their intelligence by OFTEN doing stupid
>   things.  In fact, the more intelligent you are, the more likely you
>   are to act stupidly (in my experience).)
>
>Therefore, with half of your statement refuted by contradiction, your
>equivalence works only one way and behaving intelligently is NOT
>equivalent to being intelligent; it only implies it.
>
>Perhaps then, it would be more accurate to say, "behaving intelligently
>implies the presence of intelligence", and leave it at that.
>

Randy's answer was on the mark. Item 2) above is one of the
precious things to keep in mind.

But I'd like to introduce another factor, closer to the original
topic of the thread: knowledge and intelligence.

     knowing the answer to your question <==> being intelligent ???

One may be intelligent without being able to answer your
question. One may know the answer to your question
without being intelligent. I think this is the main confusion
that lurks in CYC.

Unfortunately, knowing how to answer a question appropriately
is the preferred method used to assess intelligence. It seems to
be the way to ascertain whether one behaves intelligently or not. But
is this an indication of "being intelligent"? So the problem is this:
can we equate knowing with *being* intelligent?

If CYC were built to "know" everything in order to reason with
common sense adequately, could we say that it is intelligent?

I answer this by saying that I find intelligence to be an
attribute related to the ability of an entity to acquire
knowledge by itself, and we can try a mathematically abstract
way of expressing this. Intelligence is the derivative of
knowledge accumulation, by one's own effort, with respect
to time.

Under this definition, CYC, as far as we can say here
from "outside", is unable to acquire knowledge by itself,
so it has a constant amount of knowledge. The
derivative of a constant is zero.
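
As a loose way of writing this down (just the idea, nothing
rigorous): let K(t) be the knowledge a system has acquired by its
own effort up to time t. Then

    I(t) = dK(t)/dt

For a system whose "knowledge" only changes when its engineers
hand-edit the knowledge base, K(t) is constant between edits, so
I(t) = 0, which is the sense in which I say that CYC, seen from
the outside, shows zero intelligence.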

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, try posting, but expect some delay before ]
[ your article appears: if that fails mail it to <comp-ai@moderators.isc.org> ]
[ and ask your news administrator to fix the problems with your news system.  ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 22 May 1999 00:00:00 GMT
Message-ID: <7i5qmb$a21$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <7h9ftl$k2$1@mulga.cs.mu.OZ.AU> <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU>
Supersedes: <7i46e2$l80$1@mulga.cs.mu.OZ.AU>
X-Mod: ?
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Randy Crawford wrote in message <7h9mtd$4f0$1@mulga.cs.mu.OZ.AU>...
>"Matthew L. Ginsberg" wrote:
>>
>> In article <7h8r85$j4l@cs.vu.nl>, J1 <jbroeks@not4mail.cs.vu.nl> wrote:
>>
>> >There ofcourse (?) is a difference between behaving intelligently and
>> >being intelligent.
>>
>> No.  This is, at some level, the whole point of the Turing test: there
>> is *NO* difference between behaving intelligently and being
>> intelligent.  AI is a performance discipline.
>>
>>                                                  Matt Ginsberg
>
>OK.  You're a logic advocate.  To state there's no difference is to state:
>
>    behaving intelligently <==> being intelligent
>
>This has the consequents:
>
>1) If one is behaving intelligently, then one is intelligent.
>   (Your intent, I think, and statistically defensible.)
>
>2) If one is intelligent, then one behaves intelligently.
>   (Unintended, I think, and statistically indefensible.  Intelligent
>   entities non-randomly belie their intelligence by OFTEN doing stupid
>   things.  In fact, the more intelligent you are, the more likely you
>   are to act stupidly (in my experience).)
>
>Therefore, with half of your statement refuted by contradiction, your
>equivalence works only one way and behaving intelligently is NOT
>equivalent to being intelligent; it only implies it.
>
>Perhaps then, it would be more accurate to say, "behaving intelligently
>implies the presence of intelligence", and leave it at that.
>

Randy's answer was on the mark. Item 2) above is one of the
precious things to keep in mind.

But I'd like to introduce another factor, closer to the original
topic of the thread: knowledge and intelligence.

     knows to answer your question <==> being intelligent ???

One may be intelligent without being able to answer to one's
question. One may be able to know the answer to your question
without being intelligent. I think this is the main confusion
that lurks in CYC.

Unfortunately, knowing to answer appropriately to one's question
is the preferred method used to assess intelligence. It seems to
be the way to assert if one behaves intelligently or not. But is
this an indication of "being intelligent?". So the problem is this:
can we equate knowing with *being* intelligent?

If CYC was build to "know" everything in order to reason with
common sense adequately, could we say that it is intelligent?

I answer this by saying that I find intelligence to be an
attribute related to the ability of an entity to acquire
knowledge by itself, and we can try a mathematically abstract
way of expressing this. Intelligence is the derivative of the
knowledge accumulation, by one's own effort, with respect
to time.

Under this definition, CYC, as much as we can say here
from "outside", is unable to acquire knowledge by itself,
then it would have a constant amount of knowledge. The
derivative of a constant is "zero".

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: difference between being and behaving ?
Date: 12 May 1999 00:00:00 GMT
Message-ID: <3739e63f@news3.us.ibm.net>
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <37385198.FD1302F3@clickshop.com> <373856E6.2AB01E78@ix.netcom.com> <3739a944.1207932@news.lrz-muenchen.de>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 May 1999 20:36:15 GMT, 166.72.21.191
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Bjoern Guenzel wrote in message <3739a944.1207932@news.lrz-muenchen.de>...
>On Tue, 11 May 1999 12:12:22 -0400, "Phil Roberts, Jr."
><philrob@ix.netcom.com> wrote:
>
>[...]
>>Recently I have been wondering about the Big Blue Kasparov match and
>>wondering if we shouldn't modify our notion of intelligence to include
>>some capacity to learn from experience.  I'm not sure as to what
>
>How can we modify something we don't even have.
>
>Or have I missed the proclamation of the official notion of
>intelligence?
>

No, you haven't missed it, because obviously it doesn't exist. What
is not so obvious is that it will *never* exist, because complex
concepts such as "intelligence", "honor", "justice", "passion"
cannot be defined.

These concepts cannot be defined, because they can only be
*recognized*. Any child can do that recognition, and yet we adults
still struggle when attempting to define them. As an example, try
to "define" a cup. You'll see that any definition you come up
with can be stretched to encompass unlikely cases.

Phil appeared to notice that "learning" is a concept that should
be part of our "perceptual" notion of intelligence.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: difference between being and behaving ?
Date: 12 May 1999 00:00:00 GMT
Message-ID: <373967ae@news3.us.ibm.net>
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <37385198.FD1302F3@clickshop.com> <37387046@news3.us.ibm.net> <3738B0D9.2AEFE8A3@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 May 1999 11:36:14 GMT, 200.229.240.181
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <3738B0D9.2AEFE8A3@clickshop.com>...
>Sergio Navega wrote:
>
>> I find intelligence to be related to the
>> rate of growth of knowledge, where the knowledge must be
>> acquired by the entity itself. So if the entity is unable of acquiring
>> nothing new by itself, it is not intelligent. But that's just the way I
>> use that word.
>
>I can accept that.  An agent will not test intelligent relative to Sergio
>unless he determines that it can acquire knowledge by itself.  I dare say that
>many high school and undergraduate students would test badly with you.  I know
>I would have.  I can count on my fingers the times I actually added to my own
>knowledge in the lower grades without that knowledge being feed to me by the
>teacher or the text books ... it's doubtful that you would have caught me at
>one of those times.  (Cheeze .. that sounds like something Longley would have
>said.)   But seriously, I think you're under estimating the value of following
>syntax and vocabulary from the environment, and overestimating the role of
>adding new knowledge,  when it comes to learning experiences in human
>culture.
>

Seth, this does not work with humans. I mean, there's no such thing as
teachers "feeding" knowledge to their students. Teachers only show
things; it's up to the students to intelligently grab them. So it's almost
the same thing being taught by lectures, reading books or going to the
lab (the big difference is, obviously, in the sensorimotor support).
But basically it's all the same thing: it all depends on the agent
finding *what is relevant* and what's not. When I write about
"spoon feeding" I'm referring to a process that, in humans, would
be the equivalent of opening the skull and manually altering the
synapses of the brain. That's what I consider the equivalent of writing
directly in CycL or Meld over CYC.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: difference between being and behaving ?
Date: 12 May 1999 00:00:00 GMT
Message-ID: <3739e641@news3.us.ibm.net>
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <37385198.FD1302F3@clickshop.com> <37387046@news3.us.ibm.net> <3738B0D9.2AEFE8A3@clickshop.com> <373967ae@news3.us.ibm.net> <37399D6B.30E4E030@clickshop.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 May 1999 20:36:17 GMT, 166.72.21.191
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <37399D6B.30E4E030@clickshop.com>...
>Sergio Navega wrote:
>
>> Seth, this does not work with humans. I mean, there's no such thing as
>> teachers "feeding" knowledge to their students. Teachers only show
>> things; it's up to the students to intelligently grab them. So it's almost
>> the same thing being taught by lectures, reading books or going to the
>> lab (the big difference is, obviously, in the sensorimotor support).
>> But basically it's all the same thing: it all depends on the agent
>> finding *what is relevant* and what's not. When I write about
>> "spoon feeding" I'm referring to a process that, in humans, would
>> be the equivalent of opening the skull and manually altering the
>> synapses of the brain. That's what I consider the equivalent of writing
>> directly in CycL or Meld over CYC.
>
>I'm saying that humans learn by following the syntax and vocabulary
>queues in their environment.  With a little help from innate abilities this
>process produces a reflection of their environment in their minds.

I don't have much against this.

>That symbolic reflection has the innate ability to animate apart from the
>sensual contact with the environment.  There is quite a distinction
>between that view and your characterization of my view above.
>Inmho, without syntax and vocabulary queues that are already adapted
>to innate abilities there will be no salient learning.
>

I'm not sure I understand you fully. I agree that we develop an
inner world vision in our minds, and that this vision is where we
"run" a lot of mental experiments, trying to discover things that
can later be tested in the real world. We build this inner world
based on the regularities and patterns we perceive from sensory
experiences, and also by instruction through language. What
I see as a great problem is manually assembling this inner world
of ours inside a computer. Doing this will just copy "our" inner
world very imperfectly to the computer (imperfectly because our
conscious awareness is not able to "see" all the components of
our internal representation). Another problem is that the process
of building this world also informs us about the nature of
this world. This cannot be copied to the computer.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 24 May 1999 00:00:00 GMT
Message-ID: <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <7i121c$5k4$1@mulga.cs.mu.OZ.AU> <7i14dg$dj3$1@mulga.cs.mu.OZ.AU> <7i1vgn$nnj$1@mulga.cs.mu.OZ.AU> <7i9ico$qga$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Tom Breton wrote in message <7i9ico$qga$1@mulga.cs.mu.OZ.AU>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>
>>
>> Or, a much more polemic question (I hope not being flamed on this)
>> is intelligence, the way we want it, reducible to just FOPC with
>> some interesting extensions? The mathematical and logical gurus
>> of AI would promptly say yes. I have reasons to disagree with them.
>
>Well, "some interesting extensions" covers a lot of ground.
>Defeasible reasoning, second-order quantification, etc are not mere
>bells and whistles on FOPC, they change the whole nature of it.
>

I agree, and the same goes for circumscription, situation calculus
and event calculus: they are not minor additions but significant
enhancements to simple FOPC. Yet all these additions fail to
capture one of the essences of intelligence (in my view),
the recognition of patterns. These formalisms are ideal to
run on computers, but are very bad at producing good results
in this uncertain and vague world of ours.
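
To make concrete the kind of change such extensions bring, here is a
toy sketch of a defeasible rule (my own minimal illustration, not
circumscription or any particular default logic): adding a fact can
withdraw a previously derived conclusion, something plain FOPC never
does.

# Minimal defeasible-reasoning sketch: conclude the consequent of a
# default rule unless its exception is known to hold.
defaults = [
    # (prerequisite, exception, conclusion)
    ("bird(tweety)", "penguin(tweety)", "flies(tweety)"),
]

def conclusions(facts):
    derived = set(facts)
    for prereq, exception, concl in defaults:
        if prereq in derived and exception not in derived:
            derived.add(concl)
    return derived

print("flies(tweety)" in conclusions({"bird(tweety)"}))
# True
print("flies(tweety)" in conclusions({"bird(tweety)", "penguin(tweety)"}))
# False -- the conclusion is withdrawn when more is known (nonmonotonic)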

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg)
Subject: Re: Dr.Lenat theory
Date: 25 May 1999 00:00:00 GMT
Message-ID: <7ibofs$psv$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7i1vgn$nnj$1@mulga.cs.mu.OZ.AU> <7i9ico$qga$1@mulga.cs.mu.OZ.AU> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU>
Organization: Computational Intelligence Research Laboratory
Followup-To: comp.ai
Newsgroups: comp.ai

David Kinny wrote that he thought all this discussion about the Turing
test as defining intelligence was "a mite pedantic."  For what it's
worth, I only use it when someone asks me what I do.  When I say, "I'm
trying to build an artifact that will reliably pass the Turing test,"
I'm at least on solid ground with regard to my answer.

Sergio Navega wrote that the extensions to FOPC are lacking because they
don't capture pattern matching, which he calls "one of the essences of
intelligence."  While pattern matching is an essence for us (it is,
after all, how we do things like play chess), I see no evidence at all
that it's an essence for machines (which play chess in a very
non-pattern-matching way).

                                                Matt Ginsberg

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 25 May 1999 00:00:00 GMT
Message-ID: <7icbu7$irr$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7i1vgn$nnj$1@mulga.cs.mu.OZ.AU> <7i9ico$qga$1@mulga.cs.mu.OZ.AU> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU> <7ibofs$psv$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7ibofs$psv$1@mulga.cs.mu.OZ.AU>...
>David Kinny wrote that he thought all this discussion about the Turing
>test as defining intelligence was "a mite pedantic."  For what it's
>worth, I only use it when someone asks me what I do.  When I say, "I'm
>trying to build an artifact that will reliably pass the Turing test,"
>I'm at least on solid ground with regard to my answer.
>
>Sergio Navega wrote that the extensions to FOPC are lacking because they
>don't capture pattern matching, which he calls "one of the essences of
>intelligence."  While pattern matching is an essence for us (it is,
>after all, how we do things like play chess), I see no evidence at all
>that it's an essence for machines (which play chess in a very
>non-pattern-matching way).
>

It is hard to disagree with your point; after all, we have
machines today doing great things without using any pattern matching
at all. So the evidence you're asking for is hard to provide outside
the context (and philosophy) of connectionist systems. However,
I don't think we have to go to connectionism to see that something
is missing from purely logical solutions. First, I'd like to point out
that *we* are telling the machines what to do. So it may be
that we don't have evidence because we are missing the point.

The first thing that suggests we're on the wrong track (using
just extensions to FOPC) is the troubled relationship computers
have with natural language processing. CYC was supposed to learn by
itself the "rest" of the knowledge, once a sufficiently large
common-sense knowledge base was inserted. I doubt that this
will ever be achieved.

I claim that, using FOPC and the extensions mentioned, CYC
will not be able to acquire knowledge from "real-world" texts
or, if it does anything like that, that it will not "learn" useful
things (which means Cycorp will have to hire a lot of human
operators to "fix" the wrong things learned, demonstrating *again*
that the only intelligence in CYC lies in that "equipment"
located between the keyboard and the chair). One of the reasons
for this is that CYC will not recognize the patterns that lie
behind the text and are not expressible in FOPC, and this
recognition is also essential for the ability to communicate with us.

So unless our desire for intelligent computers dismisses the
ability to understand language, we've got a big problem to
solve. But the problem is, in my view, deeper, because it
extends well beyond language, encompassing a bunch of other
cognitive abilities (that we take for granted in humans) that
make up the core of what we think of as intelligent behavior.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: Tom Breton <tob@world.std.com>
Subject: Re: Dr.Lenat theory
Date: 25 May 1999 00:00:00 GMT
Message-ID: <7id7iu$c23$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <7i121c$5k4$1@mulga.cs.mu.OZ.AU> <7i14dg$dj3$1@mulga.cs.mu.OZ.AU> <7i1vgn$nnj$1@mulga.cs.mu.OZ.AU> <7i9ico$qga$1@mulga.cs.mu.OZ.AU> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU>
X-Date: 24 May 1999 20:27:31 -0400
X-Mod: ?
Organization: ?
Followup-To: comp.ai
Newsgroups: comp.ai

"Sergio Navega" <snavega@ibm.net> writes:

> Tom Breton wrote in message <7i9ico$qga$1@mulga.cs.mu.OZ.AU>...
> >"Sergio Navega" <snavega@ibm.net> writes:
> >
> >> Or, a much more polemic question (I hope not being flamed on this)
> >> is intelligence, the way we want it, reducible to just FOPC with
> >> some interesting extensions? The mathematical and logical gurus
> >> of AI would promptly say yes. I have reasons to disagree with them.
> >
> >Well, "some interesting extensions" covers a lot of ground.
> >Defeasible reasoning, second-order quantification, etc are not mere
> >bells and whistles on FOPC, they change the whole nature of it.

> I agree, and so do circumscription, situation calculus, event
> calculus, all are not only minor additions, but are significant
> enhancements to simple FOPC. Yet, all these additions fail to
> capture one of the essences of intelligence (in my vision),
> that of recognition of patterns.

I don't want to get any further into "what is intelligence", but I
want to point out that pattern matching is quite well known, as in the
HMM and Viterbi algorithm modern speech recognition software uses.
(OK, I'm simplifying what SR does) Perhaps that's not the sort of
pattern matching you meant.
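
A minimal sketch of the Viterbi idea, on a toy two-state HMM (just an
illustration, not what any real recognizer ships):

# Find the most likely hidden-state sequence for an observation sequence.
states  = ["silence", "speech"]
start_p = {"silence": 0.6, "speech": 0.4}
trans_p = {"silence": {"silence": 0.7, "speech": 0.3},
           "speech":  {"silence": 0.4, "speech": 0.6}}
emit_p  = {"silence": {"quiet": 0.9, "loud": 0.1},
           "speech":  {"quiet": 0.2, "loud": 0.8}}

def viterbi(obs):
    # trellis[t][s] = (best probability of ending in s at time t, predecessor)
    trellis = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        column = {}
        for s in states:
            prob, prev = max((trellis[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            column[s] = (prob, prev)
        trellis.append(column)
    # Trace the best path back from the most probable final state.
    state = max(states, key=lambda s: trellis[-1][s][0])
    path = [state]
    for column in reversed(trellis[1:]):
        state = column[state][1]
        path.append(state)
    return list(reversed(path))

print(viterbi(["quiet", "loud", "loud", "quiet"]))
# -> ['silence', 'speech', 'speech', 'silence']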

> These formalisms are ideal to
> run in computers, but are very bad at presenting good results
> in the uncertain and vague world of ours.

--
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 26 May 1999 00:00:00 GMT
Message-ID: <7iebdr$9kf$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <372F74D2.8C72B572@clickshop.com> <7gnv5t$ggf$1@once.cirl.uoregon.edu> <7gomkt$d8a$1@newsmonger.rutgers.edu> <37303a17@news3.us.ibm.net> <7h8r85$j4l@cs.vu.nl> <7i121c$5k4$1@mulga.cs.mu.OZ.AU> <7i14dg$dj3$1@mulga.cs.mu.OZ.AU> <7i1vgn$nnj$1@mulga.cs.mu.OZ.AU> <7i9ico$qga$1@mulga.cs.mu.OZ.AU> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU> <7id7iu$c23$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Tom Breton wrote in message <7id7iu$c23$1@mulga.cs.mu.OZ.AU>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>> Tom Breton wrote in message <7i9ico$qga$1@mulga.cs.mu.OZ.AU>...
>> >"Sergio Navega" <snavega@ibm.net> writes:
>> >
>> >> Or, a much more polemic question (I hope not being flamed on this)
>> >> is intelligence, the way we want it, reducible to just FOPC with
>> >> some interesting extensions? The mathematical and logical gurus
>> >> of AI would promptly say yes. I have reasons to disagree with them.
>> >
>> >Well, "some interesting extensions" covers a lot of ground.
>> >Defeasible reasoning, second-order quantification, etc are not mere
>> >bells and whistles on FOPC, they change the whole nature of it.
>
>> I agree, and so do circumscription, situation calculus, event
>> calculus, all are not only minor additions, but are significant
>> enhancements to simple FOPC. Yet, all these additions fail to
>> capture one of the essences of intelligence (in my vision),
>> that of recognition of patterns.
>
>I don't want to get any further into "what is intelligence", but I
>want to point out that pattern matching is quite well known, as in the
>HMM and Viterbi algorithm modern speech recognition software uses.
>(OK, I'm simplifying what SR does) Perhaps that's not the sort of
>pattern matching you meant.
>

No, it's not. I meant the sort of pattern matching that can be done
symbolically. One of the sources of inspiration for this idea is
the work of Gerry Wolff (Univ. of Wales at Bangor). He develops a
comprehensive vision of computing and cognition as the result
of pattern matching and unification:

http://saturn.sees.bangor.ac.uk/~gerry/sp_summary.html
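
A minimal sketch of the kind of symbolic pattern matching I have in
mind (my own toy illustration; it is not Wolff's SP algorithm, nor
Copycat). Variables start with "?"; matching binds them against
concrete symbols:

def match(pattern, datum, bindings=None):
    """Return variable bindings if pattern matches datum, else None."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings and bindings[pattern] != datum:
            return None          # conflicting binding for the same variable
        return {**bindings, pattern: datum}
    if isinstance(pattern, (list, tuple)) and isinstance(datum, (list, tuple)):
        if len(pattern) != len(datum):
            return None
        for p, d in zip(pattern, datum):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == datum else None

print(match(["gives", "?x", "book", "?y"],
            ["gives", "john", "book", "mary"]))
# -> {'?x': 'john', '?y': 'mary'}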

Another source of inspiration is Doug Hofstadter's Copycat: it
is a symbolic processing machine built on top of an almost
connectionist network (the slipnet). Although Copycat is
restricted to a fixed number of concepts, the algorithm
demonstrates the kind of flexibility and use of analogy that
often occur in human thinking. Copycat is surprisingly
"intelligent", in its own way.

Then, when we read about some psychological tests (like the
Wason selection task), we see that human reasoning does not
work within a logic-based substrate. Well, that alone would
not be enough to say that logic is insufficient, as some may
argue that there's nothing forcing us to duplicate human
reasoning in AI. It is exactly this last assertion that
I'm trying to challenge. We *must* use plausible, human-like
reasoning as a model for our systems.

Again, I risk being flamed by what I'm about to say. I have
more than one way to reach this conclusion, but I'll start
with a very simple argument.

Logic, like much of what is done in mathematics, occupies
itself extensively with the derivation of consequents from
antecedents. There's nothing to do with formal methods if you
don't have a good set of antecedents to start with, and this
is the main problem: much of what humans do is creative, which
means we creatively obtain *interesting antecedents*. It is
funny how many mathematicians disregard this question.
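
To make the point concrete, here is a toy forward-chaining sketch
(my own minimal illustration, not any particular theorem prover):
the machinery happily derives consequents, but nothing in it says
where the interesting antecedents come from.

# Derive consequents from hand-supplied antecedents (facts and rules).
rules = [
    ({"rain", "outside"}, "wet"),
    ({"wet", "cold_air"}, "chilled"),
]
facts = {"rain", "outside", "cold_air"}

changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))
# ['chilled', 'cold_air', 'outside', 'rain', 'wet'] -- but who chose
# the facts and the rules? We did.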

Where do these antecedents come from? How are our brains able
to come up with them? Far from being a secondary problem, I claim
that this is the *primary problem* we have to understand
in order to build intelligent machines.

Logic and formal methods do not specify how we can obtain
interesting antecedents. Logic methods don't even know how to
evaluate "interesting", for that matter. "Interesting things" are
things that must be evaluated cognitively, in terms of importance
for the agent, and here we get the first clue that purely logical
methods will fail.

Cognitively interesting things are things that have an obvious
relevance in relation to our world. Interesting, obvious, world.
These words do not appear often when we talk about formal methods.
I propose that this "distance" of logic from the real world is the
cause of a multitude of problems and that the implementations of AI
based on these methods could not be considered relevant to the
meaning of "machine intelligence".

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg)
Subject: Re: Dr.Lenat theory
Date: 26 May 1999 00:00:00 GMT
Message-ID: <7iek0u$emi$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU> <7id7iu$c23$1@mulga.cs.mu.OZ.AU> <7iebdr$9kf$1@mulga.cs.mu.OZ.AU>
Organization: Computational Intelligence Research Laboratory
Followup-To: comp.ai
Newsgroups: comp.ai

In article <7iebdr$9kf$1@mulga.cs.mu.OZ.AU>,
Sergio Navega <snavega@ibm.net> wrote:

>We *must* use plausible, human-like reasoning as a model for our systems.

Why?

>Logic, like much of what is done in mathematics, occupies
>itself extensively with the derivation of consequents from
>antecedents. There's nothing to do with formal methods if you
>don't have a good set of antecedents to start with...

Logic, like much of what is done in mathematics, is universal.  That
means that if you can describe precisely what you're doing in *any*
language, you can do so using logic.  If you have a coherent method of
generating antecedents, logic can describe it.

>Logic and formal methods do not specify how we can obtain
>interesting antecedents ...

Of course they don't.  Logic is a language in which such things
*could* be specified.  If we aren't currently smart enough to do so,
it's not because logic is lacking.  It's because we're lacking.

>I propose that this "distance" of logic from the real world is the
>cause of a multitude of problems and that the implementations of AI
>based on these methods could not be considered relevant to the
>meaning of "machine intelligence".

This is simply nonsense.  Logic is neither close to, nor distant from,
anything.

The fundamental question is not whether logic can describe the
reasoning mechanisms that will make machines intelligent: it provably
can.  The question is whether that description will be computationally
viable.

For *us*, such a description is not computationally viable.  We are
pattern matchers, not serial solvers.  But all the evidence out there
indicates that for machines, a logic-based description will not only
be computationally viable, it will be vastly preferable to
descriptions based on attempts to mimic human reasoning.

As a scientist, I often wish it were otherwise.  AI would be a lot
easier if introspection counted for something.  But as a human, I'm
glad things are this way: our different areas of competence will make
it more likely that men and machines collaborate than that they
compete.

Like it or not, though, it's the way things are.

                                                Matt Ginsberg

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Dr.Lenat theory
Date: 26 May 1999 00:00:00 GMT
Message-ID: <7ieruu$1l0$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7gnil4$7c0$1@news.netvision.net.il> <7ibjhn$n5e$1@mulga.cs.mu.OZ.AU> <7id7iu$c23$1@mulga.cs.mu.OZ.AU> <7iebdr$9kf$1@mulga.cs.mu.OZ.AU> <7iek0u$emi$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7iek0u$emi$1@mulga.cs.mu.OZ.AU>...
>In article <7iebdr$9kf$1@mulga.cs.mu.OZ.AU>,
>Sergio Navega <snavega@ibm.net> wrote:
>
>>We *must* use plausible, human-like reasoning as a model for our systems.
>
>Why?
>

Because we are the only exemplars of intelligence we have handy.
Anything else would be like buying a lottery ticket in the hope
of winning a car.

The first men who tried to fly like birds started by jumping from
hills flapping wings. After some deaths, they started conjecturing
about what went wrong. Eventually the Wright brothers succeeded,
although without much knowledge of why. Aviation only grew after we
discovered the essential principles of aerodynamics. AI must discover
its essential principles too.

However, I should refine a bit what I mean by "human-like" reasoning.
It is not the kind of introspection that one could do in an armchair.
It is the scientific investigation (cognitive and neurobiological)
of the principles of intelligence. It is the search for the "Maxwell
equations of thought" that Doug Lenat said don't exist.

Lenat found no way of devising learning algorithms capable of
creating common sense (he had invested a good deal of time
in AM and Eurisko, so I suppose he studied the subject deeply).
However, he also claimed that the study of human cognition
was not important to the task of creating intelligent computers.

So he hypothesized that there are no such algorithms and that
common sense is "provoked" by the presence of a huge number of
facts. But this hypothesis does not make sense in human-like
terms. How did we humans get started? Is it reasonable to
assume that we're born with innate knowledge? The evidence
that we're collecting from neuroscience and cognitive science
points to the unlikeliness of this, even for language.

I'm playing the "join the dots" game here. We have, in my
opinion, no other way to go other than understanding how
a child becomes aware of the world.

>>Logic, like much of what is done in mathematics, occupies
>>itself extensively with the derivation of consequents from
>>antecedents. There's nothing to do with formal methods if you
>>don't have a good set of antecedents to start with...
>
>Logic, like much of what is done in mathematics, is universal.  That
>means that if you can describe precisely what you're doing in *any*
>language, you can do so using logic.  If you have a coherent method of
>generating antecedents, logic can describe it.
>

I have nothing against this; I appreciate the expressiveness and
power of logic. But this only means that logic itself is not what we
should be investigating. We should be investigating how to
generate those antecedents, and only then go to logic to
"code" them. But this is not what appears to be going on.

From the "detection" of the frame problem by McCarthy and Hayes,
much effort has been directed to the goal of "fixing" logic, such
as nonmonotonic techniques like circumscription, default logic, etc.
I don't think that this goes in the direction of easing the
creation of antecedents. This seems to me to be walking away
from our goal of understanding intelligence.

On the other hand, cognitive investigations of our brain indicate
that much of what we do is centered around perceptual recognition.
Since J.J. Gibson's ideas about high-level invariants, we've been
gaining insights that point to the way our brain solves the
frame problem. It is based on recognition and pattern matching,
not on logical inference.

>>Logic and formal methods do not specify how we can obtain
>>interesting antecedents ...
>
>Of course they don't.  Logic is a language in which such things
>*could* be specified.  If we aren't currently smart enough to do so,
>it's not because logic is lacking.  It's because we're lacking.
>

One way or another, if we keep using the computers we have today,
we will be using solutions that can be reduced to formal methods
of computation. Even connectionist systems fall into that category.
But I question this reductionistic way of seeing things. It appears
to be similar to the effort of enhancing the transistors and
capacitors of our microprocessors in order to obtain a more
user-friendly operating system.

>>I propose that this "distance" of logic from the real world is the
>>cause of a multitude of problems and that the implementations of AI
>>based on these methods could not be considered relevant to the
>>meaning of "machine intelligence".
>
>This is simply nonsense.  Logic is neither close to, nor distant from,
>anything.
>
>The fundamental question is not whether logic can describe the
>reasoning mechanisms that will make machines intelligent: it provably
>can.  The question is whether that description will be computationally
>viable.
>

That's right, I agree. Amazingly, this is a point that supports my
initial considerations about human-like methods. Our brain is certainly
a limited resource. If our computers don't have comparable "horsepower"
today, in 20 or 30 years they will. But even with comparable processing
power, we cannot say that our computers will be intelligent. Something
is missing, and it is not speed.

I recall a project named Parka which used a massively parallel
computer to process a gigantic semantic network. The project is indeed
very beautiful and was very well conceived. It is an example of what
could be called performance-based AI. But I have doubts that a
system like that could do any better than CYC. Not because of
lack of horsepower or lack of pre-loaded knowledge, but because
it is not able to recognize and acquire knowledge by itself.

>For *us*, such a description is not computationally viable.  We are
>pattern matchers, not serial solvers.  But all the evidence out there
>indicates that for machines, a logic-based description will not only
>be computationally viable, it will be vastly preferable to
>descriptions based on attempts to mimic human reasoning.
>
>As a scientist, I often wish it were otherwise.  AI would be a lot
>easier if introspection counted for something.  But as a human, I'm
>glad things are this way: our different areas of competence will make
>it more likely that men and machines collaborate than that they
>compete.
>
>Like it or not, though, it's the way things are.
>

Matt, I'm sorry if I'm sounding too pesky in my comments; that is not
my intention. But I'm trying hard to challenge this vision,
because I believe we still have a significant step forward to take.
And it does not seem to be in the direction of improving logic.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

