Selected Newsgroup Message

From: Sergio Navega <snavega@ibm.net>
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 16 Jun 1999 00:00:00 GMT
Message-ID: <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew Stanfield wrote in message <7k82g5$f5$1@mulga.cs.mu.OZ.AU>...
>In Rodney Brook's introduction to the MIT AI Lab he writes:
>
>"A whole host of new techniques for imaging the brain while it perceives,
>acts, and produces utterances are giving new insights into how components
>of human intelligence interact. The answers all look very different from
>earlier models derived from introspection and protocol analysis.
>Multi-cell recording in animal nervous systems is further illuminating how
>natural systems perceive, remember, and act."
>
>My internet searches to get more info. about what these 'new insights' are
>and the underlying imaging technology have been unsuccessful because I get
>so many results however I seem to word the query.
>
>Can someone point me to on-line or other resources so that I can learn
>more about this please.
>
>Thanks and regards,
>

Hello, Matthew,

This is, in my opinion, hot stuff. I may have some quibbles with
Brooks' "intelligence without representation", but he definitely
raised an important issue here. Neuroscientists have been discovering,
over the last couple of decades, that much of our intelligence stems
from the use of the sensorimotor areas of our cortex. Just to mention
some recent discoveries that I have on my desk right now:

Sources of Mathematical Thinking: Behavioral and Brain-Imaging
Evidence, by Stanislas Dehaene et al., Science, 7 May 1999, vol. 284

Motor Cortical Encoding of Serial Order in a Context-Recall Task,
by Adam Carpenter et al., Science, 12 March 1999, vol. 283

I have plenty of other interesting stuff; tell me if you want
some more. For online reading, try this one:

Cortical Software Re-Use: A Neural Basis for Creative Cognition
http://www.compapp.dcu.ie/~tonyv/MIND/ronan.html

In summary, what's being discovered is that much of our reasoning,
including higher-level reasoning such as mathematical abstraction,
is done by the brain using sensorimotor and visual areas. That
means these areas are used not only when we move our
limbs (motor areas) or when we see something (visual areas),
but *also* when we "think" about these things, and even when
we think about something that involves spatial concepts (Dehaene
discovered, for instance, that approximate calculation uses
parts of the brain responsible for visuo-spatial processing).

This is a strong indication that human intelligence
is deeply linked to aspects that emerge from our interactions
with the world since childhood. Jean Piaget was right long
before fMRI and PET scans.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg)
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7k8guu$9d9$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU>
Organization: Computational Intelligence Research Laboratory
Followup-To: comp.ai
Newsgroups: comp.ai

In article <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU>,
Sergio Navega  <snavega@ibm.net> wrote:
>Matthew Stanfield wrote in message <7k82g5$f5$1@mulga.cs.mu.OZ.AU>...
>>In Rodney Brook's introduction to the MIT AI Lab he writes:
>>
>>"A whole host of new techniques for imaging the brain while it perceives,
>>acts, and produces utterances are giving new insights into how components
>>of human intelligence interact. The answers all look very different from
>>earlier models derived from introspection and protocol analysis.
>>Multi-cell recording in animal nervous systems is further illuminating how
>>natural systems perceive, remember, and act."
>
>This is, in my opinion, hot stuff.

This is, in my opinion, irrelevant to the AI enterprise.

Please understand that I'm defining AI to be the field of building
an intelligent artifact.  If you define AI to be the field of mimicking
human intelligence (a vastly different enterprise), then Brooks' work
will obviously be relevant.

But for those of us who care simply about building things, it's as if
someone came up to me and said, "I'd like to make my VW Beetle go a
bit faster.  Can you help me?" and I replied, "Sure.  Let's go over
there and take that F-17 apart and see how it works.  That'll give us
some good insights."  Or perhaps more accurately still, if I replied,
"Sure.  Let's go dissect that sparrow; it goes a lot faster than your
VW and that will help us figure things out."

If you want to make your VW go faster, you might learn a little from
the F-17 (better exhaust systems?).  You're unlikely to learn anything
at all from the sparrow.  But you're likely to learn the most by
tearing the VW down.

For better or worse, people in AI are currently working with VW's or
their computational equivalents.  That's why scientists or engineers
trying to make progress in AI should study computers and algorithms,
not people or philosophy.

                                                Matt Ginsberg

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU> <7k8guu$9d9$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7k8guu$9d9$1@mulga.cs.mu.OZ.AU>...
>In article <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU>,
>Sergio Navega  <snavega@ibm.net> wrote:
>>Matthew Stanfield wrote in message <7k82g5$f5$1@mulga.cs.mu.OZ.AU>...
>>>In Rodney Brook's introduction to the MIT AI Lab he writes:
>>>
>>>"A whole host of new techniques for imaging the brain while it perceives,
>>>acts, and produces utterances are giving new insights into how components
>>>of human intelligence interact. The answers all look very different from
>>>earlier models derived from introspection and protocol analysis.
>>>Multi-cell recording in animal nervous systems is further illuminating how
>>>natural systems perceive, remember, and act."
>>
>>This is, in my opinion, hot stuff.
>
>This is, in my opinion, irrelevant to the AI enterprise.
>

Matthew, I'm glad you disagreed; that is the only way to have
interesting exchanges in newsgroups :-)

>Please understand that I'm defining AI to be the field of building
>an intelligent artifact.  If you define AI to be the field of mimicking
>human intelligence (a vastly different enterprise), then Brooks' work
>will obviously be relevant.
>

I understand that under a special definition of 'intelligent artifacts',
your considerations may apply. It is obvious that I'm preaching here
a different way of seeing the word "intelligent" in that expression.
I'm not really interested in mimicking human intelligence; I guess
we all agree that there are a lot of "problems" in human reasoning,
and machines should not inherit them.

But I offer here the view that we do not yet understand the basic
principles under which our own intelligence works, let alone how to
implement it in other "hardware".

>But for those of us who care simply about building things, it's as if
>someone came up to me and said, "I'd like to make my VW Beetle go a
>bit faster.  Can you help me?" and I replied, "Sure.  Let's go over
>there and take that F-17 apart and see how it works.  That'll give us
>some good insights."  Or perhaps more accurately still, if I replied,
>"Sure.  Let's go dissect that sparrow; it goes a lot faster than your
>VW and that will help us figure things out."
>

You're good at devising analogies (I have already been inspired by
some you concocted in your book "Essentials.."), but in this case
yours does not seem to fully address the "mystery" of the circumstance.
Hence, I'll propose some fancy modifications that may put the story
a bit closer to the big question.

Suppose that we're given an alien spacecraft to investigate. That
vehicle is able to fly noiselessly, with impressive speed and
control. At first, we're astonished by the impenetrability of
the machine. We wonder how it works, what principles it uses.
It does not seem to use jets for propulsion, since it is noiseless.
It does not seem to be magnetic; nearby pieces of iron are not affected.

We are able to determine all its behavioral aspects; we can graph
its acceleration, maximum speed, warm-up time, etc. But we don't
have a clue as to what principles it uses. We can replicate (in a
very limited fashion and using different principles) *some* of its
behaviors, for example with a conventional military jet, but
the jet pales in comparison with the alien device. We know that the
jet is insignificant next to that alien device. Nobody is interested
in the jet; everybody wants the alien vehicle.

Now imagine if this alien craft works by some gravitational principle
yet undiscovered. Suppose that we're just on the verge of discovering
some *simple* physics principle that will open up the doors of our
understanding of that craft. Imagine that, by discovering what are
those essential principles, we could design *new ways* to implement it.

I don't believe in aliens; I don't believe in ETs here on Earth. But
I believe that *all of us* who work with AI are missing something
important, something that is essential to the word 'intelligence'
as it is normally used. The best exemplar of this kind of machine
is inside our own skull. What I'm proposing is not just a search for
the working principles of our neurons, but a search for the kind of
high-level principles that stand behind their operation. No, I'm not
a connectionist! Nor do I want to model neurons in computers; I leave
that hard task to computational neuroscientists. I find it necessary
to understand how intelligent competence arises, no matter
the architectural substrate on which it is built.

It is in this light that my reference to Dehaene's work in my
previous post should be seen.

>If you want to make your VW go faster, you might learn a little from
>the F-17 (better exhaust systems?).  You're unlikely to learn anything
>at all from the sparrow.  But you're likely to learn the most by
>tearing the VW down.
>
>For better or worse, people in AI are currently working with VW's or
>their computational equivalents.  That's why scientists or engineers
>trying to make progress in AI should study computers and algorithms,
>not people or philosophy.
>

I would say that those studying VWs will eventually obtain a pretty
good machine, like a dragster. But this dragster may be an ant
compared to that alien craft. I want to know how that damn machine
works so I can build a better one (I don't trust alien engineers ;-).

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg)
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7k9184$25b$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU> <7k8guu$9d9$1@mulga.cs.mu.OZ.AU> <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU>
Organization: Computational Intelligence Research Laboratory
Followup-To: comp.ai
Newsgroups: comp.ai

In article <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU>,
Sergio Navega <snavega@ibm.net> wrote:
>Matthew L. Ginsberg wrote in message <7k8guu$9d9$1@mulga.cs.mu.OZ.AU>...
>>But for those of us who care simply about building things, it's as if
>>someone came up to me and said, "I'd like to make my VW Beetle go a
>>bit faster.  Can you help me?" and I replied, "Sure.  Let's go over
>>there and take that F-17 apart and see how it works.  That'll give us
>>some good insights."  Or perhaps more accurately still, if I replied,
>>"Sure.  Let's go dissect that sparrow; it goes a lot faster than your
>>VW and that will help us figure things out."

>I'll propose some fancy modifications that may put the story
>a bit closer to the big question.
>
>Suppose that we're given an alien spacecraft to investigate. That
>vehicle is able to fly noiselessly, with impressive speed and
>control. At first, we're astonished by the impenetrability of
>the machine. We wonder how it works, what principles it uses.
>It does not seem to use jets for propulsion, since it is noiseless.
>It does not seem to be magnetic; nearby pieces of iron are not affected.

Sounds like a sparrow to me!

The only difference is that in the sparrow case, we know enough about
the architecture to conclude with confidence that it won't help us
build a better VW.  Your alien case differs from the sparrow *only* in
that you've blocked our ability to conclude that the sparrow is
irrelevant.

But consider: We do have a *vague* understanding of how human
intelligence works.  We easily know enough, I believe, to know that
the techniques we use and the techniques our machines will use are
enormously different.  This follows not only from our knowledge of
neuroscience, but from the performance results of attempts to play
chess (or do other things) using "humanlike" and non-humanlike
methods, and from the general inapplicability of naturally evolved
techniques to artificial devices for all purposes -- flying, digging
ditches, what have you.

[stuff about how AI still needs some basic science done]

I never said otherwise.  But it needs *basic science*, not random
disassembly of complex artifacts or phenomena.  That's why I said,

>>scientists or engineers
>>trying to make progress in AI should study computers and algorithms,
>>not people or philosophy.

It seems you actually agree with me:

>Now imagine if this alien craft works by some gravitational principle
>yet undiscovered. Suppose that we're just on the verge of discovering
>some *simple* physics principle that will open up the doors of our
>understanding of that craft. Imagine that, by discovering what are
>those essential principles, we could design *new ways* to implement it.

Even you aren't suggesting disassembling the spacecraft, but doing
physics.  If you're studying flight, that's a fine idea.

AI is the same: there is much to be learned in algorithmics, and other
hard areas of computer science.  But being driven by some vague notion
of, "This seems relevant to human thought, so I better study it,"
strikes me as groundless.  There is evidence that AI still needs basic
science done.  There is NO evidence (other than introspection, which I
discount) that the basic science needed is in any way a part of, or
even related to, human intelligence.

                                                Matt Ginsberg

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7k97kb$5kt$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8a4m$5f9$1@mulga.cs.mu.OZ.AU> <7k8guu$9d9$1@mulga.cs.mu.OZ.AU> <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU> <7k9184$25b$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7k9184$25b$1@mulga.cs.mu.OZ.AU>...
>In article <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU>,
>Sergio Navega <snavega@ibm.net> wrote:
>>Matthew L. Ginsberg wrote in message <7k8guu$9d9$1@mulga.cs.mu.OZ.AU>...
>>>But for those of us who care simply about building things, it's as if
>>>someone came up to me and said, "I'd like to make my VW Beetle go a
>>>bit faster.  Can you help me?" and I replied, "Sure.   Let's go over
>>>there and take that F-17 apart and see how it works.  That'll give us
>>>some good insights."  Or perhaps more accurately still, if I replied,
>>>"Sure.  Let's go dissect that sparrow; it goes a lot faster than your
>>>VW and that will help us figure things out."
>
>>I'll propose some fancy modifications that may put the story
>>a bit closer to the big question.
>>
>>Suppose that we're given an alien spacecraft to investigate. That
>>vehicle is able to fly noiselessly, with impressive speed and
>>control. At first, we're astonished by the impenetrability of
>>the machine. We wonder how it works, what principles it uses.
>>It does not seem to use jets for propulsion, since it is noiseless.
>>It does not seem to be magnetic; nearby pieces of iron are not affected.
>
>Sounds like a sparrow to me!
>

That feathered animal was not in my thoughts when I wrote
that text!

>The only difference is that in the sparrow case, we know enough about
>the architecture to conclude with confidence that it won't help us
>build a better VW.  Your alien case differs from the sparrow *only* in
>that you've blocked our ability to conclude that the sparrow is
>irrelevant.
>

My intention was not to put the sparrow into the scene, but rather
to take the VW out! We don't have that VW! My argument here is that
what we have today is a cart (without horses), a vehicle that runs
only when we put it on top of a hill and give it a kick.

>But consider: We do have a *vague* understanding of how human
>intelligence works.  We easily know enough, I believe, to know that
>the techniques we use and the techniques our machines will use are
>enormously different.  This follows not only from our knowledge of
>neuroscience, but from the performance results of attempts to play
>chess (or do other things) using "humanlike" and non-humanlike
>methods, and from the general inapplicability of naturally evolved
>techniques to artificial devices for all purposes -- flying, digging
>ditches, what have you.
>

I agree that we're dealing with tremendously different hardware
(brains and computers). But that means we should step back and try to
use the same fundamental principles, not the same implementation
strategies. Failing to do so will (as I think is happening) make us
conceive solutions that don't work correctly on such a different
substrate.

>[stuff about how AI still needs some basic science done]
>
>I never said otherwise.  But it needs *basic science*, not random
>disassembly of complex artifacts or phenomena.  That's why I said,
>
>>>scientists or engineers
>>>trying to make progress in AI should study computers and algorithms,
>>>not people or philosophy.
>
>It seems you actually agree with me:
>
>>Now imagine if this alien craft works by some gravitational principle
>>yet undiscovered. Suppose that we're just on the verge of discovering
>>some *simple* physics principle that will open up the doors of our
>>understanding of that craft. Imagine that, by discovering what are
>>those essential principles, we could design *new ways* to implement it.
>
>Even you aren't suggesting disassembling the spacecraft, but doing
>physics.  If you're studying flight, that's a fine idea.
>

You're right that I am proposing a somewhat reductionist vision,
in search of basic principles. Obviously not in physics, but in
information theory. The first step we take from this starting
point is decisive, as it can lead us to totally different worlds.

>AI is the same: there is much to be learned in algorithmics, and other
>hard areas of computer science.  But being driven by some vague notion
>of, "This seems relevant to human thought, so I better study it,"
>strikes me as groundless.  There is evidence that AI still needs basic
>science done.  There is NO evidence (other than introspection, which I
>discount) that the basic science needed is in any way a part of, or
>even related to, human intelligence.
>

OK, let me follow the good path you presented, that of considering
evidence. Computers are more capable than humans at a lot of tasks.
But this capability is due only to their speed and accuracy. I can't
find any computational task that a man with a *lot* of paper and
pencil could not also perform, given enough time (and food). Our
accuracy is really not that good, but it is good enough, considering
that we use a general-purpose "reasoning and perceiving mechanism"
adapted to the rough world of our distant forebears.

However, the reverse *is not* true today: computers can't solve
some problems that we solve easily, even given plenty of time.
Examples? Object recognition and language understanding, to cite
just two. Both problems are solved by 4-year-old children *with no
apparent effort*. How come we can't devise a program that exhibits
similar performance? How come most approaches to NLP or perception
fail or, when they work, don't scale up? We need that in order
to communicate with computers, and we need communication if they
are to be intelligently useful to us.

To me this is an indication that we're not paying attention
to the right problem. In your terms, we seem to want
a better VW, but in fact what we have in our hands is not a
VW; it is a brain that's similar to that unknown alien
spacecraft (sparrows notwithstanding :-).

So what is interesting is to consider the problem seen
upside down: instead of starting with KR and logical inference,
start with perception and categorization, the Cognitive
Science way (recall that I'm not a connectionist!). Does that
seem too strange? Following this path we will be obliged,
sooner or later, to find a way to reason logically *in perceptual
terms*. Is this really an insane proposition? I offer the idea
that it is not, and to my rescue I call what CogSci is
discovering about the way we humans reason. Mathematics and simple
reasoning (an example would be the Wason selection task) should
be seen as the result of the same kind of perceptual abilities
that we use to visually identify objects or, in a more
sophisticated example, to recognize mood in a written text.

Few today seem to side with Roger Penrose's criticisms
of AI as published in "The Emperor's New Mind" and "Shadows
of the Mind". He tried to find a way to defeat AI by arguing
that our brain works in a way that is not subject to Goedel's
incompleteness theorem. That's fine. But this is an argument
that could (badly) shake the grounds of traditional AI, not
of an AI that focuses on perceptual processes. I put Penrose's
criticisms in the same basket as other problems, like the
frame problem. If our brain handles both questions effortlessly,
maybe we should find out what it uses to tackle them.

It could be discovered, for instance, that logical reasoning
and knowledge representation are things that "emerge"
naturally from perceptual processes, and that this happens
in a way that is resilient to the frame problem and
all sorts of questions that plague logic today.

Well, then, seeing the problem upside down, as I propose,
suggests that the dangers of "simulating" the human way of thinking
are not really dangers, but *requirements* for intelligent
behavior. It is not necessary to simulate it "to the bone",
but just its breadth and flexibility, something that John
McCarthy called "elaboration tolerance".

Following the human way of solving problems doesn't mean the result
will be perfect, nor that it can't be improved. I guess future AIs
can be better than us. But first, they must be *equivalent* to us!
And that means starting to see the problem from the other end of
the tunnel.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg)
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7k9npg$fvn$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU> <7k9184$25b$1@mulga.cs.mu.OZ.AU> <7k97kb$5kt$1@mulga.cs.mu.OZ.AU>
Organization: Computational Intelligence Research Laboratory
Followup-To: comp.ai
Newsgroups: comp.ai

In article <7k97kb$5kt$1@mulga.cs.mu.OZ.AU>,
Sergio Navega <snavega@ibm.net> wrote:

[lots of stuff deleted]

>I guess future AIs
>can be better than us. But first, they must be *equivalent* to us!

I understand that this is the cornerstone of your position.  But there
is simply no evidence for it.

For what it's worth, I used to believe this, too.  AI would certainly
be easier if it were the case.  But the evidence to my mind supports
the opposite conclusion.  And there is, as far as I can tell, no
evidence at ALL that supports the conclusion above.

                                                Matt Ginsberg
---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 17 Jun 1999 00:00:00 GMT
Message-ID: <7kaoaf$6gc$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8q5v$f6i$1@mulga.cs.mu.OZ.AU> <7k9184$25b$1@mulga.cs.mu.OZ.AU> <7k97kb$5kt$1@mulga.cs.mu.OZ.AU> <7k9npg$fvn$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Matthew L. Ginsberg wrote in message <7k9npg$fvn$1@mulga.cs.mu.OZ.AU>...
>In article <7k97kb$5kt$1@mulga.cs.mu.OZ.AU>,
>Sergio Navega <snavega@ibm.net> wrote:
>
>[lots of stuff deleted]
>
>>I guess future AIs
>>can be better than us. But first, they must be *equivalent* to us!
>
>I understand that this is the cornerstone of your position.  But there
>is simply no evidence for it.
>

This is indeed the cornerstone of my position. As for evidence,
it is not really easy to find; we have only clues. Looking at all the
exemplars of intelligence in this world (not only us, but also our
close mammalian relatives) shows a clear ascending path, each
individual starting from perceptual ("ecological", as some say)
forms of intelligence.

This is not evidence that we can't do it differently, but it is
evidence that, going that way, there is a chance of success. I don't
have that guarantee with other approaches.

>For what it's worth, I used to believe this, too.   AI would certainly
>be easier if it were the case.  But the evidence to my mind supports
>the opposite conclusion.  And there is, as far as I can tell, no
>evidence at ALL that supports the conclusion above.
>

I beg you to return to your former beliefs ;-)
But I disagree that AI would be easier: it would be harder. A short
list of desired characteristics of a useful AI (natural language,
creativity, empathy, perceptual cleverness, etc.) shows that we're
still far from our dream of a thoughtful mechanical companion.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: "imaging the brain while it perceives" - Brooks.
Date: 18 Jun 1999 00:00:00 GMT
Message-ID: <7kdjb0$af3$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8tkl$t01$1@mulga.cs.mu.OZ.AU> <7k91ai$25q$1@mulga.cs.mu.OZ.AU> <7kbrub$6nf$1@mulga.cs.mu.OZ.AU> <7kc0ls$942$1@mulga.cs.mu.OZ.AU> <7kcab7$fe4$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

>ginsberg@once.cirl.uoregon.edu (Matthew L. Ginsberg) writes:
>
> I can't say what behavior qualifies as intelligent, and what
> doesn't.  As I said, Turing's definition lacks much, but it's the
> only game in town.
>

Matthew's appeal to the Turing test for a definition of intelligence
is commendable on the scientific side of the equation: it states
clearly the criterion for evaluation. However, it leaves open the
door to personal (subjective) evaluation (after all, it is a human
who decides whether the machine is or is not intelligent).
Depending on who is doing the evaluation, the results can be
amazingly different.

Christopher R. Barry wrote in message <7kcab7$fe4$1@mulga.cs.mu.OZ.AU>...
>
> Intelligence = the computational part of the ability to achieve
>                 goals in the world. [McCarthy]*
>

Christopher's citation of McCarthy appears to be a better alternative.
Unfortunately, it runs into another set of problems.
A washing machine can be said to be intelligent if it achieves its
goal of cleaning our clothes. But the machine does that using the
"expertise" embedded in its mechanisms by the engineers
who designed it. So it is clear that *the engineers* are intelligent,
not the machine. I feel that CYC is such a machine.

My question now is: HOW do we transform that washing machine into
an intelligent machine?

I'll put up a very simple case that I believe illustrates the
fundamental concepts I'm referring to, and also shows why neither
Matthew's nor McCarthy's definitions are good ways to define
intelligence.

To transform a washing machine into an intelligent one, let us suppose
that this machine has, as sensors, a simple camera able to detect
minute brightness variations in the clothes we put in, a sensor
for the presence of incoming water, and a sensor for the presence
of soap in its reservoir. Suppose that it has, as its only effector, a
single control which allows it to refill its soap reservoir. The
machine doesn't know what this effector is for, nor what the
readings of its sensors mean (the machine does not
know what "soap" is, for that matter). Let the machine also
have a simple keyboard, on which we can type some words
that can be speech-synthesized in appropriate circumstances.

The machine starts its "life" with few "innate knowledge": it
compares the brightness of recently cleaned cloth with the
brightness of that same cloth when initially inserted. It goes
"accustomed" to that variation: cleaned cloths are more bright,
indicating less dirt. The engineers of that machine (the
equivalent of evolution in the case of humans) did not specify
nothing more than that. It is up to the machine to make
sense of its "world".

One day, however, somebody turned off its water supply. The machine
didn't care; it was not "programmed" to do anything with this
information (although it could "sense" that something was different).

As a result of the lack of water, the clothes after the "cleaning"
had the same brightness as before. The machine noticed that
something *different* had happened, something that its innate
knowledge said was not trivial, although it knew neither what nor why.

It then starts beeping ("crying" like a baby would be more
appropriate). We come to its rescue, "teach" it (using the
keyboard) that "water" was missing, and open the faucet again. The
machine *doesn't know* what "w-a-t-e-r" (the letters keyed in) means.
However, it senses that something has been altered, and the next time
this happens the machine can utter "water" through its speech
synthesizer. And so we're happy with our first lesson to the
intelligent machine. Are we done yet?

Another day, the soap reservoir went empty. As a result, the
clothes were not cleaned correctly and the machine
stephenhawkingly started screaming "water". Suppose, then, that
on this day nobody is at home: the poor machine was left alone!

How should it face this problem intelligently?

It starts conjecturing (randomly at first) lots of possibilities,
noticing, among other things, that although the clothes do not seem
clean, its sensor *is* indicating the presence of water. This is not
the same thing as its previous experience! Something is different now!

As a result of this random process of testing hypotheses, the
machine "decides" (in a babylike, heedless manner) to try its
only effector and reload the soap reservoir. After running the
whole process again, the clothes appear bright once more, and the
machine has intelligently solved a new problem. More: it also
*perceived* that the sensor for the presence of soap was indicating
that the reservoir was empty! And that by activating the effector,
that sensor changed to full! And that by doing that, the clothes
were cleaned!

Because of that, the machine develops a simple *causal model*
of its simple world: clothes can fail to be cleaned because of a lack
of water OR because the soap reservoir is empty. For the first
problem, utter "water" through the speech synthesizer (no better
alternative was found); to solve the latter problem, try
activating the effector. No engineer programmed this into
the machine; it discovered it all by itself. That's what I
call an intelligent entity.

Left to itself, the intelligent machine will use all of its
innate resources to develop its fullest potential. This involves
a lot of trial and error, accidents, dumb experiments, etc.
But that's exactly the way babies work! The point is that
each experiment, whether successful or not, teaches
something about the causal nature of the machine's world,
as informed by its sensors and effectors.

Now guess what could happen if this machine had a way to
"communicate" with other, recently built, similar machines: we
would have a chance to see the emergence of language (that
would, obviously, require more sensing devices and some
ability of the "teacher" to emphasize things).

Reasoning, in this context, is the ability to devise random
hypotheses but, more importantly, to give more time to the
exploration of the ones that produced good results in
previously "similar" situations (the concept of "similarity" is
also something that must be refined according to what is giving
good results). This process can prevent the combinatorial
explosion of hypotheses that naturally arises when one faces
a totally new problem: experiences from the past are used
(often in slightly modified form) to try to solve the current
"mystery". The result of the solution will be kept as a
potential candidate for new situations. This is not a complete
recipe for intelligence, but just the beginning.
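
Just to make the idea concrete, here is a minimal sketch (in Python;
every name and number is invented purely for illustration, and it is
only a toy under the assumptions above, not a real design) of the
kind of agent loop I have in mind for the washing machine:

import random

class WashingMachineAgent:
    def __init__(self):
        # Innate knowledge: a wash should leave the clothes brighter.
        self.causal_rules = {}   # situation -> action that worked before
        self.actions = ["refill_soap", "say_water", "do_nothing"]

    def choose_action(self, situation):
        # Prefer an action that solved this kind of situation before;
        # otherwise conjecture at random (babylike experimentation).
        if situation in self.causal_rules:
            return self.causal_rules[situation]
        return random.choice(self.actions)

    def wash_cycle(self, world, dirty_brightness=0.2):
        situation = world.read_sensors()   # (water_present, soap_present)
        after = world.wash()
        if after <= dirty_brightness:      # innate expectation violated
            action = self.choose_action(situation)
            if world.apply(action):
                # Remember what solved this kind of problem.
                self.causal_rules[situation] = action
            return ("anomaly", situation, action)
        return ("ok", situation, None)

class ToyWorld:
    def __init__(self):
        self.water, self.soap = True, True
    def read_sensors(self):
        return (self.water, self.soap)
    def wash(self):
        # Clothes only come out brighter with both water and soap.
        return 1.0 if (self.water and self.soap) else 0.2
    def apply(self, action):
        if action == "refill_soap":
            self.soap = True
        elif action == "say_water":
            print("machine says: water")
        # The attempt "worked" if a wash would now clean the clothes.
        return self.water and self.soap

if __name__ == "__main__":
    agent, world = WashingMachineAgent(), ToyWorld()
    world.soap = False                    # the soap reservoir goes empty
    for cycle in range(5):
        print(cycle, agent.wash_cycle(world))

After a few random tries the toy stumbles on "refill_soap", stores it
as a causal rule for that situation, and reuses it directly the next
time the same situation appears; that is all I mean by giving more
time to hypotheses that worked before.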

I am now in a position to offer my first (preliminary) attempt to
define intelligence:

      A measure of the ability to solve new problems using world
      perceptions and prior knowledge, and the extent to which these
      newly developed solutions are remembered and used to solve
      future problems.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: cbarry@2xtreme.net (Christopher R. Barry)
Subject: Definition of intelligence (was Re: "imaging the brain while it perceives" - Brooks.)
Date: 19 Jun 1999 00:00:00 GMT
Message-ID: <7ke2ff$ihl$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8tkl$t01$1@mulga.cs.mu.OZ.AU> <7k91ai$25q$1@mulga.cs.mu.OZ.AU> <7kbrub$6nf$1@mulga.cs.mu.OZ.AU> <7kc0ls$942$1@mulga.cs.mu.OZ.AU> <7kcab7$fe4$1@mulga.cs.mu.OZ.AU> <7kdjb0$af3$1@mulga.cs.mu.OZ.AU>
Organization: 2xtreme.net NewsReader Service
Followup-To: comp.ai
Newsgroups: comp.ai

"Sergio Navega" <snavega@ibm.net> writes:
> I am now in a position to offer my first (preliminary) attempt to
> define intelligence:
>
>   A measure of the ability to solve new problems using world
>   perceptions and prior knowledge, and the extent to which these
>   newly developed solutions are remembered and used to solve
>   future problems.

The whole gist of your article emphasized learning as a prerequisite for
intelligence. The above definition of intelligence is not far from one
of my dictionary's definitions:

  Intelligence - The capability to acquire and apply knowledge.

The definition of learn is not too far off from that of
intelligence. The difference is that intelligence is the ability to
use what you've learned.

  Learn - To gain knowledge, comprehension, or command of through
          experience or study.

So to really simplify and get things clearly into view and to avoid
circularity and mutual recursiveness of definitions (that so plague
our dictionaries):

  Perception - Usage of sensors to sense environmental phenomena and
               stimuli.

  Knowledge - Remembered perceptions.

  Learn - Ability to acquire knowledge.

  Intelligence - Ability to learn and use what you've learned.

But is any level of learning enough to consider an artifact
intelligent? There are some pretty cheesy neural-network programs that
"learn" sufficiently well to meet the above definitions of learn and
apply what they've "learned" sufficiently well to be considered
"intelligent".

Christopher

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Definition of intelligence (was Re: "imaging the brain while it perceives" - Brooks.)
Date: 19 Jun 1999 00:00:00 GMT
Message-ID: <7kecnl$6h5$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <7k82g5$f5$1@mulga.cs.mu.OZ.AU> <7k8tkl$t01$1@mulga.cs.mu.OZ.AU> <7k91ai$25q$1@mulga.cs.mu.OZ.AU> <7kbrub$6nf$1@mulga.cs.mu.OZ.AU> <7kc0ls$942$1@mulga.cs.mu.OZ.AU> <7kcab7$fe4$1@mulga.cs.mu.OZ.AU> <7kdjb0$af3$1@mulga.cs.mu.OZ.AU> <7ke2ff$ihl$1@mulga.cs.mu.OZ.AU>
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai

Christopher R. Barry wrote in message <7ke2ff$ihl$1@mulga.cs.mu.OZ.AU>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>> I am now in a position to offer my first (preliminary) attempt to
>> define intelligence:
>>
>>   A measure of the ability to solve new problems using world
>>   perceptions and prior knowledge, and the extent to which these
>>   newly developed solutions are remembered and used to solve
>>   future problems.
>
>The whole gist of your article emphasized learning as a prerequisite for
>intelligence.

Although learning is indeed a prerequisite in my vision of
intelligence, taken in isolation learning is senseless. A videocassette
recorder is better than us at "learning" visual information, but
we can't say that it is intelligent.

>The above definition of intelligence is not far from one
>of my dictionary's definitions:
>
>  Intelligence - The capability to acquire and apply knowledge.
>

It may seem that this is close to what I said, but it isn't. My
definition emphasizes the ability to solve new problems, while the
above phrase is centered on the ability to acquire and apply
knowledge. Suppose we have a robot capable of perfect imitation:
everything it "sees" it can duplicate. You move your arms to
grab an orange and put it in a box. The robot can do the exact
same thing. Under the above definition it would be intelligent,
but under my definition, doing just that, it wouldn't (unless it
uttered "take the orange yourself, I'm not your slave, dude" :-).

>The definition of learn is not too far off from that of
>intelligence. The difference is that intelligence is the ability to
>use what you've learned.
>
>  Learn - To gain knowledge, comprehension, or command of through
>          experience or study.
>
>So to really simplify and get things clearly into view and to avoid
>circularity and mutual recursiveness of definitions (that so plague
>our dictionaries):
>
>  Perception - Usage of sensors to sense environmental phenomena and
>               stimuli.
>
>  Knowledge - Remembered perceptions.
>
>  Learn - Ability to acquire knowledge.
>
>  Intelligence - Ability to learn and use what you've learned.
>

These are the dictionary definitions of the words. I'll propose a
different meaning for each one:

Perception - The ability to notice *what is relevant* and what is not
             in a bunch of data coming from sensory apparatuses and
             also from internal thoughts.

Learn      - The ability to store the relevant perceptions discovered
             and the situations in which they emerged.

Knowledge  - A coherent and intertwined set of "learned memories".

Intelligence- The ability to solve new problems using perceived things
              and previous knowledge, and adding the result to memory
              for future reuse.

>But is any level of learning enough to consider an artifact
>intelligent? There are some pretty cheesy neural-network programs that
>"learn" sufficiently well to meet the above definitions of learn and
>apply what they've "learned" sufficiently well to be considered
>"intelligent".
>

Learning alone is not a criterion for intelligence. It is a necessary
condition, but it is far from sufficient. Neural networks could be seen
as exemplars, but they fail on the "new problems" item. NNs can
generalize up to a certain point, but often generalization is not
enough; it is necessary to try, to experiment with new things. Besides,
it is often necessary to reuse knowledge learned in one domain to
help solve problems in other domains (akin to case-based reasoning
or analogical reasoning), and neural nets have difficulties in
this regard.
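
To illustrate the kind of cross-domain reuse I mean, here is another
minimal sketch (again Python, with every situation and name made up
purely for illustration; a deliberately crude similarity measure, not
a claim about any real system):

class CaseMemory:
    def __init__(self):
        self.cases = []                 # (situation, solution) pairs

    @staticmethod
    def similarity(a, b):
        # Crude measure: fraction of shared features between two
        # situations, each described as a set of feature names.
        return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

    def remember(self, situation, solution):
        self.cases.append((situation, solution))

    def propose(self, new_situation):
        # Reuse the solution of the most similar past case, if any.
        if not self.cases:
            return None
        best = max(self.cases,
                   key=lambda c: self.similarity(c[0], new_situation))
        return best[1] if self.similarity(best[0], new_situation) else None

if __name__ == "__main__":
    memory = CaseMemory()
    memory.remember({"liquid_missing", "container_empty"},
                    "refill_container")
    # A "new" problem from another domain shares some features with
    # the old one, so the old solution is the first hypothesis tried.
    print(memory.propose({"container_empty", "engine_stopped"}))

Old solutions retrieved by partial similarity become the hypotheses
explored first on a genuinely new problem; that is the kind of
analogical reuse that plain generalization inside a neural net does
not give us.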

I guess that definitions of intelligence that lean strongly on
learning are dismissed very quickly because of the problems with the
word "learn" taken in isolation. Learning is important, but
it is part of a more complex set of things. We have to find the
other components of this set if we are to make progress in AI.

Regards,
Sergio Navega.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

