Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Intelligence: Interaction is Essential
Date: 11 Nov 1998 00:00:00 GMT
Message-ID: <36498787.0@news3.ibm.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 Nov 1998 12:48:07 GMT, 200.229.240.143
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

This may be obvious for some of you, but I finally
settled this subject in my mind: intelligence is
something that depends *crucially* on the interactions
between the entity and its environment.

Some time ago, I was in the middle of an interesting
discussion here in c.a.p between Bill Modlin and
Neil Rickert. Modlin's point of view (very convincing,
for that matter) was that one "voyeur" entity, acting
just as an observer of its world, would have enough
"inputs" to develop intelligence as we usually
understand it.

Rickert, on the other hand, postulated that strong
interaction was necessary to support the development
of intelligence.

My position at that time was in the middle, bent a little
toward Modlin's. I thought that just perceiving regularities and
working on them would be enough of a precondition to progressively
develop intelligent organisms. I thought that interaction with the
world was merely useful, providing an opportunity for acceleration,
but that it was not a necessary condition.

Well, I am now in a situation where I can restate my
position.

Interaction is not just an optimization factor, as I used
to think; it is an essential condition, something that can
determine the success or failure of the organism.

Although this may be obvious to a lot of people, the position is
not easy to substantiate, and I concede I'm still trying to support
it more solidly. Suffice it to say that my primary reasons come in
two parts:

a) Combinatorial Explosion
It is known that sensory perception is an exercise in trying
to find regularities in a continuous flux of signals. This
flux, if not processed, would fill the entity's brain very
rapidly. The mechanism usually proposed is to recognize
repetitions and then, to avoid combinatorial explosion, to
group those regularities into categories. This is the way
concepts such as "chair" are created: they are symbolic
receptacles for categories of things we sit on.
My suspicion now is that even an automatic process specialized
in categorization will not be able to contain the explosion
of information without additional mechanisms of disambiguation.
Category formation may, by itself, generate an enormous number
of "fake" candidates (and I believe this really happens in
our brain). The point is that through interaction with the
world our brain gets a chance to reinforce those categories
that deserve to survive. Our interaction with the world (mostly
during childhood) reflects our need to do "little experiments"
(such as dropping a glass on the floor or putting a finger in
the fire) which provide the brain with the information it needs
to select the best induced set of categories. Without that, we
would have lots of categories that would have to remain "active"
in the brain, waiting for some future disambiguating situation
in which to be confirmed or discarded. This would significantly
impair our learning abilities, up to the point of "crab-like"
behavior.
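
To make point (a) concrete, here is a minimal toy sketch (in
Python, with invented objects, features and a made-up "sittable"
concept, purely as an illustration): a passive observer never
learns the outcome of any experiment and so can discard nothing,
while a learner that can try to sit on things prunes the candidate
categories almost immediately.

import itertools, random

random.seed(0)

FEATURES = ["flat_top", "four_legs", "has_back", "wooden", "small", "red"]

def true_sittable(obj):
    # Hidden "ground truth" of this toy world (an arbitrary choice):
    # an object can be sat on if it has a flat top and four legs.
    return obj["flat_top"] and obj["four_legs"]

def random_object():
    return {f: random.random() < 0.5 for f in FEATURES}

# Candidate categories: every conjunction of one or two features.
candidates = [set(c) for k in (1, 2)
              for c in itertools.combinations(FEATURES, k)]

objects = [random_object() for _ in range(30)]

# Passive observer: sees the objects but never learns whether sitting
# on them works, so no candidate category can ever be discarded.
passive_alive = list(candidates)

# Interactive learner: runs the "little experiment" of trying to sit
# on each object, keeping only candidates consistent with the outcome.
interactive_alive = list(candidates)
for obj in objects:
    outcome = true_sittable(obj)        # what the experiment reveals
    interactive_alive = [c for c in interactive_alive
                         if all(obj[f] for f in c) == outcome]

print("categories still active (passive):    ", len(passive_alive))
print("categories still active (interactive):", len(interactive_alive))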

b) Disambiguation of Causal Models
We human beings are distinct from animals not only in our
intelligence, but also in our tendency to mystify explanations. A
casual look at the amount of pseudo-scientific explanation we have
today is a good indication of this. In my view, this would be *much*
worse if our brain didn't interact with the environment (that is, if
it only received information from the environment, without acting
on it). The same explosion that affects categories could also
happen with the causal and explanatory models that we develop in our
minds: we need to make experiments in an attempt to disambiguate
the possible causes of the events that unfold before our eyes.

I'm sure there are many more arguments to support the interactionist
viewpoint, and to me this means AI should look more carefully at this
problem before conceiving solutions that are meant to exhibit
mechanized intelligence.

Sergio Navega.

From: ohaneef@paul.rutgers.edu (Omar Haneef)
Subject: Re: Intelligence: Interaction is Essential
Date: 11 Nov 1998 00:00:00 GMT
Message-ID: <72d6cp$3hg$1@paul.rutgers.edu>
References: <36498787.0@news3.ibm.net>
Followup-To: comp.ai.philosophy,comp.ai
Organization: Rutgers University LCSR
Newsgroups: comp.ai.philosophy,comp.ai

Sergio Navega (snavega@ibm.net) wrote:
: This may be obvious for some of you, but I finally
: settled this subject in my mind: intelligence is
: something that depends *crucially* on the interactions
: between the entity and its environment.

    Ironically enough, what seems obvious to me is not the answer you
get.

: Some time ago, I was in the middle of an interesting
: discussion here in c.a.p between Bill Modlin and
: Neil Rickert. Modlin's point of view (very convincing,
: for that matter) was that one "voyeur" entity, acting
: just as an observer of its world, would have enough
: "inputs" to develop intelligence as we usually
: understand it.

    I would agree with Modlin so long as the voyeur was motivated to
learn something about the world it was peering into and had some stake
in predicting the world.

: Rickert, on the other hand, postulated that strong
: interaction was necessary to support the development
: of intelligence.

: My position at that time was in the middle, a
: little bit bent to Modlin's position. I thought that
: just perceiving regularities and working on them would
: be enough preconditions to progressively develop
: intelligent organisms. I thought that interaction
: with the world was just useful, providing one acceleration
: opportunity, but that it was not a necessary condition.

I think I am closest to this position. Makes sense to me. So, now
that we know who is where, let's get to the arguments.

: Well, I am now in a situation where I can restate my
: position.

: Interaction is not just an optimization factor as I used
: to think, it is an essential condition, something that can
: determine the success or not of the organism.

Well, I agree that it *can* determine the success or failure of an
organism, but that doesn't mean it's not "just" an optimization
factor. Optimization factors *can*, and often *do*, determine the
success or failure of an organism.

: In spite of the obviousness for a lot of people, this
: position is not very easy to substantiate and I concede I'm
: still trying to support it more solidly. Suffice it to say
: that my primary reasons have two parts:

: a) Combinatorial Explosion
: It is known that sensory perception is an exercise in trying
: to find regularities in a continuous flux of signals. This
: flux, if not processed, would fill the entities' brain very
: rapidly. The usual mechanism proposed is trying to recognize
: repetitions and then, to avoid combinatorial explosion,
: try to group those regularities into categories. This is the
: way concepts such as "chair" are created: they are symbolic
: receptacles to categories of things that we use to sit.
: My suspicion now is that even an automatic process specialized
: in categorization will not be able to dominate the explosion
: of information without additional mechanisms of disambiguation.
: Category formation may, by itself, generate an enormous amount
: of "fake" attempts (and I believe that this really happens in
: our brain). The question is that throughout interaction with the
: world our brain receives a chance to reinforce those categories
: that must survive.

I'm with you so far.

: Our interaction with the world (mostly during
: childhood) reflect our need to do "little experiments" (such as
: dropping a glass on the floor or putting our finger on the fire)
: which will provide the brain with the necessary information to
: select the best induced set of categories. Without that, we would
: have lots of categories that would have to remain "active" in the
: brain waiting for one future disambiguation situation to be selected
: (confirmed) or not. This would impair significantly our learning
: abilities, up to the point of "crab-like" behavior.

I fail to feel the force of this argument. There are two ways to
distinguish between two hypotheses in science: (1) you perform an
experiment, or (2) you collect data from a natural experiment. It is
true that (1) is usually preferable, but there are many cases where
(2) is necessary: in astronomy and in human neuropsychology, for
example. In both cases you have to wait and see whether the proper
phenomenon occurs in nature. It is (scientifically) inconvenient, and
sometimes nature may not provide one with the required situation at
all. It doesn't logically follow, though, that a model cannot be built
on the basis of non-interactive data. (Um, all this is leaving
Heisenberg aside. Let's not misuse him again.)

: b) Disambiguation of Causal Models
: We human beings are distinct from animals not only by our intelligence,
: but also by our attempt to mystify explanations. A casual look to the
: amount of pseudo-scientific explanations we have today is a good
: indication of this. In my vision, this would be *much* worse if
: our brain didn't interact with the environment (which means, if it
: only received information from the environment, without actuating
: on it). The same problem of explosion with categories could also
: happen with causal and explanatory models that we develop in our
: mind: we need to make experiments, in an attempt to disambiguate
: the possible causes of the events that unfolds before our eyes.

I don't know where you get the evidence for this. I think people
have always developed crazy theories, especially about the things
they interacted with the most. Ancient mythologies place rain gods at
the forefront (Zeus, Odin) because people were concerned about the
crops they toiled over all day. These seem like orthogonal issues.

: I'm sure there's much more arguments to support the interactionist
: viewpoint and for me this means AI should look more carefully to this
: problem before conceiving solutions that can present mechanized
: intelligence.

Well, I certainly hope (and believe) this last statement is
true. Interactionism, as you call it, is extremely important for
intelligence. I don't think anybody would disagree. I just don't see
how it is logically necessary.

    -Omar

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 12 Nov 1998 00:00:00 GMT
Message-ID: <364b1506.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72d6cp$3hg$1@paul.rutgers.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Nov 1998 17:04:06 GMT, 166.72.21.13
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

Omar Haneef wrote in message <72d6cp$3hg$1@paul.rutgers.edu>...
>
>Sergio Navega (snavega@ibm.net) wrote:
>: This may be obvious for some of you, but I finally
>: settled this subject in my mind: intelligence is
>: something that depends *crucially* on the interactions
>: between the entity and its environment.
>
>    Ironically enough, what seems obvious to me is not the answer you
>get.
>

Thank you for disagreeing, Omar; this gives me an opportunity to
think further about my arguments, and perhaps to adjust some points
in light of your pertinent observations. More follows.

>: Some time ago, I was in the middle of an interesting
>: discussion here in c.a.p between Bill Modlin and
>: Neil Rickert. Modlin's point of view (very convincing,
>: for that matter) was that one "voyeur" entity, acting
>: just as an observer of its world, would have enough
>: "inputs" to develop intelligence as we usually
>: understand it.
>
>    I would agree with Modlin so long as the voyeur was motivated to
>learn something about the world it was peering into and had some stake
>in predicting the world.
>

Yes, motivation is really fundamental here, and it ends up being
transformed into the need to *peer at* the world. This is one of the
clues behind my main suspicion: the agent must direct its attention
to particular aspects. It is not exactly what one would call
interaction, but it is certainly something in that direction.
I will come back to this detail later.

>: Rickert, on the other hand, postulated that strong
>: interaction was necessary to support the development
>: of intelligence.
>
>: My position at that time was in the middle, a
>: little bit bent to Modlin's position. I thought that
>: just perceiving regularities and working on them would
>: be enough preconditions to progressively develop
>: intelligent organisms. I thought that interaction
>: with the world was just useful, providing one acceleration
>: opportunity, but that it was not a necessary condition.
>
>I think I am closest to this position. Makes sense to me. So, now that
>we know who is where. Lets get to the arguments.
>
>: Well, I am now in a situation where I can restate my
>: position.
>
>: Interaction is not just an optimization factor as I used
>: to think, it is an essential condition, something that can
>: determine the success or not of the organism.
>
>Well, I agree that it *can* determine the success of failure of an
>organism, but that doesn't mean its not "just" an optimization
>factor. Optimization factors *can*, and often *do*, determine the
>success or failure of an organism.
>

Yes, optimization can really be the difference between life and
death, but what I'm preaching is something more: that interaction is
necessary for the stabilization of the process that generates
intelligence. I'll have some difficulty supporting this assertion,
though.

>: In spite of the obviousness for a lot of people, this
>: position is not very easy to substantiate and I concede I'm
>: still trying to support it more solidly. Suffice it to say
>: that my primary reasons have two parts:
>
>: a) Combinatorial Explosion
>: It is known that sensory perception is an exercise in trying
>: to find regularities in a continuous flux of signals. This
>: flux, if not processed, would fill the entities' brain very
>: rapidly. The usual mechanism proposed is trying to recognize
>: repetitions and then, to avoid combinatorial explosion,
>: try to group those regularities into categories. This is the
>: way concepts such as "chair" are created: they are symbolic
>: receptacles to categories of things that we use to sit.
>: My suspicion now is that even an automatic process specialized
>: in categorization will not be able to dominate the explosion
>: of information without additional mechanisms of disambiguation.
>: Category formation may, by itself, generate an enormous amount
>: of "fake" attempts (and I believe that this really happens in
>: our brain). The question is that throughout interaction with the
>: world our brain receives a chance to reinforce those categories
>: that must survive.
>
>I'm with you so far.
>

Then you should agree that without interaction we don't have
an easy way to "test" hypotheses. My point is that without a way
to verify our hypotheses, the problem of learning becomes, if not
intractable in several cases, at least *very* impaired.

>: Our interaction with the world (mostly during
>: childhood) reflect our need to do "little experiments" (such as
>: dropping a glass on the floor or putting our finger on the fire)
>: which will provide the brain with the necessary information to
>: select the best induced set of categories. Without that, we would
>: have lots of categories that would have to remain "active" in the
>: brain waiting for one future disambiguation situation to be selected
>: (confirmed) or not. This would impair significantly our learning
>: abilities, up to the point of "crab-like" behavior.
>
>I fail to feel the force of this argument. There are two ways of
>performing experiments in science in order to distinguish between two
>hypotheses: (1) you perform an experiment, (2) you collect data from a
>natural experiment. It is true that (1) is usually preferable, but
>there are many cases where (2) is necessary: in astronomy and in human
>neuropsychology. In both cases you have to wait and see if the proper
>pheonomenon occurs in nature. It is (scientifically) inconvenient and
>sometimes nature may not provide one with the required situation at
>all. It doesn't logically follow, though, that a model cannot be built
>on the basis of non-interactive data. (Um, all this is leaving
>Heisenberg aside. Lets not misuse him again.)
>

Yes, I agree: we can develop models in areas where experimentation
is not possible, and astronomy is a pretty good example. My point,
though, is that our "pure" knowledge of astronomy is very restricted
if we compare it to our knowledge of other sciences in which
experiments can be done.

Most of what we know about astronomy (and astrophysics in
particular) was derived from models and experiments that were
developed in other sciences.

The meaning of those stunning photographs that Hubble takes must be
interpreted through the models we built from prior experimentation.
So astronomy itself is being noticeably accelerated by the knowledge
we obtained experimentally, much more than by simple peering through
our telescopes.

>: b) Disambiguation of Causal Models
>: We human beings are distinct from animals not only by our intelligence,
>: but also by our attempt to mystify explanations. A casual look to the
>: amount of pseudo-scientific explanations we have today is a good
>: indication of this. In my vision, this would be *much* worse if
>: our brain didn't interact with the environment (which means, if it
>: only received information from the environment, without actuating
>: on it). The same problem of explosion with categories could also
>: happen with causal and explanatory models that we develop in our
>: mind: we need to make experiments, in an attempt to disambiguate
>: the possible causes of the events that unfolds before our eyes.
>
>I don't know where you get the evidence for this from. I think people
>have always developed crazy theories especially about the things they
>interacted with the most. Ancient mythologies place rain gods at the
>forefront (Zeus, Odin) because they were concerned about their crops
>which they toiled with all day. These seem like orthogonal issues.
>

Ah, evidence! Now you've got me. I don't have it; it is just a
curious bunch of situations that is leading me to think this way. If
we suppose our brain starts working with very little prior knowledge
(pretty reasonable, given current evidence), we may imagine a
situation in which a baby looks at his world through a telescope,
light years away from it, without any possible interaction. What
kind of intelligence would emerge?

Without interaction, he would have difficulty developing the good
causal models of that world that would help him make sense of all
those "untouchable events" he is watching. The point, obviously, is
not being able to touch things, but being able to *manipulate*
objects so as to reveal other "sides" that happen to be significant
for his current understanding. This involves disambiguation.

I agree with you that the effect of this is, basically, an
acceleration of learning. My great doubt is whether, without this
acceleration and given a fixed life span, his brain would have
enough time to "pick up" all the details in future experiences. This
could mean that without interaction, this world might never have
known Homo sapiens.

Although we're discussing all this using high-level cognitive
examples (scientists, models, etc.), the argument I'm trying to
refine is more precious to me when we look at the lower-level
aspects, those close to initial sensory processing.

If you're given a strange object with an unusual shape to examine,
what's your first action? You turn it in front of your eyes, trying
to find interesting angles. I find that the way we turn the object,
the way we look for "interesting" features, is driven directly by
this mechanism of disambiguation, something that unconsciously
"proposes" to sensorimotor centers how to "turn" the object so as to
better show the aspect that is in need of further detail. Turning an
object, or walking around a larger one to see its back, is
interaction in my use of the word.

This is a way to discard very quickly a bunch of equally probable
"hypotheses", leaving only one (or a few) that are more pertinent.
Without doing this, it would be impractical (read: intractable) to
advance to the next level of exploration; our brain simply would not
have enough resources. This is what I call interaction driven by the
low level.
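
A rough sketch of this low-level loop (again in Python, with
invented shapes and "aspects" standing in for views of an object,
only as an illustration): a learner that chooses which side to
examine next, based on how well it splits the surviving hypotheses,
needs far fewer glances than one that must take whatever view
happens to come along.

import itertools, random
random.seed(0)

ASPECTS = ["top", "base", "profile"]

# Eight candidate shapes: every combination of three invented binary
# aspects (say, round top or not, wide base or not, curved or not).
shapes = [dict(zip(ASPECTS, bits))
          for bits in itertools.product([0, 1], repeat=3)]
true_shape = random.choice(shapes)

def active_views():
    """Turn toward the aspect that best splits surviving candidates."""
    alive, n = list(shapes), 0
    while len(alive) > 1:
        aspect = min(ASPECTS, key=lambda a: max(
            sum(1 for s in alive if s[a] == v) for v in (0, 1)))
        alive = [s for s in alive if s[aspect] == true_shape[aspect]]
        n += 1
    return n

def passive_views():
    """Accept whatever view the world offers (repeats included)."""
    alive, n = list(shapes), 0
    while len(alive) > 1:
        aspect = random.choice(ASPECTS)   # nature decides what we see
        alive = [s for s in alive if s[aspect] == true_shape[aspect]]
        n += 1
    return n

print("views needed, actively chosen:", active_views())
print("views needed, passively received (average of 1000 runs):",
      sum(passive_views() for _ in range(1000)) / 1000)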

Now, interaction at the high level. Suppose you've been assigned
for some time to a different university. One week after arriving,
you start having digestive problems, so you begin a search for the
aspects that could be provoking them. Your attention goes to the
university's restaurant. You start to notice the kind of spices
used, the "environmental" conditions of the restaurant, the kind of
vegetables. Unless you have a history of hypersensitivity to, say,
cucumber, you will start with no clues. Yes, after some time you
might find the exact mixture that causes the problem. But that's not
the way we usually go: we choose to direct our attention to some
hypotheses and devise *experiments* to "disambiguate" each one (we
avoid eating one of the vegetables, or stop using certain spices).
This interaction allows us to reveal very quickly what the potential
cause could be. Without interaction, just by observing the effects
of "natural and random combinations of ingredients", we would
eventually get to the same point, but usually in an inordinate
amount of time. We usually don't have that spare time.
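
The same contrast in a toy version of the restaurant story (the
ingredient names, the menu probabilities and the "symptoms" model
are all invented for the example): deliberately leaving suspect
ingredients out of a meal identifies the culprit in a handful of
days, while passively watching meals in which almost everything is
served every day takes much longer.

import random
random.seed(0)

INGREDIENTS = ["cucumber", "pepper", "onion", "garlic", "beans",
               "cabbage", "milk", "wheat", "peanut", "egg", "fish",
               "soy"]
culprit = random.choice(INGREDIENTS)   # the one ingredient that hurts

def symptoms(meal):
    return culprit in meal

def passive_days():
    """Only observe: the restaurant serves almost everything daily."""
    suspects, days = set(INGREDIENTS), 0
    while len(suspects) > 1:
        meal = {i for i in INGREDIENTS if random.random() < 0.9}
        days += 1
        if symptoms(meal):
            suspects &= meal   # the culprit was in today's meal
        else:
            suspects -= meal   # the culprit was not in it
    return days

def active_days():
    """Experiment: deliberately eat only half of the remaining suspects."""
    suspects, days = set(INGREDIENTS), 0
    while len(suspects) > 1:
        tested = set(sorted(suspects)[:len(suspects) // 2])
        days += 1
        if symptoms(tested):
            suspects = tested
        else:
            suspects -= tested
    return days

print("days to find the culprit with deliberate experiments:",
      active_days())
print("days to find it by passive observation (average of 1000 runs):",
      sum(passive_days() for _ in range(1000)) / 1000)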

Now, if this is a problem even in this simple thought experiment,
imagine how tough it would be to discover, without interaction, the
*thousands* of correct correlations that a baby's brain must infer
about its world.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 11 Nov 1998 00:00:00 GMT
Message-ID: <3649feee.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72cm7v$9j0@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 Nov 1998 21:17:34 GMT, 200.229.243.63
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

Neil Rickert wrote in message <72cm7v$9j0@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>This may be obvious for some of you, but I finally
>>settled this subject in my mind: intelligence is
>>something that depends *crucially* on the interactions
>>between the entity and its environment.
>
>>...
>
>>Rickert, on the other hand, postulated that strong
>>interaction was necessary to support the development
>>of intelligence.
>
>Welcome to the club.
>

Well, it was a tough hill to climb but I think I got there.
I bet there are a lot of people who still disagree but
now I've got some good arguments.

>I agree with your reasoning (which I have deleted for brevity).
>Roughly speaking, it is the point that science crucially depends on
>experimentation, and we should expect the acquisition of individual
>knowledge to have similar requirements.
>

This is one of the fundamental considerations that weighed a lot
for me: I recognize something that could be called a "folk
scientific method", the expression of our intuitive way of
developing "common sense". It is as if our brain followed a cycle
similar to perception/thinking/experimentation, a cycle that is so
natural in children (and scientists) and that appears to lead to
solid world knowledge, no matter the "IQ" of the child (although the
IQ of the scientist does matter :-).

But I still have some doubts and strong conjectures to develop.

The strongest of all is my attempt to find a single and simple
mechanism that can account for *all* human cognitive abilities
(geez, this is *really* strong!).

Vision, mathematical reasoning, common sense, audition, mental
imagery, language, etc, all at the same time. I'm looking for a
single method to explain all of these, at once. I know, it is
really an insane proposition, but something is driving me to it.

Needless to say, I'm struggling with "domain specificity" and
its supporters, quite a bunch of renowned scientists.
I'm having a hard time with it, but also some good moments; the
score so far is even. Fortunately, the game has just begun.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 11 Nov 1998 00:00:00 GMT
Message-ID: <3649feea.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72cje5$1ag$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 Nov 1998 21:17:30 GMT, 200.229.243.63
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

ognen@my-dejanews.com wrote in message <72cje5$1ag$1@nnrp1.dejanews.com>...
>
>> This may be obvious for some of you, but I finally
>> settled this subject in my mind: intelligence is
>> something that depends *crucially* on the interactions
>> between the entity and its environment.
>
>This might be off-topic (I am new to this newsgroup) but I was just wondering
>- how does your theory account for a supposedly existant person that is born
>deaf and blind ? Would one be able to develop intelligence then ? To what
>degree ? Clearly, in a condition like this (born deaf, blind - one would also
>be mute), so the range of interactions is pretty limited. What would be that
>person's internal representation of the world ? Would there be any ?
>
>Did anyone do any research on people with such impairments ?
>
>Sorry if I am completely off-topic or if i just repeated a question raised a
>thousand times :)
>
>Regards,
>Ognen Duzlevski

One of the reasons I use to support my reasoning stems exactly from
these cases: unfortunate people who were born without vision and
audition. Having only touch as their main sensory input, they learn
everything through their hands, and are able to learn language to
the point of writing books. In case you're curious, you can go to:

http://www.tr.wou.edu/dblink/hands2.htm

where you'll find an article on techniques for educating deaf-blind
children. There are several papers describing the mechanism in the
brain that seems to be associated with this "miracle": brain
plasticity. Congenitally blind people have, after some time, parts
of their visual cortex recruited to process the sensory inputs
coming from the fingertips they use for Braille reading.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 11 Nov 1998 00:00:00 GMT
Message-ID: <3649feeb.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72cje5$1ag$1@nnrp1.dejanews.com> <72cmg3$9k2@ux.cs.niu.edu> <3649E189.74B3C49B@REMOVE_TO_EMAIL.jhuapl.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 Nov 1998 21:17:31 GMT, 200.229.243.63
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

Jim Hunter wrote in message
<3649E189.74B3C49B@REMOVE_TO_EMAIL.jhuapl.edu>...
>
>
>Neil Rickert wrote:
>
>> ognen@my-dejanews.com writes:
>>
>> >> This may be obvious for some of you, but I finally
>> >> settled this subject in my mind: intelligence is
>> >> something that depends *crucially* on the interactions
>> >> between the entity and its environment.
>>
>> >This might be off-topic (I am new to this newsgroup) but I was just wondering
>> >- how does your theory account for a supposedly existant person that is born
>> >deaf and blind ? Would one be able to develop intelligence then ? To what
>> >degree ? Clearly, in a condition like this (born deaf, blind - one would also
>> >be mute), so the range of interactions is pretty limited. What would be that
>> >person's internal representation of the world ? Would there be any ?
>>
>> I'll answer only for myself.  Sergio can give his own response if he
>> disagrees.
>>
>> No doubt somebody born both deaf and blind is at a severe
>> disadvantage.  However, they still can interact with their tactile
>> sense, and use this to experiment as they try to understand their
>> world.  I expect that it is very important for them, as children, to
>> be given as many and varied opportunities as possible for tactile
>> exploration.
>
>   A lot of that depends on the cause of the deafness and blindness.
>   Artificial sensors are one beneficial product of brain & AI
>   research that shouldn't be too much longer coming.
>

Good point. We can't forget that sometimes the deafness or
blindness may be caused by brain damage that, in some cases, could
impair normal thinking as well. If that is not the case for a
particular person, I agree with you that AI and "bioelectronics"
could be essential to fully integrate these people into our world
(perhaps even with enhanced senses, such as infrared and UV,
although the last time I heard about this the cost was a little
high, around $6 million :-)

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 13 Nov 1998 00:00:00 GMT
Message-ID: <364c7c75.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72hm8r$8ns$1@news.campus.mci.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Nov 1998 18:37:41 GMT, 129.37.183.67
Organization: SilWis
Newsgroups: comp.ai.philosophy,comp.ai

andi babian wrote in message <72hm8r$8ns$1@news.campus.mci.net>...
>
>Sergio Navega wrote in message <36498787.0@news3.ibm.net>...
>>This may be obvious for some of you, but I finally
>>settled this subject in my mind: intelligence is
>>something that depends *crucially* on the interactions
>>between the entity and its environment.
>
>
>interaction is essential for life, not intelligence.  you might also want to
>say
>that life is necessary for intelligence,  then interaction will be necessary
>for intelligence.

I agree that interaction is essential to life, but it is also
essential to intelligence, for reasons I've sketched in previous
posts (and refined in my response to Omar Haneef), which can be
summarized as the necessity of strongly limiting the ambiguity of
what we receive through our sensory inputs.

> What you get in a living intelligence is something that
>evolves
>and grows,  but it might be perfectly fine to have a dead intelligence,
>locked up in a box like a computer program.

Well, here I disagree. Although I don't find myself comfortable
with the position of those who define intelligence just by
behavioral aspects, I find it necessary that intelligence be
assessed through the relationship of the entity with its
environment.

If that were not the case, we could be fooling ourselves in
thinking that dogs are less intelligent than us (after all, they
could be more intelligent than humans and just keeping a low profile
to continue receiving privileged treatment and free food; however,
this is unlikely, because my dog is well treated, but I can't say
the same about my neighbors' dogs).

> The example of the blind and
>deaf person does show that intelligence is much less dependent on action,
>since
>any less intelligent animal that was blind and deaf would almost certainly
>die
>quickly.
>

I had some trouble following your line of thought. It is exactly in
this situation of deaf-blindness that interaction becomes even more
fundamental, because information can enter practically only through
touch. Almost every concept a deaf-blind person develops is grounded
entirely in one sense, and the need to execute *active* exploratory
movements with the hands (much more than a sighted, hearing person)
demonstrates clearly the need to interact with the world to acquire
those concepts.

>And it might depend on what you want to call intelligence, whether you
>include
>into intelligence the behaviors and actions that you merely associate with
>it and
>how human-like you want it to be. The definition of intelligence is
>notoriously fuzzy.
>

That is indeed true. But it doesn't make sense to define
intelligence without taking behavioral aspects into consideration,
because without them a rock could eventually turn out to be much
more intelligent than us. Imagine a rock that could "peek" at the
world around it through a series of unknown quantum alterations in
its internal structure. Imagine that these alterations are able not
only to represent the surrounding world, but also to predict it,
using quantum alterations in other parts of the rock as memory. It
could be much more capable than us at theorizing about the world
(besides having a much greater life span ;-), and a rock the size of
our hand could store more knowledge than ten men. Should our
definition of intelligence encompass this rock?

>And one last point,  there are two main different area of interaction that
>are involved
>in intelligence,  interacting with tools and nonliving things, and
>interacting with
>other people.  Probably interacting with other thinkers is a lot more
>important
>to intelligence than just interacting with static objects.  And if you
>subscribe
>to the theory that the mind includes  bunch of internal agents,  there is
>already
>plenty of interactivity on the inside.
>

This is an interesting thought, but I don't agree. I can find
situations where interacting with other people is worse than
interacting with the instruments that are measuring or detecting
something important for our research. This gets even worse when we
consider that language is not a perfect method of communication.
Language can only communicate syntactic, ungrounded concepts. If the
listener doesn't have strong semantic notions, he may be unable to
understand what another human thinker is trying to say.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Intelligence: Interaction is Essential
Date: 19 Nov 1998 00:00:00 GMT
Message-ID: <36541262.0@news3.ibm.net>
References: <36498787.0@news3.ibm.net> <72hm8r$8ns$1@news.campus.mci.net> <364c7c75.0@news3.ibm.net> <36534902.FF6@mitre.org>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 19 Nov 1998 12:43:14 GMT, 129.37.182.24
Organization: SilWis
Newsgroups: comp.ai

Carl Burke wrote in message <36534902.FF6@mitre.org>...
>Sergio Navega wrote:
>...
>> ... Imagine
>> a rock that could "peek" the world around it through a series of
>> unknown quantum alterations if its internal structure. Imagine that
>> this alteration is able not only to represent the world around, but
>> also to predict, using as memory quantum alterations of other
>> parts of the rock. It could be much more capable than us in theorizing
>> the world (besides having a much greater life span ;-) and a rock
>> the size of our hand could store more knowledge than 10 men. Should
>> our definition of intelligence encompass this rock?
>
>That's a very Shroedinger's Cat situation.

Indeed, my example also suffers from the same problem of altering the
object being contemplated.

> If you postulate
>that we humans can look in and see it thinking and recognize that
>as intelligent behavior, then yes; otherwise, if the rock doesn't
>act, if it doesn't send any signals we can detect as communication,
>then as far as we're concerned we can't tell that it's intelligent.
>If we can't tell that it's intelligent it just doesn't matter.
>It's a meaningless question from our standpoint.
>

Yes, and this is why I can't say that such a rock is intelligent,
even if we were able to watch it under a (quantum) microscope!
Our concept of intelligence, no matter how difficult it is to
define, demands some kind of assessment, or at least a way to see
that the actions of the agent matter to itself (or to us, in the
case of AI). This is why I say that a rock (or a computer, for that
matter) that is able to peek at the world but is not able to act on
it (or at least to show externally identifiable signs of
understanding, such as reacting to queries) cannot be considered
intelligent.

But the question is, in my view, a little deeper: interaction is
not only essential to show us that the entity is intelligent. It is
also a prerequisite for the *development* of intelligence.

If an entity just peeks at the world, records what it sees and
"thinks" about it, without any interaction or external demonstration
of it, then I can say that it is doing world or situation modeling.
This situation modeling could be done with that rock, with a
computer (with current software technology), or with (a lot of)
paper and pencil. This is far from what I want to call being
intelligent.

Regards,
Sergio Navega.

