Selected Newsgroup Message


From: "Sergio Navega" <snavega@ibm.net>
Subject: On Reasonable Innate Knowledge
Date: 16 Apr 1999 00:00:00 GMT
Message-ID: <37177b04@news3.us.ibm.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Most cognitive scientists today consider that we humans are born
with a reasonable amount of innate knowledge. To explain how a
learning mechanism could achieve reasonable performance, they
postulate native versions of elementary forms of this or that kind
of knowledge, in order to ease the working of certain proposed
computational models of our brain.

In particular, Universal Grammar, as posed by Chomsky and others,
seems today to be the theory of choice for the majority of
cognitivists, with the noble exception of the connectionists and
some neuroscientists.

I'm collecting arguments against the innateness of language. In
fact, I've seen a lot of evidence which significantly weakens the
proposals of any kind of innate knowledge.

But although I'm against most forms of innateness, in this post I
will not defend that view. Instead, I will raise the *degree* to
which I believe innateness is a reasonable proposition.

I start with a brief analysis of an organism without any kind of
initial knowledge, put to live in this world of ours. The sensors of
this organism must do a plain transduction: they transform several
physical properties such as light, pressure, temperature, etc., into
another (fixed) kind of signal.

Let's suppose it is some kind of electrical signal that encodes the
variation of the corresponding property. The brain of that creature
will be in charge of analyzing these signals. We usually spend our
time thinking about what this brain must do in order to process
these signals satisfactorily. That is not my concern in this text.
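
Just to make the idea of plain transduction concrete, here is a tiny
Python sketch. The ranges, units and names are my own illustrative
assumptions, not a claim about any real sensor: each physical
quantity is clipped to the sensor's range and re-encoded into one
fixed kind of signal, and whatever analyzes the signals never sees
the original units.

# Minimal sketch of "plain transduction": every physical property is
# mapped onto the same fixed kind of signal (here, a float in [0, 1]).
# The ranges and values below are illustrative assumptions, not data.

def transduce(value, lo, hi):
    """Clip a physical measurement into the sensor's range and
    re-encode it as a dimensionless signal between 0 and 1."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo)

# Three different physical properties, one common output encoding.
light_signal = transduce(550.0, 400.0, 700.0)       # wavelength in nm
pressure_signal = transduce(3000.0, 20.0, 18000.0)  # air vibration in Hz
temperature_signal = transduce(25.0, -10.0, 50.0)   # temperature in C

# Whatever analyzes these signals never sees nanometers or hertz,
# only the fixed encoding.
print(light_signal, pressure_signal, temperature_signal)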

It is here that I will discuss the first step at which some kind of
innate knowledge will have to be present. Without any knowledge at
all, as in my contrived example, this creature would have to have
sensors covering the whole spectrum of physical factors able to
influence it, because it has no a priori way of knowing which
factors, or which sensing ranges, will be important to its survival.
This is something the creature would discover only with time,
certainly by a process of learning and probably by trial and error.

The organism would have to be sensitive to acoustic pressure
variations ranging, for instance, from 0.1 Hz to more than 1 MHz. It
would have to receive light from infrared to ultraviolet. It would
have to be sensitive to microwaves, to small variations in
gravitation, etc. There is no a priori way to restrict the kind of
sensing that will be necessary, for the organism doesn't have any
knowledge about its environment.

This is obviously a very unreasonable demand. So the first level of
innate knowledge appears to be this one: the organism has eyes whose
sensitivity covers from red to violet, ears that respond to air
vibrations ranging from 20 Hz to a bit more than 18 kHz, and so on
with other parameters like temperature and chemical sensitivity
(taste, olfaction), but not microwaves and a bunch of others. Who
told the organism that these are the most interesting things to
look at?

This level is clearly determined by natural selection. The result of
this process is an organism whose sensors are dedicated to special,
predetermined bands of the spectrum, exactly those that represent a
competitive advantage for survival. The knowledge embodied in these
kinds of sensing devices, and in the bands to which they are
sensitive, is the first level that we must call innate.
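
As a toy illustration of this first level (a sketch only; the
figures are rough textbook values for humans and the structure is my
own invention), the innate part could be nothing more than a fixed
table of which quantities are sensed and over which ranges, with
everything computed from those signals left to learning:

# Sketch: the "first level" of innateness as a fixed table of sensor
# bands, chosen (by evolution, not by the organism) before any
# learning. Values are rough illustrative figures only.

INNATE_SENSOR_BANDS = {
    "vision":  {"quantity": "light wavelength", "unit": "nm", "range": (400.0, 700.0)},
    "hearing": {"quantity": "air vibration",    "unit": "Hz", "range": (20.0, 18000.0)},
    "touch":   {"quantity": "temperature",      "unit": "C",  "range": (-10.0, 50.0)},
    # deliberately absent: microwaves, gravitational ripples, radio, ...
}

def can_sense(modality, value):
    """Return True if this organism's innate equipment responds at all."""
    band = INNATE_SENSOR_BANDS.get(modality)
    if band is None:
        return False
    lo, hi = band["range"]
    return lo <= value <= hi

print(can_sense("hearing", 440.0))      # True: an audible tone
print(can_sense("hearing", 1000000.0))  # False: far outside the band
print(can_sense("radar", 10e9))         # False: no such sensor at all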

Few would question this level of innateness. The issue gets muddled
when we ask: does it stop there?

For some (like Steven Pinker) this idea goes as deep as influencing
the human ability to acquire language. The claim is that selective
pressures devised mechanisms in our brain that are "prepared" to
acquire and use language effectively. For others (like Massimo
Piattelli-Palmarini) this goes even deeper, so as to make learning
almost insignificant. But this matter of language I shall leave to
another post.

Our question, then, is to determine with a bit more precision
where the innate ends and the learning begins.
This is obviously important to everybody looking for learning
algorithms, for it can point us to the aspects we should build into
the algorithms and the ones we may leave out. Considering the
organism as a purely "learning machine" may allow us to think about
theoretical foundations, but it will certainly slow down our path to
initial practical implementations.

The point here is that AI should not concern itself with the
creation of machines with logical and mathematical reasoning
abilities, as has been the intent since the early 1960s. This is the
route that has been explored for decades, and it has left us facing
almost insurmountable problems.

Neither should we concern ourselves (for now) with machines able to
survive and learn in inhospitable environments unknown to us. Nature
spent billions of years perfecting the necessary sensory devices
and associated circuitry to a useful degree. It is not a problem we
will be able to solve easily.

What I think AI should concern itself with during this "initial
phase" is the development of flexible learning machines that start
from a point which eases their implementation. In this regard, it
helps if we assume a somewhat contradictory set of goals: it seems
that these machines will have to understand (by means of learning)
some of our human concepts in order to communicate effectively with
us, but somehow they will also have to do that based on a different
architecture and implementation structure, perhaps different enough
to prevent them from acquiring some of our more "exquisite" visions
of the world.

Rather than giving any additional thoughts on the location of that
dividing line (learned/native), I'll leave the question open. This
is an obvious indication that I'm still searching for a
reasonable answer. What I know beforehand is that the correct
establishment of this line may profoundly affect our attempts
to construct artificially intelligent devices.

Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Reasonable Innate Knowledge
Date: 19 Apr 1999 00:00:00 GMT
Message-ID: <371b226e@news3.us.ibm.net>
References: <37177b04@news3.us.ibm.net> <3718BA7D.AE92D2E5@clickshop.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <3718BA7D.AE92D2E5@clickshop.com>...
>Sergio Navega wrote:
>
>> Our question, then, is to determine with a bit more precision
>> where the innate ends and the learning begins.
>
>>  What I know beforehand is that the correct
>> establishment of this line may profoundly affect our attempts
>> to construct artificially intelligent devices.
>
>Interesting questions, but I'm not so sure you will ever find a single
>precise boundary.  I'm reaching here, but I think that we would need to ask
>a question something like: "A precise boundary relative to what?".     If
>intelligence layers like an onion, there might be many many boundaries.
>Some of these boundaries might not even be contained inside the organism
>you are studying.  So what is the criteria that allows you to choose one
>boundary and designate it as *the* boundary?
>

Seth, I agree that a precise boundary may not be findable, but the
effort to act as if one can be found is worthwhile. I think that
what can help us toward that goal is to establish the point at which
learning starts to occur. Before that point, we have some
"limitations" of our organism that cannot be changed without
artificial equipment. One such limitation is the frequency spectrum
of our vision.

Obviously, I'm not interested here in the precision of that
boundary. I'm interested in the consequences of having a boundary
defined, for it will influence the starting point of our learning
algorithms.

>Another way to look at this might be to arbitrarily designated some
>boundary and precisely define it as a retina.  So that all mechanisms and
>patterns inside that boundary are *defined* as learned and not innate
>*relative* to that boundary.  You then move inside that defined boundary
>and see if you can define (or find) another such retina.
>

Yes, that can be a good starting point. But I also want to generalize
these results to encompass the more "high-level" knowledge. For example,
language knowledge, natural thing concepts, etc. This could, for instance,
influence our development of an artificial system by specifying a
"minimum", innate ontology.

That's one of the questions I'm trying to address: what is the
smallest ontology that our system should have in order to learn all the
remaining entries from experience?

Now for a more practical question, one related to your area. What could
be the smallest PDKB starting ontology such that all other MELD formulas
could be learned by interaction with a human operator? Well, I've got
to add that I don't know the answer to this question.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Reasonable Innate Knowledge
Date: 19 Apr 1999 00:00:00 GMT
Message-ID: <371b7821@news3.us.ibm.net>
References: <37177b04@news3.us.ibm.net> <3718BA7D.AE92D2E5@clickshop.com> <371b226e@news3.us.ibm.net> <371B67C1.8445D0D1@clickshop.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Seth Russell wrote in message <371B67C1.8445D0D1@clickshop.com>...
>Sergio Navega wrote:
>
>> Seth Russell wrote in message <3718BA7D.AE92D2E5@clickshop.com>...
>> >Sergio Navega wrote:
>> >
>> >> Our question, then, is to determine with a bit more precision
>> >> where the innate ends and the learning begins.
>
>> >Another way to look at this might be to arbitrarily designated some
>> >boundary and precisely define it as a retina.  So that all mechanisms and
>> >patterns inside that boundary are *defined* as learned and not innate
>> >*relative* to that boundary.  You then move inside that defined boundary
>> >and see if you can define (or find) another such retina.
>>
>> Yes, that can be a good starting point. But I also want to generalize
>> these results to encompass the more "high-level" knowledge. For example,
>> language knowledge, natural thing concepts, etc. This could, for instance,
>> influence our development of an artificial system by specifying a
>> "minimum", innate ontology.
>
>I think there is a way to generalize our concept of "retina" to apply to any
>interface from the highest ontological concepts down to the lowest sensors. I
>think that structure (which I am calling a retina) is simply a formal system
>less the ontology and the logical commitments.  Let's start with Alonzo Church's
>definition of "the logistic system":
>
>   "... the primitive symbols of a logistic system, the rules by which certain
>formulas are determined as well-formed (following Carnap let us call them the
>formation rules of the system), [the rules of inference, and the axioms of the
>system] ..."
>

We have here a big fork ahead of us. One side leads us to the
traditional way of seeing AI, the one which treats the output of
sensors as symbols belonging to a formal system. This has been the
subject of some of Anders Weinstein's recent posts and draws upon
some ideas of Pylyshyn. It is the traditional way of seeing
intelligence, along the lines of Simon and Newell's physical symbol
system hypothesis.

But what if this hypothesis is wrong? (And so far it is just a
hypothesis; no neuroscientist today can show significant evidence
that this is the way our eyes and brain work.)

The second side of that fork is Bill Modlin's. It is the way of
interpreting the output of the sensors as signals that carry not
only information but also a lot of noise. The information carried by
these signals is not apparent from a first-level analysis. It is
necessary to derive it through successive applications of
statistical processes. The outcome of such an analysis is the set of
invariant, high-level aspects of the world in which the organism is
immersed.
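
A toy Python sketch of what "successive applications of statistical
processes" could mean (my own illustration, not Modlin's actual
proposal): a constant pattern is buried in noise on every
presentation, and simple accumulation of statistics over many
presentations makes the invariant visible where no single
presentation does.

import random

# Toy sketch: an invariant pattern hidden in noisy sensor signals.
# Averaging over many presentations (a crude statistical process)
# recovers the invariant that no single presentation reveals clearly.

random.seed(0)
INVARIANT = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]  # stable structure in the world

def noisy_presentation():
    """One sensory frame: the invariant plus heavy random noise."""
    return [x + random.gauss(0.0, 1.0) for x in INVARIANT]

def running_average(n_frames):
    sums = [0.0] * len(INVARIANT)
    for _ in range(n_frames):
        frame = noisy_presentation()
        sums = [s + f for s, f in zip(sums, frame)]
    return [s / n_frames for s in sums]

print([round(v, 2) for v in running_average(1)])     # barely resembles it
print([round(v, 2) for v in running_average(5000)])  # the invariant emerges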

My suspicion is that what Pylyshyn (and colleagues) are modeling is
just the "highest level" of this structure. One question for the
symbolicist guys, then, is to think about the extent to which these
high levels (taken in isolation) are adequate for the goal of
obtaining intelligent behavior in an uncertain and vague world. The
past history of AI shows that this is problematic.

What I propose is a third path at that fork, an alternative which
looks for another level to model in computers. This level would
involve some statistical aspects but would also use symbolic
techniques. It would rely heavily on learning and would follow the
same organization that we appear to have in our brains: the first
levels (closer to sensory inputs) are noisy and statistical, and the
higher levels are essentially symbolic.
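
To make the shape of that third path concrete, here is a
deliberately small sketch (my own illustration; the thresholds,
symbols and rules are invented): a statistical front end turns noisy
sensor values into discrete tokens, and a symbolic layer reasons
only over those tokens.

import random
import statistics

# Sketch of a two-level organization: a noisy/statistical front end
# feeding an essentially symbolic upper level. Thresholds, symbols
# and rules below are invented for illustration only.

random.seed(1)

def sense_brightness(true_value, n_samples=50):
    """Statistical level: estimate a noisy quantity by sampling."""
    samples = [true_value + random.gauss(0.0, 0.3) for _ in range(n_samples)]
    return statistics.mean(samples)

def symbolize(brightness_estimate):
    """Boundary between levels: map a continuous estimate to a symbol."""
    return "BRIGHT" if brightness_estimate > 0.5 else "DARK"

RULES = {
    # Symbolic level: rules mention only symbols, never raw signals.
    "BRIGHT": "it is probably daytime",
    "DARK": "it is probably nighttime",
}

estimate = sense_brightness(0.8)
token = symbolize(estimate)
print(token, "->", RULES[token])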

>I would cut out everything in the square brackets and define the rest as a
>retina.  In other words we can define a retina anywhere we can identify a
>vocabulary and a set of syntax rules on that vocabulary.  We can do that at the
>real retina of the human eye and we can do that on a Cyc like ontology.  It is
>also nice and convenient that we can use that same definition to define a
>language,  any language.   Can't we make your concept of the boundary between
>the "innate" and the "learned" precisely any such retina that we can discover,
>design, or construct?
>

This is an idea to explore, although then I wouldn't be willing to
use retinas as our reference. Retinas, ears and olfactory bulbs are
all things that interact directly with the real world. Their outputs
carry a lot of that noisy influence, something that would render any
purely symbolic system worthless. If the AI system must interact
directly with the real world, there's no way to do it using only
symbolic techniques (well, it may be very, very hard).

>> That's one of the questions I'm trying to address: what is the
>> smallest ontology that our system should have in order to learn all the
>> remaining entries from experience?
>
>I see two types of ai surfaces, retinas (input) and effectors (output). Our ai
>system is trapped inside some set of those surfaces.  We could then say that the
>smallest ontology our system should have would be the syntax rules and the
>vocabulary of the surfaces that enclose it and of course some set of  babbling
>(learning) strategies to get things started.
>

This is something that I see as realizable with conventional
symbolic systems. What is debatable is the breadth of performance
that such a system will be able to produce. Much has been said along
the lines that such systems will fail even on the simplest
perceptual tasks.

>> Now for a more practical question, one related to your area. What could
>> be the smallest PDKB starting ontology such that all other MELD formulas
>> could be learned by interaction with a human operator?
>
>Well I don't think human operators should ever have to talk in MELD type
>formulas to teach an ontology.  That kind of spoon feeding, as I think it
>was you who pointed out to me, will not work to produce a robust AI.

Yes, that's right, but I was not suggesting that. I was suggesting
that the system itself derive its MELD formulas using some sort of
interaction with human operators. This interaction could start with
a simple subset of English, enough to "bootstrap" the learning of
more complex structures. More about this follows.

> But
>people understand natural language and are motivated to talk to anybody for
>whatever reason in it.  So our minimal retina would be the vocabulary of a
>natural language and its syntax rules.  I think we could easily start with
>WordNet and any of the good NLP parsers that are now available.
>

I like WordNet very much, but I don't see a way of putting it
directly into the core of such a system. The system must be able to
learn each concept in WordNet by interaction with an operator. A
good way to see this process is to try to "teach" the system a
concept such as "boat". Each thing you say to the system will reveal
other concepts that the system does not know. If you say "boats are
vehicles that float in the water", the system will ask you what
water, vehicle and float are. You'd have to define these concepts in
other terms until you reach what I call *innate artificial concepts*.
This is part of an idea that I call "artificial symbol grounding",
and it could (I said "could") be a good alternative for developing a
symbolic system.

I'm evaluating the effect of having primitive concepts defined this way,
through a series of "innate" anchors. My idea is that this will be enough
to allow the construction of a grounded symbolic system, one that could
support knowledge engineering systems with much more confidence than
today's alternatives.
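
Here is a minimal Python sketch of that teaching loop (the anchors
and concepts are invented placeholders, not a proposed ontology):
every concept is defined in terms of other concepts, the system asks
about any term it cannot yet ground, and definitions bottom out in a
small set of innate anchors.

# Sketch of the "artificial symbol grounding" loop described above.
# Concepts and anchors here are illustrative placeholders.

INNATE_ANCHORS = {"object", "liquid", "move", "surface", "up"}

definitions = {}   # concept -> list of concepts it is defined with

def teach(concept, defined_in_terms_of):
    """Operator tells the system what a concept means; the system
    reports which of those terms it still cannot ground."""
    definitions[concept] = list(defined_in_terms_of)
    return [t for t in defined_in_terms_of
            if t not in definitions and t not in INNATE_ANCHORS]

def is_grounded(concept, seen=None):
    """A concept is grounded if every term it uses eventually
    reaches an innate anchor."""
    if seen is None:
        seen = set()
    if concept in INNATE_ANCHORS:
        return True
    if concept not in definitions or concept in seen:
        return False
    seen.add(concept)
    return all(is_grounded(t, seen) for t in definitions[concept])

print(teach("boat", ["vehicle", "float", "water"]))  # asks about all three
print(teach("water", ["liquid"]))                    # []
print(teach("float", ["object", "surface", "liquid", "up"]))  # []
print(teach("vehicle", ["object", "move"]))          # []
print(is_grounded("boat"))                           # True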

>Just to set the record straight, the PDKB is not *my* project.  It is a mailing
>list started by David Whitten under the premise that CyCorp will be releasing
>some more of its common knowledge to the public and that the public can extend
>it.  Whether that premise will become true sometime in the future is anybody's
>guess.
>

Well, that's right, but I was referring to the PDKB as more "yours"
than mine. One day, who knows, I may be tempted to join forces with
you guys. We've got to do something practical, or else we'll keep
chasing our tails over theoretical aspects that will not put us
closer to the goal of having an intelligent companion. This is one
of the reasons I give kudos to the PDKB.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Reasonable Innate Knowledge
Date: 19 Apr 1999 00:00:00 GMT
Message-ID: <371b2278@news3.us.ibm.net>
References: <37177b04@news3.us.ibm.net> <371bd926.2122125@news.demon.co.uk>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Oliver Sparrow wrote in message <371bd926.2122125@news.demon.co.uk>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>>Few would question this level of innateness. The issue gets muddled
>>when we ask: does it stop there?
>
>I think that one needs to be careful of categories. "Learning" is just
>a word, meaning the acquisition of lasting changes that are in some
>way adaptive. The genome, the social interaction of the species and
>the individual all learn, in this sense, and this learning changes
>what happens next. There are all sorts of points in asking what kinds
>of systems are needed by a particular aggregate under consideration for
>it to self-modify adaptively. But see Stewart Brand's 'How
>Buildings Learn' for an example.

I agree that additional focus on what is meant by learning is indeed
necessary. In particular, social interaction is something that can
fit the bill for learning over a wide time span (a community may
"learn" the most adequate forms of social interaction over centuries
of experience).

I was trying to look at one single organism and its world. This kind
of organism can change over time, under evolutionary
pressures, which could be seen as a kind of "learning". But a
specific organism (which is what AI should intend to build, initially)
will have only the kind of learning in which the organism is
put in an environment and alters its knowledge as the result of
interacting with it. This organism has innate constraints of
the sort we're already well aware of, and it will have learned
things as it lives through its experiences. This latter aspect is
the one I'm trying to focus on. Which things did nature leave for
our personal experience to learn, and which ones do we not need to
learn because they are built in? Knowing more about this can give us
a clue about the "knowledge" that we should put inside an AI system
*beforehand*. It is much less than what the CYC guys think, but it
is more than what the connectionist guys are thinking. This is the
concept that I'm trying to improve.

Your remark on "How Buildings Learn" is interesting. I'll try to
find that book.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: On Reasonable Innate Knowledge
Date: 20 Apr 1999 00:00:00 GMT
Message-ID: <371c7778@news3.us.ibm.net>
References: <37177b04@news3.us.ibm.net> <371bd926.2122125@news.demon.co.uk> <371b2278@news3.us.ibm.net> <371f35b0.4449416@news.demon.co.uk>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Oliver Sparrow wrote in message <371f35b0.4449416@news.demon.co.uk>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>>But a
>>specific organism (which is what AI should intend to build, initially)
>>will have only the kind of learning in which the organism is
>>put in an environment and alters its knowledge as the result of
>>interacting with it.
>
>A lot of people - such as Hofstadter, in his latest book (Fluid
>Concepts and Creative Analogies), but also Bill Calvin, sometime
>contributor to c.a.p - see the mind, or a fruitful AI, as a
>nest of competing structures, which battle for interpretation.
>Copycat, Hofstadter's system, fires off hundreds of random agents which
>try variations on a theme upon a problem, with those which make
>progress passing on their characteristics to subsequent generations.
>Genetic algorithms are also used in simulations of protein folding,
>whereby archetype solutions to particular types of problem (identified
>probabilistically) battle with each other to solve the problem, with
>the best solution often being a hybrid of several approaches.
>Much the same may go on in our heads as we strive to perceive, as we
>reach for something, as we choose a word. Lots of rival solutions do
>unarmed combat; and the result is Buddha in meditation.

I find this very appropriate. In a way, it seems to suggest an
interesting thing: the separation of perception from "thinking".
Although both are intrinsically connected, it is often clear
that perceptual refinement is a predominant activity of
the learning phase and that competition among several fronts is
an activity that appears to be more relevant during thinking
(which can be seen as the process of producing an answer to a
new problem).

When Hofstadter proposed Copycat, I don't think he wanted to do
something biologically plausible, although I think he got good
results in that regard. At the other end of the spectrum are
Calvin's theories, essentially motivated by neurological
plausibility. It is interesting to find common points between
the two approaches.
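
Just to have a picture of what "lots of rival solutions doing
unarmed combat" could look like computationally, here is a generic
toy sketch in Python (not Copycat and not Calvin's model; target,
alphabet and parameters are invented): candidate answers are scored
against a problem, the better ones survive, and their mutated copies
compete in the next round.

import random

# Toy sketch of competition among candidate solutions: random guesses
# at a target string are scored, and the better ones survive and
# spawn mutated copies. A generic illustration only.

random.seed(2)
TARGET = "abcde"
LETTERS = "abcdefghij"

def score(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(LETTERS) + candidate[i + 1:]

population = ["".join(random.choice(LETTERS) for _ in TARGET)
              for _ in range(20)]
for generation in range(200):
    population.sort(key=score, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]
    if score(population[0]) == len(TARGET):
        break

print(generation, population[0])  # converges to "abcde" after a while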

These lines of exploration are very important to me and reflect
a kind of thinking that I find missing in other approaches.
Some approaches trust too much in the "magical emergence" of
these aspects when a low-level connectionist architecture
evolves from scratch. Although that may happen, I think it
is improbable. There are too many possibilities for this
natural emergence, and the great majority of them are very
far from what we want (intelligence).

That's why I insist on also thinking about higher-level
architectures. After all, our brain can be usefully analyzed
this way. When we discover that the hippocampus is a
"pattern associator", we are discovering an exquisite
architectural structure in our brain, something that can
give us a lot of clues about the way we should be building
our artificial mechanisms.
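
For readers who haven't met the term, here is a bare-bones Python
sketch of a pattern associator in the generic, textbook sense (not a
model of the hippocampus; patterns and thresholds are invented):
Hebbian outer-product weights store pairs of binary patterns, and
even a degraded cue retrieves the associated output.

# Sketch of a generic "pattern associator": Hebbian outer-product
# learning stores pairs of binary patterns, and a noisy cue still
# retrieves the associated output.

def train(pairs, n_in, n_out):
    """Accumulate Hebbian weights w[j][i] += out[j] * in[i]."""
    w = [[0.0] * n_in for _ in range(n_out)]
    for inp, out in pairs:
        for j in range(n_out):
            for i in range(n_in):
                w[j][i] += out[j] * inp[i]
    return w

def recall(w, cue, threshold):
    """Retrieve an output pattern from a (possibly degraded) cue.
    The threshold is chosen by hand for this tiny example."""
    return [1 if sum(w[j][i] * cue[i] for i in range(len(cue))) > threshold
            else 0
            for j in range(len(w))]

pairs = [
    ([1, 0, 1, 0], [1, 0]),   # pattern A -> response A
    ([0, 1, 0, 1], [0, 1]),   # pattern B -> response B
]
w = train(pairs, n_in=4, n_out=2)
print(recall(w, [1, 0, 1, 0], threshold=1))  # clean cue A  -> [1, 0]
print(recall(w, [1, 0, 0, 0], threshold=0))  # degraded cue -> still [1, 0]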

Summing up, AI is said to be multidisciplinary, but there are
a lot of AI practitioners who refuse to study neuroscience
or cognitive psychology, insisting on purely mathematical and
logical approaches. What is clear in my mind is that the problem of
intelligence is far more complex than we can imagine at first, and
we've got to draw as much inspiration from related areas as
possible.

Regards,
Sergio Navega.

