Selected Newsgroup Message

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 23 Mar 1999 00:00:00 GMT
Message-ID: <36f79fbf@news3.us.ibm.net>
References: <7d0g12$nmi@ux.cs.niu.edu> <7d68o0$ka8$1@usenet01.srv.cis.pitt.edu> <7d74ip$m2@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 23 Mar 1999 14:05:51 GMT, 200.229.240.123
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7d74ip$m2@ux.cs.niu.edu>...
>andersw+@pitt.edu (Anders N Weinstein) writes:
>
>>So you will have to explain what more is involved in your conception of
>>"teaching" if it is not the ordinary concept, such that teaching does
>>not in fact take place when "x taught y to play chess" is in order
>>in ordinary language.
>
>I suggest we treat the word "teaching" as metaphorical.  The teacher
>carries out certain activities, which might be very helpful in aiding
>the student's learning.  But what the teacher does is neither
>necessary nor sufficient for the student to learn.  Moreover what the
>student actually learns may not be what the teacher intended -- the
>student, having learned from the teacher, might then argue with the
>teacher, claiming that the teacher actually has all of the facts
>wrong.
>

This is one of those paragraphs that should be put in a frame and
nailed to our walls. Having this clear in one's mind is a good step
toward understanding the importance of perception in human cognition
and a good indication of what learning really means. I could even say
that as long as AI people don't understand what's behind this text,
we won't have intelligent computers.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <36ed7e0a@news3.us.ibm.net>
References: <7ch9rg$a2g$1@nnrp1.dejanews.com> <36ed2b15@news3.us.ibm.net> <7cjkvu$9q4@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Mar 1999 21:39:22 GMT, 129.37.182.6
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7cjkvu$9q4@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>Lets take Isaac Newton, for example. When he proposed his law:
>>                         f = m . a
>>he was reasoning within a formal model. Anything he concluded about this
>>was something that could be verified experimentally.
>
>I am going to disagree with this.
>
>In my view, Newton was not working within a formal model.  Rather, he
>was proposing a formal model to be used in future work, and was
>proposing how that formal model was to be applied.  In particular, he
>propose that f, m, and a are the quantities that need to be measured,
>and that where we can measure any two of them, we can use f=ma to
>determine the other.
>
>We have to keep in mind that mass (m) was a relatively new concept.
>It grew out of Galileo's work with inertia (resistance to
>acceleration).  Before Galileo's time, there was weight, but there
>was no concept of mass distinct from that of weight.
>
>We should see Newton as completing what had begun under Galileo, and
>that was the construction of a more effective way of formalizing
>reality than had been used previously.
>

I agree. My intention with that example was to show that the kind of
concept Newton introduced at that time kept accuracy as an
"obtainable ideal". Then, when quantum physics came, new concepts
were introduced, but also the point that arbitrary accuracy was no
longer obtainable, due to indeterminacy. HLUTs *need* that accuracy
to be reasonable.

>Here is my rough outline of how we relate to the world:
>
>  Step 1:  Generate a system for mapping raw reality into a suitably
>    chosen formal model.
>  Step 2:  Use logic and/or computation within the formal model, to
>    give an answer within that model.
>  Step 3:  Interpret the answer with respect to reality.
>
>Those such as Daryl and Jim, who support the HLUT claims, take the
>position that steps 1 and 3 are essentially trivial, although they
>might still be hard in practice.  As a result they conclude that
>everything important to intelligence/cognition is in step 2.
>
>By contrast, I take the position that for most of human decision
>making, it is step 2 that is trivial, and that most of the
>intelligence goes into steps 1 and 3.
>

I subscribe to practically the same model, perhaps with different
wording and with some details added to step 1. That additional detail
is the establishment of a method to obtain a candidate list of formal
models to choose from or to test. Testing of the candidate models can
be done "inside" one's head or through interaction (tests in the
world); a toy sketch of what I mean follows below. You will certainly
jump at my jugular now, but that additional step I'm proposing is
very dependent on induction.
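
Here is a deliberately tiny sketch, in Python, of the "candidate list
plus testing" step I have in mind (the candidate formulas and the
"measurements" are invented, purely for illustration):

----
# Toy "generate and test": invented candidate models, invented observations.

candidates = {
    "f = m * a":    lambda m, a: m * a,
    "f = m + a":    lambda m, a: m + a,
    "f = m * a**2": lambda m, a: m * a ** 2,
}

observations = [(2.0, 3.0, 6.0), (1.5, 4.0, 6.0), (3.0, 2.0, 6.0)]  # (m, a, f) "measured"

def total_error(model):
    return sum(abs(model(m, a) - f) for m, a, f in observations)

best = min(candidates, key=lambda name: total_error(candidates[name]))
print(best)   # "f = m * a" fits these invented observations best
----

The inductive part, of course, is in where the candidates come from,
which the sketch simply assumes.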

>My argument against the HLUT is that it leaves out steps 1 and 3,
>relegating them as something to be done by fixed sensory/motor
>hardware that we need not concern ourselves with.  In my view, they
>are thereby leaving out most of what is important to intelligence and
>learning.
>

I agree. It is easy to see that this is the reason why we're having
so much difficulty in automating a task that is absolutely
straightforward (for humans!): visual object recognition. Natural
language processing is, in my opinion, the same case, followed by
every other cognitive function we can think of.

Regards,
Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <7ck1ie$ahb@ux.cs.niu.edu>
References: <7cjkvu$9q4@ux.cs.niu.edu> <36ed7e0a@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:
>Neil Rickert wrote in message <7cjkvu$9q4@ux.cs.niu.edu>...

>>Here is my rough outline of how we relate to the world:

>>  Step 1:  Generate a system for mapping raw reality into a suitably
>>    chosen formal model.
>>  Step 2:  Use logic and/or computation within the formal model, to
>>    give an answer within that model.
>>  Step 3:  Interpret the answer with respect to reality.

>>Those such as Daryl and Jim, who support the HLUT claims, take the
>>position that steps 1 and 3 are essentially trivial, although they
>>might still be hard in practice.  As a result they conclude that
>>everything important to intelligence/cognition is in step 2.

>>By contrast, I take the position that for most of human decision
>>making, it is step 2 that is trivial, and that most of the
>>intelligence goes into steps 1 and 3.

>I practically subscribe to a similar model, perhaps with different
>wording and with just some details added to step 1. That additional
>step is the establishment of a method to obtain a candidate list
>of formal models to choose from or to test. Testing of the candidate
>models can be done "inside" one's head or through interaction (test
>in the world). You will certainly jump over my jugular now, but
>that additional step that I'm proposing is very dependent on induction.

It depends on what you mean by "induction".  I have no doubt that we
make use of empirical experience.  My criticism of 'induction' is
that it specifies a particular way in which experience should be
used.

As usually stated, induction starts with concepts that are already
well formed, and specific experiences with those concepts are used to
judge the truth of a general proposition connecting those concepts.
As I see it, step 1 may require adjusting concepts to better fit
experience.  If that is so, then the experience with the old concepts
could not give inductive evidence about the truth of propositions
that use the new concepts, although it might give evidence about the
former conceptualization which suggests the changes we then make in
our concepts.  Then, after adjustment of the concepts, it might turn
out that some propositions are now logically necessary truths when
applied to the newly adjusted concepts.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36ee6c84@news3.us.ibm.net>
References: <7cjkvu$9q4@ux.cs.niu.edu> <36ed7e0a@news3.us.ibm.net> <7ck1ie$ahb@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 16 Mar 1999 14:36:52 GMT, 166.72.21.154
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7ck1ie$ahb@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>>Neil Rickert wrote in message <7cjkvu$9q4@ux.cs.niu.edu>...
>
>>>Here is my rough outline of how we relate to the world:
>
>>>  Step 1:  Generate a system for mapping raw reality into a suitably
>>>    chosen formal model.
>>>  Step 2:  Use logic and/or computation within the formal model, to
>>>    give an answer within that model.
>>>  Step 3:  Interpret the answer with respect to reality.
>
>>>Those such as Daryl and Jim, who support the HLUT claims, take the
>>>position that steps 1 and 3 are essentially trivial, although they
>>>might still be hard in practice.  As a result they conclude that
>>>everything important to intelligence/cognition is in step 2.
>
>>>By contrast, I take the position that for most of human decision
>>>making, it is step 2 that is trivial, and that most of the
>>>intelligence goes into steps 1 and 3.
>
>>I practically subscribe to a similar model, perhaps with different
>>wording and with just some details added to step 1. That additional
>>step is the establishment of a method to obtain a candidate list
>>of formal models to choose from or to test. Testing of the candidate
>>models can be done "inside" one's head or through interaction (test
>>in the world). You will certainly jump over my jugular now, but
>>that additional step that I'm proposing is very dependent on induction.
>
>It depends on what you mean by "induction".  I have no doubt that we
>make use of empirical experience.  My criticism of 'induction', is
>that it specifies a particular way in which experience should be
>used.
>
>As usually stated, induction starts with concepts that are already
>well formed, and specific experiences with those concepts is used to
>judge the truth of a general proposition connecting those concepts.
>As I see it, step 1 may require adjusting concepts to better fit
>experience.  If that is so, then the experience with the old concepts
>could not give inductive evidence about the truth of propositions
>that use the new concepts, although it might give evidence about the
>former conceptualization which suggest the changes we then make in
>our concepts.  Then, after adjustment of the concepts, it might turn
>out that some propositions are now logically necessary truths when
>applied to the newly adjusted concepts.
>

I guess our different ways of looking at induction are what explain
our different visions of its importance. Induction as a
philosophical, high-level construct is subject to so many problems
that I can freely accept most of them as valid criticisms. But what I
propose is induction at a low level, helping in the process of
discovering potentially useful things. At this level, I see induction
as absolutely indispensable.

Eventually, I should be able to present a good example of what
I mean by this "low level induction". For now, I'll have to make
do with this one:

----
Humanization of a putative process in the mind of a child:

I see now that object that my mom calls a cup. It has all sorts
of edges and round borders similar to the ones I've seen before.
It has an open side, which is where it receives liquid (by the way
liquids can be coffee, milk, water). All cups I've seen so far
had that open side. All cups I've seen so far were used primarily
to carry some liquid.

That weird guy is asking me if a salad bowl is a cup. Well, to
the extent of everything I've seen so far, I think I can say
that it is pretty much a cup. It has an open side and I may
carry liquids in it.
----

I have seen no computer program capable of such simple and childish
inductive reasoning.
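
Just to make the contrast concrete, here is a deliberately naive
sketch, in Python, of the surface of that reasoning (the feature
names are invented; nothing here is meant as a model of the child,
only as an illustration of how little the toy version captures):

----
# Toy sketch of the "childish" induction above (invented feature names).

def common_properties(exemplars):
    """Keep only the properties that held for every exemplar seen so far."""
    common = set(exemplars[0])
    for e in exemplars[1:]:
        common &= set(e)
    return common

cups_seen = [
    {"has_open_side", "carries_liquid", "round_border", "has_handle"},
    {"has_open_side", "carries_liquid", "round_border"},
]
cup_concept = common_properties(cups_seen)

salad_bowl = {"has_open_side", "carries_liquid", "round_border", "very_large"}
print("Is it a cup?", cup_concept <= salad_bowl)   # True: "pretty much a cup"
----

The interesting part is everything the toy leaves out: where the
features come from, and how the concept gets revised later.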

Regards,
Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: What I Think HLUTs Mean
Date: 19 Mar 1999 00:00:00 GMT
Message-ID: <7cv5ru$mne@ux.cs.niu.edu>
References: <7ck1ie$ahb@ux.cs.niu.edu> <36ee6c84@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:

>I guess our way of looking at induction is what explains our different
>visions in its importance. Induction as a philosophical, high-level
>construct is subject to so many problems, that I can accept freely
>that most of them as valid criticisms. But what I propose is
>induction at a low level, helping in the process of discovery of
>potentially useful things. At this level, I see induction as
>absolutely indispensable.

That there is an ability to discover potentially useful things, I
have no doubt.  What I am questioning, is whether "induction" is the
proper term for what happens when such discoveries occur.

>Eventually, I should be able to present a good example of what
>I mean by this "low level induction". For now, I'll have to keep
>with this one:

>----
>Humanization of a putative process in the mind of a child:

>I see now that object that my mom calls a cup. It has all sorts
>of edges and round borders similar to the ones I've seen before.
>It has an open side, which is where it receives liquid (by the way
>liquids can be coffee, milk, water). All cups I've seen so far
>had that open side. All cups I've seen so far were used primarily
>to carry some liquid.

>That weird guy is asking me if a salad bowl is a cup. Well, to
>the extent of everything I've seen so far, I think I can say
>that it is pretty much a cup. It has an open side and I may
>carry liquids in it.

Your little story does two things.  (a) it describes what might be the
observed behavior of a child in a hypothetical learning situation.
(b) it suggests what the child's thoughts were in that situation.  I
agree that (a) is a reasonable illustration of what we observe in a
child's learning.  I disagree with (b) -- I think you have ascribed
adult-like thoughts to an immature child.

Your description (b) suggests that the mechanism is something similar
to the classical definition of induction.  I think that what is going
on internally is significantly different from what you suggest.  I
would say that the child is finding ways of categorizing the world,
and that in his formation of categories, he is influenced by adult
behavior (use of the word "cup", for example).  He forms a category
which happens to include salad bowls as well as cups, and then the
child straightforwardly applies the word "cup" to things within this
category.

I doubt that there is any conscious reasoning of the type you
suggested.  If a parent corrects the child, and says that it is a
salad bowl and not a cup, the child might well go on calling it a cup
anyway, although he will later stop calling it a cup after further
refinement of his categories.

>----

>I have seen no computer program capable of such a simple and
>childish inductive reasoning.

I agree that we do not have convincing demonstrations of machine
learning that are anything near the effectiveness of a child's
learning.  My only concern is whether the term "induction"
misdescribes the processes that are going on in the child's brain
and/or mind.

From: "Sergio Navega" <snavega@ibm.net>
Subject: What is the place for Induction?
Date: 20 Mar 1999 00:00:00 GMT
Message-ID: <36f3b9cc@news3.us.ibm.net>
References: <7ck1ie$ahb@ux.cs.niu.edu> <36ee6c84@news3.us.ibm.net> <7cv5ru$mne@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 20 Mar 1999 15:07:56 GMT, 166.72.29.87
Organization: SilWis
Newsgroups: comp.ai.philosophy

I changed the title of the post because it had diverted from the
HLUT thing.

Neil Rickert wrote in message <7cv5ru$mne@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>I guess our way of looking at induction is what explains our different
>>visions in its importance. Induction as a philosophical, high-level
>>construct is subject to so many problems, that I can accept freely
>>that most of them as valid criticisms. But what I propose is
>>induction at a low level, helping in the process of discovery of
>>potentially useful things. At this level, I see induction as
>>absolutely indispensable.
>
>That there is an ability to discover potentially useful things, I
>have no doubt.  What I am questioning, is whether "induction" is the
>proper term for what happens when such discoveries occur.
>

Maybe using the word "induction" is counterproductive. This word
comes preloaded with so many negative aspects that it would be
wise to use another. Unfortunately, I haven't found a better
one (I'm open to suggestions).

>>Eventually, I should be able to present a good example of what
>>I mean by this "low level induction". For now, I'll have to keep
>>with this one:
>
>>----
>>Humanization of a putative process in the mind of a child:
>
>>I see now that object that my mom calls a cup. It has all sorts
>>of edges and round borders similar to the ones I've seen before.
>>It has an open side, which is where it receives liquid (by the way
>>liquids can be coffee, milk, water). All cups I've seen so far
>>had that open side. All cups I've seen so far were used primarily
>>to carry some liquid.
>
>>That weird guy is asking me if a salad bowl is a cup. Well, to
>>the extent of everything I've seen so far, I think I can say
>>that it is pretty much a cup. It has an open side and I may
>>carry liquids in it.
>
>Your little story does two thing.  (a) it describes what might be the
>observed behavior of a child in a hypothetical learning situation.
>(b) it suggests what the child's thoughts were in that situation.  I
>agree that (a) is a reasonable illustration of what we observe with a
>child's learning.  I disagree with (b) -- I think you have ascribed
>adult-like thoughts to an immature child.
>

That's correct, I made up an "adult-like" description of processes
that, according to my hypothesis, are totally unconscious in the
child's mind and so can't have such a high-level description. I
contrived that description to ease my explanation, but I may have
given the impression that this was the way I thought the process
actually happened, when in fact it is not.

>Your description (b) suggests that the mechanism is something similar
>to the classical definition of induction.  I think that what is going
>on internally is significantly different from what you suggest.  I
>would say that the child is finding ways of categorizing the world,
>and that in his formation of categories, he is influenced by adult
>behavior (use of the word "cup", for example).  He forms a category
>which happens to include salad bowls as well as cups, and then the
>child straightforwardly applies the word "cup" to things within this
>category.
>

I agree entirely. But I add one item to the process: to be able to
be influenced by adult suggestion (such as names of objects), the
child must recognize something in the exemplars being presented that
can receive a "name". I can't name a thing if I don't have a way to
perceive it (by looking at the world or at my own thoughts).

This can explain in part why children are prone to errors when
trying to match things precisely to the categories that words name.
They will get it right only when they gain enough perceptual
discrimination to perceive the invariant (and often high-level)
aspects of the object.

A wrong hypothesis on the part of the child will result in a wrong
discrimination. Fortunately, this mechanism evolves easily and
improves with time, and so does the child's perception.

>I doubt that there is any conscious reasoning of the type you
>suggested.  If a parent corrects the child, and says that it is a
>salad bowl and not a cup, the child might well go on calling it a cup
>anyway, although he will later stop calling it a cup after further
>refinement of his categories.
>

I agree entirely (that seems to be the sort of knowledge that only
parents have :-) Because of this, I propose another requirement for
being a good AI scientist: have a child.

>
>>I have seen no computer program capable of such a simple and
>>childish inductive reasoning.
>
>I agree that we do not have convincing demonstrations of machine
>learning that are anything near the effectiveness of a child's
>learning.  My only concern is whether the term "induction"
>misdescribes the processes that are going on in the child's brain
>and/or mind.
>

I'll accept that the word induction may be inadequate to convey
the meaning I'm trying to concoct. I'll try to come up with another
example, this time at a lower level (this is another thing I find
important: that this process occurs at every level of our cognition,
in the very same way). I must warn you that you have just started my
rambling-generation process.

This is my hypothetical explanation for the way I see the appearance
of the "ground" in which cognition is planted (what follows is
incomplete, because I don't mention anything about motor processes,
to keep the post short).

Suppose that a baby is exposed to the world through vision just
after being born. My "story" for the processes that happen at
that time goes like this:

The baby does not have any kind of "knowledge". What she has is
a series of innate detectors able to process visual inputs and
automatically derive (recognize) some primitive aspects, such as
lines (edges), movement, uniform color areas, etc. The big question
is: how does the concept of a "cup" appear from so few low-level
aspects? How does it evolve? I have a hunch.

I think that the only way to explain this is through inductive
generalization. The baby starts to see things for which a range of
her edge detectors, at some specific moment, fire all together.
Let's assume that this happens when she is looking at a cup and
those edges are the vertical lines comprising one of the sides
of the cup. When she sees the window of her room, her brain will
again notice the firing of another bunch of vertical edge
detectors (among a zillion other things). This is where induction
takes place: her brain should expect to see other vertical edges in
the future, and because of this repetition, all the neural circuitry
that handles vertical edges becomes more "specialized" in their
several forms and lengths.

When she sees those edges often enough, then it is time to *keep*
that thing (a bunch of vertical edges firing together) as something
worth remembering, and so she gives it a "name" (footnote: here I
must clarify what I mean by a "name": it is a code, like the firing
of one neuron that only fires when such an edge appears or,
apparently more likely, an ensemble of neurons that oscillates
synchronously when such an edge appears; anyway, it could be
summarized as an internal symbol like "sym2432" for computational
purposes).
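
If it helps, here is a rough sketch, in Python, of what I mean by
giving a recurring bundle of co-firing detectors an internal "name"
(the threshold and the naming scheme are arbitrary, for illustration
only):

----
# Sketch: a bundle of detectors that fires together often enough gets a "name".

from collections import defaultdict

seen = defaultdict(int)   # how many times each bundle has fired together
symbols = {}              # bundle -> internal code ("sym2432"-style)
KEEP_AFTER = 3            # arbitrary: repetitions needed to be worth remembering

def observe(active_detectors):
    bundle = frozenset(active_detectors)
    seen[bundle] += 1
    if seen[bundle] >= KEEP_AFTER and bundle not in symbols:
        symbols[bundle] = "sym%04d" % (2432 + len(symbols))
    return symbols.get(bundle)

for _ in range(3):
    observe({"vertical_edge_left", "vertical_edge_right"})
print(symbols)            # the recurring pair now has its own internal symbol
----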

Lots of things get perceived that way and inductive aspects get
noticed (for instance, it is common to see a vertical edge meeting
a horizontal edge somewhere; it is common to see diagonal lines
meeting a vertical or horizontal line; a window's perimeter
constitutes a rectangle, a figure with some invariant properties
found in a number of other objects; etc.).

Now, some months later, the child hears the word "cup" referring
to that object. When she sees that object, a *series* of previously
devised feature detectors fires (comprising not only the innate ones,
but *also* inductively learned ones, such as that vertical/horizontal
combination, which is so common).

As the child does not have a clear understanding of what a cup is,
the detectors that fired will represent a vague idea of what that
object is (for instance, the vertical edges, the round detector for
the top, the horizontal one for the bottom, etc.). She will
*associate* the simultaneous occurrence of *those groups* of
detectors with the phonological sequence for "cup". As new exemplars
of cups are shown, the child's model improves (with suppression of
characteristics that are not fixed, introduction of new ones that
were not perceived in previous exemplars, suggestion of
characteristics through her parents' use of language, etc.).

Now the child sees a jar. When she sees it, most (if not all) of
those cup detectors will fire, suggesting to the child that this is
a "cup". The child's father will eventually say that it is not a cup,
it is a jar. The child will not have much of a reason to *displace*
her previous association in favor of the new one. She is not able to
*discriminate* between these exemplars. She will stubbornly continue
to call it a "cup".

But after some time, somebody will tell her (or she will use
analogical reasoning, but I'll leave that subject for another post)
what the difference(s) between a jar and a cup might be. Maybe it is
the size, or some features of the shape or the kind of "mouth".
Anyway, the child will, when finally convinced, "copy" the previous
definition of a cup (including its set of feature-detection factors)
into a *new* definition that has some *additional* features, along
with *altered* ones, in such a way as to correctly recognize a jar.
The important thing here is that she used *previous knowledge* as a
basis to create a *new* perceptual detector. That's efficient! I
propose that we adults use this method (unconsciously) very often.
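
A crude sketch of that "copy and specialize" step (the features are
invented; the only point is the reuse of the old detector):

----
# Crude sketch of "copy the old detector, then add what discriminates".

cup = {"open_top", "round_body", "holds_liquid", "hand_sized"}

# After correction, the cup definition is copied and altered to capture "jar":
jar = set(cup)
jar.discard("hand_sized")                     # altered feature
jar |= {"bigger_than_hand", "narrow_mouth"}   # additional discriminating features

def winner(observed):
    # the detector with the larger feature overlap wins (a very crude
    # stand-in for whatever selection process the brain actually uses)
    scores = {"cup": len(observed & cup), "jar": len(observed & jar)}
    return max(scores, key=scores.get)

print(winner({"open_top", "round_body", "holds_liquid", "bigger_than_hand"}))  # jar
----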

From that moment on, the child will be able to distinguish between
a cup and a jar just by visual inspection, in a process that is
very, very efficient, because it is done by millions of neurons in
parallel that "learned" how to do it (important footnote: many, many
times, the feature detectors for the cup will *also* fire when she
looks at a jar; this is when a competition occurs, and the winner
will be the set of detectors that happens to get the most
"associates" by synchronous oscillation; say the jar is on a gigantic
table and only big bears are drinking from it: it looks more like a
cup, even if the object is a jar; context may make the difference in
object recognition, so although feature detectors are strong, they
must *compete* in a "mental environment" together with the current
context).

My hypothesis goes on to see *any* kind of "concept" as derived
from identifying factors at a lower level. We are not machines for
thinking; we are machines for recognizing.

This means that I find it reasonable to look at cognition as a
*very* hierarchical pyramid in which each level is created by
inductive generalizations from things that happen in the levels
just below. These generalizations are temporary and subject to
constant revision (through interaction or language, for instance),
until they are solid enough to constitute a fundamental high-level
feature detector, or what I like to call a "perceptual specialist".

We seem to have such a thing for human faces, for instance (another
footnote: we seem to have some innate detectors that help a lot with
the discrimination of the low-level features a human face has).
The interesting thing is that I believe all mathematicians
(Penrose included), when they think about the most abstract and
high-level mathematical concepts, are using the same kind of
"tentative induction" and perceptual recognizers that a child
uses to understand its world. I see no difference between children
and adults, other than the presence of many more perceptual
specialists in the latter.

The implications of this hypothesis are interesting: when one
learns a new subject, one will be an expert in that subject only
after managing to *ground* the new information in lower-level
aspects. Someone who doesn't do that will eventually be able to talk
about the subject, reciting everything learned, but *will not* be
able to *think* about it, deriving new knowledge from previous
knowledge. The idea in this paragraph came to me from Neil, some
months ago, and it hit like a thunderbolt.

Just to finish: when I see AI being done through formalist methods,
I think that what's being built is the tip of an iceberg, floating
in the air, with *nothing* below it. Without the lower levels
grounding each logical technique of the system in a set of perceived
aspects, I don't think the system will be able to *think* about what
it does. And then I don't believe it will be intelligent.

My final idea is the thought that, from simple notions such as
counting, successor, predecessor, presence, and absence, plus a
mechanism able to perceive similarity in things, generate inductions,
use analogies, and a few other things, one could build a very
impressive "mechanical" mathematician, one that even Penrose
couldn't complain about.

Regards,
Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: What is the place for Induction?
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <7d6apv$shl@ux.cs.niu.edu>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:

>I think that the only way to explain this is through inductive
>generalization. The baby starts to see things in which a range of
>their edge detectors, at some specific time, fire all together.
>Lets assume that this happens when she is looking to a cup and
>those edges are the vertical lines comprising one of the sides
>of the cup. When she sees the window of her room, her brain will
>again notice the firing of another bunch of vertical edge
>detectors (among a zillion of other things). This is the time
>where induction takes place: her brain should expect to see other
>vertical edges in the future and because of this repetition,
>all the neural circuitry that treats vertical edges become more
>"specialized" in the several forms and lengths.

The basic description you are giving might be approximately right.
But I am having trouble understanding what you mean by "inductive
generalization."  You are describing neural processes which become
more specialized over time, and that suggests that we would do
better to call it "specialization."

You are describing a child as initially identifying a rather broad
category, which contains cups but also other things.  In other words,
the child starts with only a general idea of "cup".  Then, over time,
the child refines its categorizations, and as a result, the idea of
"cup" becomes more specific.  So where is the generalization?

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the place for Induction?
Date: 22 Mar 1999 00:00:00 GMT
Message-ID: <36f6c1b6@news3.us.ibm.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 22 Mar 1999 22:18:30 GMT, 129.37.182.55
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7d6apv$shl@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>I think that the only way to explain this is through inductive
>>generalization. The baby starts to see things in which a range of
>>their edge detectors, at some specific time, fire all together.
>>Lets assume that this happens when she is looking to a cup and
>>those edges are the vertical lines comprising one of the sides
>>of the cup. When she sees the window of her room, her brain will
>>again notice the firing of another bunch of vertical edge
>>detectors (among a zillion of other things). This is the time
>>where induction takes place: her brain should expect to see other
>>vertical edges in the future and because of this repetition,
>>all the neural circuitry that treats vertical edges become more
>>"specialized" in the several forms and lengths.
>
>The basic description you are giving might be approximately right.
>But I am having trouble understanding what you mean by "inductive
>generalization." You are describing neural processes which become
>more specialized over time, and that suggests that we would better
>call it "specialization."
>

"Progressive specialization" is something that seems to capture part
of what I meant. But I think there's something that this name doesn't
seem to capture.

>You are describing a child as initially identifying a rather broad
>category, which contains cups but also other things.  In other words,
>the child starts with only a general idea of "cup".  Then, over time,
>the child refines its categorizations, and as a result, the idea of
>"cup" becomes more specific.  So where is the generalization?
>

I think part of our "problem" with the word induction stems from its
"dark side", mostly because of its philosophical implications.
I'll try to come up with another example in which the generalization
aspect I'm proposing becomes clearer.

Suppose we show a child an apple and the child is told that the
object is an "apple". When another exemplar of an apple is shown to
the child, she may conclude, by comparison with her previous
experience, that apples are round objects with a size comparable to
her mother's hand (she does that because what she has developed so
far, for instance, are "perceptors" for those characteristics, and
those characteristics were the ones that *remained constant* among
the exemplars shown).

Now if we show the child an orange, she may answer "apple". That
would be equivalent to the following "reasoning": all the round
objects the size of my mother's hand that I have seen so far are
"apples"; I am now seeing a round object with a size comparable to
my mother's hand; so, I'm seeing an apple. This is the induction
I mentioned.

When her mother tells her that the object is an "orange", the child
becomes confused. She may refuse to accept that or, after some time,
notice what is different among the exemplars (in this case, color).
Obviously, I'm not proposing that children notice color last, but
that whatever the child chooses to categorize an object by, she will
inductively generalize it in an attempt to cover future cases.

This will not only refine her concept but also propose another one:
all round objects the size of my mother's hand with a yellowish
color are called oranges; all round objects of that size that are
red are called apples.
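
In code, that refinement could be caricatured like this (a toy, of
course, not a claim about the child's machinery):

----
# Caricature: an over-general concept is split when a corrective label arrives.

apple = {"round", "hand_sized"}     # what remained constant over the first exemplars

orange_exemplar = {"round", "hand_sized", "orange_colored"}
print("apple" if apple <= orange_exemplar else "?")   # "apple": the inductive over-reach

# Mother says "orange": keep the shared part, add a distinguishing factor to each side.
apple = apple | {"red"}
orange = {"round", "hand_sized", "orange_colored"}

def name(obj):
    if orange <= obj:
        return "orange"
    if apple <= obj:
        return "apple"
    return "don't know yet"

print(name(orange_exemplar))                          # now "orange"
print(name({"round", "hand_sized", "red"}))           # "apple"
----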

What is interesting to explore is when the child chooses the *wrong*
distinguishing factor: that will provoke some erroneous conclusions
(sometimes very funny ones, as any parent can report). What is
impressive is the revision ability, which allows children to
"roll back" and start the progressive refinement of their concepts
again along a different line.

Now if we extend that same situation to edible things (and we see
how difficult it is for children to distinguish what is edible from
what is not), maybe we can find other similar examples, although at
a higher level of difficulty. This suggests that no matter the
level, we seem to work using the very same principles.

My interest in children's learning is to grasp, from the huge amount
of reports by cognitive psychologists, the "mechanics" of the
categorization mechanism that children use, in order to find out
what would be a good mechanism to simulate it (by the way, this is
the greatest difference between my approach and Bill Modlin's: I'm
trying to find the mechanism by starting from the desired results
and working backward to its fundamental principles).

In a largely similar way, that's what I think happens with the
high-level "adult" concepts we learn. We seem to progressively
grasp the meaning of sophisticated concepts like "justice" and
"honor", but instead of supporting those concepts directly on
sensory aspects, we seem to support them on other concepts, and
these on others, until we finally get to the "root" of the question:
sensorimotor patterns, such as those that grounded the concept of
apple for the child.

My suspicion is that inductive generalization starts as a vague,
highly revisable process that, over consecutive experiences, ends
up yielding a solid, structured network of interrelated patterns
that represents what a person knows.

Obviously, this process is just one hypothesis, but so far I haven't
found anything strong enough to dissuade me from thinking this way.

Regards,
Sergio Navega.

From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fdf398.3417917@news.demon.co.uk>
Content-Transfer-Encoding: 7bit
X-NNTP-Posting-Host: chatham.demon.co.uk:158.152.25.87
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@demon.net
X-Trace: news.demon.co.uk 922351225 nnrp-03:17693 NO-IDENT chatham.demon.co.uk:158.152.25.87
MIME-Version: 1.0
Newsgroups: comp.ai.philosophy

On Mon, 22 Mar 1999 19:10:38 -0300, "Sergio Navega" <snavega@ibm.net>
wrote:

>My suspicion is that inductive generalization starts as a vague,
>highly revisable process that, throughout consecutive experiences,
>turns up giving a solid, structured network of interrelated patterns
>that represents what a person knows.

I do not disagree with you. It is almost tautological that this is
what happens if there are no predefined slots (in the manner of
supposed hard-wired linguistic predispositions) into which concepts
are dropped.

Two points, however. The first is that categories do not fall out of
thin air. Under 'hot neurology' I have described two papers which have
a bearing on this. In the one, supposedly motor-dedicated areas of the
brain are shown to be involved in high-order cognitive tasks. That is,
the elements that underpin a category are strongly predisposed to act
in certain ways, and this predisposition must influence what a
category is and what we have it do for us. In the other, the same field -
colour vision - is collapsed into quite distinct categories from those
of Western eyes by an isolated PNG tribe. That we categorise, and how
we categorise, may be thought of as the assembly of an attractor field,
with some attractors in place through physiological and other
predispositions, and others created as we go along, either
individually or culturally.

The second point is that one can think of categories (and items) as
points and clusters in a space of many dimensions, or perhaps more
accurately as several spaces between which there is weak mapping. When
a child learns, it is both erecting the space ("that is actually two
orthogonal dimensions, not one: 'Dadda' and 'Other nice-looking men'")
and also populating it. The act of populating (and of social and
operational feedback) challenges both clustering and the
dimensionality. Thus we learn.

This may be an over-abstract model, and is anyway only a
representation of what is happening, with no bearing on the
mechanisms that conduct this process. That said, the N-space idea, in
which categories are abutting or distant bubbles, and in which search
consists of an activation vector over a subset of the N dimensions,
created by dedicated tissue activation, that points to the correct
category (which itself then comes alight and quenches rival
categories), seems to be a helpful one. Learning and memory are then
carried out in dedicated tissue as primitives, but assemble into an
ensemble through just such a mapping. The vectors that span the N
space are the levels of activation of the dedicated elements, and the
clustering the learned links along which this activation spreads. A
memory 'hit' or a thing learned consists of a focus of activation
where these independent tendrils of activation come together, in what
can also be seen as an N space in which each tendril (or lack of a
tendril) is a vector along an axis.
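
A small numerical toy may fix the image (the numbers are arbitrary,
and nothing here pretends to describe tissue):

----
# Toy N-space: categories as points, an activation vector "points at" the
# nearest one, which lights up; the rivals are quenched simply by losing.

import math

categories = {                      # prototype positions in a 3-dimensional space
    "cup":  (0.9, 0.2, 0.1),
    "jar":  (0.8, 0.7, 0.1),
    "face": (0.1, 0.1, 0.9),
}

def recall(activation):
    dist = {name: math.dist(activation, p) for name, p in categories.items()}
    return min(dist, key=dist.get)

print(recall((0.85, 0.3, 0.1)))     # cup
----
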
_______________________________

Oliver Sparrow

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fa2d6a@news3.us.ibm.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 25 Mar 1999 12:34:50 GMT, 200.229.240.218
Organization: SilWis
Newsgroups: comp.ai.philosophy

Oliver Sparrow wrote in message <36fdf398.3417917@news.demon.co.uk>...
>On Mon, 22 Mar 1999 19:10:38 -0300, "Sergio Navega" <snavega@ibm.net>
>wrote:
>
>>My suspicion is that inductive generalization starts as a vague,
>>highly revisable process that, throughout consecutive experiences,
>>turns up giving a solid, structured network of interrelated patterns
>>that represents what a person knows.
>
>I do not disagree with you. It is almost tautological that this is
>what happens if there are not predefined slots (in the manner of
>supposed hard-wired linguistic predispositions) into which concepts
>are dropped.
>
>Two points, however. The first is that categories do not fall out of
>thin air. Under 'hot neurology' I have described two paper which have
>a bearing on this. In the one, supposedly motor-dedicated areas of the
>brian are shown to be involved in high order cognitive tasks. That is,
>the elements that underpin a category are strongly predisposed to act
>in certain ways and this predisposition must influence what is a
>category and what we have it do for us.

Thank you, Oliver, for raising such interesting points. I guess that
by now you can see how much I care about this subject.

I have no doubt that the mystery of concept/category formation will
become understandable once we understand all the factors that affect
their creation. Our predisposition to act mechanically in certain
ways (such as inspecting things by using our hands to move objects)
is definitely something that influences our categorization. When an
architect imagines a building and "sees" it turning in his mind's
eye, I wouldn't be surprised if motor areas of his brain were
activated, just as if he had used his hands to turn the edifice.
The patterns that make up concepts and categories seem to draw also
on patterns from the sensory and motor areas of our brain. It is
from that starting point that I try to assemble a model of
intelligence.

> In the other, the same field -
>colour vision - is collapsed into quite distinct categories from those
>of Western eyes by an isolated PNG tribe. That we categories and how
>we categorise may be thought of as the assembly of an attractor field,
>with some attractors in place through physiological and other
>predispositions; and others created as we go along, either
>individually or culturally.
>

I mostly agree with this, as long as by "physiological" you meant
not innate constraints but changes derived from interaction with the
world. The influence of culture is definitely strong, but again I
see it as something relative to the interaction of the entity with
its environment.

The only innate things I seem to accept are specialized mechanisms
for processing difficult things at the level of initial sensory
processing. Several neurophysiologists call these feature detectors.
Evolution did the work of finding the best mechanism to extract that
first (and very difficult) "level" of input processing. Feature
detectors seem, then, to be specific, domain-specialized mechanisms,
directly tied to one kind of sensory signal (visual, auditory,
olfactory, etc.).

From the output of those feature detectors onward, I'm
hypothesizing, all we have are generic mechanisms, which may have
some small differences (as the visual cortex does compared with the
auditory cortex) but which work under the very same fundamental
principles. I'm eagerly after these fundamental principles, because
once we discover them, any computer implementation will be able to
extract intelligence from *any kind of signal*.

>The second point is that one can think of categories (and items) as
>points and clusters in a space of many dimensions, or perhaps more
>accurately as several spaces between which there is weak mapping. When
>a child learns, it is both erecting the space ("that is actually two
>orthogonal dimensions, not one: 'Dadda' and 'Other nice-looking men'")
>and also populating it. The act of populating (and of social and
>operational feedback) challenges both clustering and the
>dimensionality. Thus we learn.
>

I mostly agree with this vision.

>This may be an over-abstract model, and is anyway only a
>representations of what is happening and has no bearing on the
>mechanism that conduct this process. This said, the N space idea, in
>which categories are abutting or distant bubbles, in which search
>consists of an activation vector on a subset of the N dimensions that
>is created by dedicated tissue activation, but which points to the
>correct category, (which itself then comes alight and quenches rival
>categories) seems to be a helpful one. Learning and memory are then
>carried out in dedicated tissue as primitives, but assemble into an
>ensemble through just such a mapping. The vectors that span the N
>space are the level of activation of the dedicated elements, the
>clustering the learned links along which this activation spreads. A
>memory 'hit' or a thing learned consists of a focus of activation
>where these independents tendrils of activation come together, in what
>can also be seen as an N space in which each tendril (or lack of
>tendril) is a vector along an axis.

I have to study what you said more carefully. One thing I'm in
doubt about is the possibility, in your vision, of more than one
"activation wave". One activation vector originating from sensory
input should fire several spreading fronts, and then (a la Calvin)
a competition will choose the winner(s).

The important point is that the terrain of categories seems to be
the soil where thoughts run. Depending on whether we run into a
mountain or a cliff, our thought settles or bounces back, looking
for a place to grow roots.

Regards,
Sergio Navega.

From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fd4179.23358682@news.demon.co.uk>
Content-Transfer-Encoding: 7bit
X-NNTP-Posting-Host: chatham.demon.co.uk:158.152.25.87
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@demon.net
X-Trace: news.demon.co.uk 922371588 nnrp-06:2328 NO-IDENT chatham.demon.co.uk:158.152.25.87
MIME-Version: 1.0
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> wrote:

> Several neurophysiologists call this as feature
>detectors. []Feature detectors seem, then, specific and domain specialized
>mechanisms, directly tied to one kind of sensory signal (visual,
>auditory, olfactory, etc).
>From the output of those feature detectors, I'm hypothesizing, all
>we have are generic mechanisms,

Yes. We know that this happens. My suggestion is that the output of any one
feature detector (or set of feature detectors) can be seen as a vector in an N
space, where things recognised, recalled, evoked or imagined can be thought of
as points in such a space. (Of course, what they are 'really' - or as well -
is other bits of tissue, which are activated by the concerted 'pointing' of
all of the individual vectors that have picked them out.) The feature
detectors are likely to (are known to) associate in hierarchies, so that one
has higher order feature detectors that are built out of subsidiary feature
detectors. Quite where the N space comes about depends on what you want to
describe. Equally, whether a 'category' is seen as a dimension or as a bubble
containing clusters of points within the N space, or as an attractor is
semantic. A category is defined by the uses to which it is put. There are no
'real' sets, only semantic ones(?)

>The important point is that the terrain of categories seem to be the
>soil where thoughts run. Depending if we go into a mountain or a cliff,
>so our thought settles or bounces back, looking for one place to grow
>roots.

I agree. However, be careful about this division between 'thoughts' and
'categories'. Are they sufficiently distinct ideas to allow you to build a
strong model on the difference between them, or are they mildly different
manifestations of the same processes and agencies?
_______________________________

Oliver Sparrow

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fa6dfc@news3.us.ibm.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net> <36fd4179.23358682@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 25 Mar 1999 17:10:20 GMT, 200.229.243.8
Organization: SilWis
Newsgroups: comp.ai.philosophy

Oliver Sparrow wrote in message <36fd4179.23358682@news.demon.co.uk>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> Several neurophysiologists call this as feature
>>detectors. []Feature detectors seem, then, specific and domain specialized
>>mechanisms, directly tied to one kind of sensory signal (visual,
>>auditory, olfactory, etc).
>>From the output of those feature detectors, I'm hypothesizing, all
>>we have are generic mechanisms,
>
>Yes. We know that this happens. My suggestion is that the output of any one
>feature detector (or set of feature detectors) can be seen as a vector in an
>N space, where things recognised, recalled, evoked or imagined can be thought
>of as points in such a space. (Of course, what they are 'really' - or as well
>- is other bits of tissue, which are activated by the concerted 'pointing' of
>all of the individual vectors that have picked them out.) The feature
>detectors are likely to (are known to) associate in hierarchies, so that one
>has higher order feature detectors that are built out of subsidiary feature
>detectors. Quite where the N space comes about depends on what you want to
>describe. Equally, whether a 'category' is seen as a dimension or as a bubble
>containing clusters of points within the N space, or as an attractor is
>semantic. A category is defined by the uses to which it is put. There are no
>'real' sets, only semantic ones(?)
>

I remember reading something you wrote last year (I think it was at
the beginning of 1998) in which you posed something along the lines
of that idea of an n-dimensional space. At that time, I didn't have
a clue what you were talking about. Today I believe I'm in a much
better position, although I still have some doubts.

What comes to my mind when I read this is Kohonen's SOM
(self-organizing maps), but you seem to extend the idea to more than
2 dimensions. It seems to be an exciting possibility.
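
For the record, the basic Kohonen update is simple enough to write
down; something like this (numpy, a plain 2-D map, with purely
illustrative parameters):

----
# Bare-bones Kohonen SOM step: find the best-matching unit, then pull it and
# its neighbours toward the input (illustrative parameters only).

import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, n_dim = 10, 10, 8
weights = rng.random((grid_w, grid_h, n_dim))   # one n_dim vector per map node

def train_step(x, lr=0.1, radius=2.0):
    dist = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
    ii, jj = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * radius ** 2))
    weights[:] = weights + lr * h[..., None] * (x - weights)

for _ in range(100):
    train_step(rng.random(n_dim))
----

The map itself is 2-D, but the inputs (and thus the learned
prototypes) live in as many dimensions as one likes, which may be
closer to what you have in mind.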

>>The important point is that the terrain of categories seem to be the
>>soil where thoughts run. Depending if we go into a mountain or a cliff,
>>so our thought settles or bounces back, looking for one place to grow
>>roots.
>
>I agree. However, be careful about this division between 'thoughts' and
>'categories'. Are they sufficiently distinct ideas to allow you to build a
>strong model on the difference between them, or are they mildly different
>manifestations of the same processes and agencies?
>

I'm not sure what your opinion is about these options, but in my
case I was thinking of categories as the mountains and cliffs, and
of thoughts as a series of vehicles traversing that scenery. The
mountains and cliffs are often molded by "collisions" with those
vehicles, essentially during the process of "learning". Often,
however, the mountains and cliffs seem to simply direct the flow of
the vehicles.

Regards,
Sergio Navega.

From: modlin@concentric.net
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <7ddotg$51a@journal.concentric.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy

In <36fa2d6a@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net>
writes, responding to Oliver Sparrow:

>The only aspect I seem to accept about innate things are specialized
>mechanisms for processing difficult things at the level of initial
>sensory processing. Several neurophysiologists call this as feature
>detectors. Evolution did the work of finding the best mechanism to
>extract that first (and very difficult) "level" of input processing.
>Feature detectors seem, then, specific and domain specialized
>mechanisms, directly tied to one kind of sensory signal (visual,
>auditory, olfactory, etc).
>
>From the output of those feature detectors, I'm hypothesizing, all
>we have are generic mechanisms, which may have some small differences
>(such as the visual cortex if compared with the auditory cortex) but
>that work under the very same fundamental principles. I'm eagerly
>after these fundamental principles, because once we discover them,
>any computer implementation will be able to extract intelligence
>from *any kind of signal*.

While some feature detection is supported by specialized evolved
structures, treating feature detectors as innate "givens" is a
misleading complication.

Generic mechanisms are known to be capable of developing most of the
feature detection functions that have been analyzed in detail,
especially for visual processing.  So for purposes of understanding
we don't have to treat them as special cases, even though from a
practical performance standpoint we might want to do so.

We don't need to postulate anything innate at all, and I suggest that
we will make faster progress by not doing so, by thinking about the
problem in much the sort of abstract terms Oliver described.

As I remarked in the recent brief exchange with Minsky, the Society of
Mind is constructed mindlessly by mechanisms responsive only to
covariance among unlabelled events, collectively shaping the space in
which consciousness occurs.
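
The flavour of covariance-driven shaping can be caricatured in a few
lines (this is only the flavour, not the mechanism itself; the
numbers are arbitrary):

----
# Caricature of covariance-driven shaping: a single unit, exposed only to
# unlabelled inputs, ends up tuned to their direction of strongest covariance
# (an Oja-style Hebbian rule; illustrative only).

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)

def step(x, lr=0.01):
    global w
    y = w @ x                        # the unit's response
    w = w + lr * y * (x - y * w)     # Hebbian growth with a bounding decay term

for _ in range(5000):
    s = rng.normal()
    x = s * np.array([1.0, 0.8, 0.2]) + 0.1 * rng.normal(size=3)
    step(x)

print(w / np.linalg.norm(w))         # aligned (up to sign) with (1.0, 0.8, 0.2) normalized
----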

I'm almost done with my new summary of how those mechanisms work...

Bill Modlin

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <36fa7b60@news3.us.ibm.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net> <7ddotg$51a@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 25 Mar 1999 18:07:28 GMT, 166.72.29.74
Organization: SilWis
Newsgroups: comp.ai.philosophy

modlin@concentric.net wrote in message
<7ddotg$51a@journal.concentric.net>...
>In <36fa2d6a@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net>
>writes, responding to Oliver Sparrow:
>
>>The only aspect I seem to accept about innate things are specialized
>>mechanisms for processing difficult things at the level of initial
>>sensory processing. Several neurophysiologists call this as feature
>>detectors. Evolution did the work of finding the best mechanism to
>>extract that first (and very difficult) "level" of input processing.
>>Feature detectors seem, then, specific and domain specialized
>>mechanisms, directly tied to one kind of sensory signal (visual,
>>auditory, olfactory, etc).
>>
>>From the output of those feature detectors, I'm hypothesizing, all
>>we have are generic mechanisms, which may have some small differences
>>(such as the visual cortex if compared with the auditory cortex) but
>>that work under the very same fundamental principles. I'm eagerly
>>after these fundamental principles, because once we discover them,
>>any computer implementation will be able to extract intelligence
>>from *any kind of signal*.
>
>While some feature detection is supported by specialized evolved
>structures, treating feature detectors as innate "givens" is a
>misleading complication.
>
>Generic mechanisms are known to be capable of developing most of the
>feature detection functions that have been analyzed in detail,
>especially for visual processing.  So for purposes of understanding
>we don't have to treat them as special cases, even though from a
>practical performance standpoint we might want to do so.
>
>We don't need to postulate anything innate at all, and I suggest that
>we will make faster progress by not doing so, by thinking about the
>problem in much the sort of abstract terms Oliver described.
>
>As I remarked in the recent brief exchange with Minsky, the Society of
>Mind is constructed mindlessly by mechanisms responsive only to
>covariance among unlabelled events, collectively shaping the space in
>which consciousness occurs.

Bill, I have no doubt that, in terms of completeness, your way of
seeing innate mechanisms is the more appropriate one. But in terms of
practicality, we should think carefully about it.

When nature provided about 17,000 hair cells with sound pressure and
frequency specialization or when it used receptive fields in several
areas of our vision, I guess "she" was trying to simplify the design
(in reality, this design resulted in an organism better suited to
survive or with a competitive advantage over other organisms).

This is, essentially, a practical consideration, an implementation
aspect. I'm trying to understand *why* nature did that, instead of
finding a process in which the raw transduction of the signals was
processed directly by neural circuitry. In principle, it seems a
good idea to follow nature's lead here, if that can alleviate some of
the *huge* work we have in front of us, that of implementing
intelligent machines. Another way of seeing it is that it may be an
indication that some kind of initial, *specific* processing is
required, according to the kind of signal (olfactory, auditory,
visual, etc.), to put the problem into the class of those solvable
with the available neural power (brain size).

So in a sense I think innate mechanisms are a *simplification* that
nature made, for the sake of "getting the job done". One of the
hypotheses that scares me is that, without innate frequency
specialization in hair cells, for instance, the problem of auditory
perception could become "almost" intractable.
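
Just to give that worry a rough size (back-of-the-envelope figures of
my own; every number below is an assumption, not a measurement):
compare what a learning mechanism faces in 100 ms of raw pressure
samples with what it faces after an assumed front end of a few dozen
frequency channels.

    # Back-of-the-envelope only; all of these numbers are assumptions.
    sample_rate = 44100      # raw pressure samples per second
    window_s    = 0.1        # a 100 ms auditory "event"

    raw_values = int(sample_rate * window_s)       # raw transduction

    n_bands    = 32          # assumed cochlea-like frequency channels
    frame_rate = 100         # assumed band-energy frames per second
    banded_values = int(n_bands * frame_rate * window_s)

    print(raw_values)     # 4410 fast-varying numbers per event
    print(banded_values)  # 320 slowly varying numbers, frequency explicit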

But never mind, I don't care so much about innate mechanisms. As I
said elsewhere, I don't like the word "innate" (especially in regard
to language).

>
>I'm almost done with my new summary of how those mechanisms work...
>

Please, don't interrupt your work to answer my comments above.

Regards,
Sergio Navega.

From: modlin@concentric.net
Subject: Re: What is the place for Induction?
Date: 25 Mar 1999 00:00:00 GMT
Message-ID: <7deqma$528@journal.concentric.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net> <7ddotg$51a@journal.concentric.net> <36fa7b60@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy

Re: <36fa7b60@news3.us.ibm.net> by Sergio Navega:

(SERGIO)
> The only aspect I seem to accept about innate things is specialized
> mechanisms for processing difficult things at the level of initial
> sensory processing. Several neurophysiologists call these feature
> detectors. Evolution did the work of finding the best mechanism to
> extract that first (and very difficult) "level" of input processing.
> Feature detectors seem, then, to be specific, domain-specialized
> mechanisms, directly tied to one kind of sensory signal (visual,
> auditory, olfactory, etc).

> From the output of those feature detectors, I'm hypothesizing, all
> we have are generic mechanisms, which may have some small
> differences (such as the visual cortex if compared with the auditory
> cortex) but that work under the very same fundamental principles.
> I'm eagerly after these fundamental principles, because once we
> discover them, any computer implementation will be able to extract
> intelligence from *any kind of signal*.

(MODLIN)
> While some feature detection is supported by specialized evolved
> structures, treating feature detectors as innate "givens" is a
> misleading complication.

> Generic mechanisms are known to be capable of developing most of the
> feature detection functions that have been analyzed in detail,
> especially for visual processing.  So for purposes of understanding
> we don't have to treat them as special cases, even though from a
> practical performance standpoint we might want to do so.

> We don't need to postulate anything innate at all, and I suggest
> that we will make faster progress by not doing so, by thinking about
> the problem in much the sort of abstract terms Oliver described.
> [snip]

(SERGIO)
> Bill, I have no doubt that, in terms of completeness, your way of
> seeing innate mechanisms seems more appropriate. But in terms of
> practicality, we should think carefully about it.

Sergio, all I did here was agree with you, and try to emphasise the
importance of what you said:

   > I'm eagerly after these fundamental principles, because once we
   > discover them, any computer implementation will be able to
   > extract intelligence from *any kind of signal*.

So don't back off from that with reservations that make it sound as
though you disagree.

It is important NOT to think about the innate mechanisms right now,
since they are a terribly seductive distraction from those fundamental
principles we both want to figure out.

In practice we'll probably give our robots as powerful a set of
sensors and pre-processing feature detectors as we can manage.
No argument there.

But designing eyes and ears and specific feature detectors doesn't get
us any closer to understanding how all those signals from them come to
be assembled into a perception of the world.

That happens in the domain of what you called "generic" processing,
using algorithms that are independent of the particular kinds of data
being processed.

If we get this level right, it doesn't really matter what sort of
inputs the robot-engineers are able to cobble together... we'll be
ready to deal with it.  But if we try to mix the two discussions, if
we think of our inputs as specific "feature" signals, it is very hard
not to start thinking about ways to process those specific kinds of
signals, and easy to forget that we are looking for NON-specific
methods.
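
For what it's worth, here is a toy sketch of what "independent of the
particular kinds of data" can look like in code (purely illustrative;
k-means over raw windows stands in for whatever the real generic
mechanism turns out to be): the very same routine is handed pieces of
an audio-like stream and pieces of an image-like array, and treats
both as nothing but vectors.

    # Illustration only: one generic routine, two very different signals.
    import numpy as np

    def learn_features(vectors, k=8, iters=20, seed=0):
        """Cluster unlabelled vectors; the centroids act as learned
        'features'.  Nothing here knows what kind of signal it sees."""
        rng = np.random.default_rng(seed)
        centers = vectors[rng.choice(len(vectors), k, replace=False)]
        for _ in range(iters):
            # assign every vector to its nearest centroid
            dist = ((vectors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = dist.argmin(1)
            # move every centroid to the mean of its assigned vectors
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = vectors[labels == j].mean(0)
        return centers

    rng = np.random.default_rng(1)

    # "Auditory" input: a 1-D stream, cut into 64-sample windows.
    stream = np.sin(np.linspace(0, 200, 64000)) + 0.1 * rng.normal(size=64000)
    audio_vecs = stream.reshape(-1, 64)

    # "Visual" input: random 8x8 patches from a 2-D array, flattened.
    image = rng.normal(size=(256, 256))
    patches = np.array([image[i:i + 8, j:j + 8].ravel()
                        for i, j in rng.integers(0, 248, size=(1000, 2))])

    # The very same code, with no idea what it is looking at.
    audio_features  = learn_features(audio_vecs)
    visual_features = learn_features(patches)
    print(audio_features.shape, visual_features.shape)   # (8, 64) (8, 64)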

(MODLIN)
> I'm almost done with my new summary of how those mechanisms work...

(SERGIO)
> Please, don't interrupt your work to answer my comments above.

It isn't an interruption, it is part of the work.

At least half of the job is figuring out how to say things in ways
that people will understand, so conversations like this are useful, if
a bit daunting... it is amazing how many ways words can be
misunderstood.

The other part of the task is figuring out the parts I don't
understand myself, yet.  That just takes time, and I might as well
spend it working on the first part.

Bill Modlin

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What is the place for Induction?
Date: 26 Mar 1999 00:00:00 GMT
Message-ID: <36fbc979@news3.us.ibm.net>
References: <7cv5ru$mne@ux.cs.niu.edu> <36f3b9cc@news3.us.ibm.net> <7d6apv$shl@ux.cs.niu.edu> <36f6c1b6@news3.us.ibm.net> <36fdf398.3417917@news.demon.co.uk> <36fa2d6a@news3.us.ibm.net> <7ddotg$51a@journal.concentric.net> <36fa7b60@news3.us.ibm.net> <7deqma$528@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 26 Mar 1999 17:52:57 GMT, 200.229.243.252
Organization: SilWis
Newsgroups: comp.ai.philosophy

modlin@concentric.net wrote in message
<7deqma$528@journal.concentric.net>...
>Re: <36fa7b60@news3.us.ibm.net> by Sergio Navega:

>
>> Bill, I have no doubt that, in terms of completeness, your way of
>> seeing innate mechanisms seems more appropriate. But in terms of
>> practicality, we should think carefully about it.
>
>Sergio, all I did here was agree with you, and try to emphasise the
>importance of what you said:
>
>   > I'm eagerly after these fundamental principles, because once we
>   > discover them, any computer implementation will be able to
>   > extract intelligence from *any kind of signal*.
>
>So don't back off from that with reservations that make it sound as
>though you disagree.
>
>It is important NOT to think about the innate mechanisms right now,
>since they are a terribly seductive distraction from those fundamental
>principles we both want to figure out.
>

My wish to find those basic mechanisms for processing any kind of
signal remains as strong as ever. However, I'm worried about the
complexity of that task. I think that, to find out what those
principles are, we have no other way than doing a lot of exploratory
experiments. I don't think we can come up with them just by thinking.

To do those experiments, I believe we should simplify our initial
ambition and focus it somewhere. In large part, this thought derives
from what nature appears to have done with us: our eyes and ears are
the result of billions of years of quadrillions of experiments
through natural selection. Sophisticated brains are more "recent"
things. It seems to me that a significant part of the problem is
concentrated (and solved) in those innate mechanisms, and this
suggests that if we can avoid that complexity initially, we stand a
better chance of doing those initial exploratory experiments in a
fashion simple enough to be carried out within one's lifetime.
Recall that my purpose with these experiments is not to build a
fully working intelligence, but just to experiment and *learn* what
those fundamental principles really are.

But please, don't let this make you think I'll insist on the innate
stuff. What's important is to focus on those principles.

>In practice we'll probably give our robots as powerful a set of
>sensors and pre-processing feature detectors as we can manage.
>No argument there.
>
>But designing eyes and ears and specific feature detectors doesn't get
>us any closer to understanding how all those signals from them come to
>be assembled into a perception of the world.
>

That's right, but their presence can simplify our first approach to
practical experiments. Note, however, that I'm *not* proposing the
construction of artificial ears and eyes (that would be *very*
complicated), but just using the *idea* of a simplified incoming
signal, as if it had been processed by such devices.

For instance, if we think about auditory signals, why not use
something functionally similar to what hair cells in the cochlea
do? I wonder if that could simplify our task a bit (even if what
we use is *only* basic information such as the number of relevant
inputs that a neural network should have to process auditory
signals and the coding of frequency and/or pressure level).
This can be an important "first guess" clue to help us dimension
our test-bed.
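
To be concrete about the kind of crude stand-in I have in mind (a
sketch only; the sample rate, frame size, and band count below are
arbitrary guesses, not physiological values): each short window of
the waveform is reduced to a handful of band energies, a bit like
hair cells reporting activity in their preferred frequency regions,
and those band energies become the inputs of whatever network we
want to test.

    # Sketch of a "cochlea-like" front end for experiments (not a model
    # of the ear): each short window becomes a few band energies.
    import numpy as np

    def band_energies(wave, sample_rate=16000, frame=400, n_bands=24):
        """Frame the waveform and return one energy per frequency band
        per frame, with bands spaced logarithmically, crudely echoing
        the cochlea's frequency layout."""
        n_frames = len(wave) // frame
        frames = wave[:n_frames * frame].reshape(n_frames, frame)
        spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectra
        freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
        edges = np.geomspace(50, sample_rate / 2, n_bands + 1)
        out = np.zeros((n_frames, n_bands))
        for b in range(n_bands):
            in_band = (freqs >= edges[b]) & (freqs < edges[b + 1])
            out[:, b] = spectra[:, in_band].sum(axis=1)
        return out    # shape (n_frames, n_bands): the network's input

    # One second of a toy signal: a 440 Hz tone buried in noise.
    rng = np.random.default_rng(0)
    t = np.arange(16000) / 16000.0
    wave = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.normal(size=16000)

    x = band_energies(wave)
    print(x.shape)    # (40, 24): 40 frames, 24 inputs per frame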

>(SERGIO)
>> Please, don't interrupt your work to answer my comments above.
>
>It isn't an interruption, it is part of the work.
>
>At least half of the job is figuring out how to say things in ways
>that people will understand, so conversations like this are useful, if
>a bit daunting... it is amazing how many ways words can be
>misunderstood.
>
>The other part of the task is figuring out the parts I don't
>understand myself, yet.  That just takes time, and I might as well
>spend it working on the first part.
>

I'm sure that if we can cope with the occasional noise in c.a.p.,
some threads may elicit important suggestions, even when they come
accidentally from "radical" viewpoints.

Regards,
Sergio Navega.

