Selected Newsgroup Message

From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Neurons and consciousness
Date: 15 Jul 1999 00:00:00 GMT
Message-ID: <7mkqir$1lu0@edrn.newsguy.com>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy,sci.physics

Sergio says...
>
>Daryl McCullough wrote
>>However, I really don't think that the concept of "induction"
>>sheds any light whatsoever on the question. Induction just
>>means (as far as I understand the term):
>>
>>    1. Notice that all cases so far having property P also have
>>    property Q.
>>
>>    2. Propose the general law: P implies Q.
>>
>>This informal description doesn't explain anything about where
>>hypotheses come from. The real question is where do the properties
>>P and Q come from? It certainly is not the case that people generate
>>all possible descriptions, and then see which ones apply and which
>>ones don't. So where do descriptions come from?
>>
>
>
>Descriptions come from judgements of similarity.

I still don't think that explains anything. To say
that X is similar to Y simply *means* that there
is some general description that applies to both.
How do we notice similarity without having a
notion of relevant descriptions to begin with?

>Using your definition, here's my way to see it inductively:
>
>1. All instances of object P that I've seen so far had property Q.
>
>2. I will assume that all objects P have property Q.

My point is that, in order to even *formulate* the
inductive hypothesis (that all Ps have property Q),
you must *already* have properties P and Q around.
Where do these concepts come from? That's the question
of interest for the science of thinking, not induction.
Induction is pretty trivial once you have a supply of
Ps and Qs.
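
To make the point concrete, here is a toy sketch of that "trivial"
step (Python; the property predicates are hypothetical stand-ins,
and producing them is exactly the part the sketch does not do):

# Induction once the Ps and Qs are given: count co-occurrences and
# propose "P implies Q" whenever every observed P-case is a Q-case.
def induce(objects, properties, min_support=5):
    rules = []
    for p_name, p in properties.items():
        for q_name, q in properties.items():
            if p_name == q_name:
                continue
            p_cases = [x for x in objects if p(x)]
            if len(p_cases) >= min_support and all(q(x) for x in p_cases):
                rules.append((p_name, q_name))
    return rules

# The hard part happens by hand here: someone had to supply these.
properties = {"is_raven": lambda x: x["species"] == "raven",
              "is_black": lambda x: x["color"] == "black"}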

>Now imagine this process being done *automatically* in our
>unconscious, with hundreds of parallel threads, each one
>reinforcing or weakening links, fed by the evidence captured
>by the senses.

I still think that you are missing a crucial part of
the puzzle. Yes, if we have a supply of properties
P, Q, R, S, etc. then I think it is easy to see how
experience can gather statistics about co-occurrences
and use these to formulate inductive hypotheses. But
where do the P, Q, R, S come from?

>This is very close to what happens with the
>learning process of children. Induction, in this case, is
>important, but is only part of a more exquisite mechanism
>that controls all these threads. I'm after this mechanism.

Well, I agree that the engineering problem of how to control
the threads of multiple simultaneous inductions is interesting,
but I don't think it addresses the question of where concepts
come from, in the first place. My inclination is to think that
the structure of our brains is hardwired to notice certain
things and not others---I just don't think it is possible to
build a "pattern noticer" that will notice an *arbitrary*
pattern.

>Neural nets are inductive par excellence, but there are several
>problems. One of the more important ones is the difficulty
>that NNs have in doing rule-like, symbolic generalizations and
>in "copying" generalizations to other domains.

But I think that people are not very good at noticing
such symbolic generalizations, either. That's why, while
just about all humans can learn to walk, only a tiny fraction
learn advanced mathematics.

>My guess is that the mechanism in our brain is hybrid, with the
>statistical power of NN but with pattern and rule-like capacities
>in which symbolic systems excel.

I think that's right, but I think that for symbolic reasoning,
humans really don't have much of an edge over current machines.
It is in the nonsymbolic pattern recognition that we excel compared
with the best computer programs.

My belief is that, while symbolic reasoning is completely
open-ended---you can apply logic and mathematics to absolutely
anything---the nonsymbolic reasoning that we are so good at
is limited to particular *types* of reasoning, such as spatial
reasoning.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 15 Jul 1999 00:00:00 GMT
Message-ID: <378e40df@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7mkqir$1lu0@edrn.newsguy.com>...
>Sergio says...
>>
>>Daryl McCullough wrote
>>>However, I really don't think that the concept of "induction"
>>>sheds any light whatsoever on the question. Induction just
>>>means (as far as I understand the term):
>>>
>>>    1. Notice that all cases so far having property P also have
>>>    property Q.
>>>
>>>    2. Propose the general law: P implies Q.
>>>
>>>This informal description doesn't explain anything about where
>>>hypotheses come from. The real question is where do the properties
>>>P and Q come from? It certainly is not the case that people generate
>>>all possible descriptions, and then see which ones apply and which
>>>ones don't. So where do descriptions come from?
>>>
>>
>>
>>Descriptions come from judgements of similarity.
>
>I still don't think that explains anything. To say
>that X is similar to Y simply *means* that there
>is some general description that applies to both.
>How do we notice similarity without having a
>notion of relevant descriptions to begin with?
>

Thank you for asking this. That is one of the roots of the
question, and I believe I have a good suggestion for solving it.
I'll get to it later in this post.

>>Using your definition, here's my way to see it inductively:
>>
>>1. All instances of object P that I've seen so far had property Q.
>>
>>2. I will assume that all objects P have property Q.
>
>My point is that, in order to even *formulate* the
>inductive hypothesis (that all Ps have property Q),
>you must *already* have properties P and Q around.
>Where do these concepts come from? That's the question
>of interest for the science of thinking, not induction.
>Induction is pretty trivial once you have a supply of
>Ps and Qs.
>

Yes, I agree, the fundamental problem is where Ps and Qs
come from.

>>Now imagine this process being done *automatically* in our
>>unconscious, with hundreds of parallel threads, each one
>>reinforcing or weakening links, fed by the evidence captured
>>by the senses.
>
>I still think that you are missing a crucial part of
>the puzzle. Yes, if we have a supply of properties
>P, Q, R, S, etc. then I think it is easy to see how
>experience can gather statistics about co-occurrences
>and use these to formulate inductive hypotheses. But
>where do the P, Q, R, S come from?
>

I'll delve into this below.

>
>>This is very close to what happens with the
>>learning process of children. Induction, in this case, is
>>important, but is only part of a more exquisite mechanism
>>that controls all these threads. I'm after this mechanism.
>
>Well, I agree that the engineering problem of how to control
>the threads of multiple simultaneous inductions is interesting,
>but I don't think it addresses the question of where concepts
>come from, in the first place. My inclination is to think that
>the structure of our brains is hardwired to notice certain
>things and not others---I just don't think it is possible to
>build a "pattern noticer" that will notice an *arbitrary*
>pattern.

You're getting close to the view I'll espouse below. I also
don't think that we can design a fully built, fixed pattern
noticer, designed from first principles, applicable to any
circumstance.

But that's not the problem we've got to solve. We've got to
design a mechanism that *evolves*, by itself, to be an almost
universal pattern matcher, given initial constraints. That's
appreciably (and importantly) different. More at the end of this post.

>
>>Neural nets are inductive par excellence, but there are several
>>problems. One of the more important ones is the difficulty
>>that NNs have in doing rule-like, symbolic generalizations and
>>in "copying" generalizations to other domains.
>
>But I think that people are not very good at noticing
>such symbolic generalizations, either. That's why, while
>just about all humans can learn to walk, only a tiny fraction
>learn advanced mathematics.
>

I agree that only a few "gifted" people manage to get a good
knowledge of advanced mathematics. But this *is not* what I was
referring to as symbolic generalizations. Contrast the usual
textbook mathematical proof with, say, natural language. Natural
language is full of recursive expressions. People generate language
effortlessly, with complex noun phrases, embedded clauses and
more. Any four-year-old child understands complex phrases like
"I told you not to play with Tim's puppy before lunch".
To understand that, the child must have a lot of generic
symbolic abilities, far ahead of most AI programs, because this
may be the first time the child has heard such a phrase. This is
where I kiss Chomsky's hands.

>>My guess is that the mechanism in our brain is hybrid, with the
>>statistical power of NN but with pattern and rule-like capacities
>>in which symbolic systems excel.
>
>I think that's right, but I think that for symbolic reasoning,
>humans really don't have much of an edge over current machines.
>It is in the nonsymbolic pattern recognition that we excel compared
>with the best computer programs.
>
>My belief is that, while symbolic reasoning is completely
>open-ended---you can apply logic and mathematics to absolutely
>anything---the nonsymbolic reasoning that we are so good at
>is limited to particular *types* of reasoning, such as spatial
>reasoning.
>

I work with a different hypothesis. I propose that even our
high-level "apparently symbolic" reasoning works with a mechanism
that is a "scaled-up" version of lower-level mechanisms, with few
changes. It is neither purely statistical nor purely symbolic.
But I'm still working on that. For now, let me try to explain
where I think Ps and Qs come from.

On a global scale, we can divide our brain into two parts:

a) Innate things:
such as receptive fields in vision, the auditory cortex, the
optic chiasm, the sensorimotor cortex, and things like that.

b) Learned things:
everything that happens in the brain (biochemically and
morphologically) as a consequence of sensing and acting in
the world.

It is clear that a baby is born with very few things in its
head: he/she has the innate things only. Who designed these
innate things? Evolutionary pressures, obviously. Evolution
thus determined what the *important things* were for the brain
to sense: the frequency spectrum of our audition, the way the
occipital lobe divides the processing of vision into areas
specialized in color, movement, and texture, and so on.

It is, thus, reasonable to think that evolutionary pressures
sculpted not only our senses, but also the *initial levels*
of the cortex that process incoming signals. In vision,
for example, there is a mechanism called "lateral inhibition"
that eases the emergence of feature detectors, such as
edge detectors.

Now think about this: an innate structure that is able to
rapidly detect edges. Close to these structures, we can
have groups of neurons that start noticing *similarity*
among edges: it is a simple comparison of simultaneous
activation of neurons; a conventional Hebbian mechanism can
do that, although we can propose other methods too.

What happens, then, if this same "similarity checking"
(which is, in other words, temporal coincidence detection)
starts to work over the previously detected similarities? We
will have the beginning of a *hierarchy* of similarities that
will produce dozens, or hundreds, or even thousands of "things
similar to other things". These are the initial levels of the
"hypotheses" that I'm talking about: those that repeat
over time get reinforced; those that occurred only a few
times are destroyed. Reminds us of induction, no?
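
A toy sketch of the idea (Python; the constants are arbitrary and
this is only an illustration of the mechanism, not a neural model):

# Temporal-coincidence detectors with Hebbian reinforcement and decay.
from itertools import combinations

def grow_layer(stream, reinforce=0.2, decay=0.02, threshold=1.0):
    # stream: iterable of sets of simultaneously active features
    strength = {}
    for active in stream:
        for pair in combinations(sorted(active), 2):
            strength[pair] = strength.get(pair, 0.0) + reinforce
        for pair in list(strength):       # rare coincidences fade away
            strength[pair] -= decay
            if strength[pair] <= 0:
                del strength[pair]
    # survivors are reinforced coincidences; feed them back in as
    # single features and you get the *hierarchy* of similarities
    # described above
    return {pair for pair, s in strength.items() if s >= threshold}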

It is, obviously, very far from high-level propositions, such
as "apples are red". But *this level* is achievable
once you traverse the "tree" of levels that is below it: all
the hundreds or thousands, or hundreds of thousands, of learned
similarities that categorize "redness" and "appleness", which
are things that were learned, which stem from another bunch of
learned things, until one finally reaches the level of things
that were constructed from *elemental* similarities, those
identified by the very nature of our senses. That's it.

Regards,
Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: Neurons and consciousness
Date: 15 Jul 1999 00:00:00 GMT
Message-ID: <7mlv6f$80l@ux.cs.niu.edu>
References: <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:

>Yes, I agree, the fundamental problem is where Ps and Qs
>come from.

I'm glad you said that.  It saved me from having to challenge your
earlier article where you took the Ps and Qs for granted.

---------

>But that's not the problem we've got to solve. We've got to
>design a mechanism that *evolves*, by itself, to be an almost
>universal pattern matcher, given initial constraints. That's
>appreciably (and importantly) different. More at the end of this post.

Why does it have to be a universal pattern matcher?  What exactly is
a pattern (apart from it being what we decide to call a pattern)?  Do
you have a completely objective, non-parametric definition of
"pattern"?

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 16 Jul 1999 00:00:00 GMT
Message-ID: <378f3859@news3.us.ibm.net>
References: <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mlv6f$80l@ux.cs.niu.edu>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7mlv6f$80l@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>Yes, I agree, the fundamental problem is where Ps and Qs
>>come from.
>
>I'm glad you said that.  It saved me from having to challenge your
>earlier article where you took the Ps and Qs for granted.
>

That is probably the age-old question that we must solve
one way or another.

>
>>But that's not the problem we've got to solve. We've got to
>>design a mechanism that *evolves*, by itself, to be an almost
>>universal pattern matcher, given initial constraints. That's
>>appreciably (and importantly) different. More at the end of this post.
>
>Why does it have to be a universal pattern matcher?  What exactly is
>a pattern (apart from it being what we decide to call a pattern)?  Do
>you have a completely objective, non-parametric definition of
>"pattern"?
>

This last question is thought-provoking. I still don't have a formal
characterization of what I mean by "pattern", although I guess I'll
find one someday. But I have some ideas, and what follows is some of
them. Needless to say, all this is highly speculative.

The "universal pattern detector" is the expression I want to comment
on first. Recall that I didn't say exactly this; I said "almost"
universal. That is my "way out" of the universality problem. I think
brains are almost universal pattern handlers. The "almost" is because
the brain is somewhat limited (and constrained) to processing the
kinds of patterns that our senses are able to provide.

This is my first point in this story: there's an important message
in the division of what is innate and what's not. Language, as
we've seen before here in c.a.p., is not a candidate for an innately
present aspect. But movement detection, texture processing,
initial auditory processing, all those aspects are innately
determined. This subdivision of the "work to be done" is
meaningful, IMO. It carries a message.

So it is a good bet to look at the brain as an almost generic
pattern processor, adapted and "customized" to the kinds of
patterns that our senses provide.

Now that still leaves us without a clear definition of what a
pattern is. The usual sense, that a pattern is a structured
repetition of elemental components (like 'abcabcabcabc') may not
help us much here. I'm searching for a slightly different way of
seeing things, which is based on judgement of similarity, but
with subtle characteristics.

We don't know much about how groups of neurons work, although
there are good models for the behavior of a single neuron.
Computational neuroscience has developed reasonable models, but
they are too complex to extend to multiple neurons. I'm
following some of the hypotheses centered on the behavior
of populations of neurons (for example, Wolf Singer's), in which
what is important is not which neurons are active,
but the *synchronism* among several neurons.

This synchronism provides a substrate to detect and generate
a special kind of pattern: a temporal sequence of spikes.
Although one may see this "temporal aspect" as the meaningful
thing, I'm trying to see this as the dynamic equivalent of a
"static" situation: the group of neurons act dynamically, but
what they're doing is just the *representation* of a repeatable,
fixed kind of event.

Show a monkey some vertical bars. Peeking into the occipital
lobe of this monkey with microelectrodes, we'll see a lot of
activity going on. But under a certain interpretation of this
data, we'll see a repeating sequence of temporal events, much
like another kind of representation of the static figure the
monkey is looking at, only rendered through an
apparently dynamic process.

This is the pattern I'm referring to. It is a pattern (and
also a *coding method*) with some peculiarities: one pattern
may be found to be similar to another according to a special
criterion (note: I still don't know exactly what this criterion is).

Also, this pattern may "synchronize" (or generate) others which
*subsume* some important characteristics of the former,
establishing a way to build a *hierarchy*, in which each
level is "recognizing" *invariant aspects* among two or more
patterns (maybe dozens). But let's return to the comparison
of only two.

Maybe two patterns are compared using a "bell-shaped" curve
(or something like that), in which they would be said to be
equal *only* if both were made of the exact same sequence (and
temporal distribution) of spikes (which I find to be very rare).

More often, they could be said to be *similar* if the sequences
of spikes differed slightly, with the degree of similarity
following that bell-shaped curve (this means we rarely
have exactly equal patterns, only similar ones). We can see this
as the criterion for similarity, which I propose is the same one
used throughout the whole spectrum of our cognition whenever we
compare things. As we are hardly ever asked to compare exactly
equal things, this mechanism seems to be valuable in our daily life.
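
In code, the comparison I have in mind would look something like
this (Python; sigma is a free parameter and the handling of trains
of different lengths is deliberately crude):

# "Bell-shaped" similarity between two spike sequences (lists of
# spike times). Similarity falls off as a Gaussian of the mismatch.
import math

def similarity(spikes_a, spikes_b, sigma=5.0):
    if len(spikes_a) != len(spikes_b):
        return 0.0                 # crude: different lengths never match
    d2 = sum((a - b) ** 2 for a, b in zip(spikes_a, spikes_b))
    return math.exp(-d2 / (2 * sigma ** 2))

print(similarity([1, 5, 9], [1, 5, 9]))    # 1.0 -- exactly equal (rare)
print(similarity([1, 5, 9], [2, 5, 10]))   # near 1 -- merely *similar*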

So where do hypotheses come from? They come from the assessment
of similarity that is embedded in this mechanism. Our senses
capture a new object; this object (after a lot of processing,
done by previous levels of that hierarchy, in which edges,
textures, and colors are resolved) generates a specific sequence
of spikes, and this sequence of spikes is what will be
compared with, say, a memory of *another* object.

(note: this "comparison" may be different from what we usually
suppose; it can be seen as the *recall* of similar object(s)
because of synchronism, like the resonance of a tuning fork
near something vibrating at a similar frequency)

If this comparison/recall happened before, we could have a
reinforcement of that similarity (which means, the group of
neurons that oscillated before will receive a "reinforcement",
via LTP or Hebbian mechanisms, to oscillate more easily given
similar future excitation conditions). This could be said to
be the root of the inductive process, something that happens
because of reinforcement.

Well, talk about hypothesis generation! That's a bunch of them
right there; most of my current activity is trying to find
supporting evidence for each step of these ideas. Obviously, I have
a lot of work ahead. That's the fun of it.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 16 Jul 1999 00:00:00 GMT
Message-ID: <378fb69f@news3.us.ibm.net>
References: <7mlv6f$80l@ux.cs.niu.edu> <378f3859@news3.us.ibm.net> <7mo3i4$bcd@ux.cs.niu.edu>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7mo3i4$bcd@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>>Neil Rickert wrote in message <7mlv6f$80l@ux.cs.niu.edu>...
>>>"Sergio Navega" <snavega@ibm.net> writes:
>
>>>>But that's not the problem we've got to solve. We've got to
>>>>design a mechanism that *evolves*, by itself, to be an almost
>>>>universal pattern matcher, given initial constraints. That's
>>>>appreciably (and importantly) different. More at the end of this post.
>
>>>Why does it have to be a universal pattern matcher?  What exactly is
>>>a pattern (apart from it being what we decide to call a pattern)?  Do
>>>you have a completely objective, non-parametric definition of
>>>"pattern"?
>
>>This last question is thought-provoking. I still don't have a formal
>>characterization of what I mean by "pattern", although I guess I'll find
>>one someday. But I have some ideas, and what follows is some of them.
>>Needless to say, all this is highly speculative.
>
>Your theory (quoted below) is an interesting one.  However, I am
>inclined to disagree with it.  In working out your theory, you seem
>to be taking 'similarity' as something we can innately judge.  In my
>view 'similarity' is a rather complex and sophisticated concept,
>certainly not something I would consider primitive or innate.
>

In my hypothesis, I need one innate mechanism for judging similarity,
in order to "bootstrap" the whole process. This is a requirement
for the development of the rest of the organism's cognition.
However, in principle, I don't resist the idea that
other criteria of similarity could be developed through the
interaction (and learning) activities of the organism.
If this is the correct interpretation of your idea, I'm inclined
to agree (although I still fail to see how it can be accomplished).

I cannot see any way other than having at least one initial
comparison criterion established. But the remainder of the learning
can be done with the same "techniques" or with other, evolved ones.

What I can try to justify today, however, is the same criterion
for all cognition. Notice that this does not imply any innate
knowledge, only an innate comparison and representation strategy.
I concede that the scalability of this level can be debated, but
I'm working with the idea that it is scalable.

>My more basic disagreement, however, is with this whole notion of a
>world full of patterns such that we only have to use some sort of
>induction to be able to discover them.  As an alternative, I suggest
>that we live in a world which is utterly devoid of patterns, except
>for those patterns that we have created ourselves.
>
>Take an example.  It is often said that there is a pattern in that
>the sun rises in the east, moves from east to west, and sets in the
>west.  But, since Copernicus, we have known that it is not a pattern
>in the world, but in the Ptolemaic way of looking at the world.
>
>I want to suggest that everything we call a pattern is a pattern in
>our representations of the world, rather than a pattern in the world
>which is independent of our representations.  When I earlier asked
>"Do you have completely objective non-parametric definition of
>pattern," I was trying to get at this.  My term "non-parametric" was
>perhaps ill chosen, but I was trying to ask for a notion of pattern
>independent of our representations.  I don't believe that we have
>such a notion.

I agree. In fact, my ideas are not incompatible with this way of
reasoning. Patterns are always personal interpretations of the
things that get into our senses. But the question is that there
must be something in common among all humans, other than the
fact that we live in the same world. I believe that this common
point is the initial level of our cognition, the level that
has the most "fixed" parts of our brain. This is the level determined
by evolution; it is the "coding strategy" used by our senses to
communicate "data" to the corresponding areas of the cortex.

This coding strategy (information representation, if you will) must
be the same among us (and I'd say even among most mammals).
From this common starting point (that is, from this comparable
coding strategy) I also propose a common "similarity judgment
criterion", at least at this initial level. From this level, we
start to assemble personal visions of the world, different in
content (because they are a function of personal experiences)
but often aligned enough to be translatable to public terms like
language.

>
>You make the claim that language is not innate.  I go further, and
>argue that none of our external* representation systems is innate.
>We, or our brains, must devise suitable representation systems to
>allow us to deal with the world.  There may be many different ways
>that the world could be represented.  But what will appear to be a
>pattern in one representation system may not appear to be a pattern
>in another representation system.  For example the epicycles were
>patterns in the Ptolemaic system, but they disappeared in the
>Copernican system, to be replaced by stronger and more useful
>patterns.
>
>   [note on 'external' above.  To survive, we require systems that
>   monitor blood oxygen content, blood sugar, temperature, and other
>   metabolic indicators.  I expect that we have a well developed
>   innately specified representation system for internal conditions
>   such as these.]
>

The patterns I discussed in my previous text are more elemental
than the ones you cite here. There is something that appears to
constantly deceive us all: the nature of our unconscious.
What we describe as patterns in our conscious mind may well be the
result of a huge number of interactions of those "elemental
patterns" (which are the object of my hypothesis). The conscious
patterns (such as those we can report through language, and even
language itself) are usually more complex than things like the
edges and textures of objects. The latter are more primitive and,
unfortunately, much more hidden.

>I see the role of pattern as one of economy.  If the brain mechanisms
>can devise a representation system such that strong patterns appear
>in the representations, then this patterning of the representations
>leads to great economy in processing (acting upon) those
>representations.
>

This is perhaps one of the most important points. There are two
concepts that appear to be involved here: that of economy of
processing and that of economy of representation.

I fully agree that our brain looks for ways of reducing the
processing complexity (in fact, this is exactly the opposite of
what AI usually does: the more data an AI system has, the slower
it becomes). Our brain is an impressive example of the
"more data, faster processing" aspect. This is amazing.

The second aspect is that of economy of representation. I have
no doubts that this economy is another significant and determinant
factor in the activity of our cognition. But there is another
important detail that we can't overlook.

Some time ago, I was asked to provide a difference
between our brain (seen as an information "compressor") and WinZip,
the data compression program. Besides the idea that our brain may
adjust its compression strategy "on the fly", I was tempted to
say that we also reduce patterns based on the interactive
activities that we develop in the world. In this situation, what
is important is *not only* the amount of compression obtained, but
also the relevance of this compression to the interactive feedback
that the organism obtains from its world. This can produce "not
so compressed" but meaningful patterns, if what matters is
the "cost/benefit" of the operation.

>It is in this sense that I claim that patterns are our creations
>rather than something that occurs naturally.

I definitely agree with this. Perhaps I haven't made this position
clear previously.

> And for the same reason
>I consider inductionism (the search for naturally occurring patterns
>in the world) to be a wild goose chase.  I see learning as a highly
>constructive process (consistent with Piaget's view), where the
>neural system tries to construct suitable ways of representing the
>world which are economical (hence generate patterned
>representations), and which are effective (put food in our
>stomachs).
>

I have nothing against this either. I must say that every time
I proposed induction, what I was trying to say is that it should
be applied to the representations that the entity builds internally,
as the result of its attempts to "fit" its internal model to the
evidence it captures through its senses. Any inductive generalization
that does not follow what the senses inform should be discarded (or
at least put under the "dubious" category).

However, it is necessary to see the process I'm proposing as
something that happens at a very elemental level. Very often these
ideas don't scale to high-level concepts, not because the ideas
aren't valid, but because what we "see" at this high level are only
small segments of greater structures that remain hidden at lower
levels. Any manipulation of these high-level structures should,
one way or another, also carry the supporting structures.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 16 Jul 1999 00:00:00 GMT
Message-ID: <378f8123@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7mndbv$am@edrn.newsguy.com>...
>Sergio says...
>
>>>Well, I agree that the engineering problem of how to control
>>>the threads of multiple simultaneous inductions is interesting,
>>>but I don't think it addresses the question of where concepts
>>>come from, in the first place. My inclination is to think that
>>>the structure of our brains is hardwired to notice certain
>>>things and not others---I just don't think it is possible to
>>>build a "pattern noticer" that will notice an *arbitrary*
>>>pattern.
>>
>>
>>You're getting close to the view I'll espouse below. I also
>>don't think that we can design a fully built, fixed pattern
>>noticer, designed from first principles, applicable to any
>>circumstance.
>>
>>But that's not the problem we've got to solve. We've got to
>>design a mechanism that *evolves*, by itself, to be an almost
>>universal pattern matcher, given initial constraints. That's
>>appreciably (and importantly) different. More at the end of this post.
>
>Evolution as a mechanism has the same conceptual problems
>that I was pointing out were inherent in induction. Evolution
>works by (1) mutation (and recombination of genes through
>sex) creating a new pattern, and (2) natural selection
>weeding out those patterns that don't work. It's a lot
>like induction, where we hypothesize patterns, and then
>reject those that fail to agree with our observations.
>But where do the patterns produced by evolution come from?
>Ultimately, evolution is limited to making small, random
>changes to patterns that already exist. A complex pattern
>cannot spring fully-blown from nowhere (well, the probability
>of that happening is negligibly small).
>

The probability may be small, but it is not zero. Take our eyes,
for instance: a pretty complex device, entirely produced by
evolutionary means from a very simple starting point. But when
I used the word "evolve" in my text above, I wasn't referring
to Darwinian evolution: I was trying to see the problem from
a wider point of view, that of self-organizing systems.

But I did perceive one interesting point in your text: what
could be the more "fundamental" principle behind the selection
of the best candidates among a bunch of randomly suggested ones.

I propose that a possible candidate for this fundamental
principle is that of information economy. This principle can
be found under a lot of different names such as MDL (Minimum
Description Length), Occam's razor, maximum information
compression, cognitive economy, etc. This is thoroughly
explored by Gerry Wolff:
http://saturn.sees.bangor.ac.uk/~gerry/sp_summary.html

>>>
>>I agree that only a few "gifted" people manage to get a good
>>knowledge of advanced mathematics. But this *is not* what I was
>>referring to as symbolic generalizations. Contrast the usual
>>textbook mathematical proof with, say, natural language. Natural
>>language is full of recursive expressions. People generate language
>>effortlessly, with complex noun phrases, embedded clauses and
>>more. Any four-year-old child understands complex phrases like
>>"I told you not to play with Tim's puppy before lunch".
>>To understand that, the child must have a lot of generic
>>symbolic abilities, far ahead of most AI programs, because this
>>may be the first time the child has heard such a phrase. This is
>>where I kiss Chomsky's hands.
>
>I don't agree that there *is* such a thing as "generic
>symbolic abilities". It seems to me that learning language
>would have to work like any other act of induction: the
>child can only "notice" grammatical patterns that the
>child is capable of generating, in the first place.
>

Well, here I can say that I strongly disagree. Prior to
generation, we must have recognition. Lots of studies of
children's learning point to the development of perceptual
mechanisms months before generation.

As a recent example, in January of this year a report in
Science magazine showed young babies being trained on patterns of
artificial utterances of the form "wo fe fe" and then being able
to recognize similarity in the pattern "de ko ko" while noticing
differences in patterns such as "li na li" (the recognized
patterns were of the form ABB; the ones in which the babies
perceived a difference were of the form ABA). This shows that
there is a pattern recognition mechanism able to detect some
kinds of regularities without the prior ability to generate them.

(this report, in fact, started an interesting "war" between two
opposing visions of how this process works, one side hypothesizing
"algebraic symbolic mechanisms" (Marcus and Pinker) and the other
explaining it with connectionist methods (Elman and McClelland);
I can report more about this interesting debate in another
message).
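
The regularity itself is easy to state computationally. Here is a
toy sketch (Python) that reduces an utterance to its identity
pattern, which is all the ABB/ABA distinction requires; no claim
that babies do it this way:

# Map each distinct syllable to A, B, C in order of appearance.
def identity_pattern(utterance):
    symbols = {}
    return "".join(symbols.setdefault(s, "ABC"[len(symbols)])
                   for s in utterance.split())

print(identity_pattern("wo fe fe"))  # ABB (training pattern)
print(identity_pattern("de ko ko"))  # ABB (recognized as similar)
print(identity_pattern("li na li"))  # ABA (noticed as different)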

>>
>>What happens, then, if this same "similarity checking"
>>(which is, in other words, temporal coincidence detectors)
>>starts to work over the previously detected similarities: we
>>will have the beginning of a *hierarchy* of similarities that
>>will produce dozens, or hundreds, or even thousands of "things
>>similar to other things". These are the initial levels of
>>"hypotheses" that I'm talking about: those that repeat
>>with time, will get reinforced, those that occurred a few
>>times only are destroyed. Reminds us of induction, no?
>
>I'm a little uncomfortable with this kind of discussion
>without having some mathematics that I can sink my teeth
>into. It's plausible that the "pattern recognizer" for
>edges might be applied to recognize higher-order patterns,
>but I just don't see how it would work.
>

I'm not yet prepared to put mathematical formulations into
the game; I'm still in the "fishing" phase. Some time ago I wrote
a small program to detect patterns in symbolic streams, and I
wrote up most of the "rationale" behind this program in a short
note:

http://www.intelliwise.com/reports/paper2.htm

The process is computationally intensive, but the idea behind
it is to develop the maximum compression of the pattern
and keep the "compression paths" that are confirmed by other
experiences and by interaction with the world. A lot of cognitively
interesting things (like rule formation, analogical reuse
of concepts, etc.) become clear with this approach.
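
I can't reproduce the program here, but the flavor of the
compression step is roughly this (Python; a generic
digram-substitution sketch in the MDL spirit, *not* the actual
program):

# Repeatedly replace the most frequent adjacent pair with a new
# symbol; the accumulated rules are the learned "compression paths".
from collections import Counter

def compress(seq, min_count=2):
    rules, next_id = {}, 0
    while len(seq) > 1:
        pair, count = Counter(zip(seq, seq[1:])).most_common(1)[0]
        if count < min_count:
            break
        new = "R%d" % next_id; next_id += 1
        rules[new] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

print(compress(list("abcabcabcabc")))  # a short sequence plus rules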

>>It is, obviously, very far from high-level propositions, such
>>as "apples are red". But *this level* is achievable
>>once you traverse the "tree" of levels that is below it: all
>>the hundreds or thousands, or hundreds of thousands, of learned
>>similarities that categorize "redness" and "appleness", which
>>are things that were learned, which stem from another bunch of
>>learned things, until one finally reaches the level of things
>>that were constructed from *elemental* similarities, those
>>identified by the very nature of our senses. That's it.
>
>What I would like to see is a more careful characterization
>of what sorts of patterns we can learn easily and what
>sorts we can't. I return to the example of prime numbers.
>Imagine a language in which a sentence is only grammatical
>if it has a prime number of words in it. I doubt very
>seriously if any but the most brilliant children would
>ever catch on to such a grammar rule. The sorts of rules
>that kids are capable of learning from mere exposure is
>limited. I don't think that our language abilities make
>use of completely *generic* pattern recognition abilities.
>

My hypothesis is exactly the opposite. I think that children
can learn impressively sophisticated patterns, detecting
an extraordinary number of regularities and rules.

Unfortunately, this process appears to happen unconsciously,
which means we cannot "peek" into it through introspection.
Another difficulty is that the patterns that children appear
to perceive *are not* restricted to the surface aspects of
language (as Chomsky insisted): they involve *deep* semantic
relations, often things that go all the way down to
sensorimotor associations. But I think cognitive science is
advancing on the behavioral aspects of this process, and
together with reasonable findings from neuroscience, we
may soon find a way to formalize the mechanism.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 17 Jul 1999 00:00:00 GMT
Message-ID: <3790a1c7@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mog98$1grh@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7mog98$1grh@edrn.newsguy.com>...
>Sergio says...
>>>...I return to the example of prime numbers.
>>>Imagine a language in which a sentence is only grammatical
>>>if it has a prime number of words in it. I doubt very
>>>seriously if any but the most brilliant children would
>>>ever catch on to such a grammar rule. The sorts of rules
>>>that kids are capable of learning from mere exposure is
>>>limited. I don't think that our language abilities make
>>>use of completely *generic* pattern recognition abilities.
>>>
>>
>>My hypothesis is exactly the opposite. I think that children
>>can learn impressively sophisticated patterns, detecting
>>an extraordinary number of regularities and rules.
>
>You didn't answer my question, though. Do you believe
>that children exposed to language in which only a prime
>number of words is "grammatical" would ever catch on?
>

You're right! It slipped through my fingers. Let me try again.

This is a *very* tough question. My first impression is that
it may eventually be possible, although the time needed to learn
such a thing may be greater than just childhood, extending
into adulthood. We will never know. But I have no background to
support my impression. So, for the sake of further discussion,
let's assume that children *could not* acquire such a grammar.

If we assume this, we should be cautious about inferring that
"something else" must be present in children in order for them to
acquire natural language, considering that the grammar
of natural languages could be said to be even more difficult
than that of prime numbers.

The main question is that, while some scientists say that
our brains have been evolutionarily adapted to handle language,
I prefer to side with those who say that *our
languages* were the result of the kind of parsing and
recognition abilities that our brains have. This is
a very different position.

Under this vision, language should be considered the result
of a dynamic process, evolved through the interaction of humans
in a complex social environment. This is the idea behind Jeff
Elman's "Language as a dynamical system", also subscribed to
by some researchers who study artificial communities of
intelligent agents (Luc Steels). Language "appeared" naturally
among these agents.

Reversing the problem (conceiving a language and asking if
the brain could "follow it") may, then, be an inappropriate
way to see the question of emergence of language in children.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 19 Jul 1999 00:00:00 GMT
Message-ID: <37936631@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mog98$1grh@edrn.newsguy.com> <3790a1c7@news3.us.ibm.net> <7mvc3f$27d4@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7mvc3f$27d4@edrn.newsguy.com>...
>Sergio says...
>>
>>Daryl McCullough wrote
>>>You didn't answer my question, though. Do you believe
>>>that children exposed to language in which only a prime
>>>number of words is "grammatical" would ever catch on?
>
>>You're right! It slipped through my fingers. Let me try again.
>>
>>This is a *very* tough question. My first impression is that
>>it may eventually be possible, although the time needed to learn
>>such a thing may be greater than just childhood, extending
>>into adulthood. We will never know. But I have no background to
>>support my impression. So, for the sake of further discussion,
>>let's assume that children *could not* acquire such a grammar.
>>
>>If we assume this, we should be cautious about inferring that
>>"something else" must be present in children in order for them to
>>acquire natural language, considering that the grammar
>>of natural languages could be said to be even more difficult
>>than that of prime numbers.
>>
>>The main question is that, while some scientists say that
>>our brains have been evolutionarily adapted to handle language,
>>I prefer to side with those who say that *our
>>languages* were the result of the kind of parsing and
>>recognition abilities that our brains have. This is
>>a very different position.
>
>I agree, they are different, although it is a bit
>difficult to come up with empirical evidence that
>teases out the difference.
>
>My point was not that humans have a built-in ability
>to do language, but that our built-in abilities are
>*not* perfectly general---some things come easily,
>and others do not.
>

The question is really convoluted. One can find several
situations which suggest that our cognition is specialized in
certain ways, and this seems to be the dominant position among
cognitive scientists. My position sides with the minority.

One of the things that raises my suspicion is the aspect of
plasticity. Brains are impressively plastic; they adapt to
handle the kinds of impulses they receive. This means that baby
brains can "mold" themselves to the "complexity" of
the things they receive. If a baby's brain received only phrases
with a prime number of words in them, I wouldn't be surprised
if it "discovered" this by itself. Obviously, this would
be limited to a certain "depth" (say, the first 5 or 10 primes).

(important note: it is not necessary to "know" what primes are
in order to detect the "clustering" of grammatical sentences
around these numbers; a child may discover this grammaticality
*without* "knowing" what primes are; in fact, a good deal of
the grammatical knowledge of common laymen is stored in that
way: they perform correctly but they can't "explain" why).
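
A toy sketch of that kind of competence (Python; purely
illustrative):

# "Performing correctly without being able to explain why": the
# learner remembers which sentence lengths were heard as
# grammatical; it never represents the concept "prime".
class LengthLearner:
    def __init__(self):
        self.good = set()
    def hear(self, sentence):       # exposure to grammatical sentences
        self.good.add(len(sentence.split()))
    def accepts(self, sentence):    # competence...
        return len(sentence.split()) in self.good
    def explain(self):              # ...without the formal rule
        return "no idea -- these lengths just sound right"

# As Daryl argues below, such a learner cannot generalize to an
# unseen prime (say, 13 words) -- which is exactly the point in
# dispute here.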

The burden of showing evidence is on the side of those who
propose specific mechanisms in our brain that are genetically
determined. An adult does have specialized modules, but these
should be seen as the result of self-organizational processes
of the brain, driven by the experiential and interactive activities
of the organism.

>>Under this vision, language should be considered the result
>>of a dynamic process, evolved through the interaction of humans
>>in a complex social environment. This is the idea behind Jeff
>>Elman's "Language as a dynamical system", also subscribed to
>>by some researchers who study artificial communities of
>>intelligent agents (Luc Steels). Language "appeared" naturally
>>among these agents.
>>
>>Reversing the problem (conceiving a language and asking if
>>the brain could "follow it") may, then, be an inappropriate
>>way to see the question of emergence of language in children.
>
>It is appropriate in that it rules out some possibilities.
>It rules out the possibility that humans have completely
>generic pattern recognition abilities.
>

What seems to be happening today is the opposite: theories
raised by those who support nativism and domain-specific
mechanisms are constantly being shown to be fragile when seen
in light of adequate evidence.

While anti-nativists don't have much evidence suggesting
generic, non-native modules, they have plenty of it
suggesting that there aren't innate modules. This is a
more comfortable position from which to falsify nativist
theories. It seems to be a better scientific position, until
the matter finally settles down.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 20 Jul 1999 00:00:00 GMT
Message-ID: <379488b7@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mog98$1grh@edrn.newsguy.com> <3790a1c7@news3.us.ibm.net> <7mvc3f$27d4@edrn.newsguy.com> <37936631@news3.us.ibm.net> <7n04ce$rog@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7n04ce$rog@edrn.newsguy.com>...
>Sergio says...
>>One of the things that raises my suspicion is the aspect of
>>plasticity. Brains are impressively plastic; they adapt to
>>handle the kinds of impulses they receive. This means that baby
>>brains can "mold" themselves to the "complexity" of
>>the things they receive. If a baby's brain received only phrases
>>with a prime number of words in them, I wouldn't be surprised
>>if it "discovered" this by itself. Obviously, this would
>>be limited to a certain "depth" (say, the first 5 or 10 primes).
>
>If the baby cannot learn to generate primes that he has never
>encountered before, then he hasn't learned the pattern.
>
>>(important note: it is not necessary to "know" what primes are
>>in order to detect the "clustering" of grammatical sentences
>>around these numbers;
>
>I think that's wrong. If a child only learns that
>2, 3, 5, 7, and 11 are "good" and 1, 4, 6, 8, 9, 10 are
>"bad", that *doesn't* mean that he has learned the
>rule "Primes good, composites bad".
>

I disagree. The difference here is knowing how to recognize based
on experience, as compared to knowing the formal structure of
the grammar. One can be able to generate grammatical structures
purely because of recognition abilities, without formal knowledge.
That's the message: recognition *precedes* generation.

Almost all of our cognitive abilities are based on this
characteristic: we know how to do things, but we don't know how
it works. The prime number grammar that you propose is equivalent
to our *formal* natural language grammar: a child may know how to
generate grammatical phrases *without* having knowledge of
nouns, adjectives, adverbials, etc. But they can *recognize*
nouns in the middle of adjectives.

That's a big thing: competence without formal knowledge.
(as an aside: I think mathematical performance, even of great
theoreticians, also works that way, with recognition of relevant
situations being *more important* than formal knowledge, as
one may often have difficulty explaining the line of
thought; but I don't think you'll accept this freely ;-)

Now take any child who has been educated with the "prime number"
grammar and *explain* to her what a prime number is (the way
children are told about noun phrases, etc., during high school),
and you would have the desired level: competence in
recognition/generation *plus* formal knowledge of the structure
of the grammar. So in that regard I don't see any difference
between prime number grammars and natural grammars.

>>a child may discover this grammaticality
>>*without* "knowing" what primes are; in fact, a good deal of
>>the grammatical knowledge of common laymen is stored in that
>>way: they perform correctly but they can't "explain" why).
>
>Well, that's exactly what I'm talking about. *Some* tasks
>can be performed by people without knowing why, and some
>can't. Nobody can generate prime numbers without knowing
>the rule for prime numbers, but people can generate grammatically
>correct sentences without knowing the rules of grammar.
>

What you say is suggestive and is certainly one of the points
that led Chomsky, Fodor and others to consider innate knowledge
of grammar. The big issue is that this seems to be so only if we
analyze the *surface* level of languages. But this is
misleading, because syntax alone is not enough to present
the huge number of regularities that stand *below* the surface
(note: prime numbers don't have this "deep" level to help).

I guess the error Chomsky, Fodor and other innatists commit is to
start analyzing language by its syntactical appearance. Language
acquisition by children does *not* start at this level. Children
perceive grammatical categories well before they utter their
first phrase (the difference between nouns and verbs is so great
that the human brain appears to process them in different locations,
according to some recent research).

When mothers talk to their babies, they are feeding in a lot of
knowledge that is not easily translated into words: things like
pointing to toys, moving them in front of the baby's eyes, hiding
them, giving them to somebody else. These are notions that will
help the child in its future task of language acquisition.

>>The burden of showing evidence is on the side of those who
>>propose specific mechanisms in our brain that are genetically
>>determined.
>
>I don't think that's true. In the absence of evidence,
>we just don't *know* what (if anything) is innate, and
>what is not.
>

This is really debatable. I see it this way: "they" say that
there are innate organs specialized in the processing of
domain-specific activities. I'm skeptical. The burden of showing
evidence that this is so appears to be in their hands.

>>An adult does have specialized modules, but these
>>should be seen as the result of self-organizational processes
>>of the brain, driven by the experiential and interactive activities
>>of the organism.
>
>I'm not sure if I understand the distinction you are making.
>Sure, the body puts itself through self-organization of some
>sort. But in normal-functioning humans, we end up with two
>arms, two legs, two eyes, etc. There are limits to variability
>that are imposed by genetics and physics. I think the same applies
>to the brain.

This appears to be mostly a question of degree. Brains and senses
do have lots of innate things (we blink automatically when an
object is thrown toward our eyes; this is an innate reaction). But
when one studies the plasticity of the brain, one sees that few
things are exactly predetermined.

During early childhood (1-5 years old), synaptic plasticity
is at its peak. It has been shown that this initial plasticity
is a function of the "enrichment" of the environment in which
the organism is immersed. The richer the environment, the more
synaptic connections one ends up with. This *profoundly* affects
the future performance of the adult.

This is consistent with what happens with late-blind people: their
visual cortex (no longer used for vision) starts to be taken over
by expanding somatosensory (touch, mostly the fingers, because of
Braille reading) and auditory processing. This again demonstrates
plasticity as a function of environmental conditions.

>
>>What seems to be happening today is the opposite: theories
>>raised by those who support nativism and domain-specific
>>mechanisms are constantly being shown to be fragile when seen
>>in light of adequate evidence.
>>
>>While anti-nativists don't have much evidence suggesting
>>generic, non-native modules, they have plenty of it
>>suggesting that there aren't innate modules.
>
>Such as?

a) Innatists say that one couldn't learn language without
some kind of innate predisposition, because there is no
computational method capable of doing this. This is false:
there are connectionist experiments that achieve phonological,
morphological and grammatical learning with no prior knowledge
(Elman, Christiansen, Seidenberg, McClelland, Plunkett, and others).

b) Innatists say that only humans can acquire and use structured
language constructs, so only humans have "the language module".
Experiments with bonobo apes show that they are able, up to a
certain level, to acquire and use grammar.

c) Broca's and Wernicke's areas are examples of what innatists
say could be considered "language organs". Damage to these areas
impairs language-related abilities. But children who underwent left
hemispherectomy (removal of one hemisphere, because of epileptic
seizures) were able to develop language in the right (the remaining)
hemisphere, showing that these "modules" are just self-organizing
constructs, driven by experience.

d) If grammar were "inscribed" in DNA to be transferred by genes,
we would surely have cases of people with normal cognition *except*
for language (this would be a result of random mutations). What we
have today are examples of whole families with language disabilities
but *also* with other cognitive disabilities. Language disability
alone has never been found. (note: this is not exactly evidence;
it only strengthens the suspicion that no such thing as innate
modules exists).

I have more on this, but mostly suggestions that tip the balance
toward the non-nativist position.

>
>>This is a more comfortable position from which to falsify
>>nativist theories.
>>It seems to be a better scientific position, until the
>>matter finally settles down.
>
>I disagree completely. A completely plastic model of the brain would
>imply that if you raise a dog like a human child, then he will
>learn to behave just like a human (well, a human without opposable
>thumbs, I guess). The evidence that there is a genetic component
>to behavior in the animal kingdom is just overwhelming. If you
>are proposing that humans alone are exempt from genetic influence,
>I think that you are the one with a huge burden of proof.
>

Oh, I didn't say that we don't have innate things, nor that a
lot of our behavior isn't explainable through genetic influences.
Nature/nurture is still a hot area of discussion even today.

What I'm saying is that the more "high-level" the ability, the
less innate it appears to be. Sexual preference, aggressiveness,
personal habits, even intelligence (to a degree) are all highly
influenced by genetic traits. But language is very, very recent in
the history of animals on earth (about 30K years). It could not have
profoundly affected our DNA in such a short time.

As for dogs' brains: even with plasticity, a dog trained as
a child would not develop human-like intelligence, because it
does not have the necessary "volume of cortex" in relation to the
remainder of the brain. Plasticity is not sufficient, it is just
necessary.

Now if you know about Kanzi, the bonobo ape educated since
infancy by Sue Savage-Rumbaugh, you would see that he is able not
only to "understand" symbol-object relations, but also to produce
phrases (by pointing to a 256-key board of symbols and
connectors) that are completely new, demonstrating productive
abilities. Kanzi strongly suggests that a little more plastic
cortex would be enough to perform like humans.

>I don't know exactly what you mean by "modules". Do you just mean
>different parts of the brain that are used for different purposes?
>If so, are you really making the claim that *all* of these differences
>between different parts of the brain are due to "self-organization"?
>*None* of it is genetic? That seems awfully bizarre to me.
>

No, I'm not so radical!
The occipital lobe (close to the nape of the neck) is surely
innately determined to process vision. The temporal lobe carries
the area that processes auditory signals, and so on. I think that
the levels close to the sensory inputs are strongly innately
determined. But as one moves farther from the sensory inputs (that
is, into areas that should be occupied with "high-level" constructs),
the effect of innateness is greatly reduced and the environment
rules. But I must grant that this is still just a hypothesis.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 20 Jul 1999 00:00:00 GMT
Message-ID: <3794f187@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mog98$1grh@edrn.newsguy.com> <3790a1c7@news3.us.ibm.net> <7mvc3f$27d4@edrn.newsguy.com> <37936631@news3.us.ibm.net> <7n04ce$rog@edrn.newsguy.com> <379488b7@news3.us.ibm.net> <7n2785$14v6@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 20 Jul 1999 22:00:39 GMT, 166.72.29.111
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7n2785$14v6@edrn.newsguy.com>...
>Sergio says...
>
>>>If the baby cannot learn to generate primes that he has never
>>>encountered before, then he hasn't learned the pattern.
>>>
>>>>(important note: it is not necessary to "know" what primes are
>>>>in order to detect the "clustering" of grammatical sentences
>>>>around these numbers;
>>>
>>>I think that's wrong. If a child only learns that
>>>2, 3, 5, 7, and 11 are "good" and 1, 4, 6, 8, 9, 10 are
>>>"bad", that *doesn't* mean that he has learned the
>>>rule "Primes good, composites bad".
>
>>I disagree. The difference here is knowing how to recognize based
>>on experience, as compared to knowing the formal structure of
>>the grammar.
>
>Somehow, my point is not getting through.
>I agree that a child might learn that the
>numbers 2, 3, 5, 7 and 11 are "grammatical".
>But that doesn't mean that the child has
>recognized the pattern that prime numbers
>are grammatical---it could instead be that
>the child has just memorized those specific
>numbers. To show that he has grasped the
>pattern, the child should be able to go
>on and guess that 13, 17, 19 and 23 are
>grammatical.
>
>With natural languages, that does happen.
>When a child learns the grammar of a language,
>he becomes able to generate *novel* grammatically
>correct sentences. A person doesn't know the
>grammar of a language if they only know 5
>correct sentences.
>

I understand what you're proposing, but it is not
a valid comparison. Think about everything that is
"behind" the concept of prime numbers. You must
know what counting, adding, multiplication and
division are; you must know what integer division
with no remainder is. These aren't concepts with direct,
real-world (sensory) examples from which the child
could grow its perception. There is nothing in daily
experience that gives children clues about these
concepts. In other words, there is nothing that
*builds a path* from sensorimotor experiences to
prime numbers.

On the other hand, languages are symbolic structures
that are built *over* daily concepts. Nouns are solid
things, or names of persons, or things that you can
touch and point at and see. Adjectives are things that
add specifics to nouns. Verbs express "actions", which
are things related to movement, power, transference, etc.
When a child generalizes over language, he/she is generalizing
over all these deep notions that are very simple and directly
related to the senses and muscles. These notions repeat, and
repeat constantly.

To make the example a bit fairer, we could propose
a grammar in which the grammatical phrases are exactly
those with an odd number of words. This can easily be
correlated with sensory material.
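
Here's a toy sketch in Python (my own illustration; the function
names and examples are all invented) of why the two grammars
differ: the "odd number of words" property is computable directly
from the perceptible surface, while the "prime" property needs
the hidden machinery of division and remainders.

def grammatical(phrase):
    # The deciding property is directly observable:
    # count the words and check the parity.
    return len(phrase.split()) % 2 == 1

def prime_grammatical(n):
    # By contrast, the "prime" grammar needs concepts
    # (division, remainder) with no sensory counterpart.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(grammatical("the dog barks"))          # True  (3 words)
print(grammatical("the dog barks loudly"))   # False (4 words)
print(prime_grammatical(23))                 # True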

>>One can be able to generate grammatical structures
>>only because of recognition abilities, without formal knowledge.
>>That's the message: recognition *precedes* generation.
>
>I'm talking about the *mechanism* of recognition. How
>is recognition possible without a pattern against which
>to compare each instance, and a measure of closeness?
>The pattern and the measure of closeness must already
>exist.
>

As I said in a previous post, the initial level of
closeness is innately present. It is as if our brain
could recognize digits and detect the similarity of streams
of numbers. Then, what is left is the recognition that
this pattern:

21212121212121

is "similar" to this one:

68686868686868

This is an easy task that, as I've said earlier, has
been demonstrated several times with children; the most recent
example I cited was that paper (Gary Marcus et al.,
"Rule learning in seven-month-old infants", Science
283, 77-80).
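
A toy sketch (Python, my own illustration) of what that
similarity judgement might amount to: reduce each stream to the
*structure* of its repetitions, ignoring the identity of the
symbols, and compare the structures.

def structure(seq):
    # Replace each distinct symbol by the order of its first
    # appearance, so only the repetition pattern survives.
    first_seen = {}
    return tuple(first_seen.setdefault(s, len(first_seen)) for s in seq)

print(structure("21212121212121") == structure("68686868686868"))  # True
print(structure("21212121212121") == structure("68866868686868"))  # False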

>>Almost all of our cognitive abilities are based on this
>>characteristic: we know how to do things but we don't know how
>>they work. The prime number grammar that you propose is equivalent
>>to our *formal* natural language grammar: a child may know how to
>>generate grammatical phrases *without* having knowledge of
>>nouns, adjectives, adverbials, etc. But they can *recognize*
>>nouns in the middle of adjectives.
>
>My point is that if a child is shown the prime numbers
>2,3,5,7,13 and told that these numbers are "good", and
>then the child is shown one more number, 23, and is
>asked whether that is "good", also, they won't be able
>to do it. That's different from the case with grammars
>of natural languages.
>

Daryl, try to put natural language into computers without
bodies (which means, without senses). You'll see that
computers will be subject to *the same* incapacity
for understanding and generalizing as the children with
prime numbers. In order to understand natural language,
it is necessary to use deep knowledge about the world
(this is "THE" problem of AI so far: the lack of attention
to the importance of the senses in human cognition).

In order to "see" what is a prime number, it is necessary
to know "deep" knowledge about numbers, division,
remainders, etc. Without that knowledge, it is very difficult
for any "system" (not only children, but also computers) to
grasp the main idea behind that "grammar".

>>That's a big thing: competence without formal knowledge.
>
>I think this discussion is drifting off in a direction
>that is irrelevant to my point. What I was talking about
>didn't have anything to do with formal knowledge. I
>was only talking about competence. Children cannot
>learn to be competent about recognizing prime numbers
>that they've never seen before. They can learn to
>be competent about recognizing grammatically correct
>sentences they've never seen before.
>

They do that because they perceive invariant things in
the world. They perceive what repeats over and over when
their mom says something about their teddy bear. Well before
uttering a single phrase, the baby is an expert in such
notions of its world. If it were possible to imagine
a world where the important supporting concepts of
prime numbers were apparent to the child, I have no
doubt that the child would notice the patterns and generate
extensions to the grammar.

>
>>Now take any child who has been educated with the "prime number"
>>grammar and *explain* to her what a prime number is (the way
>>children are told about noun phrases, etc, during high-school)
>>and you would have the desired level: competence in
>>recognition/generation *plus* formal knowledge of the structure
>>of the grammar. So in that regard I don't see any difference
>>between prime number grammars and natural grammars.
>
>The difference is that children can become competent
>in natural language *without* formal rules, but they
>cannot become competent at recognizing prime numbers.
>

Prime numbers don't occur in nature in a perceptible manner.
Want to see an even harder problem? Perceiving the movement
of a predator *behind* the leaves of trees. This is a *very
difficult* problem in terms of visual processing, in my
opinion much more difficult than prime numbers (zillions of
changing bit patterns which are processed very fast by
our vision). We're capable of successful performance in this
regard because of two things: innate predisposition (motion
detectors in our visual apparatus) and high-level visual
processing, the result of learning.
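
The crudest possible sketch (Python, my own invention; the
"image" and the threshold are arbitrary) of what such an innate
motion detector could look like at its very lowest level:
threshold the frame-to-frame difference of an image.

import numpy as np

rng = np.random.default_rng(1)
frame1 = rng.random((8, 8))        # the leafy background
frame2 = frame1.copy()
frame2[3:5, 2:6] += 0.5            # the "predator" changes some pixels

motion = np.abs(frame2 - frame1) > 0.2
print(np.argwhere(motion))         # only the changed region lights up

Real motion detection in the retina and cortex is of course
vastly more sophisticated; this only shows why a temporal
difference is such a cheap, plausible innate primitive.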

>>I guess the error Chomsky, Fodor and other innatists commit is to
>>start analyzing language from its syntactic surface. Language
>>acquisition by children does *not* start at this level. Children
>>perceive grammatical categories well before they utter their
>>first phrase (the difference between nouns and verbs is so great
>>that the human brain appears to process them in different locations,
>>according to some recent research).
>
>That sounds like evidence in favor of Chomsky, to me.
>

No, quite the contrary: this is related to the fact that the
sensorimotor activities of our brain *push* verbs and nouns to
different parts of the brain. It is the *result* of that
self-organization.

>>>>The burden of showing evidence is on the side of those who
>>>>propose specific mechanisms in our brain that are genetically
>>>>determined.
>>>
>>>I don't think that's true. In the absence of evidence,
>>>we just don't *know* what (if anything) is innate, and
>>>what is not.
>>>
>>
>>This is really debatable. I see it this way: "they" say that
>>there are innate organs specialized in the processing of domain-
>>specific activities. I'm skeptical.
>
>No, you're not skeptical. You have stated that you believe
>the opposite. If you were skeptical, you wouldn't be taking
>sides.
>

That's true, I'm siding with the non-nativists. But this is good
news for connectionists and bad news for symbolicists. I'm more
symbolicist than connectionist, so I should do the opposite:
I should be a defender of innate knowledge. But I can't find
anything really convincing.

>
>>During initial childhood (1 - 5 years old), synaptic plasticity
>>is at its peak. It has been shown that this initial plasticity
>>is a function of the "enrichment" of the environment into which
>>the organism is immersed. The richer the environment, the more
>>synaptic connections one ends up with. This *profoundly* affects
>>the future performance of the adult.
>
>That's interesting, but I don't see what is supposed to follow
>from it. I thought that the issue was whether there were innate
>functions in the brain. I don't see how this evidence of plasticity
>bears on that. What might be interesting evidence (although I
>can't imagine an ethical way to do this experiment) is if you
>replaced a human brain by, say, a dog's brain. If the human
>learned to act human in spite of having a dog's brain, that
>would indeed be good evidence that human intelligence is
>learned, and not innate.
>

This would not work, because dogs' brains lack *processing power*.
The size of the human cortex, in relation to the rest of the
brain, is the main factor.

>>This is consistent with what happens with people blinded late in
>>life: their visual cortex (no longer used by vision) starts to be
>>taken over by expanding somatosensory (touch, mostly the fingers,
>>because of Braille reading) and auditory processing. This again
>>demonstrates plasticity as a function of environmental conditions.
>
>That's interesting, but I just don't see what it is supposed
>to tell us about innateness. To call something the "visual
>cortex" is to assume that it is for the purpose of processing
>vision. Why would there *be* a distinct area for visual processing,
>if the brain were completely plastic? Why would there be a cortex,
>an occipital lobe, a cerebrum, a cerebellum?
>

These are obviously innate things. The occipital lobe is innate.
Our vision works by integrating areas specialized in color,
movement and texture. These are innate things! What happens
*from this level up* is the important thing. For example, 3D
object reconstruction from 2D drawings does not appear to be
innate, but learned. Think about the complexity of reconstructing
a 3D view from a photograph. It demands processing power (absent
in other mammals) that is comparable to things like language.

>I just don't see how your plasticity hypothesis makes any sense
>whatsoever. And, once again, you *aren't* being a skeptic, you
>are being a partisan. So am I.
>
>>>>While anti-nativists don't have much evidence suggesting
>>>>generic, non-native modules, they have plenty of it
>>>>suggesting that there aren't innate modules.
>>>
>>>Such as?
>>
>>a) Innatists say that one couldn't learn language without
>>some kind of innate predisposition, because there is no
>>computational method capable of doing this. This is false:
>>there are connectionist experiments that achieve phonological,
>>morphological and grammatical learning with no prior knowledge
>>(Elman, Christiansen, Seidenberg, McClelland, Plunkett, and others).
>
>How is that evidence that there are no innate modules?
>

Those experiments are evidence *suggesting* that there aren't
innate modules. In this particular case, we're falsifying the
innatists' hypothesis that no computational method could learn
language without prior knowledge.
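
To make the idea concrete, here is a toy sketch (Python, my own
invention: a toy vocabulary, arbitrary sizes and rates, and I
train only the output layer to keep it short, whereas Elman's
networks backpropagate through the recurrent weights too) of an
Elman-style simple recurrent network learning to predict the next
symbol of a stream with no prior knowledge of its "grammar":

import numpy as np

rng = np.random.default_rng(0)
symbols = ["ba", "di", "gu"]           # invented toy vocabulary
seq = [0, 1, 2] * 200                  # a toy "grammatical" stream
V, H = len(symbols), 8                 # vocabulary and hidden sizes

Wxh = rng.normal(0, 0.5, (H, V))       # input -> hidden weights
Whh = rng.normal(0, 0.5, (H, H))       # context -> hidden weights
Why = np.zeros((V, H))                 # hidden -> output (readout)
lr = 0.1

def step(x_idx, h):
    # One Elman step: new hidden state from input plus context.
    return np.tanh(Wxh @ np.eye(V)[x_idx] + Whh @ h)

for epoch in range(5):
    h = np.zeros(H)
    for t in range(len(seq) - 1):
        h = step(seq[t], h)
        z = Why @ h
        y = np.exp(z - z.max()); y /= y.sum()   # softmax prediction
        grad = y - np.eye(V)[seq[t + 1]]        # cross-entropy gradient
        Why -= lr * np.outer(grad, h)           # train the readout

h = np.zeros(H)
for s in [0, 1, 2, 0]:                 # feed the prefix "ba di gu ba"
    h = step(s, h)
print(symbols[int(np.argmax(Why @ h))])  # should print "di"

Nothing about the sequence's structure was built in; the
regularity is extracted from exposure alone.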

>
>>c) Broca's and Wernicke's areas are examples of what innatists
>>say could be considered "language organs". Damage to these areas
>>impairs language-related abilities. But children who underwent left
>>hemispherectomy (removal of one hemisphere, because of epileptic
>>seizures) were able to develop language in the right (the remaining)
>>hemisphere, showing that these "modules" are just self-organizing
>>constructs, driven by experience.
>
>Self-organization doesn't contradict innateness. Maybe we
>need to be clearer about what we are talking about. According
>to the science of embryology, human cells start out identical,
>and then later differentiate themselves based on their local
>environment. Does that mean that there is nothing innate about
>having hands, feet, head, etc.? Yes, of course it's innate---*each*
>cell has the set of possibilities programmed into it by DNA.
>(Well, actually, it is DNA + the laws of physics together).
>

As I said, lots of structures in the brain are innate. The
question is a different one: in what sense do we have innate
things determining the kind of high-level processing that we do?

>If you remove the queen from a hive of bees, one of the worker
>bees will turn into a queen. The specific *way* that a plan
>unfolds may be contingent upon the environment, but the plan
>itself is (at least partially) genetic.
>
>>d) If grammar were "inscribed" in DNA to be transferred by genes,
>>we would surely have cases of people with normal cognition *except*
>>for language (this would be a result of random mutations).
>
>There is a reverse case: people who have excellent language
>ability but who are otherwise incapable of reasoning.
>
>>What we have today are examples of whole families with language
>>disabilities but *also* with other cognitive disabilities.
>
>Well, certainly there are cases of brain damage that affect
>only particular linguistic abilities. Some kinds of brain
>damage affect the ability to remember nouns, while other
>kinds of thinking seem unaffected.
>

That demonstrates modularity, which is something that can't
be refuted. The questions are "was this module born with
the baby?" and "is there a piece of the DNA specialized
in developing the brain tissue that processes nouns?". The
answer to both of these questions, based on what we know about
plasticity, is no.

>>Language disability alone has never been found.
>
>There is plenty of evidence of brain damage that affects only
>the language abilities. I guess your point is that there is
>no single genetic defect that leads to the same kind of
>inability. But I think that that's just the way genes work.
>There is not a one-to-one correspondence between features of
>a developed body and genes. There is no separate gene controlling
>the size of your left pinky finger, or the color of each hair
>on your head. I guess you could take that as confirmation that
>the human body is put together through self-organization, but
>that doesn't contradict the fact that the human body is *also*
>(at least partially) coded by genes.
>

I'm sure this is true. We have plenty of interesting evidence
from monozygotic twins in this regard. But language does not
appear to be among these innate things.

>>What I'm saying is that the more "high-level" the ability, the
>>less innate it appears to be. Sexual preference, aggressiveness,
>>personal habits, even intelligence (to a degree) are all highly
>>influenced by genetic traits. But language is very, very recent in
>>the history of animals on earth (about 30K years). It could not have
>>profoundly affected our DNA in such a short time.
>
>Where did you get this 30K years from? *Written* language
>may not have existed before that, but there is absolutely
>no reason to think that spoken language only appeared that
>recently.
>

About 38,000 years ago, an important change occurred: our vocal
apparatus slid down a bit (the descent of the larynx), which
produced a huge increase in our ability to utter vowels and
consonants. Some anthropologists credit this change as responsible
for the sudden increase in art, tools and sophistication of
rituals that happened around that time.

>>As for dogs' brains: even with plasticity, a dog trained as
>>a child would not develop human-like intelligence, because it
>>does not have the necessary "volume of cortex" in relation to the
>>remainder of the brain. Plasticity is not sufficient, it is just
>>necessary.
>
>In that case, take out the human brain, and replace it by
>two, three, or four dog brains. That should be volume enough.
>

The problem would be to connect all this in a reasonable manner.
Another problem would be putting up with that Frankenstein
pissing in every corner of our lab ;-)

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 20 Jul 1999 00:00:00 GMT
Message-ID: <379488b9@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mog98$1grh@edrn.newsguy.com> <3790a1c7@news3.us.ibm.net> <7mvc3f$27d4@edrn.newsguy.com> <37936631@news3.us.ibm.net> <7n07m5$12l4@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 20 Jul 1999 14:33:29 GMT, 166.72.29.190
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7n07m5$12l4@edrn.newsguy.com>...
>Sergio,
>
>In searching the Web for information relevant to the question
>of the innateness of human reasoning, I found a web site that
>substantially agrees with what I think you have been saying.
>Maybe you are familiar with it?
>
>The owner of the website appears to me to be a crank, meaning
>not that he's not right, but that the tone of his writing is
>so angry and dismissive of his opponents that it's hard for me
>to read it. I prefer gentler revolutionaries like Einstein.
>
>Anyway, here's his website:
>
>http://www.yehouda.com/
>

Thanks for the link, Daryl. I already knew of him, but didn't have
time to delve into it in detail. I too think his "style" is a bit
weird (the typos are *terribly* annoying).

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 17 Jul 1999 00:00:00 GMT
Message-ID: <3790a1c5@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <7mndbv$am@edrn.newsguy.com> <378f8123@news3.us.ibm.net> <7mofvj$1g8j@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 Jul 1999 15:31:17 GMT, 166.72.21.8
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

Daryl McCullough wrote in message <7mofvj$1g8j@edrn.newsguy.com>...
>Sergio says...
>
>>>I don't agree that there *is* such a thing as "generic
>>>symbolic abilities". It seems to me that learning language
>>>would have to work like any other act of induction: the
>>>child can only "notice" grammatical patterns that the
>child is capable of generating, in the first place.
>>>
>>
>>Well, here I can say that I strongly disagree. Prior to
>>generation, we must have recognition.
>
>I'm not talking about generation of sounds, I'm talking
>about generation of *patterns*. How can recognition work,
>other than comparing the new instance with previously
>encountered instances using some pre-existing measure
>of closeness?
>

You're right to question this point, as it is the
very beginning of the story. I'll try to focus exactly
on this idea, as my previous post also delved into other
aspects. I'll try a quick answer and a long one.

The quick answer:
Unknown sequences of impulses are "shallowly" stored
according to several (and, at this stage, innate) criteria.
When received again, these sequences (or similar ones)
reinforce some of these criteria, while the bad criteria
(those that didn't occur often) start to vanish.

The long answer:
We don't know exactly how learning occurs in our brain.
There are some models, but most of them rely on LTP (long-term
potentiation) and its close relative, Hebbian
reinforcement. In Hebbian terms, if two connected neurons
fire together (even if the firing of one was "provoked" by
a different source than the other), the connection between
them gets reinforced. If they seldom fire together, their
connection will (with time) atrophy.
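
A toy sketch of that rule (Python; nothing more than the
textbook Hebb update plus decay, with arbitrary rates and sizes
of my own choosing):

import numpy as np

eta, decay = 0.1, 0.01            # learning and atrophy rates
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activities
post = np.array([1.0, 1.0])       # postsynaptic activities
W = np.zeros((2, 3))              # connection strengths

for _ in range(100):              # repeated co-activation...
    W += eta * np.outer(post, pre)   # ...reinforces co-firing pairs
    W -= decay * W                   # ...while unused links atrophy

print(W.round(2))   # strong weights only where pre and post fire together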

This aspect is relevant to what we call plasticity:
connection strengths increase or decrease, but also
*new* connections are born (by some still poorly known
mechanism). New synapses are generated as a function of the
demands for *new ways* of correlating unknown impulses. The
brain can be seen as continually chasing the impulses that
it receives from the senses, to the point of physically
adapting to them.

Now I'll return to your pertinent question: "How can
recognition work without using some previous instance
and without using an innate measure of closeness?"

The first important point is memory. Take a recently born
baby. This baby will listen to utterances from his/her
mother and close ones (this can start in the womb, as some
researchers suggest). Each auditory sequence that is
received provokes an astonishing amount of activity in
groups of neurons close to the auditory cortex. There are
a lot of neurons that "coincidentally" fire together, as
a result of the "kind" of impulses received.

(note: this is where I place terrific importance on innate
*preprocessing* of signals; the hair cells of the inner ear,
for example, "divide" the auditory signal in important ways,
to ease the subsequent work of the cortex).

Those neurons that fired together and already had a link
between them *will have that link reinforced*.

This activity, at first, seems to be mostly a very imprecise
form of "storing" the impulses that were received. Besides
the reinforcement of the connection between two neurons that fired
together, according to Hebbian rules they will also "fire each
other" more often (since the link between them was reinforced).

What does this mean? It means that we've got a very imprecise,
but useful, kind of "memory", able, among other potentialities,
to "generate" imprecise (but similar) versions of the impulses
that were received in the past. Not exact versions, *but
interestingly similar* ones.
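
A classical toy illustration of that kind of "imprecise but
similar" recall is a Hopfield-style network (a Python sketch of
my own; one stored pattern, sizes arbitrary):

import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)  # Hebbian storage
np.fill_diagonal(W, 0.0)                      # no self-connections

noisy = pattern.copy()
noisy[0], noisy[3] = -noisy[0], -noisy[3]     # corrupt two entries

state = noisy.astype(float)
for _ in range(5):                            # let the net settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))         # True: pattern recovered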

The next time the same (or a similar!) sequence of spikes
comes along, the neurons will fire and reinforce their connection.
"Similar", in this case, is related to a "temporal window" in
which Hebbian reinforcement occurs (two spikes may reinforce
a connection when they are less than some specific time t
apart).
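
Again a toy sketch of my own (Python; the window width and the
spike times are invented for illustration):

WINDOW = 0.020                       # 20 ms coincidence window
pre_spikes = [0.010, 0.050, 0.200]   # spike times in seconds
post_spikes = [0.015, 0.185, 0.205]

w = 0.0
for t_pre in pre_spikes:
    for t_post in post_spikes:
        if abs(t_post - t_pre) < WINDOW:
            w += 0.1                 # each coincidence reinforces
print(round(w, 1))                   # 0.3: three coincidences found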

The important thing is that this pair of neurons sits in the
middle of *hundreds of thousands* of others, each group doing
part of the work of "subsuming" imprecise versions of the
impulses received. A hierarchy of similarities is established!
Sparse and weak at first, but with the potential to become better
and better as the organism lives through new experiences.

The end result of all this is the evolution of an
impressive *pattern recognizer*, the perceptual mechanism that
will allow the baby to be an expert in identifying the meaningful
aspects of, for example, the phonological sequences of speech
that compose his/her native language. This is directly related
to the corresponding adult's future ability to understand
his/her native speech, even in harsh conditions (noisy phone calls,
for instance).

So in my vision, the first aspect that develops in children is
this perceptual apparatus, the parts of the mind responsible for
the *recognition* of things.

This recognition aspect is, in my vision, among the very first
steps in the construction of any intelligent cognition. I propose
that any AI system that doesn't start with a similar concern will
be at risk of repeating the failures of the pioneers of the field.

What is interesting is that this mechanism builds on itself:
developing an interesting perceptual ability will allow the
organism to be *more apt* to grow more "patterns" derived from
its world, which helps the development of *even more* perceptual
apparatus. It is a self-bootstrapping process that started
with a rudimentary (and innate) storage/comparison mechanism.

Learning, in this regard, can be seen as the progressive
construction of new (or the refinement of old) feature detectors,
which constitute a perceptual edifice. Thinking, in my way of seeing
things, is the spreading of a lot of parallel "activations"
throughout this prebuilt collection of paths, like a lot of ants
walking in a very bushy tree. Obviously, through thinking one
can refine (or construct) perceptual mechanisms, helping to
model our external world even better.
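
A toy sketch of that spreading-activation picture (Python; the
graph, the weights and the decay factor are all invented for
illustration):

graph = {                          # invented toy associations
    "dog": {"bark": 0.9, "fur": 0.7},
    "bark": {"sound": 0.8},
    "fur": {"soft": 0.6},
    "sound": {}, "soft": {},
}

def spread(seed, hops=2, decay=0.5):
    # Push activation outward from the seed, weakening per hop.
    activation = {seed: 1.0}
    frontier = [seed]
    for _ in range(hops):
        nxt = []
        for node in frontier:
            for neigh, w in graph[node].items():
                a = activation[node] * w * decay
                if a > activation.get(neigh, 0.0):
                    activation[neigh] = a
                    nxt.append(neigh)
        frontier = nxt
    return activation

print({k: round(v, 3) for k, v in spread("dog").items()})
# {'dog': 1.0, 'bark': 0.45, 'fur': 0.35, 'sound': 0.18, 'soft': 0.105}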

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Neurons and consciousness
Date: 17 Jul 1999 00:00:00 GMT
Message-ID: <3790a7c7@news3.us.ibm.net>
References: <37790B77.CF1B5B3C@travellab.com> <g9ae3.2900$c5.749565@news1.usit.net> <3779420F.B83925B6@travellab.com> <378928aa.275952835@netnews.worldnet.att.net> <3789F5D6.5D1041AC@sandpiper.net> <7mdsnj$dn4@dfw-ixnews10.ix.netcom.com> <378A7FE3.54AB8622@sandpiper.net> <378b3b7a@news3.us.ibm.net> <378B967B.FFF8643@sandpiper.net> <378ba2d9@news3.us.ibm.net> <7miesm$oud@edrn.newsguy.com> <378cd954@news3.us.ibm.net> <7mkqir$1lu0@edrn.newsguy.com> <378e40df@news3.us.ibm.net> <3790AC47.F1202596@mediaone.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 Jul 1999 15:56:55 GMT, 129.37.182.233
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

robert wrote in message <3790AC47.F1202596@mediaone.net>...
>Sergio,
>
>How important would you say it is for me to know what these various Ps &
>Qs & NNs mean? I find these postings quite understandable and
>helpful in my own thinking. Am I missing a lot, not having the various
>reference information?
>oz
>

The Ps and Qs you're seeing in recent posts are generic ways of
referring to meaningful things of our world. In logic, one may use
these generic terms to simplify writing and focus on what is
important at the moment: the logical relationship among the
concepts. For example:

Suppose we want to talk about "apples". Now let Q refer to the
attribute of color, and E refer to the attribute of "edibility".
We may devise a knowledge base of apples which should contain
(among a bunch of other definitions) the expression:

forall x : (Q(x) = "red") -> E(x)

which means "for any x such that the color attribute of x
is red, then (implication) this x is an edible object".
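
In program form (a toy illustration of my own, not any standard
notation), the same rule and a check of it over a small invented
knowledge base:

# forall x: (color(x) = "red") -> edible(x)
apples = [
    {"name": "a1", "color": "red",   "edible": True},
    {"name": "a2", "color": "green", "edible": True},
    {"name": "a3", "color": "red",   "edible": True},
]

def rule_holds(kb):
    # An implication fails only when the antecedent holds
    # and the consequent does not.
    return all(x["edible"] for x in kb if x["color"] == "red")

print(rule_holds(apples))   # True: no red, inedible apple found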

Q is a short form for an "attribute" of an object. We were
discussing where the Qs come from, and I was saying that they come
from a process that uses inductive manipulation and categorization
(grouping similar things under the same category).

If you're interested in knowing a bit more about these things
(Ps and Qs), search the web for "propositional calculus" and
"first-order predicate calculus".

Regards,
Sergio Navega.

