
From: "Sergio Navega" <snavega@ibm.net>
Subject: Limits of Statistical Learning
Date: 05 Feb 1999 00:00:00 GMT
Message-ID: <36bb57c4@news3.ibm.net>
Organization: SilWis
Newsgroups: comp.ai.philosophy

For quite some time I have been baffled by approaches to
human learning that say that statistics is the only reasonable
way to go. That is apparently obvious to some, but for me it
has always carried something strangely implausible.

It seems to be a good idea because it seems to be the only
solution to the problem of deriving some meaning from a bunch
of signals with few apparent correlations offered to our senses
by the complex world we live in. It seems implausible because
what I know of our cognition (up to the level of object
perception and language acquisition) does not seem to emerge
directly from such a purely statistical view.

I was recently browsing the fine papers of BBS and found
something that can shed some light on this subject.
The paper, by Andy Clark and Chris Thornton, can be
read at:

http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.clark.html

Clark distinguishes two kinds of learning problems: type-1
and type-2. Type-2 problems are those that cannot effectively
be processed by "brute force" methods (he calls these "marginal
statistical regularities"). Type-1 problems are tractable ones
("robust regularities"). Clark notes that type-2 problems are
not exceptions; they are the usual ones that biological
organisms must face when living in this world.
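
To make the distinction concrete, here is a toy sketch of my
own (in Python; parity is the sort of relational problem often
cited as type-2, but the example is mine, not taken from the
paper). The label depends on a relation among the bits, so the
marginal statistics of any single input say nothing about the
output:

    import itertools
    import numpy as np

    # 4-bit parity: the label depends on a relation among the bits.
    X = np.array(list(itertools.product([0, 1], repeat=4)))
    y = X.sum(axis=1) % 2                  # 1 if an odd number of bits is set

    print("P(y=1) =", y.mean())            # 0.5 overall
    for i in range(X.shape[1]):
        p = y[X[:, i] == 1].mean()         # P(y=1 | bit i is on)
        print(f"P(y=1 | x{i}=1) = {p:.2f}")  # 0.50 for every single bit

Every conditional comes out at 0.50: there is no robust
regularity visible to a learner that only tallies individual
features.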

So the question posed is this: how did we solve this problem?
If statistical methods are not supposed to find those
regularities easily (and results such as Stephen Judd's work
on the complexity of learning grant some merit to this
opinion), how did our brain (and the brains of most animals,
at different scales) handle it so effortlessly?

I concur with one of Clark's hypotheses and add my own
speculations to it. It seems reasonable that type-2 processing
was taken care of by biological evolution: as a result of that
evolution, a series of innate pre-processing mechanisms
developed to reduce type-2 problems to type-1 ones, which can
then be treated by more conventional methods in our brain
(perhaps even dispensing with statistical methods).
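
Continuing the parity toy above (and assuming, purely for
illustration, that the innate stage is something as simple as
counting active inputs), a single recoding turns the type-2
problem into a type-1 one:

    import itertools
    import numpy as np

    # Same 4-bit parity data, now seen through a hypothetical
    # innate recoding: just count the active bits.
    X = np.array(list(itertools.product([0, 1], repeat=4)))
    y = X.sum(axis=1) % 2
    s = X.sum(axis=1)                      # the pre-processed "code"
    for v in np.unique(s):
        print(f"P(y=1 | count={v}) = {y[s == v].mean():.2f}")
    # count 0..4 -> 0.00, 1.00, 0.00, 1.00, 0.00

After the recoding, a trivial frequency count over the new
feature predicts the outcome perfectly; the hard relational
structure has been absorbed by the pre-processing stage.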

The extent to which these innate mechanisms operate is,
obviously, a reason for heated debate in the scientific
community. I find that these innate mechanisms tend to solve
the "first level" of problems: those that transform the
signals received by the brain into one (or more) special
"codes" with much of the important statistical features
*already processed*. The remaining task is to derive what is
necessary from a *much simpler* sequence.

The determination of what this "code" is would be, in my
opinion, the most important discovery, and one we ought to
make as soon as possible. Some recent hypotheses point toward
synchronization mechanisms in populations of neurons (this is
currently my preferred one).
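
Just to illustrate the kind of measure involved (a sketch under
my own assumptions, with invented signals; it says nothing
about how real neurons do it), synchronization between two
oscillating populations can be quantified by something like a
phase-locking value:

    import numpy as np
    from scipy.signal import hilbert

    # Two toy "populations": 40 Hz oscillations with a fixed
    # phase offset plus noise.
    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(0)
    a = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
    b = np.sin(2 * np.pi * 40 * t + 0.4) + 0.3 * rng.standard_normal(t.size)

    # Phase-locking value: 1.0 = perfectly locked phases, 0.0 = none.
    phase_a = np.angle(hilbert(a))
    phase_b = np.angle(hilbert(b))
    plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
    print(f"phase-locking value: {plv:.2f}")  # high (near 1) here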

With that coding strategy in our hands, lots of speculations
about our cognition will have to be revisited (including the
dreaded innateness of language). The list of casualties will
certainly be very large.

Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: Limits of Statistical Learning
Date: 05 Feb 1999 00:00:00 GMT
Message-ID: <79g1al$8eu@ux.cs.niu.edu>
References: <36bb57c4@news3.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:

>For quite some time I have been baffled by approaches to
>human learning that say that statistics is the only reasonable
>way to go. That is apparently obvious to some, but for me it
>has always carried something strangely implausible.

I agree that it is entirely implausible.  Statistical methods are
weak, and could only give weak results.  If you look at a science
which depends mainly on statistical methods, say psychology or
sociology, you find a relatively weak science.  If you look to a
strong science, such as physics, you find relatively little use of
statistical methods.

>It seems to be a good idea because it seems to be the only
>solution to the problem of deriving some meaning from a bunch
>of signals with few apparent correlations offered to our senses
>by the complex world we live in.

It "seems to be the only solution", but it isn't any solution at
all.  The reason that it "seems to be the only solution" is that we
have been indoctrinated by philosophers, who have been making such
mistaken assumptions perhaps since the beginnings of philosophy.

>http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.clark.html

>Clark distinguishes two kinds of learning problems: type-1
>and type-2. Type-2 problems are those that cannot effectively
>be processed by "brute force" methods (he calls these "marginal
>statistical regularities"). Type-1 problems are tractable ones
>("robust regularities"). Clark notes that type-2 problems are
>not exceptions; they are the usual ones that biological
>organisms must face when living in this world.

The mistake is to assume that robust propositions associated with
type-1 problems are of the same type as those associated with type-2
problems.  The type-2 problems are those that fit the philosopher's
infatuation with abstract propositions and with logic.  Type-1
problems, properly understood, are of an entirely different nature,
and their appearance of having to do with abstract propositions is
purely an illusion.  The abstract propositions involved are mostly
analytic or near analytic, so that the propositional content
associated with type-1 problems is near zero.

>So the question posed is this: how did we solve this problem?

We didn't.  We solved very different problems of a highly pragmatic
nature, and constructed a logical formalism to describe the problems
that we solved.  Since the logical formalism is our construct, the
robustness of its propositions is entirely under our control.  That
is what allows us to have the robust type-1 propositions.  The cost
is that these propositions, as propositions, are empirically empty.
Their value to us is not in their empirical content, but in the
pragmatic considerations which led us to construct them.

>If statistical methods are not supposed to find those
>regularities easily (and results such as Stephen Judd's work
>on the complexity of learning grant some merit to this
>opinion), how did our brain (and the brains of most animals,
>at different scales) handle it so effortlessly?

>I concur with one of Clark's hypotheses and add my own
>speculations to it. It seems reasonable that type-2 processing
>was taken care of by biological evolution: as a result of that
>evolution, a series of innate pre-processing mechanisms
>developed to reduce type-2 problems to type-1 ones, which can
>then be treated by more conventional methods in our brain
>(perhaps even dispensing with statistical methods).

The trouble is that you are laying the foundations for creationism.

        The world obviously must result from intelligent design, and
        be the work of a creator.  For otherwise there is no
        explanation as to why the world would be so inexplicably rich
        in type-1 problems, whereas with no creator we would expect
        it to consist mainly of type-2 problems.

>The extent to which these innate mechanisms operate is,
>obviously, a reason for heated debate in the scientific
>community. I find that these innate mechanisms tend to solve
>the "first level" of problems: those that transform the
>signals received by the brain into one (or more) special
>"codes" with much of the important statistical features
>*already processed*. The remaining task is to derive what is
>necessary from a *much simpler* sequence.

And now you are laying the groundwork for the claim that evolution is
impossible, for organisms require innate mechanisms which could only
have arisen by intelligent design.

And all this because philosophers are still caught up in a 2,000 year
self delusion.

>The determination of what this "code" is would be, in my
>opinion, the most important discovery, and one we ought to
>make as soon as possible. Some recent hypotheses point toward
>synchronization mechanisms in populations of neurons (this is
>currently my preferred one).

There is no special code, no magic.  It is all a matter of sound
empirical method, something that philosophers have hidden themselves
from since time immemorial.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Limits of Statistical Learning
Date: 06 Feb 1999 00:00:00 GMT
Message-ID: <36bc4bca@news3.ibm.net>
References: <36bb57c4@news3.ibm.net> <79g1al$8eu@ux.cs.niu.edu>
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <79g1al$8eu@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@ibm.net> writes:
>
>>For quite some time I have been baffled by approaches to
>>human learning that say that statistics is the only reasonable
>>way to go. That is apparently obvious to some, but for me it
>>has always carried something strangely implausible.
>
>I agree that it is entirely implausible.  Statistical methods are
>weak, and could only give weak results.  If you look at a science
>which depends mainly on statistical methods, say psychology or
>sociology, you find a relatively weak science.  If you look to a
>strong science, such as physics, you find relatively little use of
>statistical methods.
>

Although weak, I think there are some areas where they're the
only method available. There's no doubt that if one has the
possibility of using stronger methods, statistics should be
left aside. Unfortunately, sometimes this is not the case. I'm
trying to see the brain as one of the problems in which we
*don't need* to think in statistical terms (this is, as a
matter of fact, something that came up recently for me when
Bill Modlin reentered the discussions; I read some of his older
postings on this subject, and that prompted me to think about it).

>>It seems to be a good idea because it seems to be the only
>>solution to the problem of deriving some meaning from a bunch
>>of signals with few apparent correlations offered to our senses
>>by the complex world we live in.
>
>It "seems to be the only solution", but it isn't any solution at
>all.  The reason that it "seems to be the only solution" is that we
>have been indoctrinated by philosophers, who have been making such
>mistaken assumptions perhaps since the beginnings of philosophy.
>
>>http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.clark.html
>
>>Clark distinguishes two kinds of learning problems: type-1
>>and type-2. Type-2 problems are those that cannot effectively
>>be processed by "brute force" methods (he calls these "marginal
>>statistical regularities"). Type-1 problems are tractable ones
>>("robust regularities"). Clark notes that type-2 problems are
>>not exceptions; they are the usual ones that biological
>>organisms must face when living in this world.
>
>The mistake is to assume that robust propositions associated with
>type-1 problems are of the same type as those associated with type-2
>problems.  The type-2 problems are those that fit the philosopher's
>infatuation with abstract propositions and with logic.  Type-1
>problems, properly understood, are of an entirely different nature,
>and their appearance of having to do with abstract propositions is
>purely an illusion.  The abstract propositions involved are mostly
>analytic or near analytic, so that the propositional content
>associated with type-1 problems is near zero.
>

I understand what you say, although I don't think that's what
I was thinking of when I read Clark's article. I was thinking
of the complexity of, say, processing an auditory signal. Two
different people saying exactly the same thing will produce
sounds that, at the lower level, are completely different,
with no apparent points in common. There are statistical
methods that address this, like principal component analysis
and independent component analysis. These are the methods
advocated by Modlin and others.
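
As an aside, here is a minimal sketch of the kind of method I
mean, using the FastICA implementation in scikit-learn (the two
source signals and the mixing matrix are invented for the
illustration; this separates toy mixtures, it does not solve
speech):

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two made-up source signals ("speakers") and a made-up mixing.
    t = np.linspace(0, 1, 2000)
    sources = np.c_[np.sin(2 * np.pi * 7 * t),
                    np.sign(np.sin(2 * np.pi * 13 * t))]
    mixing = np.array([[1.0, 0.6],
                       [0.4, 1.0]])
    observed = sources @ mixing.T          # what the "sensors" receive

    # ICA recovers the independent sources from the mixtures.
    recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)
    for i in range(2):
        c = max(abs(np.corrcoef(recovered[:, i], sources[:, j])[0, 1])
                for j in range(2))
        print(f"component {i}: |corr| with closest source = {c:.2f}")

ICA works by exploiting the statistical independence of the
sources, which is the kind of bottom-up statistical route
Modlin and others advocate.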

On the other hand, there are suggestions in which what is
proposed is the comparison of high-level aspects of those
signals, and only when those aspects are found does one go to
the "bottom", looking at (or perceiving) more elemental
characteristics. This idea (although I find it more attractive
than the former) poses a significant problem: what is offered
by the world is a complex signal with barely perceptible
high-level aspects. It is not clear what mechanism could take
such a complex signal and consider only the relevant (or
correct) high-level features without a lot of strong a priori
assumptions about the nature of that signal.

What I am suggesting is that this problem, instead of being
solved entirely by the brain (using statistical methods a la
Modlin) or being solved by high-level perception followed by
successive refinement, could be better handled by a somewhat
different method, in which a very convenient pre-processing is
applied to the sensory signals and the brain then occupies
itself with the high-level interpretation of what is left.

>>So the question posed is this: how did we solve this problem?
>
>We didn't.  We solved very different problems of a highly pragmatic
>nature, and constructed a logical formalism to describe the problems
>that we solved.  Since the logical formalism is our construct, the
>robustness of its propositions is entirely under our control.  That
>is what allows us to have the robust type-1 propositions.  The cost
>is that these propositions, as propositions, are empirically empty.
>Their value to us is not in their empirical content, but in the
>pragmatic considerations which led us to construct them.
>

I think I agree with that.

>>If statistical methods are not supposed to find those
>>regularities easily (and results such as Stephen Judd's work
>>on the complexity of learning grant some merit to this
>>opinion), how did our brain (and the brains of most animals,
>>at different scales) handle it so effortlessly?
>
>>I concur with one of Clark's hypotheses and add my own
>>speculations to it. It seems reasonable that type-2 processing
>>was taken care of by biological evolution: as a result of that
>>evolution, a series of innate pre-processing mechanisms
>>developed to reduce type-2 problems to type-1 ones, which can
>>then be treated by more conventional methods in our brain
>>(perhaps even dispensing with statistical methods).
>
>The trouble is that you are laying the foundations for creationism.
>
> The world obviously must result from intelligent design, and
> be the work of a creator.  For otherwise there is no
> explanation as to why the world would be so inexplicably rich
> in type-1 problems, whereas with no creator we would expect
> it to consist mainly of type-2 problems.
>

I think that here you misunderstood me. We usually commit the
rhetorical error of saying that "nature designed us" as if
there were a godlike designer behind it. Obviously, that's not
what I meant. I tried to say that my hypothesis is that
Darwinian natural selection found a way to produce an organism
as complicated as us by splitting the problem in two: the
first part is the evolution of pre-processing mechanisms, like
the cilia of the hair cells in the auditory system or the rod
and cone cells in our visual system (and also associated
neurons and structures such as the lateral geniculate nucleus).
These elements, I am supposing, are not merely transducers, but
also "problem space reducers".

The brain, instead of having to handle the sheer complexity
of the original signal (for instance, the sound of somebody
uttering our name), receives a previously "chewed" signal,
reduced enough to bring the problem to a more manageable size
(for instance, a set of signals with frequencies filtered,
white noise accounted for, patterns of intensities, etc.). I
like this idea, although, to my current knowledge, the
statistical reduction effect of all this is speculation.
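
A minimal sketch of what I mean by a "chewed" signal (my own
toy waveform and band boundaries, nothing physiological): a raw
waveform is reduced to a handful of band intensities, a far
smaller object than the waveform itself:

    import numpy as np

    # A made-up "utterance": two tones plus noise, 0.5 s at 8 kHz.
    fs = 8000
    t = np.arange(0, 0.5, 1 / fs)
    rng = np.random.default_rng(1)
    wave = (np.sin(2 * np.pi * 300 * t)
            + 0.5 * np.sin(2 * np.pi * 1200 * t)
            + 0.2 * rng.standard_normal(t.size))

    # Reduce 4000 samples to four band intensities.
    spectrum = np.abs(np.fft.rfft(wave)) ** 2
    freqs = np.fft.rfftfreq(wave.size, 1 / fs)
    bands = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000)]
    code = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])
    print("relative band energies:", np.round(code / code.sum(), 2))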

>>The extent to which these innate mechanisms operate is,
>>obviously, a reason for heated debate in the scientific
>>community. I find that these innate mechanisms tend to solve
>>the "first level" of problems: those that transform the
>>signals received by the brain into one (or more) special
>>"codes" with much of the important statistical features
>>*already processed*. The remaining task is to derive what is
>>necessary from a *much simpler* sequence.
>
>And now you are laying the groundwork for the claim that evolution is
>impossible, for organisms require innate mechanisms which could only
>have arisen by intelligent design.
>
>And all this because philosophers are still caught up in a 2,000 year
>self delusion.
>

That was not my intention. I was trying to see nature solving
the problem of human survival by giving more space in the
brain to perceptual and high-level cognition, allowing some
sophistication on the sensory side of the equation. One of the
reasons that drove me toward thinking this way is our world:
it has been pretty much the same for millions of years, with
basically the same day/night cycle, a similar temperature
range, similar natural sound levels and frequencies, etc. It
would be "wise", then, to "design" fixed mechanisms to collect
sensory inputs while also doing some sort of processing, like
freezing certain kinds of "information derivation methods"
applied to raw data. The extent to which this is done is,
obviously, very difficult to determine and is, I guess, one of
the main areas of dispute between nativists and empiricists. I
align myself with the latter, but I think there are some
relevant aspects raised by the former.

>>The determination of what this "code" is would be, in my
>>opinion, the most important discovery, and one we ought to
>>make as soon as possible. Some recent hypotheses point toward
>>synchronization mechanisms in populations of neurons (this is
>>currently my preferred one).
>
>There is no special code, no magic.  It is all a matter of sound
>empirical method, something that philosophers have hidden themselves
>from since time immemorial.
>

While I concur with some of your reservations about
philosophers, I was suggesting that this "code" (the way
neurons or groups of neurons handle information
storage/recall/processing/combination/whatever) is the main
point on which we should focus our interest. Everything else,
for me, will be derived from what we find there.

Regards,
Sergio Navega.

