Selected Newsgroup Message

Dejanews Thread

From: "Sergio Navega" <snavega@ibm.net>
Subject: VOTE: Your HLUT Opinion is Valuable Here
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <36e951fb@news3.us.ibm.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Mar 1999 17:42:19 GMT, 166.72.29.166
Organization: SilWis
Newsgroups: comp.ai.philosophy

The debate around HLUTs seems to have opened a chasm
between two factions. One side thinks that human behavior
(including intelligence) can be completely captured by
taking into account only the discrete set of inputs from
sensory organs coupled with a history of previous entries;
the other side of the chasm advocates that human
behavior, and intelligence in particular, cannot be
seen this way.

In the former group, the contestants seem to be
Daryl McCullough, Jim Balter and Pierre-Normand Houle,
along with others who may appear a little undecided.
I will call this group the "rationalists" (no offense
implied in any form ;-)

In the latter group, I list myself, Neil Rickert,
Michael Edelman and Anders Weinstein, with varying
degrees of adherence to some claims. I will call
this group the "neoempiricists" (again, sorry if this
is not the best name).

Although I'm a member of the neoempiricists, I may have
some points of disagreement with the other members of
my clan on some related matters, including the exact
formulation of our critiques of the model proposed
by the rationalists. However, I stick with them in saying
that the dreaded HLUT does not work as proposed.

After a lot of arguments were exposed, each of us has
had enough time to rethink this whole issue. I would
like, then, to propose a VOTE on some questions regarding
a specific situation. No explanation is necessary, just
the vote. By the way, I invite everybody who reads this
message to vote as well (even you, Bloxy!).

Situation: All the discrete inputs from the sensory organs of
Albert Einstein together with all possible outputs, discretized
to any desired level of accuracy, controlling all aspects of
muscles, limbs, etc., etc. The discrete inputs, together with
the history of past ones, are used to address a Huge LookUp
Table, assumed to be prepared by one omniscient, almighty,
god-like creature. The HLUT has to present one output per input
entry, even if this output is selected randomly from a set of
slightly randomized ones. *No other* mechanism or trick is
allowed in this HLUT beyond this mindless mechanism of retrieval.
It should be clear that this HLUT is just a mathematical figure;
nobody is claiming that it is physically possible to construct
such a beast, we only admit its logical existence.
The last condition is obedience to the laws of physics
(quantum physics in particular), except for the existence of the
HLUT itself and for our way of getting the information from the
table in a timely manner.
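To make the mechanism concrete, here is a toy sketch in Python
(every name and table entry below is invented for illustration; a
real HLUT would of course be astronomically large):

```python
import random

# Toy HLUT: a (discretized) input history addresses one entry, each
# entry holding a set of slightly randomized candidate outputs.
# All keys and values here are invented placeholders.
HLUT = {
    ("photon-pattern-1",): ["raise-left-arm-v1", "raise-left-arm-v2"],
    ("photon-pattern-1", "sound-pattern-7"): ["say-hello-v1", "say-hello-v2"],
}

def hlut_step(history):
    """Mindless retrieval: the full input history selects one entry,
    and one output is picked at random from the stored candidates.
    No other mechanism is allowed."""
    candidates = HLUT[tuple(history)]
    return random.choice(candidates)

# One "run": feed inputs one at a time, collecting outputs.
history = ["photon-pattern-1"]
out1 = hlut_step(history)
history.append("sound-pattern-7")
out2 = hlut_step(history)
```

Note that the table is addressed by the *whole* history, not just
the latest input, which is what makes its size explode.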

Possible answers to each question posed below are:

POSSIBLE  IMPOSSIBLE   LIKELY   UNLIKELY    VERY-VERY-VERY-UNLIKELY

TRUE    FALSE     INDETERMINATE      DON'T KNOW      NONE-OF-ABOVE

GETOUTOFHERE     OFFENSIVE-QUESTION      I'D-FLIP-A-COIN

Questions: In what way will an HLUT, constructed properly using
the data from Albert Einstein, be able to present the
following kinds of behaviors:

a) The exact behavior presented by "our" Albert Einstein every time
this HLUT is "run"

b) The exact behavior presented by "our" Albert Einstein at
least once in an arbitrarily large (but finite) number of "runs"

c) A behavior very similar to our Albert Einstein in any "run"

d) A behavior considered intelligent in every "run"

e) A behavior considered as human (no regard to intelligence) in
any "run"

f) Does the HLUT have all possible behaviors of Albert Einstein
stored in its entries?

g) Will this HLUT ever be able to discover the Theory of Relativity
(assuming no limit of time)?

h) Will this HLUT discover Relativity (in the same period of time as
the real Einstein)?

i) Can I say that this HLUT is intelligent?

j) If you had the opportunity to use this HLUT to help you in
the decisions of your life, would you use it?

----------------

I'll be the first to put my neck in the guillotine. Here are
my answers:

a) The exact behavior presented by "our" Albert Einstein every time
this HLUT is "run"
IMPOSSIBLE

b) The exact behavior presented by "our" Albert Einstein at
least once in an arbitrarily large (but finite) number of "runs"
POSSIBLE

c) A behavior very similar to our Albert Einstein in any "run"
VERY-VERY-VERY-UNLIKELY (although "very similar" is vague)

d) A behavior considered intelligent in every "run"
UNLIKELY

e) A behavior considered as human (no regard to intelligence) in
any "run"
DON'T KNOW (didn't think much about it)

f) Does the HLUT have all possible behaviors of Albert Einstein
stored in its entries?
TRUE (that's in fact the definition of such a HLUT)

g) Will this HLUT ever be able to discover the Theory of Relativity
(assuming no limit of time)?
TRUE

h) Will this HLUT discover Relativity (in the same period of time as
the real Einstein)?
VERY-VERY-VERY-UNLIKELY

i) Can I say that this HLUT is intelligent?
FALSE (according to my concept of intelligence)

j) If you had the opportunity to use this HLUT to help you in
the decisions of your life, would you use it?
I'D-FLIP-A-COIN (it's much, much cheaper)

Sergio Navega.

From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <7cbr1e$37m@ux.cs.niu.edu>
References: <36e951fb@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy

"Sergio Navega" <snavega@ibm.net> writes:

>                                              I would
>like, then, to propose to VOTE for some questions regarding
>a specific situation. No explanation is necessary, just
>the vote.

I'm not convinced this is useful.  But I'll play the game.

>Questions: In what way will an HLUT, constructed properly using
>the data from Albert Einstein, be able to present the
>following kinds of behaviors:

Since I question the possibility of this (the "in principle"
possibility), you will have to interpret my answers as responses to
the assumption:

   A brilliant computer scientist has presented a machine which he
   claims is an HLUT implementation of Einstein.

>a) The exact behavior presented by "our" Albert Einstein every time
>this HLUT is "run"

UNLIKELY

>b) The exact behavior presented by "our" Albert Einstein at
>least once in an arbitrarily large (but finite) number of "runs"

UNLIKELY

>c) A behavior very similar to our Albert Einstein in any "run"

UNLIKELY

>d) A behavior considered intelligent in every "run"

UNLIKELY

>e) A behavior considered as human (no regard to intelligence) in
>any "run"

UNLIKELY

>f) Does the HLUT have all possible behaviors of Albert Einstein
>stored in its entries?

This is a tricky one.  On the one hand, it seems that the answer is
YES, by definition of the HLUT.  However, this plays on two very
different meanings for "behavior."  The HLUT is supposedly defined
in terms of i-behavior, the internal relations between received
sensory signals and generated motor signals.  But the ordinary
meaning of 'behavior' is that of e-behavior, the movements and other
actions as seen by other observers.

If we interpret in terms of e-behavior, then I claim that the HLUT
does not have any Einstein behavior in its tables.

>g) Will this HLUT ever be able to discover the Theory of Relativity
>(assuming no limit of time)?

UNLIKELY

>h) Will this HLUT discover Relativity (in the same period of time as
>the real Einstein)?

UNLIKELY

>i) Can I say that this HLUT is intelligent?

You can say what you like (free speech and all that).

>j) If you had the opportunity to use this HLUT to help you in
>the decisions of your life, would you use it?

It might be useful as a demo of the mistaken assumptions of many
computationalists.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <36e97fb2@news3.us.ibm.net>
References: <36e951fb@news3.us.ibm.net> <7cbr1e$37m@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 12 Mar 1999 20:57:22 GMT, 129.37.182.16
Organization: SilWis
Newsgroups: comp.ai.philosophy

Neil Rickert wrote in message <7cbr1e$37m@ux.cs.niu.edu>...
>[please see preceding post for Neil's votes]

Thanks a lot for your votes, Neil. Although there are quite
a few differences between our votes, they matched on the most
important aspects, which are related to the invalidity of
HLUTs. I'm eager, now, to read the opinions of the members
of the other side of the chasm.

Regards,
Sergio Navega.

From: Jim Balter <jqb@sandpiper.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 12 Mar 1999 00:00:00 GMT
Message-ID: <36E9ADB2.5212E66@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <36e951fb@news3.us.ibm.net> <7cbr1e$37m@ux.cs.niu.edu>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy

> "Sergio Navega" <snavega@ibm.net> writes:

> >a) The exact behavior presented by "our" Albert Einstein every time
> >this HLUT is "run"

What do you mean by "The exact behavior of Albert Einstein"?
Albert Einstein only ran once, so there is only one behavior
at hand.  If the HLUT presents two different behaviors
on two different runs, then at least one is "wrong".

You seem to have an incoherent notion which allows Albert
Einstein to have different possible behaviors and yet have
an "exact behavior".  When you and Rickert start paying more
attention to the details of your concepts, you might begin
to see where you go wrong.

--
<J Q B>

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <36ea8c11@news3.us.ibm.net>
References: <36e951fb@news3.us.ibm.net> <7cbr1e$37m@ux.cs.niu.edu> <36E9ADB2.5212E66@sandpiper.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 Mar 1999 16:02:25 GMT, 129.37.183.236
Organization: SilWis
Newsgroups: comp.ai.philosophy

Jim Balter wrote in message <36E9ADB2.5212E66@sandpiper.net>...
>> "Sergio Navega" <snavega@ibm.net> writes:
>
>> >a) The exact behavior presented by "our" Albert Einstein every time
>> >this HLUT is "run"
>
>What do you mean by "The exact behavior of Albert Einstein"?
>Albert Einstein only ran once, so there is only one behavior
>at hand.  If the HLUT presents two different behaviors
>on two different runs, then at least one is "wrong".
>

I posed that question exactly to reveal the position of those
radicals who accept that the HLUT shall generate the *same*
behavior given a *fixed* input signal and history (in other
words, that the HLUT can present deterministic behavior with
regard to its input and still be considered intelligent).
It should be obvious by now that I *do not* subscribe to
such a belief, and I gave my reasons in another post.

It seems that *you* have missed the point of the question. By
the way, you should have noticed that my answer to that
question was IMPOSSIBLE.

>You seem to have an incoherent notion which allows Albert
>Einstein to have different possible behaviors and yet have
>an "exact behavior".  When you and Rickert start paying more
>attention to the details of your concepts, you might begin
>to see where you go wrong.
>

No, I don't have that incoherent notion. But it was
a nice try, Balter: you found a way to come out of this
story without giving *your vote* and exposing your real
opinions. Care to answer the poll?

Regards,
Sergio Navega.

From: houlepn@ibm.net
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <7ccv6m$s8c$1@nnrp1.dejanews.com>
References: <36e951fb@news3.us.ibm.net>
X-Http-Proxy: 1.0 x2.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Sat Mar 13 06:05:12 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)

In article <36e951fb@news3.us.ibm.net>,
  "Sergio Navega" <snavega@ibm.net> wrote:
>
> [please see the original post for the introductory paragraphs]
>
> Situation: All the discrete inputs from sensory organs
> of Albert Einstein together with all possible outputs,
> discretized to any desired level of accuracy, controlling
> all aspects of muscles, limbs, etc, etc, etc.
> The discrete inputs together with the history of past
> ones should be used to address a Huge LookUp Table,
> assumed to be prepared by one omniscient, almighty
> god-like creature. The HLUT will have to present one
> output per input entry, even if this output is selected
> randomly from a set of slightly randomized ones. It is
> not allowed *any other* mechanism or trick in this HLUT
> other than this mindless mechanism of retrieval.
> It is clear that this HLUT is just a mathematical figure,
> that nobody is claiming that it is possible to physically
> construct such a beast, we only admit its logical existence.
> The last condition is the obedience of the laws of physics
> (quantum physics in particular), except for the existence

Ok.  Then let the almighty god-like creature use QM to
randomize the outputs correctly and thus avoid a terrible
red herring.

> of the HLUT and our way to get the information from the
> table in a timely manner.
>
> Possible answers to each question posed below are:
>
> POSSIBLE  IMPOSSIBLE   LIKELY   UNLIKELY    VERY-VERY-VERY-UNLIKELY
>
> TRUE    FALSE     INDETERMINATE      DON'T KNOW     NONE-OF-ABOVE
>
> GETOUTOFHERE     OFFENSIVE-QUESTION      I'D-FLIP-A-COIN
>
> Questions: In what way will an HLUT, constructed properly using
> the data from Albert Einstein, be able to present the
> following kinds of behaviors:

> a) The exact behavior presented by "our" Albert Einstein every time
> this HLUT is "run"

False

> b) The exact behavior presented by "our" Albert Einstein at
> least once in an arbitrarily large (but finite) number of "runs"

True

> c) A behavior very similar to our Albert Einstein in any "run"

Indeterminate

> d) A behavior considered intelligent in every "run"

Very very very likely

> e) A behavior considered as human (no regard to intelligence) in
> any "run"

Very very very likely

> f) Does the HLUT have all possible behaviors of Albert Einstein
> stored in its entries?

True

> g) Will this HLUT ever be able to discover the Theory of Relativity
> (assuming no limit of time)?

Indeterminate

> h) Will this HLUT discover Relativity (in the same period of time as
> the real Einstein)?

Indeterminate

> i) Can I say that this HLUT is intelligent?

Indeterminate

> j) If you had the opportunity to use this HLUT to help you in
> the decisions of your life, would you use it?

True.

I believe the differences in our votes are due to the fact
that you assume equiprobability of the possible outputs, thus
ignoring your own assumption that the laws of physics (QM)
must be obeyed.  Also, you do not say how old Einstein is
when the simulation starts, hence my answers to 'g' and 'h'.
My answers would have been consistent with yours if I had
assumed equiprobability.
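To make the equiprobability point concrete, here is a toy sketch
(the candidate outputs and the "QM-derived" probabilities are
invented for illustration):

```python
import random
from collections import Counter

# One HLUT entry's candidate outputs, drawn either with toy
# physics-derived weights or under the naive equiprobable assumption.
candidates = ["plausible-action", "odd-action", "absurd-action"]
qm_weights = [0.98, 0.019, 0.001]  # nearly all mass on typical behavior

random.seed(0)
n = 10_000
qm_runs = Counter(random.choices(candidates, weights=qm_weights, k=n))
uniform_runs = Counter(random.choices(candidates, k=n))

# Under QM-style weights the absurd output is vanishingly rare;
# under equiprobability it shows up about a third of the time.
```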

Regards,
Pierre-Normand Houle

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own   

From: andersw+@pitt.edu (Anders N Weinstein)
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu>
References: <36e951fb@news3.us.ibm.net>
Organization: University of Pittsburgh
Newsgroups: comp.ai.philosophy

In article <36e951fb@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>The debate around HLUTs seems to have opened a chasm
>between two factions. One side thinks that human behavior
>(including intelligence) can be completely captured by
>taking into account only the discrete set of inputs from
>sensory organs coupled with a history of previous entries;
>the other side of the chasm advocates that human
>behavior, and intelligence in particular, cannot be
>seen this way.
>
>In the former group, the contestants seem to be
>Daryl McCullough, Jim Balter and Pierre-Normand Houle,
>along with others who may appear a little undecided.
>I will call this group the "rationalists" (no offense
>implied in any form ;-)
>
>In the latter group, I list myself, Neil Rickert,
>Michael Edelman and Anders Weinstein, with varying
>degrees of adherence to some claims. I will call
>this group the "neoempiricists" (again, sorry if this
>is not the best name).

I have a slightly different take on it.

I hold a kind of ontological pluralism: I think that reality can be
described and explained at many levels, and in many different terms,
and I think these terms are not all reducible to a common base. I
reject reductive physicalism, for example, as a reasonable constraint
on scientific or everyday discourse. Further, features like
explanation or lawfulness are relative to a level of description.

Now consider the thesis of "computationalism".  As I understand it, the
classical picture of "computationalism" (or 'cognitivism') as
exemplified by work of e.g. Pylyshyn and Fodor and Newell and Simon and
Haugeland, is the idea that there is a quite distinctive level of
explanation, call it the computational level, and that a genuine
explanatory psychological science can be located at this level, roughly
the level of discrete representations and rational rule-governed
transformations over them.

Now as articulated by these worthies, computationalism is a paradigm
that defines a level, and one that may prove fruitless. I.e. it is a
high-level hypothesis that could be shown up as false, or, if you
prefer, could degenerate into a stagnant research program, or be shown
to be the wrong conceptual scheme for good explanations. The genuine
explanatory principles underlying intelligent human behavior might be
found in other terms, or at other levels, or perhaps there will be none
to be found.

It is emphatically *not* something that is taken to be true apriori or
that can be deduced to be true simply in virtue of the applicability to
the human body of fundamental physical law.  (This is very clear in
Fodor, for example, who takes it that he must reject a form of reductive
physicalism in favor of the autonomy of the computational level -- see
his "Special Sciences", a chapter in his _Language of Thought_). The
concept of a computer is a *special* one; not everything is usefully
explained using it. It doesn't *have* to be the right one.

Now in his classic _What Computers Can't Do_, Bert Dreyfus long ago
gave many excellent reasons why the assumptions that motivated
enthusiasm for computationalism may be rejected. He refers for example
to the "biological assumption" that neurons function as digital logical
elements, which has long been questioned by those in the know about
neural function [he quotes Von Neumann, e.g.]. Other assumptions he
dubs the epistemological and the ontological assumptions; he points
out that both are questionable and cites some alternatives, drawing
on his reading of phenomenological philosophy.

But these criticisms have never really persuaded many people.  In his
paper "The Nature and Plausibility of Cognitivism" (reprinted in his
_Mind Design_), Haugeland mentioned that cognitivism (=~ GOFAI, =
"computationalism" =~ the Physical Symbol System hypothesis =~ Rules
and Representations =~ Classical cognitive Architecture) gains a lot of
its plausibility from the "what else could it be?" challenge. It appears
that only the development of fairly concrete alternatives could effectively
change people's assumptions.

But in the years since these works originally appeared, it seems to me
that there have emerged several quite definite alternatives to the
"classical" view:  connectionism, ecological psychology, the "dynamical
approach" philosophically championed recently by Tim Van Gelder,
"behavior-based robotics" aka "situated cognition" are all to some
extent alternative paradigms, I believe, with *very different* ideas of
what the explanatory principles are.  Even though neural networks are,
in some sense, computational devices, for example, they are not Newell
and Simon style symbolic inference engines.

The development of living alternatives is so dramatic that Dreyfus
asserts somewhat smugly at the beginning of the most recent edition of
his classic that GOFAI has all the signs of a stagnant or degenerating
research program (Lakatos's phrase, embellishing some Kuhnian ideas) and
treats it in effect as a god that failed. Others will probably assure me
confidently that Newell and Simon style cognitivism is recognized as dead,
and that everyone in AI now is a kind of eclectic practitioner drawing
from a variety of different methods. I can't comment on this myself, but
it does seem to me that many practitioners I encounter are quite hospitable
to the possibility of, e.g. so-called "analog" methods.

Anyway, for all these reasons, I see a real issue as to what the
explanatory principles are, in which [symbolic] "computationalism" is
one program, "connectionism" another, and so on, and in which case it
is very reasonable to view "computationalism" as dubious as an
explanatory level, and as dogmatically adhered to in some quarters by
virtue of certain prejudices that hold the assumptions that it
*must* be true in place.

On this basis, from the point of view of an explanatory level, I think
it makes excellent sense to assert: the explanatory principles behind
human learning are very different from those of a finite-state machine
like an HLUT. You, naturally enough, recoiled at the suggestion that
the explanatory principles are those of a finite automaton -- even
though few have ever proposed finite-state machines as a serious
psychological theory of total human behavior. (They *have* been
advanced as models of specific parts or aspects of human behavior,
however. And possibly behaviorism, or at least early behaviorist
explanations of verbal behavior, can be seen as based on them. But I
don't think they have ever been promulgated as a grand theory in the
cognitivist tradition.)

Of course many so-called computationalists would agree 100% with you. Chomsky
held, for example, that the only systematic explanatory principles are
found at the abstract level of linguistic *competence*, the content of
your tacit knowledge of grammar, not verbal performance, which is an
unsystematic mess resulting from many interacting factors.  At the
level of competence Chomsky asserted that finite automata are not
adequate models. But he never asserted the possibility of any computational
model of verbal behavior, much less all human behavior -- he seems to
believe there may be no system to be found here, in part because of free will.

But now we face a problem. The people who assert the existence of a
HLUT can now respond that they are not now and never were interested in
scientifically useful explanations. They are not interested in the
existence of an autonomous explanatory level, or in classifications
under which there is some systematicity to be found in human behavior.
They are not interested in cognitive science at all, really, as a
responsible scientific enterprise involved in the search for powerful
explanatory principles.

Rather, they are evidently willing to defend their thesis when
challenged by shifting to pitching their claim at a different,
wholly non-explanatory, "atomic" level of explanation.

So, they grant, perhaps brain function *is* best explained not as a
symbolic computer but rather at the level of a physical device
operating in accordance with physical law, recognizing patterns by
resonating, for example, learning by performing gradient descent in
weight space, for example, maintaining equilibrium by coupling a
certain dynamical system with the state of the world in an interaction,
for example..., so that the right explanatory principles are not those
of symbolic representations and rational inference rules. Still, they
may crow, a discrete-state machine could *simulate* any of these
dynamical processes to any desired accuracy.
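As a toy illustration of that simulability claim (the dynamical
system here is invented for the sketch), a discrete-state update can
etch out a continuous trajectory to any desired accuracy simply by
shrinking the step size -- without thereby explaining anything:

```python
import math

def simulate_decay(x0, t_end, steps):
    """Euler discretization of the continuous system dx/dt = -x:
    a discrete-state machine tracing the trajectory step by step."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x = x + dt * (-x)  # discrete update approximating the flow
    return x

exact = math.exp(-1.0)                  # true value of x(1) for x0 = 1
coarse = simulate_decay(1.0, 1.0, 10)   # crude discretization
fine = simulate_decay(1.0, 1.0, 10000)  # much finer discretization

# The finer the steps, the closer the discrete trajectory hugs the
# continuous one -- accuracy without explanatory content.
```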

I think the "digital simulability" defense was neatly disposed of long
ago as a defense of "computationalism" by Dreyfus -- for it does not
defend the *interesting* and distinctive thesis of computationalism.
Perhaps it is true, but it is of no consequence if it is, since the
interesting thesis of "computationalism" could still be as false
(inutile) for a human brain functionally conceived as it is for any
other arbitrary physical system.

For that reason, I would mainly want to resist getting sucked into a
debate about the merits of the digital *simulability* defense at some
atomic, non-psychological level of description. If that's the best
"computationalism" has to offer, then I would say computationalism is
useless.  Possibly there is some grand metaphysical issue in which its
truth could be pertinent, perhaps, e.g. a debate about free will, but
it is not interesting to the philosophy of cognitive science.

I rather want to point out the shifting of the ground that can go on.
There is a shift from saying that the explanatory principles underlying
human intelligent behavior are those of rule-governed symbolic
computation -- an interesting and defeasible thesis about an explanatory
level -- to saying that *whatever* the principles are, if any, a
discrete state system could etch out some approximation to the total
system trajectory that is good enough, a boring thesis that fails to
define any interesting explanatory equivalence classes.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36eea6d5@news3.us.ibm.net>
References: <36e951fb@news3.us.ibm.net> <7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 16 Mar 1999 18:45:41 GMT, 200.229.240.44
Organization: SilWis
Newsgroups: comp.ai.philosophy

Anders N Weinstein wrote in message
<7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu>...
>In article <36e951fb@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>>[please see the original post for the quoted paragraphs]
>
>I have a slightly different take on it.
>
>I hold a kind of ontological pluralism: I think that reality can be
>described and explained at many levels, and in many different terms,
>and I think these terms are not all reducible to a common base. I
>reject reductive physicalism, for example, as a reasonable constraint
>on scientific or everyday discourse. Further, features like
>explanation or lawfulness are relative to a level of description.
>

I'm very comfortable with such an interpretation. In fact, the
existence of that many levels of interpretation (and that many
followers) is one of the preconditions for a useful evolution of our
intellectual process, if we can think of this process as something
that can also be explained in Darwinian ways. So diversity is really
a plus.

>Now consider the thesis of "computationalism".  As I understand it, the
>classical picture of "computationalism" (or 'cognitivism') as
>exemplified by work of e.g. Pylyshyn and Fodor and Newell and Simon and
>Haugeland, is the idea that there is a quite distinctive level of
>explanation, call it the computational level, and that a genuine
>explanatory psychological science can be located at this level, roughly
>the level of discrete representations and rational rule-governed
>transformations over them.
>
>Now as articulated by these worthies, computationalism is a paradigm
>that defines a level, and one that may prove fruitless. I.e. it is a
>high-level hypothesis that could be shown up as false, or, if you
>prefer, could degenerate into a stagnant research program, or be shown
>to be the wrong conceptual scheme for good explanations. The genuine
>explanatory principles underlying intelligent human behavior might be
>found in other terms, or at other levels, or perhaps there will be none
>to be found.
>

I think this is a good picture, but I would also add another problem.
In my vision, computationalism goes too fast to that rational,
rule-based level. Because of this, they seem to sidestep the huge
amount of perception and recognition tasks that, one way or another,
must be accounted for in the useful and comprehensive model we all
wish for. I guess they go to that level too prematurely, not that
they have it wrong.

>It is emphatically *not* something that is taken to be true apriori or
>that can be deduced to be true simply in virtue of the applicability to
>the human body of fundamental physical law.  (This is very clear in
>Fodor, for example, who takes it that he must reject a form of reductive
>physicalism in favor of the autonomy of the computational level -- see
>his "Special Sciences", a chapter in his _Language of Thought_). The
>concept of a computer is a *special* one; not everything is usefully
>explained using it. It doesn't *have* to be the right one.
>
>Now in his classic _What Computers Can't Do_, Bert Dreyfus long ago
>gave many excellent reasons why the assumptions that motivated
>enthusiasm for computationalism may be rejected. He refers for example
>to the "biological assumption" that neurons function as digital logical
>elements, which has long been questioned by those in the know about
>neural function [he quotes Von Neumann, e.g.]. Other assumptions he
>dubs the epistemological and the ontological assumption, both of which
>he points out are questionable and cites some alternatives, drawing on
>his reading of phenomenological philosophy.
>

I'm especially sympathetic to some of Hubert's criticisms, and he
managed to update them in the new edition of that book ("What Computers
STILL Can't Do", rev. edition 1992). One of the points at stake in
this thread appears in a citation that Dreyfus included in this
edition, which I find so interesting that I will quote it in part:

    "The difference between the mathematical mind (esprit de
    géométrie) and the perceptive mind (esprit de finesse): the
    reason that mathematicians are not perceptive is that they
    do not see what is before them, and that, accustomed to the
    exact and plain principles of mathematics, and not reasoning
    till they have well inspected and arranged their principles,
    they are lost in matters of perception where the principles do
    not allow for such arrangement..."
    Pascal, Pensées

>But these criticisms have never really persuaded many people.  In his
>paper "The Nature and Plausibility of Cognitivism" (reprinted in his
>_Mind Design_), Haugeland mentioned that cognitivism (=~ GOFAI, =
>"computationalism" =~ the Physical Symbol System hypothesis =~ Rules
>and Representations =~ Classical cognitive Architecture) gains a lot of
>its plausibility from the "what else could it be?" challenge. It appears
>that only the development of fairly concrete alternatives could effectively
>change people's assumptions.
>

That makes a lot of sense, and it obviously presses those of us who are
not comfortable with pure computationalism to find something very different.

>But in the years since these works originally appeared, it seems to me
>that there have emerged several quite definite alternatives to the
>"classical" view:  connectionism, ecological psychology, the "dynamical
>approach" philosophically championed recently by Tim Van Gelder,
>"behavior-based robotics" aka "situated cognition" are all to some
>extent alternative paradigms, I believe, with *very different* ideas of
>what the explanatory principles are.  Even though neural networks are,
>in some sense, computational devices, for example, they are not Newell
>and Simon style symbolic inference engines.
>

Indeed. I would add the evolutionary approach of John Holland and
others. It is interesting that Herb Simon later espoused a different
vision, in which he adopts some of the principles of complexity theory
(cf. his book "The Sciences of the Artificial").

>The development of living alternatives is so dramatic that Dreyfus
>asserts somewhat smugly at the beginning of the most recent edition of
>his classic that GOFAI has all the signs of a stagnant or degenerating
>research program (Lakatos's phrase, embellishing some Kuhnian ideas) and
>treats it in effect as a god that failed. Others will probably assure me
>confidently that Newell and Simon style cognitivism is recognized as dead,
>and that everyone in AI now is a kind of eclectic practitioner drawing
>from a variety of different methods. I can't comment on this myself, but
>it does seem to me that many practitioners I encounter are quite hospitable
>to the possibility of, e.g. so-called "analog" methods.
>

Although I am not inclined to accept Newell and Simon's hypotheses,
and even taking into consideration my preference for dynamical systems
theory, I'm not comfortable with dismissing the symbolicists'
work completely. I guess we will eventually have to re-read their works
as describing something to be attained by our systems, no matter what
methods we use for low-level sensory transduction and perception. What
GOFAI proposes, in useful terms, is the tip of the iceberg. One way or
another, we will have to build the rest of the iceberg, even if using
different principles, but the tip will have to perform the way the
symbolicists have established.

>
>On this basis, from the point of view of an explanatory level, I think
>it makes excellent sense to assert: the explanatory principles behind human
>learning are very different from those of a Finite-state machine like a HLUT.
>You, naturally enough, recoiled at the suggestion that the explanatory
>principles are those of a finite automaton -- even though few have
>ever proposed Finite state machines as a serious psychological
>theory of total human behavior. (They *have* been advanced as models
>of specific parts or aspects of human behavior, however. And possibly
>behaviorism or at least early behaviorist explanations of verbal
>behavior can be seen as based on them. but I don't think they have
>ever been promulgated as a grand theory in the cognitivist tradition.)

You seem to have grasped some aspects of my reluctance about HLUTs.
My primary concern is that of a realist engineer. I know that we humans
don't have much "mental space" to waste on unfruitful paradigms.
AI is stuck because we're not perceiving what the solution is.
The last four decades were spent delving into the problem, and little
really general progress has been made. Most of the progress happened
inside sects which do not share basic principles regarding intelligence.
To improve our chances of perceiving the problem ahead of us, we
must minimize the "noise" in front of our eyes. HLUTs seem to be
a good exemplar of that kind of noise.

>
>Of course many so-called computationalists would agree 100% with you.
>Chomsky
>held, for example, that the only systematic explanatory principles are
>found at the abstract level of linguistic *competence*, the content of
>your tacit knowledge of grammar, not verbal performance, which is an
>unsystematic mess resulting from many interacting factors.  At the
>level of competence Chomsky asserted that finite automata are not
>adequate models. But he never asserted the possibility of any computational
>model of verbal behavior, much less all human behavior -- he seems to
>believe there may be no system to be found here, in part because of free
>will.
>

I have a special "relationship" with Chomsky and Fodor. I disagree with
both of them on some issues and totally agree on others. Chomsky's
attack on behaviorism is one of the things I praise in his vision.

>But now we face a problem. The people who assert the existence of a
>HLUT can now respond that they are not now and never were interested in
>scientifically useful explanations. They are not interested in the
>existence of an autonomous explanatory level, or in classifications
>under which there is some systematicity to be found in human behavior.
>They are not interested in cognitive science at all, really, as a
>responsible scientific enterprise involved in the search for powerful
>explanatory principles.

I would really like to read their opinions on the issues you raised
in this paragraph.

>
>Rather, they are evidently willing to defend their thesis when challenged
>by shifting to pitching their claim at a different, wholly
>non-explanatory, 'atomic" level of explanation.
>

That seems to be true. But I'm not sure you can persuade them
to make a clear statement of this position :-)

>So, they grant, perhaps brain function *is* best explained not as a
>symbolic computer but rather at the level of a physical device
>operating in accordance with physical law, recognizing patterns by
>resonating, for example, learning by performing gradient descent in
>weight space, for example, maintaining equilibrium by coupling a
>certain dynamical system with the state of the world in an interaction,
>for example..., so that the right explanatory principles are not those
>of symbolic representations and rational inference rules. Still, they
>may crow, a discrete-state machine could *simulate* any of these
>dynamical processes to any desired accuracy.
>

I may concede that everything may boil down to a finite-state
automaton. What I cannot agree with is that this is the best level at
which to do our job of understanding human cognition and implementing
it artificially in machines. Starting from FSAs seems to me as unwise
as programming a Pentium III in machine code instead of a high-level
language. One may admit that the code will run faster and be smaller,
but the gain in understandability and generality far outweighs any
eventual gain in performance. Besides, in artificial intelligence
we're not in the optimization phase; we're in the phase of making
something work at all, even if each transaction takes a month to
complete.
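[The "digital simulability" claim quoted above can be made concrete with a small sketch of my own (not from the thread, and deliberately toy-sized): a continuous gradient-descent flow approximated by a discrete-state update rule. Shrinking the step size brings the discrete trajectory arbitrarily close to the continuous one, which is precisely why simulability at this level says so little about the right explanatory level.]

```python
import math

# Continuous flow: dx/dt = -f'(x) for f(x) = x^2, i.e. dx/dt = -2x.
# Exact solution of this flow: x(t) = x0 * exp(-2t).
def exact(x0, t):
    return x0 * math.exp(-2.0 * t)

# Discrete-state approximation via Euler steps: x <- x - h * 2x.
def discrete(x0, t, h):
    x = x0
    for _ in range(int(t / h)):
        x = x - h * 2.0 * x
    return x

x0, t = 1.0, 1.0
err_coarse = abs(discrete(x0, t, 0.1)   - exact(x0, t))
err_fine   = abs(discrete(x0, t, 0.001) - exact(x0, t))
# A finer discretization tracks the continuous trajectory more closely.
assert err_fine < err_coarse
```

[The discrete machine "etches out" the trajectory to any desired accuracy, yet the explanatory content -- that the system is descending a gradient -- lives in the continuous description, not in the table of discrete steps.]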

>
>For that reason, I would mainly want to resist getting sucked into a
>debate about the merits of the digital *simulability* defense at some
>atomic, non-psychological level of description. If that's the best
>"computationalism" has to offer, then I would say computationalism is
>useless.  Possibly there is some grand metaphysical issue in which its
>truth could be pertinent, perhaps, e.g. a debate about free will, but
>it is not interesting to the philosophy of cognitive science.

I agree entirely.

>
>I rather want to point out the shifting of the ground that can go on.
>There is a shift from saying that the explanatory principles underlying
>human intelligent behavior are those of rule-governed symbolic
>computation -- an interesting and defeasible thesis about an explanatory
>level -- to saying that *whatever* the principles are, if any, a
>discrete state system could etch out some approximation to the total
>system trajectory that is good enough, a boring thesis that fails to
>define any interesting explanatory equivalence classes.

Our choice of models for handling this problem will dramatically
affect our chances of success. We all know that intelligence is
probably one of the greatest problems our science has tackled so far.

Regards,
Sergio Navega.

From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36f11424.2902784@news.demon.co.uk>
Content-Transfer-Encoding: 7bit
X-NNTP-Posting-Host: chatham.demon.co.uk:158.152.25.87
References: <36e951fb@news3.us.ibm.net> <7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@demon.net
X-Trace: news.demon.co.uk 921574222 nnrp-07:5317 NO-IDENT chatham.demon.co.uk:158.152.25.87
MIME-Version: 1.0
Newsgroups: comp.ai.philosophy

(Anders N Weinstein) wrote:

>I hold a kind of ontological pluralism: I think that reality can be
>described and explained at many levels, and in many different terms,
>and I think these terms are not all reducible to a common base.

I agree with you exactly, both with the words and with what you adduce
in what follows. This is one of the most useful posts to this
newsgroup that I have seen, and I thank you for the effort that was
involved in writing it.

>Now as articulated by these worthies, computationalism is a paradigm
>that defines a level, and one that may prove fruitless. I.e. it is a
>high-level hypothesis that could be shown up as false, or, if you
>prefer, could degenerate into a stagnant research program, or be shown
>to be the wrong conceptual scheme for good explanations.

My sole quibble is that at the end, you lay out your goods but fail to
steer the customer. I suspect that this is a wise strategy. We are, in
respect of strong AI, like blind men - persons - feeling parts of an
intangible elephant. There is no handle that we can grasp upon the
workings that lead to cognition, we have no idea what cognition 'is'
or indeed how to talk about it at anything beyond a phenomenological
level. We can, as a consequence, validate very little about its
operations, save to note its absence (subject to more or less caveats)
when certain brain states exist, such as anaesthesia, major lesions
and some forms of sleep. It may be that discussion is fruitless until
we have such a handle. It may be that we experience computing
surprises - or more probably, that neurophysiology will offer insight
- that will offer us the necessary grip. My guess is that we need to
map the foothills and learn to live in them before we go for the snow
peak of native legend, hidden in currently perpetual cloud.
_______________________________

Oliver Sparrow

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36eea6d8@news3.us.ibm.net>
References: <36e951fb@news3.us.ibm.net> <7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu> <36f11424.2902784@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 16 Mar 1999 18:45:44 GMT, 200.229.240.44
Organization: SilWis
Newsgroups: comp.ai.philosophy

Oliver Sparrow wrote in message <36f11424.2902784@news.demon.co.uk>...
> (Anders N Weinstein) wrote:
>
>>I hold a kind of ontological pluralism: I think that reality can be
>>described and explained at many levels, and in many different terms,
>>and I think these terms are not all reducible to a common base.
>
>I exactly agree with you. Also both with the words and what you adduce
>in the following. This is one of the most useful posts to this
>newsgroup that I have seen, and I thank you for the effort that was
>involved in writing it.
>

I remember having read another post of yours in which you proposed
something along similar lines, so the praise should also go to you.
And I agree that Anders' post was really very important.

Regards,
Sergio Navega.

From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <36f1c06a.2886825@news.demon.co.uk>
Content-Transfer-Encoding: 7bit
X-NNTP-Posting-Host: chatham.demon.co.uk:158.152.25.87
References: <36e951fb@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@demon.net
X-Trace: news.demon.co.uk 921485969 nnrp-06:21856 NO-IDENT chatham.demon.co.uk:158.152.25.87
MIME-Version: 1.0
Newsgroups: comp.ai.philosophy

"Sergio Navega"  asked whether a HLUT would present:

>a) The exact behavior presented by "our" Albert Einstein everytime
>this HLUT is "run"

I have answered this in a separate thread. It depends what
instantiating routines exist within the AlbertE-HLUT. If it is
self-modifying (which means it's a H-PROM, I suppose), then it will
tend to drift off as novel stimuli are presented, but obviously start
from the same place. If it is a H-ROM, then it will drift in much the
same way, but within a much smaller decision space. That is, every
branching structure, every vote-weighing element would already be in
place and unchangeable. (Or are there an infinity of these as well?
What distinguishes AE in this multitude?)

>b) The exact behavior presented by "our" Albert Einstein at
>least once in an arbitrarily large (but finite) number of "runs"

As above

>c) A behavior very similar to our Albert Einstein in any "run"

Well that would be rather the point of a successful instantiation,
wouldn't it? If it did not 'do' that task, then it would not be a LUT
of AE. Turing II.

>d) A behavior considered intelligent in every "run"

As (c)

>e) A behavior considered as human (no regard to intelligence) in
>any "run"

As (c)

>f) Does the HLUT have all possible behaviors of Albert Einstein
>stored in its entries?

Not what AE could become when presented with new things. That is, if
all the HLUT can do is LU, then it will find only what was there when
it started. These can make new configurations, but not configurations
which represent states that 'AE' evolves to become. Indeed, it cannot
evolve in this way, merely run in a maze.

>g) Will this HLUT ever be able to discover the Theory of Relativity
>(assuming no limit of time)?

How old is the AE from which this snapshot is taken?

>h) Will this HLUT discover Relativity (same period of time as
>the real Einstein)?

Depends

>i) Can I say that this HLUT is intelligent?

No, but the system of which it is a part could be, if properly
designed: it was in the real AE, for example.

>j) If you had the opportunity to use this HLUT to help you in
>the decisions of your life, would you use it?

Only if in close orbit around a relativistic object.
_______________________________

Oliver Sparrow

From: Jim Washington <jwashin@vt.edu>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 17 Mar 1999 00:00:00 GMT
Message-ID: <36F02391.747DAE8B@vt.edu>
Content-Transfer-Encoding: 7bit
References: <36e951fb@news3.us.ibm.net> <36EFD300.A4C17D@vt.edu> <36eff363@news3.us.ibm.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: Abuse Role <abuse@lynchburg.net>, We Care <abuse@newsread.com>
X-Trace: monger.newsread.com 921707363 208.219.99.147 (Wed, 17 Mar 1999 16:49:23 EDT)
Organization: Virginia Tech
MIME-Version: 1.0
NNTP-Posting-Date: Wed, 17 Mar 1999 16:49:23 EDT
Newsgroups: comp.ai.philosophy

Sergio Navega wrote:

> Thanks for your vote, Jim.

May I respond to your response with a koan?

I dreamed that I was wading across a river.  My boss was on
one bank yelling at me.  He was saying that I was doing it
wrong.  I said that there was not a right nor wrong way to
wade across a river. He picked up a stone and threw it.  It
hit me square on the forehead.  In that moment my dream-self
was enlightened.

I think this better than a point-by-point about where we
disagree. No?

Regards,

-- Jim

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 18 Mar 1999 00:00:00 GMT
Message-ID: <36f0fc03@news3.us.ibm.net>
References: <36e951fb@news3.us.ibm.net> <36EFD300.A4C17D@vt.edu> <36eff363@news3.us.ibm.net> <36F02391.747DAE8B@vt.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Mar 1999 13:13:39 GMT, 129.37.182.242
Organization: SilWis
Newsgroups: comp.ai.philosophy

Jim Washington wrote in message <36F02391.747DAE8B@vt.edu>...
>Sergio Navega wrote:
>
>> Thanks for your vote, Jim.
>
>May I respond to your response with a koan?
>
>I dreamed that I was wading across a river.  My boss was on
>one bank yelling at me.  He was saying that I was doing it
>wrong.  I said that there was not a right nor wrong way to
>wade across a river. He picked up a stone and threw it.  It
>hit me square on the forehead.  In that moment my dream-self
>was enlightened.
>
>I think this better than a point-by-point about where we
>disagree. No?
>

One thing is clear about HLUTs that was not mentioned here.
Although HLUTs are able to replicate any kind of intelligent
behavior (and even non-intelligent behavior, like trying to argue
against HLUTs ;-), HLUTs cannot have one's thoughts. When somebody
dreams and reasons, all of this is unknown to the HLUT if those dreams
and reasonings don't show up externally as behavior. That's the
essence of HLUTs: you're seen as a stimulus/response machine,
much like a leech.
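[The stimulus/response point can be sketched in a few lines. This is a toy illustration of my own, not anything from the thread: the table is keyed on the entire input history, so any "thought" that never surfaces as behavior simply has no place in it.]

```python
# A toy HLUT: its response is a pure function of the whole input history.
# (Hypothetical miniature; a real HLUT would be astronomically large.)
class HLUT:
    def __init__(self, table):
        self.table = table    # maps input-history tuples -> responses
        self.history = ()

    def respond(self, stimulus):
        self.history += (stimulus,)
        # Everything the HLUT "is" lies in this single lookup; there is
        # no inner process of dreaming or reasoning to consult.
        return self.table.get(self.history, "<no entry>")

table = {
    ("hello",): "hi",
    ("hello", "how are you?"): "fine, thanks",
}
agent = HLUT(table)
assert agent.respond("hello") == "hi"
assert agent.respond("how are you?") == "fine, thanks"
```

[Note that the only state is the accumulated stimulus history: there is nothing between lookup and response where an unexpressed dream could live.]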

Regards,
Sergio Navega.

From: andersw+@pitt.edu (Anders N Weinstein)
Subject: Re: VOTE: Your HLUT Opinion is Valuable Here
Date: 17 Mar 1999 00:00:00 GMT
Message-ID: <7covgp$cst$1@usenet01.srv.cis.pitt.edu>
References: <36e951fb@news3.us.ibm.net> <7ck4ao$gcq$1@usenet01.srv.cis.pitt.edu> <36f11424.2902784@news.demon.co.uk> <36EF536D.9D2B45ED@sandpiper.net>
Organization: University of Pittsburgh
Newsgroups: comp.ai.philosophy

In article <36EF536D.9D2B45ED@sandpiper.net>,
Jim Balter  <jqb@sandpiper.net> wrote:
>Oliver Sparrow wrote:
>>
>>  (Anders N Weinstein) wrote:
>>
>> >I hold a kind of ontological pluralism: I think that reality can be
>> >described and explained at many levels, and in many different terms,
>> >and I think these terms are not all reducible to a common base.
>>
>> I exactly agree with you. Also both with the words and what you adduce
>> in the following. This is one of the most useful posts to this
>> newsgroup that I have seen, and I thank you for the effort that was
>> involved in writing it.
>
>I beg to differ.  The next paragraph contradicts the first.
>Levels of description and explanation are not *hypotheses*,
>and "the wrong conceptual scheme" implies a monad of
>"right conceptual schemes".

As a result of reading Quine's "Two Dogmas", I tend to believe there is
no sharp distinction between a high-level empirical hypothesis and a
conceptual framework. The distinction may rather be only a matter of
degree. Similarly for the distinction between a very general or
high-level hypothesis being shown false and a conceptual framework
being shown useless.

It seems to me to embody a kind of hypothesis to hold that a certain
level of description is a good one. That is why Newell and Simon could
speak of the Physical Symbol System *hypothesis*, even though it is
really a framework or program for developing more determinate specific
theories at the supposed symbolic processing level.

Inquiry could demonstrate there is no useful Physical Symbol System
level of description of the neural processes underlying certain human
capacities, chiefly by showing there is a good explanation employing
another set of explanatory principles, e.g.  some drawn from dynamical
systems. In that case we can say the symbol system hypothesis has been
shown false.  That would still be compatible with there being a
plurality of useful descriptions, it would just say the Physical
Symbol System level is not among them.

>Anders' program here for years has been to deny a place for
>computationalism -- his take on ontological pluralism strikes me like
>members of the American Christian right who complain about religious
>persecution out of one side of their mouths while referring to the U.S.
>as a "Christian nation" out of the other.

I'm not sure what you mean by "deny a *place* for computationalism".
Pluralism can't entail that *everything* is true or that every approach
is equally good.

I will admit to two very general positions. The first is relatively
apriori -- ie. much more towards the conceptual pole. I want to insist
on an ontological distinction between the psychological and the control
system levels of description. That is, I want to argue that mental states as
implicitly defined by everyday intentional explanation are not states
of an inner "control system" inside the brain of any sort. (A computer
person might say I take them to be entirely "virtual".)

I can easily summarize this position in slogan form: "the mind is NOT a
control system".  That entails I hold the mind is not an "analog"
control system, not a "digital" control system, not a hybrid
architecture control system. Rather my view is that person-level (or,
more generally, organism-level) intentional states are located at quite
a different level than any control system level, and are constituted in
part by relations to the environment.

On this basis, I can identify one fundamental confusion of *some*
cognitive-scientifically oriented philosophy of mind: the failure to
distinguish properly between mental states and control system states,
between intentional explanation and control system explanation. On the
other hand, I also see plenty of places where some such distinction
*is* recognized: e.g. in Dennett's distinction between the intentional
stance and the design stance, or in Newell's idea of the Knowledge
Level, or in various roboticist's ideas about "knowledge compilation"
or the "fallacy of implementing the description" (a label I got from
Chris Malcolm, I believe). Perhaps I draw more importance from it than
these others do.

That, as I said, is a relatively conceptual position. I think we can
determine this largely by reflection and thought experiments on the
ordinary uses of psychological words and the commitments involved
therein. The Twin Earth thought experiments might be one way of showing
this, for example.

On the other hand, I do not deny that there is something that can to
some extent be viewed as a robot "control system" inside our heads.  So
of course I concede that there is, one or more levels down from the
mental proper, as it were, a potential object of scientific study. This
science, based on something like Dennett's "design stance", is concerned
with viewing the neural system in functional terms, the way we
ordinarily view a robot control system. We might think of this project
as one of reverse-engineering the human control system.

But control system science like any other is complicated by
the search for the right explanatory concepts or levels.  On this issue we
confront a much more empirical matter. At the control system level my
sympathies definitely lie with "anti-Cartesian" alternatives to classic
symbolic computational models, e.g. with behavior-based
robotics.  But only empirical work could prove these viable.

As I see it, it is still possible some conceptual work might be needed to
help make a place for these alternatives and to foster communication
between adherents to different paradigms.  But I certainly don't take
myself to be trying to rule out anything genuinely empirical. 

