Selected Newsgroup Message

Dejanews Thread

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Can artificial intelligence really exist?
Date: 30 Nov 1998 00:00:00 GMT
Message-ID: <3663142a.0@news3.ibm.net>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.alife,comp.ai.neural-nets,comp.ai.philosophy

Lars Kroll Kristensen wrote in message <73uhe4$247$1@xinwen.daimi.au.dk>...
>In <73u6f5$hue$1@reader2.wxs.nl> "H.J. Gould" <john@liemar.nl> writes:
>
>I agree completely with your argument regarding the chinese box.
>
>The best arguments I have heard, regarding the impossibillity of
>Machine
>intelligence  is that any intelligence is by nescessity connected to a
>body. The counter counter argument is of course to simulate the body
>as well...
>
>Any opinions on that ? Is it possible to generate machine intelligence
>without simulating atleast a rudimentary "body" for the intelligence ?
>

Here we go again with Searle! I promised myself not to get into this
kind of discussion anymore. Let's make one exception.

Searle's argument is strong and fragile at the same time. I hope to be
clear enough with my arguments.

It is strong because it showed clearly that a computer fed with a bunch
of symbols from any human language *will not* develop a *human-equivalent*
level of understanding of the world. Inside the Chinese Room you may
put anything: a Cray, a Sun workstation, an "idiot savant" or a
group of 10 "Einsteins". Their performance in the task of understanding
Chinese (or the world outside the room, for that matter), will be
miserably poor.

It is fragile because our brain does, in a way, something *very* similar
to the Chinese Room, and all the understanding that we have of the universe
around us is obtained through an analogous process. This argument needs
more space to be made clear.

Think of our brain as the entity we're trying to put inside the
room. Everything this brain captures from the outside world comes from
the sensory perceptions, things that are "outside" the room. This brain
is fed only with signals (pulses) in which all that is relevant is the
timing between the spikes. The brain in that room does not have
only one "door" through which it receives these pulses, but a large
number of them, coming from the primary sensory inputs (vision,
audition, etc.) and also from others (exteroceptive, proprioceptive,
interoceptive), responsible for our "feeling" of internal events in
our body.

The train of pulses received is comparable to the Chinese inputs because
it is an encoding of external signals (light, for example, is
translated from photons into a sequence of pulses). What this brain "sees"
of the world is a careful transformation made by our sensory equipment.
It is *not* the external signal! To communicate what it senses, our
visual system uses a "syntax" of its own, encoding what is received into
corresponding pulses.

This brain must find meaning in the incoming pulses. It has only
one way of starting this process: looking for patterns and correlations
among the received pulses, temporal "coincidences", things that happen
one after the other, and so on.
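To make this concrete, here is a rough sketch in Python of what such
correlation hunting could look like. Everything in it (the spike times,
the coincidence window, the "door" names) is invented for illustration;
it is not a claim about how real neurons do it.

# Toy sketch: count how often two incoming pulse trains fire within a
# small time window of each other. A learner locked inside the "room"
# could use co-occurrence statistics like this as its only starting
# point for finding structure in otherwise meaningless pulses.

def coincidences(train_a, train_b, window=2.0):
    """Count pairs of spikes from the two trains closer than `window` ms."""
    return sum(1 for ta in train_a for tb in train_b if abs(ta - tb) <= window)

# Hypothetical spike times (in ms) arriving at two different "doors":
door_1 = [10.0, 32.5, 50.1, 71.9]
door_2 = [11.2, 49.0, 90.3]

print(coincidences(door_1, door_2))   # -> 2: the events near 10 ms and 50 ms co-occur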

Here is where Searle's vision of the problem needed more development:
our brain is looking for meaning in the "syntax" of the pulses, much
as a human in the original room would start looking for meaning
in the syntax of the incoming Chinese symbols. This is enough for us
to see that a human being fed with Chinese symbols would be able, after
some time, to perceive some *regularities* in the Chinese phrases.

That will be enough for the human to start conjecturing the remainder
of a phrase from its initial words. Obviously, this is a far cry from
understanding Chinese, but it is, I claim, the kind of
"meaning" that can be extracted from the syntax of the Chinese phrases alone.

Now guess what: if this human in the Chinese Room is allowed to look
at a photograph linked to the text he receives (say, a photo of the sun for
the phrase "the sun is rising"), after some time he will be able to
ascribe *meaning* to the symbols he receives (he will identify the
word "sun" after some experiences). This photograph will
enter his eyes, will be converted into spikes, will resonate
in his visual cortex and will inform him of the "meaning": what
he knows about "suns", what every (sighted) human knows. In this
case, it is a "Chinese Room" inside another.
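A rough sketch of that grounding step, with invented trial data (the
percept labels stand for whatever the visual cortex reports about the
photograph; they are placeholders, not a theory of vision):

from collections import Counter, defaultdict

# Toy sketch: every trial pairs a word from the incoming text with the
# percept evoked by the accompanying photograph. Co-occurrence slowly
# attaches a "meaning" to the word.
trials = [
    ("sun", "BRIGHT_DISK"), ("sun", "BRIGHT_DISK"),
    ("tree", "GREEN_BLOB"), ("sun", "BRIGHT_DISK"),
]

association = defaultdict(Counter)
for word, percept in trials:
    association[word][percept] += 1

def meaning_of(word):
    """Percept most often co-occurring with `word`, or None if never seen."""
    seen = association[word]
    return seen.most_common(1)[0][0] if seen else None

print(meaning_of("sun"))    # -> 'BRIGHT_DISK'
print(meaning_of("moon"))   # -> None: a symbol with no grounding yet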

Searle failed to perceive that his brain is inside a "room" being
fed with data from the world and deriving meaning from
what it receives. All of us who work with Artificial Intelligence
must be aware of what this "means".

Regards,
Sergio Navega.

From: Jim Balter <jqb@sandpiper.net>
Subject: Re: Can artificial intelligence really exist?
Date: 30 Nov 1998 00:00:00 GMT
Message-ID: <36631FCB.704424A8@sandpiper.net>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net>
Organization: Sandpiper Networks, Inc.
Newsgroups: comp.ai,comp.ai.alife,comp.ai.neural-nets,comp.ai.philosophy

Sergio Navega wrote:

> Here we go again with Searle! I promised myself not entering this
> kind of discussion anymore. Let's make one exception.
>
> Searle's argument is strong and fragile at the same time. I hope to be
> clear enough with my arguments.
>
> It is strong because it showed clearly that a computer fed with a bunch
> of symbols from any human language *will not* develop a *human-equivalent*
> level of understanding of the world.

It most certainly did not do any such thing!  Please read the literature
that *rebuts* Searle's argument.  It is widely believed among computer
scientists that Searle's argument is flawed.  It cannot therefore have
"clearly" shown what it is purported to have shown, even if it *did*
show it.

> Inside the Chinese Room you may
> put anything: a Cray, a Sun workstation, an "idiot savant" or a
> group of 10 "Einsteins". Their performance in the task of understanding
> Chinese (or the world outside the room, for that matter), will be
> miserably poor.

The *premise* of the CR is that the Chinese Room itself is
*competent* in all tasks which require an understanding of Chinese,
even if it doesn't "actually" understand Chinese.  Searle grants
that the *behavior* of the CR is equivalent to that of one who
understands Chinese.  His argument is strictly *metaphysical*.

Let's at least get the fundamentals of this thought experiment right
before pursuing its implications.

--
<J Q B>

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Can artificial intelligence really exist?
Date: 01 Dec 1998 00:00:00 GMT
Message-ID: <3663d460.0@news3.ibm.net>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net> <36631FCB.704424A8@sandpiper.net>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.alife,comp.ai.neural-nets,comp.ai.philosophy

Jim Balter wrote in message <36631FCB.704424A8@sandpiper.net>...
>Sergio Navega wrote:
>
>> Here we go again with Searle! I promised myself not entering this
>> kind of discussion anymore. Let's make one exception.
>>
>> Searle's argument is strong and fragile at the same time. I hope to be
>> clear enough with my arguments.
>>
>> It is strong because it showed clearly that a computer fed with a bunch
>> of symbols from any human language *will not* develop a *human-equivalent*
>> level of understanding of the world.
>
>It most certainly did not do any such thing!  Please read the literature
>that *rebuts* Searle's argument.  It is widely believed among computer
>scientists that Searle's argument is flawed.  It cannot therefore have
>"clearly" shown what it is purported to have shown, even if it *did*
>show it.
>

What is clear to some is not so clear to others. Those who rebut Searle's
argument in all aspects (as you seem to be doing) forget that we are
naive when it comes to thinking about "human-equivalent intelligence",
which was the main point of my previous paragraph.

For me, understanding Chinese implies that precondition: human-likeness.
Nothing will get a human-equivalent understanding of the world unless it
has at least human-equivalent sensory equipment, human-equivalent
"computational power" and human-equivalent emotional drives. All of this
is essential to human-equivalent performance. A man inside that room
fails the first prerequisite (he does not have access to the external
world). A rational robot, with vision, audition, etc., will fail on the
third (no emotions and/or drives). A dog, with "emotions" and drives,
will fail on the second (lack of a computationally equivalent brain).

>> Inside the Chinese Room you may
>> put anything: a Cray, a Sun workstation, an "idiot savant" or a
>> group of 10 "Einsteins". Their performance in the task of understanding
>> Chinese (or the world outside the room, for that matter), will be
>> miserably poor.
>
>The *premise* of the CR is that the Chinese Room itself is
>*competent* in all tasks which require an understanding Chinese,
>even if it doesn't "actually" understand Chinese.  Searle grants
>that the *behavior* of the CR is equivalent to that of one who
>understands Chinese.  His argument is strictly *metaphysical*.
>
>Let's at least get the fundamentals of this thought experiment right
>before pursuing its implications.
>

The task of believing that the room is competent in "all" tasks is
typical of philosophers: it might take more than the universe to store
all possible interpretations of phrases (this is much worse than the
frame problem). From this starting point, Searle's argument is just
a joke. But I guess Searle wasn't concerned with this impossibility.
It is the equivalent of making an exploratory trip to the center of the
sun and coming back alive: impossible on any current account, but
useful to experiment with mentally. In this regard, playing
with imagination as if it were possible, Searle's argument is strong.
This does not mean, however, that I agree with Searle's conclusions.

What I would like to read from Searle is the "back to earth" lesson
that this argument suggests: language alone is incapable of providing
understanding; it demands a "listener" with world-grounded concepts
(meaning) to make use of it. This is what Stevan Harnad preaches in
his "symbol grounding problem", a much more useful exploration of the
argument (although modified a little by the Chinese dictionary
idea).

If the CR argument is flawed, so are the usual rebuttals to it.
Searle's lesson should be a different one from what he may have
originally envisioned. Harnad's argument comes closer to a good
exploitation of this intriguing thought experiment.

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Can artificial intelligence really exist?
Date: 01 Dec 1998 00:00:00 GMT
Message-ID: <3663fa33.0@news3.ibm.net>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net> <3663cfad.723161@news.euronet.nl>
Organization: SilWis
Newsgroups: comp.ai.philosophy

TechnoCrate wrote in message <3663cfad.723161@news.euronet.nl>...
>On Mon, 30 Nov 1998 18:49:29 -0200, "Sergio Navega" <snavega@ibm.net>
>wrote:
>
>>It is strong because it showed clearly that a computer fed with a bunch
>>of symbols from any human language *will not* develop a *human-equivalent*
>>level of understanding of the world. Inside the Chinese Room you may
>>put anything: a Cray, a Sun workstation, an "idiot savant" or a
>>group of 10 "Einsteins". Their performance in the task of understanding
>>Chinese (or the world outside the room, for that matter), will be
>>miserably poor.
>>
>This only shows that a computer running a straightforward formal
>system will lack our concept of understandment. Searle doesn't prove
>that a computer can't do better than that. He doesn't prove that it
>isn't possible to make computerprograms doing the same thing as our
>brains. He only stated that mere symbol shuffeling in an expert system
>isn't enough to make understandment.
>

I guess I will be flamed on this.

Searle's experiment can be seen as saying that no machine
operating only through formal manipulation of symbols will ever be
able to "understand" the world, whatever that really means. May I
confuse everybody here and say that this may be utterly wrong?

My position, as can be inferred from previous posts, is against
the symbolic approach to AI. I usually stand together with the guys
who say that interaction with the world is fundamental and that
language alone (and the Turing Test, in a similar fashion) is not
enough to assess one's competence at being humanly intelligent.

What is not so clear is that this position has nothing
against the idea that our brain is a symbol processor
implemented on top of signal-processing machinery. This is not
the position of GOFAI, but it is also not what the purely
connectionist guys are claiming either.

What is a symbol? "TREE" is a symbol that stands for the real
thing, that tree that "you know what I mean". It is not the real thing;
it just stands for it. Everything we have inside our brain has
this same characteristic, although we have a lot of symbols referring
to other *internal* symbols.

TREE. This particular symbol has a structure. In our computers, it is
made of four ASCII characters that, in turn, are four
eight-bit sequences that can be further reduced to four
memory locations (which may even be split among different memory chips).
It will end up as electrical charges stored in transistors,
refreshed continuously. So where's the symbol?
It is not strange to think of that symbol as being *distributed*
inside our computers. But it "acts" as if it were stored in a single
position.
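A small sketch of this point (nothing here is specific to any real
machine's memory layout; it only shows the decomposition):

# The symbol "TREE" is, physically, four ASCII codes, that is, four
# 8-bit patterns that could sit in four different places, yet a
# program manipulates it as a single thing.
symbol = "TREE"
pieces = [(ch, ord(ch), format(ord(ch), "08b")) for ch in symbol]
for ch, code, bits in pieces:
    print(ch, code, bits)          # e.g. T 84 01010100

# Scattered pieces, one symbol: comparisons work on the whole, not the parts.
print("".join(ch for ch, _, _ in pieces) == "TREE")   # -> True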

Searle's argument is good because it presents the impossibility
of assembling the same "structure" that a human has inside his mind
based on language alone (which is also a symbolic representation,
but of a different kind, with a different syntax).

When my brain sees a tree, it derives a lot of things. It perceives
edges, borders, shapes, color variations. Symbolic (and subsumed)
versions of a dozen primitive characteristics are the things that
make up my "concept" of a tree. It is not unique, it is vague enough to
encompass several examples, and it may be used in analogies. My "symbols"
of a tree may not have a real-world counterpart; they "act" as recipes for
recognizing (or classifying) other instances of trees.

Remember: a symbol is anything that stands for another thing. So,
this:

y = ax + b

is a symbolic representation of a straight line, as much as this:

-------------------------

The last representation may ease some kinds of visual processing,
but it is symbolic. There are probably a dozen properties that
this representation must obey in order to keep being a representation
of a straight line. The same happens with trees. Unfortunately, we're
not able to describe those properties, because they are strictly
"personal" and made by the grouping of lower-level symbols, created
by the processing of other symbols and the sensory pre-processing
mechanisms. The fact that we're not able to "define" a tree in
language-like terms does not imply that our internal concept is
not made of symbolic, primitive elements, just like the straight
lines above.
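Here is a rough sketch of the "two representations, same properties" point,
with invented numbers: a line held as coefficients versus a line held as a
list of sampled points, and a single property (constant slope) that any
representation must keep obeying to keep standing for a straight line.

def line_as_points(a, b, xs):
    """Turn the symbolic form y = a*x + b into an explicit list of points."""
    return [(x, a * x + b) for x in xs]

def is_straight(points, tol=1e-9):
    """The property both representations must obey: constant slope."""
    (x0, y0), (x1, y1) = points[0], points[1]
    slope = (y1 - y0) / (x1 - x0)
    return all(abs((y - y0) - slope * (x - x0)) <= tol for x, y in points[2:])

pts = line_as_points(a=2.0, b=1.0, xs=range(5))
print(is_straight(pts))                        # -> True
print(is_straight([(0, 0), (1, 1), (2, 5)]))   # -> False: bent, no longer a line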

Whatever is used by the brain to represent what we see (propositions,
distributed "pixel-like" representations, a mixture of both, whatever),
all we have inside it is "symbolic" (in the sense in which I'm
using this word here). I can't see it differently.

Whatever is used by the brain to physically implement this (coordinated
neural spiking, temporal synchrony, neural groups, columnar organizations,
Hebbian synapses, spreading activation, whatever), it "acts" as if it were
manipulating symbols, usually meaningless to our awareness.

These symbols are, for the most part, unknown to our awareness although
their processing results in properties that we may occasionally notice.
Sometimes we are even able to name some of these properties (which means
finding "language symbols" used to stand for an internal group of
symbols). When I say "red" to a blind man, he will not "understand"
what it means, although he is able to use that word in phrases.

It may be fluid, it may be vague, it may be uncertain, it may be dynamic,
it may have random components. This does not prevent its implementation
into another capable "symbolic" machine, provided that this machine is
able to duplicate the necessary functions and provided this machine is
put to interact with the same world in which we live.

We may have a hard time understanding how things are physically
constructed and manipulated in our brain, but maybe we don't
need this information to build a similarly working machine. We "just"
have to understand the basic principles, the "fundamental laws" of
intelligent reasoning. The best way to do this is to study neuroscience
and cognitive psychology and perceive *what is relevant* and what is not,
what suggests basic, fundamental principles to be obeyed and what is
just "silicon level stuff". It is, in fact, like building
an airplane not by flapping wings as birds do, but by aerodynamic
principles. We must find those aerodynamic principles. We must find
the "causal laws" of thinking.

Regards,
Sergio Navega.

From: usurper@euronet.nl (TechnoCrate)
Subject: Re: Can artificial intelligence really exist?
Date: 01 Dec 1998 00:00:00 GMT
Message-ID: <36641dd6.20734858@news.euronet.nl>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net> <3663cfad.723161@news.euronet.nl> <3663fa33.0@news3.ibm.net>
Organization: Bad Advice Department
Newsgroups: comp.ai.philosophy

On Tue, 1 Dec 1998 11:08:13 -0200, "Sergio Navega" <snavega@ibm.net>
wrote:

>TechnoCrate wrote in message <3663cfad.723161@news.euronet.nl>...
>>On Mon, 30 Nov 1998 18:49:29 -0200, "Sergio Navega" <snavega@ibm.net>
>>wrote:
>>
>>>It is strong because it showed clearly that a computer fed with a bunch
>>>of symbols from any human language *will not* develop a *human-equivalent*
>>>level of understanding of the world. Inside the Chinese Room you may
>>>put anything: a Cray, a Sun workstation, an "idiot savant" or a
>>>group of 10 "Einsteins". Their performance in the task of understanding
>>>Chinese (or the world outside the room, for that matter), will be
>>>miserably poor.
>>>
>>This only shows that a computer running a straightforward formal
>>system will lack our concept of understandment. Searle doesn't prove
>>that a computer can't do better than that. He doesn't prove that it
>>isn't possible to make computerprograms doing the same thing as our
>>brains. He only stated that mere symbol shuffeling in an expert system
>>isn't enough to make understandment.
>>
>
>
>I guess I will be flamed on this.
>
Well... I don't want to disappoint you, so here goes:

YOU *&^%#$*&$^^!!!!

Let's continue

>Searle's experiment can be seen as something saying that no machine
>operating only through formal manipulations of symbols will ever be
>able to "understand" the world, whatever that really means. May I
>confuse everybody here and say that this may be utterly wrong?
>
>My position, from previous posts, can be inferred as being against
>the symbolic approach to AI. I usually stand together with the guys
>who say that interaction with the world is fundamental and that
>language alone (and the Turing Test, in a similar fashion) is not
>enough to assess one's competence at being humanly intelligent.
>
Well, I stand with you then :-)

>What is not so clear is that this position does not have nothing
>against the idea that our brain is a symbol processor,
>implemented on top of a signal processing machinery. This is not
>the position of GOFAI, but this is also not what the purely
>connectionist guys are claiming either.
>
[ snip ]

>Searle's argument is good because it presents the impossibility
>of assembling the same "structure" that a human have inside its mind
>based only on language alone (that is also a symbolic representation,
>but of a different kind, with a different syntax).
>
Searle's argument serves the pessimists who only want to hear that
a.i. will never reach our own level. He does so by focusing on the
"symbol shufflers" and ignoring the more ambitious cybernetics
movement (the ones focusing on the whole organism). He's evil, keep
away from him.

>Whatever is used by the brain to represent what we see (propositions,
>distributed "pixel-like" representations, a mixture of both, whatever)
>all we have inside it is "symbolic" (in accordance with the meaning I'm
>using this word here). I can't see it differently.
>
BTW "Image and Brain" of Kosslyn focuses on this (shortly: David Marr
was wrong ;-) It should be madatory reading, one gets a "feel" of how
the brain functions.

>These symbols are, for the most part, unknown to our awareness although
>their processing results in properties that we may occasionally notice.
>Sometimes we are even able to name some of these properties (which means
>finding "language symbols" used to stand for an internal group of
>symbols). When I say "red" to a blind man, he will not "understand"
>what it means, although he is able to use that word in phrases.
>
Of course one can learn what kind of stimulus results in the activation
of the "symbols" we use to recognize objects. You will have little
trouble recognizing the face of, let's say, Mick Jagger. You
will do so in a fraction of a second even if his image is degraded.
Problems arise if you want to draw a picture of his face (without
looking at it, although even if you do look it will be hard).
Artists learn by lots of trial and error what kinds of properties they
use to recognize certain faces and to draw them. Nobody has direct
access to the lower-level systems (of course, it wasn't necessary to be
able to do so in order to survive), but we can find out about them by
this kind of clever feedback.

>It may be fluid, it may be vague, it may be uncertain, it may be dynamic,
>it may have random components. This does not prevent its implementation
>into another capable "symbolic" machine, provided that this machine is
>able to duplicate the necessary functions and provided this machine is
>put to interact with the same world in which we live.
>
"A conversation with Einstein's brain" (believe it's from Hofstadter
but I'm not sure) focuses on this. Shortly: after Einstein's death his
entire brain, neuron by neuron, is described in a book. One could ask
him questions by applying values to his "audio neurons" and calculate
how they affect the other neurons, this calculating goes on for a
(long) while until one gets the motor neurons of the mouth fired up. A
little bit of conversion will get you his "spoken" answer to your
question.

Of course, the book isn't intelligent (even though you will get pretty
intelligent answers out of it). It's just a collection of symbols
which need to be processed. Symbols are meaningless without a system
to process them (and make them).
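A hedged toy version of that thought experiment (all names, weights and
thresholds below are invented; the point is only that the table does
nothing until an external reader cranks it):

# The "book": a frozen table of connections plus a threshold. By itself
# it just sits there; the reader mechanically propagates activity.
book = {                       # neuron -> list of (target, weight)
    "ear_1":   [("n_1", 1.0)],
    "n_1":     [("mouth_1", 0.8)],
    "mouth_1": [],
}
THRESHOLD = 0.5

def crank(active, steps):
    """The reader's job: push activity through the book, step by step."""
    level = {name: (1.0 if name in active else 0.0) for name in book}
    for _ in range(steps):
        incoming = {name: 0.0 for name in book}
        for src, targets in book.items():
            if level[src] >= THRESHOLD:
                for dst, weight in targets:
                    incoming[dst] += weight * level[src]
        level = incoming
    return [name for name, value in level.items() if value >= THRESHOLD]

print(crank(active={"ear_1"}, steps=2))   # -> ['mouth_1']: the "spoken" reply

All the apparent intelligence is in the table plus the cranking, not in
any single entry of the book.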

It's not the symbols that are at the heart of intelligence, it's the
system that makes them and manages them: the perceptual system, which
is somewhat underestimated (also by Hofstadter).

Sure, one can do a great deal of symbol manipulation on the conceptual
system and make a lot of analogies that will make you understand, and it
also comes in handy for episodic memory, but it is the perceptual
system that provides the terms of the conceptual one, and without the
perceptual system the conceptual one is meaningless, since the concepts
will have no grounding in reality and can't be processed, since
concepts (in all kinds of combinations) are fed back into the perceptual
system (I can hardly believe this is actually a single sentence ;-)

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Can artificial intelligence really exist?
Date: 01 Dec 1998 00:00:00 GMT
Message-ID: <36645629.0@news3.ibm.net>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net> <3663cfad.723161@news.euronet.nl> <3663fa33.0@news3.ibm.net> <36641dd6.20734858@news.euronet.nl>
Organization: SilWis
Newsgroups: comp.ai.philosophy

TechnoCrate wrote in message <36641dd6.20734858@news.euronet.nl>...
>On Tue, 1 Dec 1998 11:08:13 -0200, "Sergio Navega" <snavega@ibm.net>
>wrote:
>
>>TechnoCrate wrote in message <3663cfad.723161@news.euronet.nl>...
>>>On Mon, 30 Nov 1998 18:49:29 -0200, "Sergio Navega" <snavega@ibm.net>
>>>wrote:
>>>
>>>>It is strong because it showed clearly that a computer fed with a bunch
>>>>of symbols from any human language *will not* develop a *human-equivalent*
>>>>level of understanding of the world. Inside the Chinese Room you may
>>>>put anything: a Cray, a Sun workstation, an "idiot savant" or a
>>>>group of 10 "Einsteins". Their performance in the task of understanding
>>>>Chinese (or the world outside the room, for that matter), will be
>>>>miserably poor.
>>>>
>>>This only shows that a computer running a straightforward formal
>>>system will lack our concept of understandment. Searle doesn't prove
>>>that a computer can't do better than that. He doesn't prove that it
>>>isn't possible to make computerprograms doing the same thing as our
>>>brains. He only stated that mere symbol shuffeling in an expert system
>>>isn't enough to make understandment.
>>>
>>
>>
>>I guess I will be flamed on this.
>>
>Well... I don't want to dissapoint you so here goes:
>
>YOU *&^%#$*&$^^!!!!
>
>Let's continue
>

Thanks, now I feel better ;-)

>
>>Searle's argument is good because it presents the impossibility
>>of assembling the same "structure" that a human have inside its mind
>>based only on language alone (that is also a symbolic representation,
>>but of a different kind, with a different syntax).
>>
>Searle's argument serves the pessimists who only want to hear that
>a.i. will never reach our own level. He does so by focusing on the
>"symbol shuffelers" and ignores the more ambitious cybernetics
>movement (the ones focusing on the whole organism). He's evil, keep
>away from him.
>

If Searle seems radical in his arguments, so do most of
the rebutters, to me (the systems reply seems absurd to me).
However, I rarely discard any serious opinion; I always try
to find out *why* some apparently nonsensical belief emerged in the
mind of the other person, even if it seems clearly wrong. Something
very strong impelled Searle to write that paper. I don't agree
with him, but I find his opinion *very* worthwhile, and the same goes
for most of the other serious critics of AI (like the Dreyfus brothers).

Instead of radically rebutting Searle's idea, it is better to
understand what led him to propose it and then develop a model that
can, at first look, lead the observer to believe Searle and, on
a more detailed look, to agree that his argument is not correct.
I'm not saying this just because of Searle. Look at the number of
messages in this thread. Searle's original paper is from 1980!
Why is this still alive? There's something about human cognition
that drives some people to think seriously about Searle's idea.

>>Whatever is used by the brain to represent what we see (propositions,
>>distributed "pixel-like" representations, a mixture of both, whatever)
>>all we have inside it is "symbolic" (in accordance with the meaning I'm
>>using this word here). I can't see it differently.
>>
>BTW "Image and Brain" of Kosslyn focuses on this (shortly: David Marr
>was wrong ;-) It should be madatory reading, one gets a "feel" of how
>the brain functions.
>

Stephen Kosslyn was exactly who I had in mind when I wrote that paragraph,
although the subtitle of his book ("The Resolution of the Imagery Debate")
seems a little bit pompous to me. But even agreeing with Kosslyn's main
ideas, as I said earlier, I keep one eye on what Pylyshyn says. Why does
Pylyshyn insist on his ideas? (He is finishing a new book, "Seeing: An
Essay on Vision and Mind".) I firmly believe that one does not have to
decide between Kosslyn and Pylyshyn, throwing the loser into the trash
can permanently. We must keep our eyes open, even while having a strong
preference for one of them.

It is funny: connectionists feel today that Jerry Fodor's time has gone,
that most of what he preached (modularity with central coordination,
language of thought, etc.) has no neuropsychological plausibility.
What is funny, I think, is what prompted Fodor to come up with such
theories, and especially the Language of Thought (LOT). I have some
serious problems with Fodor's ideas (in particular with his defense of
innate knowledge), but I am reevaluating LOT, not within his original
assumptions and working models, but as something that can support that
"primitive symbolic" level that categorizes what comes from the sensory
mechanisms.

Regards,
Sergio Navega.

From: usurper@euronet.nl (TechnoCrate)
Subject: Re: Can artificial intelligence really exist?
Date: 02 Dec 1998 00:00:00 GMT
Message-ID: <366521f3.30784382@news.euronet.nl>
References: <3661DCD8.3E0EA91A@hotmail.com> <36625102.3336AF9A@northernnet.com> <73u2k2$dgo$1@news.rz.uni-karlsruhe.de> <73u6f5$hue$1@reader2.wxs.nl> <73uhe4$247$1@xinwen.daimi.au.dk> <3663142a.0@news3.ibm.net> <3663cfad.723161@news.euronet.nl> <3663fa33.0@news3.ibm.net> <36641dd6.20734858@news.euronet.nl> <36645629.0@news3.ibm.net>
Organization: Bad Advice Department
Newsgroups: comp.ai.philosophy

On Tue, 1 Dec 1998 17:40:12 -0200, "Sergio Navega" <snavega@ibm.net>
wrote:

>>Searle's argument serves the pessimists who only want to hear that
>>a.i. will never reach our own level. He does so by focusing on the
>>"symbol shuffelers" and ignores the more ambitious cybernetics
>>movement (the ones focusing on the whole organism). He's evil, keep
>>away from him.
>>
>
>If Searle seems radical in his arguments, so appears to me most of
>the rebutters (the systems's reply seems absurd to me).
>However, I hardly discard any serious opinion, I always try
>to find *why* some apparently nonsensical belief emerged in the
>mind of the other, even if it seems clearly wrong. There's something
>very strong that impelled Searle to write that paper. I don't agree
>with him, but I find his opinion *very* worthwhile and so I consider
>most of the other serious critics of AI (like the Dreyfus brothers).
>
Hofstadter thought of Searle as a good test for checking a.i., and I agree
with him. Of course a.i. should be scrutinized by people like Searle;
it's a good thing, but that doesn't make him less evil. To hell with
him! (I always appreciate the emotional response, since our
intellectual neocortex only serves as an addition to the limbic
system, not as a replacement ;-)

However: let's return to the Chinese cabinet "with a twist". From the
outside the cabinet appears to be intelligent. We know that the inside
is not. Kosslyn already showed that there's quite some feedback into
the perceptual system (the whole of our sensory apparatus, with the
processing units that make us experience everything).

Operations are performed on the symbols that are entered, but let's say
that the cabinet also has a self symbol and takes notice of its own
behaviour (in terms of symbols, of course). It takes a look in the book
(hehe, it rhymes) and finds out that it is intelligent.

Now we have a double checked Turing test. Not only does the cabinet
appear to be intelligent to observers, it also appears to be
intelligent to itself.

A little bit too easy

Of course, in order to be intelligent a simple formal system won't
suffice; it needs to be dynamic and able to learn. Symbols are not
simply given; they are made and they grow (the twisted Chinese cabinet
will find out about its own workings, gain some self-knowledge). There are
systems that manage the symbols (which are in fact concepts, vaguely
described, as you already stated) and can make analogies so that a deeper
understanding is reached.

To make it even worse, we have behaviour controlled by changing
goals (all kinds of temporally termed ones), emotions, mood, crude
heuristics, instincts (the hardwired heuristics) and reflexes. Symbols
are activated not only by the outside world but also by the internals.
The behaviour and internal states are all scrutinized by the
perceptual system and added to the self symbol.

There are still symbols, but they are much more vague and can be viewed
from different points, and the book is replaced by a system, but it's
still a Chinese cabinet with no single subsystem being "intelligent"
or self-aware. But when the cabinet evaluates itself it will say it is
self-aware and intelligent, and so will observers. It's important to
note that it will arrive at that conclusion; it will have learned
it, it wasn't given. Just like all the other concepts, in which
the only hardwired part is the terms or primitives of the perceptual
system.

Now, are we such a twisted Chinese cabinet that will always pass the
double Turing test? (Now that's a nice philosophical finish ;-)

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Can artificial intelligence really exist?
Date: 21 Dec 1998 00:00:00 GMT
Message-ID: <367e61b0.0@news3.ibm.net>
References: <3670512a.2712952@news-server> <wolfe-1712981258050001@wolfe.ils.nwu.edu> <Pine.SUN.3.96.981217155717.26373A-10 <367a6222.0@news3.ibm.net> <75ej9v$gqq@abyss.West.Sun.COM> <367bb9fd.0@news3.ibm.net> <367BF749.E8D6F2E0@sandpiper.net>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Jim Balter wrote in message <367BF749.E8D6F2E0@sandpiper.net>...
>Sergio Navega wrote:
>
>> Maybe that is the root of the enthusiasm with neural nets, something
>> that I regret. I really do not believe that the answer to AI is
>> using ANNs. Each day I find another reason to doubt of purely
>> connectionist solutions. They seem to be happy with the network
>> of simple elements and we know that neurons are everything but
>> simple elements. Then, enter the hypothesis of representation
>> by populations of neurons and you start to find again a place
>> for symbols in the brain. I always found Newell's Physical
>> Symbol System Hypothesis utterly strange. Why such an
>> hypothesis, with no evidence to support it? I guess we may be
>> surprised if in the coming years we find that Newell was
>> not so far from the truth.
>
>Unless you think that Newell made a wildly lucky random guess,
>no different than hypothesizing that, say, being grey is necessary
>and sufficient to intelligence, this should suggest to you that your
>notion of what constitutes evidence is flawed.  When we find something
>"utterly strange", it is generally our understanding that is flawed,
>and not the world.

Yes, that's correct. The world is never wrong, only our models of it.
What I find strange in Newell's idea is that, after hypothesizing, he
goes ahead as if it were true, without much concern for
finding empirical or causal models that could support it.
Then connectionists entered the party and dismissed Newell.

> This goes for life, consciousness, wave-particle duality, Newell
> proposing a hypothesis, whatever.  And yet discussions of these topics
>are full of an intellectual arrogance that assumes that
>our ability to fit evidence into a model is superb, and so when
>something doesn't fit the model, it must be weird, special, "utterly
>strange", a sort of wart on reality, like life (a vital force, separate
>from our clockwork model of everything else), or consciousness
>(something "non-functional", different from our functional causal model
>of everything), or wave-particle duality (different from our billiard ball
> model of everything), or Newell and his purported
>evidence-free hypothesis (different from our supported-by-evidence
>model of scientific hpyotheses).  But in each case these things can be fit
> into a common model, sometimes by finding a broader model that
>encompasses the older one.

I didn't understand what your point was. I was saying that Newell's
hypothesis, although intuitively smooth, doesn't seem to be backed by
anything from, say, neuroscience. On the other hand, many connectionists
insist on distributed "representations", and they use the networks of
neurons in our brain as analogical scaffolding. Apparently, connectionists
are better prepared.

Don't misunderstand me here: I'm more sympathetic to Newell; I'm just
trying to fill the gap between his hypothesis and the real world we know.
In other words, I'm looking for supports for Newell's hypothesis that
are stronger than the (weak) ones claimed by connectionists. Searching
for these supports is, in my opinion, also one way of obtaining more
information about the *mechanics* of this symbol manipulation, with
eventual indications of who is closer to the truth (statisticians,
Bayesians, etc.).

>
>Here's something I found on the web that goes directly to Newell's
>hypothesis and its established basis in history.  You need not agree
>with the Nominalist thesis to recognize that it not "utterly strange",
>and that the physical symbol hypothesis borrows from it:
>
>  The main dilemma about semantics lies here. Following Aristotle,
>  thoughts represent things. As a consequence, you can have a well
>  founded thought, and no need to express it. The alternative point of
>  view started with Stoicism, and was developed by Nominalists (apart
>  from the quasi-mythic Roscellinus, I have in mind Ockham, Hobbes, and
>  all the greatest thinkers of 17th century): against B), they claim
>  that language is a bridge between thought and world. As a consequence,
>  you cannot have meaningful thought without symbols (see: Leibniz's
>  Dialogus or Wittgenstein's Tractatus).
>
>From http://shr.stanford.edu/shreview/4-1/text/matteuzzi.commentary.html
>
>The note is worth reading, as well the other notes in the series
>(follow the ToC link), which comprise 33 commentaries on Herbert A. Simon's
>"Literary Criticism: A Cognitive Approach".
>

Thanks for the link. That set of articles is really very interesting.
Simon's reply (unfortunately, inaccessible from the ToC, only via the
NEXT button of the last article) is almost as large as his original
article. Answering replies is sometimes better than the initial
exposition of one's ideas. Searching around the link you supplied,
I found another gem, full of very good articles:

Constructions of the Mind

http://shr.stanford.edu/shreview/4-2/text/toc.html

Regards,
Sergio Navega.

