From: "Sergio Navega" <snavega@ibm.net>
Subject: What it takes to understand chinese
Date: 13 May 1999 00:00:00 GMT
Message-ID: <373ae016@news3.us.ibm.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

The Chinese Room is back again; John Searle will never die.

a) Chinese Room is impossible in this universe
----------------------------------------------
It would take a "rule book" larger than the universe to answer
ordinary Chinese questions intelligently. Write this phrase to
the room (in Chinese) and wait for the correct answer:

"I have the number four trillion, eight hundred millions,
nine hundred thousand and fifty six. If I add one to
this number and then subtract one from this number, tell
me what is the number I'll have"

Obviously, to answer any Chinese question of this kind, the
rule book would have to be infinite, so as to anticipate every
question similar to this one. Unless you allow *generative rules*,
in which case you could answer this kind of question with a
*finite* database of rules (a generative database that includes
basic arithmetic knowledge; a minimal sketch follows below).
That's what several AI systems have been doing. But in that
case, the problem of item b) applies.
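
Just to make the point concrete, here is a minimal sketch in Python
(an illustration of mine only; the function name is invented, not
taken from any actual system) of such a generative rule:

    # One generative rule covers every instance of the "add one,
    # then subtract one" question, whatever number appears in it.
    def answer_add_then_subtract(n):
        """Apply the question's operations literally: (n + 1) - 1."""
        return (n + 1) - 1

    # The same rule handles 4,000,800,900,056 or any other number a
    # questioner could name -- finite rules, infinite coverage.
    print(answer_add_then_subtract(4000800900056))  # -> 4000800900056

A lookup table of stored replies, by contrast, would need one entry
per possible number.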

b) Rule-only methods cannot account for all kinds of understanding
------------------------------------------------------------------
There are several kinds of knowledge that cannot be encoded
only in a rule-like manner. Typical questions involve *recognition*,
which is something that cannot be captured by a finite number of
if/then pairs (how many different apples can you see and still
categorize every one of them as an apple?). As an example, make
the Chinese Room answer this question:

Husband:  "I will leave you"
Wife:     "Who is SHE?"
Question: "Why did the woman ask this?"

The answer to this question (the explanation of a situation) involves
recognition of patterns and the construction of probable causal
models, things that cannot be encoded in any reasonable number of
rules. You may reply that one could devise a system with patterns
and probabilistic knowledge embedded (a toy contrast is sketched
below). That would work fine, but there are situations where we can
be questioned about something imaginary that nobody has ever devised
before. This leads us to item c)
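
To see the difference in miniature, here is a toy sketch in Python
(entirely my own, with a deliberately crude word-overlap measure)
contrasting an exact-match rule table with pattern-based recognition:

    # A stored if/then pair answers only the exact sentence it was
    # written for; even a crude "recognizer" generalizes to wordings
    # it has never seen before.
    rule_book = {"I will leave you": "Who is SHE?"}

    def lookup(sentence):
        # exact match only, as a literal rule book would do
        return rule_book.get(sentence, "<no rule>")

    def recognize(sentence):
        # crude pattern recognition: fraction of pattern words present
        best_reply, best_score = "<no match>", 0.0
        for pattern, reply in rule_book.items():
            overlap = (set(sentence.lower().split()) &
                       set(pattern.lower().split()))
            score = len(overlap) / len(pattern.split())
            if score > best_score:
                best_reply, best_score = reply, score
        return best_reply

    print(lookup("I am going to leave you"))     # -> <no rule>
    print(recognize("I am going to leave you"))  # -> Who is SHE?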

c) Learning and world inference
-------------------------------
To keep the size of the "rule book" finite, even using things
like patterns and probability distributions, we must account
for learning and world-level reasoning. To see why this is
significant, try posing this question to the Chinese Room:

"I have invented a new equipment. It is the papercorder. It is
analogous to a videocassette recorder, but the video is
recorded in a paper tape, through punches (holes), instead of
writing into a magnetic tape. Do you think it will be easy to
erase and reutilize a previously recorded tape of my papercorder?"

The answer to this question involves learning a new thing
(the papercorder) and world knowledge (punches in paper cannot
be undone by simple means); a tiny sketch of these two ingredients
follows below. The Chinese Room will not be able to answer this
(nor a lot of other questions; try thinking of a cup filled
with water used as a weight to keep something from falling:
when the water evaporates, the thing falls). Unless, of course,
it had knowledge of the real world, in which case we get
to item d)
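
Purely as an illustration (the facts and names below are made up by
me), those two ingredients, a freshly learned concept plus prior
world knowledge, might look like this in miniature:

    # World knowledge available before the question arrives.
    world_knowledge = {
        "magnetic tape": {"writing_is_reversible": True},
        "punched paper": {"writing_is_reversible": False},  # holes stay
    }

    # "Learning": the papercorder exists only once the question is asked.
    learned_concepts = {"papercorder": {"medium": "punched paper"}}

    def easy_to_erase_and_reuse(device):
        medium = learned_concepts[device]["medium"]
        return world_knowledge[medium]["writing_is_reversible"]

    print(easy_to_erase_and_reuse("papercorder"))  # -> False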

d) A Chinese Room that works
----------------------------
A Chinese Room that has arms, eyes, and ears, that is able to
capture patterns and regularities from these senses and to act
physically, that is able to propose experiments in the world
and learn its causal mechanisms. A Chinese Room that started
out almost empty and learned through interactions with the world,
capturing and perceiving what is relevant and encoding this
"knowledge" in any reasonable way. Now put any computer
conveniently programmed to do this work inside the room and we
have a great chance of making a Chinese Room that works just
like us, Searle notwithstanding.

Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What it takes to understand chinese
Date: 17 May 1999 00:00:00 GMT
Message-ID: <3740357e@news3.us.ibm.net>
References: <373ae016@news3.us.ibm.net> <373f0884@newsread3.dircon.co.uk>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

daniel wrote in message <373f0884@newsread3.dircon.co.uk>...
>> d) A Chinese Room that works
>> ----------------------------
>> A Chinese Room that has arms, eyes, and ears, that is able to
>> capture patterns and regularities from these senses and to act
>> physically, that is able to propose experiments in the world
>> and learn its causal mechanisms. A Chinese Room that started
>> out almost empty and learned through interactions with the world,
>> capturing and perceiving what is relevant and encoding this
>> "knowledge" in any reasonable way. Now put any computer
>> conveniently programmed to do this work inside the room and we
>> have a great chance of making a Chinese Room that works just
>> like us, Searle notwithstanding.
>
>i think you mean 'what if we shut a baby in the room?!'
>
>you'd have to let it out occasionally.
>

The baby is already inside a room! He or she is inside the skull,
with only perceptual equipment in contact with the world (eyes,
ears, touch, etc.).

Regards,
Sergio Navega.

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What it takes to understand chinese
Date: 17 May 1999 00:00:00 GMT
Message-ID: <374065da@news3.us.ibm.net>
References: <373ae016@news3.us.ibm.net> <19990517124513.21445.00002436@ng-fi1.aol.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy

LocalFolk wrote in message <19990517124513.21445.00002436@ng-fi1.aol.com>...
>Sergio...
>>..."I have the number four trillion, eight hundred millions,
>>nine hundred thousand and fifty six. If I add one to
>>this number and then subtract one from this number, tell
>>me what is the number I'll have"...
>
>Why is this number any more difficult in Chinese than in
>English?  Do you think an American can count to infinity
>before a Chinese person?  You are hard to understand, here.
>Does that mean that one of us is *not* intelligent?
>

Jim, I didn't say it was more difficult in Chinese. My goal
was to show that a system able to answer this question must either
understand what it means OR have an infinite set of programmed
responses (because the numbers are infinite). The Chinese Room
appears to be a member of the latter class.

Obviously, any person, Chinese, English or whatever, is
able to answer that question. Also obvious is the fact that
no person has access to infinite knowledge.
This suggests that Searle's experiment amounts to splitting hairs.

>>Obviously, to answer any kind of chinese question such as
>>this, the rule book must be infinite
>I GEB to differ!
>Number systems were CREATED to allow humans to work
>in these realms.  Our rule book is only "infinite" in that we learn
>something new every day.  In that sense, so will an AI's "book."
>But a computer does not need infinite resources any more than
>a human does.
>

I wouldn't say that our "rule books" are infinite. They are
quite finite, given that we have a limited life span. But
the process of generalization that we often use when
thinking can give rise to infinite lines of thought (that
is, if we had infinite time).

>>Husband:  "I will leave you"
>>Wife:     "Who is SHE?"
>>Question: "Why did the woman ask this?"
>Ask a six-year-old human the same question.
>This kind of bias reminds me of the IQ tests that were "culture-
>centric" back in the late '60's/early '70's.  I do not require my AI
>to be "smarm-compliant."  Does anyone else?  Two words for you:
>"HA ha!"

Jim, I may "ha ha" together with you, but my point remains valid.
I presented that example to show that very often what is considered
an intelligent reaction is the result of *recognition* of a situation
that one has *never* lived before (contrast this with the explicit
coding that a chinese room rule book must have). It is the use of
elements and concepts that were developed previously and that have
that property of being *generative*, allowing the construction of
near infinite examples. A chinese room, along with its "rule book",
can only implement this kind of reaction having *all the entries and
answers* stored beforehand, in a supreme act of infinite predictive
power. I say that our brain and also a conveniently programmed
computer are capable of such a performance without the need to
resort to omniscience and infinity, two impossible concepts
in our physical world.

>
>>c) Learning and world inference
>>-------------------------------
>>To keep the size of the "rule-book" finite,
>And we do this why?  Because we do not have Infinite RAM/HD space?
>How about a "virtually infinite" rule book?  One that, given infinite
>disk space, could be filled with rules, but--face it--who wants to bother?
>Humans are the same way--look at an expert (say a comic-book expert)
>someone who knows every frigging detail of the comic-book universe.
>Can they get a date?  Would they even know whey HE left HER?  Actually,
>most "wire-headed" experts are pretty pathetic in other areas.  Would this
>be the case if they had--and utilized-- infinite learning space?
>I don't think so..
>

I'm not exactly sure what your point is here. I'm not even sure
whether you're agreeing with me or not :-). We don't have infinite
brains, yet we are capable of producing behaviors that, to be
mimicked, would require a Chinese Room that, although finite, would
have to use all the matter in this universe to encode the rules of
its instruction book (a rough back-of-envelope calculation follows
below). This precondition is, for me, a strong point against the
Chinese Room thought experiment.
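
For what it's worth, here is that back-of-envelope estimate as a few
lines of Python (the vocabulary size, turn length, and number of
turns are assumptions of mine, good only to order of magnitude; the
~10^80 atoms figure is the commonly cited rough estimate):

    from math import log10

    vocab = 3000              # assumed working set of Chinese characters
    chars_per_turn = 20       # assumed length of each turn
    turns_per_exchange = 3    # assumed turns of context per stored entry

    # One stored reply per possible exchange, written out verbatim.
    entries = vocab ** (chars_per_turn * turns_per_exchange)
    atoms_in_universe = 10 ** 80

    print("entries ~ 10^%d" % round(log10(entries)))  # ~ 10^209
    print(entries > atoms_in_universe)                # True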

><yadda yadda yadda>
>> try to pose this question to the chinese room:
>>"I have invented a new equipment. It is the papercorder. It is
>>analogous to a videocassette recorder, but the video is
>>recorded in a paper tape, through punches (holes), instead of
>>writing into a magnetic tape. Do you think it will be easy to
>>erase and reutilize a previously recorded tape of my papercorder?"
>

>First of all, I might suggest that your first sentence was poorly formed.
>Does that make you unintelligent?

No, that is only consistent with English being my second language :-)

>Second, ask the same question to my Great Aunt Stella (85).
>I reckon she'd say you were talkin' CHINESE!!!
>In other words, she could not answer the question intelligently,
>except to honestly say "I don't know."  (The dreaded IDK form!)
>Does that make HER uninelligent?  THINK HARD before answering
>this one, bubbah--NOBODY disses Aunt Stella!
>Ahem :-}
>

No! By saying "I don't know" she would already start to seem
intelligent! And the definitive proof would be to teach your aunt
what a paper tape is, how a camera digitizes video, etc., etc.
It may take a while but, unless she has something better to do,
you'll be able to show her all the meaningful concepts. Then she
will use those concepts to reassess your question and answer it
intelligently.

Obviously, you could write a rule book for a Chinese Room that
develops exactly the kind of dialogue you'd have with your aunt,
duplicating that interaction exactly. But in order to do that, the
rule book would necessarily have to contain all possible interactive
exchanges. I guess that book would not fit in all the quantum
particles of our galaxy.

>>...try thinking in a cup filled
>>with water used as a weight to avoid the fall of something,
>>when water evaporates, the thing will fall)...
>Sounds like the perfect murder, to ME!

I guess I've seen this trick in some movie.

>
>>Unless, of course,
>>it had knowledge of the real world,
>Actually, if you had posed some kind of question to ME, using the
>evaporating water trick, I would not have been able to answer
>intelligently.  Sorry.
>

Come on, don't be modest!

>Which brings us
>>...to item d)
>
>>d) A chinese room that works
>>----------------------------
>>A chinese room that have arms, eyes, ears...
>etc...
>That's sort of cheating, no?
>

Yes, it is. Do we have any alternative? Yes, I think we do,
and it involves fooling the computer so that it seems to have
arms, eyes, and ears. But that's a topic for another exchange.

>>Sergio Navega.
>Consistent, Questioning, Reasonable, and Articulate...
>Your contributions to this newsgroup are numerous, positive,
>and a delight for non-PhD's like myself.
>

Are you serious? Now, in spite of the cup of water, you're
definitely looking intelligent ;-)

Regards,
Sergio Navega.

