Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site watdcsu.UUCP
Path: utzoo!linus!decvax!tektronix!uw-beaver!cornell!vax135!houxm!ihnp4!cbosgd!clyde!watmath!watnot!watdcsu!dmcanzi
From: dmcanzi@watdcsu.UUCP (David Canzi)
Newsgroups: net.philosophy
Subject: Re: Searle's Pearls
Message-ID: <1810@watdcsu.UUCP>
Date: Thu, 31-Oct-85 05:35:35 EST
Article-I.D.: watdcsu.1810
Posted: Thu Oct 31 05:35:35 1985
Date-Received: Sun, 3-Nov-85 13:42:09 EST
References: <2412@sjuvax.UUCP> <1779@watdcsu.UUCP> <2461@sjuvax.UUCP>
Reply-To: dmcanzi@watdcsu.UUCP (David Canzi)
Distribution: net
Organization: U of Waterloo, Ontario
Lines: 53

In article <2461@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>>I suggest that, even though neither the man in the Chinese room, nor
>>the manual he reads from can be said to understand Chinese, the system
>>consisting of both man and manual understands Chinese.
>
>Searle anticipates this move, which he calls the "systems reply." ...
>
>"Let the individual internalize all of these elements of the system.
>He memorizes the rules in the ledger and the data banks of Chinese
>symbols, and he does all the calculations in his head.  The individual
>then incorporates the entire system.  There isn't anything at all to
>the system that he does not encompass.  We can even get rid of the
>room and suppose he works outdoors.  All the same, he understands
>nothing of the Chinese, and a fortiori neither does the system,
>because there isn't anything in the system that isn't in him."

That's a tough one.  I won't attempt to argue against it until after
I've read Searle's paper, and maybe not even then.  But I do have
a couple of comments.

First, even though you say Searle was not trying to prove that machines
will never think, I can't see how to escape that conclusion if we
accept the Chinese Room argument.  Let's carry the above a step
further, and have the man memorize a manual describing phonetic Chinese
instead of written Chinese, and have him follow the rules to generate
spoken responses to a *real* Chinese man who is talking to him.
Suppose, in the middle of the conversation, the phone rings.  The
Chinese man answers the phone, frowns, hangs up, then walks over to the
rule-following man and says, in Chinese, "There's been a bomb threat.
We have to leave the building."  The rule-following man responds in
Chinese, saying "Let's go." Then he sits and waits for the Chinese man
to say something else.

One step further: the manual not only describes the Chinese language
but also uses some notation to represent sensory observations and movements
of the body.  The man memorizes the manual and can carry out the rules
at the normal speed of somebody who really understands Chinese.
(Clearly he must be *very* talented.)  Repeat the bomb threat scenario,
and he gets up from his chair and heads for the exit, but doesn't know
why he's leaving.  There is no observable difference between
understanding and the lack thereof.  AI people have a good reason to be
annoyed by Searle's argument.

One final step, and I think this'll amuse you: add back the specs for
written Chinese in the manual used in the previous paragraph, have the
man memorize it, then put him to work in a Swahili Room, with a Swahili
manual written in Chinese...
-- 
David Canzi

Lazlo's Chinese Relativity Axiom:
     No matter how great your triumphs or how tragic your defeats,
     approximately one billion Chinese couldn't care less.