Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/5/84; site sjuvax.UUCP
Path: utzoo!linus!decvax!bellcore!petrus!scherzo!allegra!princeton!astrovax!sjuvax!tmoody
From: tmoody@sjuvax.UUCP (T. Moody)
Newsgroups: net.philosophy
Subject: Re: Searle's Pearls
Message-ID: <2461@sjuvax.UUCP>
Date: Sun, 27-Oct-85 01:53:32 EST
Article-I.D.: sjuvax.2461
Posted: Sun Oct 27 01:53:32 1985
Date-Received: Thu, 31-Oct-85 21:46:58 EST
References: <2412@sjuvax.UUCP> <1779@watdcsu.UUCP>
Reply-To: tmoody@sjuvax.UUCP (T. Moody)
Distribution: net
Organization: St. Joseph's University, Phila. PA.
Lines: 61
Summary: The "Systems Response"

In article <1779@watdcsu.UUCP> dmcanzi@watdcsu.UUCP (David Canzi) writes:
>
>I suggest that, even though neither the man in the Chinese room, nor
>the manual he reads from can be said to understand Chinese, the system
>consisting of both man and manual understands Chinese.
>-- 
>David Canzi

Searle anticipates this move, which he calls the "systems reply."  I
shall briefly summarize his response to it, and throw in my own $.02.
In fact, the easiest thing is to quote Searle directly:

"Let the individual internalize all of these elements of the system.
He memorizes the rules in the ledger and the data banks of Chinese
symbols, and he does all the calculations in his head.  The individual
then incorporates the entire system.  There isn't anything at all to
the system that he does not encompass.  We can even get rid of the
room and suppose he works outdoors.  All the same, he understands
nothing of the Chinese, and a fortiori neither does the system,
because there isn't anything in the system that isn't in him."

It seems to me that the response is quite clear.  But let's throw in a
few reminders about what Searle is up to.  First, he is NOT trying to
prove that the mind is not a Turing Machine.  Second, he is NOT trying
to prove that "machines will never think".  He IS interested in the
roots of intentionality, and he IS claiming that what makes a system
an "intentional system" is NOT the fact that it passes the Turing
Test, nor is it the fact that its brain is instantiating a Turing
Machine algorithm.  He is NOT claiming that intentional systems must
be made of neurons, although he does point out that biological systems
are just the right sorts of things to possess intentionality.

Why?  Because of the "causal powers" of biological systems.  If you
want to know more about what Searle thinks about this, his recent book
_Intentionality_ is where he puts it together.

The Chinese Room argument is only supposed to be a critique of the
Turing Test as providing a sufficient condition of intentionality.
Searle believes that a sufficient condition would require a richer
repertoire of interactions with the environment and with other beings
than the Turing Test measures.  Remember, the Turing Test is based on
blind exchange
of typed texts.  People like Hofstadter claim that all interpersonal
relations are informal Turing Tests, but this is ridiculous.  The
whole point of the Turing Test, as set up by A. M. Turing himself, is
to establish a carefully *restricted* mode of interaction, in which
only text-exchange counts.  Throw away the restrictions, and you're
not talking about the Turing Test anymore.  Turing believed that
anything more than text-exchange would be extraneous to determining
intentionality.  THIS is what Searle is trying to refute.

Of course, Turing didn't talk about intentionality; he talked about
*thinking* and mental states.  But maybe there are important
distinctions to be made between intentionality, mental states, and
consciousness.


Todd Moody                 |  {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department      |
St. Joseph's U.            |         "I couldn't fail to
Philadelphia, PA   19131   |          disagree with you less."