Xref: utzoo comp.ai:4785 sci.lang:5250
Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!psuvax1!psuvm!miy1
From: MIY1@PSUVM.BITNET (N Bourbaki)
Newsgroups: comp.ai,sci.lang
Subject: Chinese Room
Message-ID: <89269.042406MIY1@PSUVM.BITNET>
Date: 26 Sep 89 08:24:06 GMT
Organization: Penn State University
Lines: 40


In article <15157@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>In article <567@ariel.unm.edu> bill@wayback.unm.edu (william horne) writes:
>
>>This example is relevant to AI, because it questions the validity of the
>>Turing Test as a test of "understanding", as well as questioning the
>>legitimacy of rule based systems as models of intelligence.
>
>One serious flaw in the Chinese Room Problem is that it relies on the
>so-called 'conduit metaphor' (originally described by Michael Reddy in A.
>Ortony's _Metaphor_and_Thought_ Cambridge U. Press 1979).  That metaphor
>assumes that meaning is essentially contained in the linguistic expression.

>  The conduit metaphor
>is very powerful and useful as a means of illuminating the behavior of
>language, but, like all analogies, it breaks down.  Those who deal with real
>language to language translation know that there is no one-to-one match
>between expressions in one language and those in another.

But this difficulty would affect the native Chinese speaker and the
Chinese Room Demon equally.  That is one premise of Searle's
argument - the "mechanical" system is presumed to be just as competent
(not necessarily perfect) at translation as the "understanding" system.

Searle would have you believe that the "mechanical" system lacks
true understanding because it lacks "intentionality".  But this
begs the question, and it leads immediately to the "other minds"
problem.  Searle acknowledges the objection in _Minds, Brains, and
Programs_, but shrugs it off as worth only "a short reply": basically,
that cognitive states are not created equal, and that systems which
exhibit intentionality are more worthy of being described as
"understanding" than formal symbol-manipulating systems.

The gist of his conundrum is not to validate (or invalidate) any
particular linguistic theory, but to attack so-called "strong AI".  I
don't find the argument very convincing.  It seems too much like
vitalism -- the claim that there is something special about brains
that cannot be duplicated by artificial means.

N. Bourbaki