Path: utzoo!attcan!uunet!ginosko!gem.mps.ohio-state.edu!uwm.edu!mailrus!cornell!uw-beaver!fluke!ssc-vax!bcsaic!ray
From: ray@bcsaic.UUCP (Ray Allis)
Newsgroups: comp.ai
Subject: Re: What's the Chinese room problem?
Keywords: Chinese
Message-ID: <15313@bcsaic.UUCP>
Date: 28 Sep 89 18:14:58 GMT
Organization: Boeing Computer Services ATC, Seattle
Lines: 75


In article <15157@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:

>One serious flaw in the Chinese Room Problem is that it relies on the
>so-called 'conduit metaphor' (originally described by Michael Reddy in A.
>Ortony's _Metaphor_and_Thought_ Cambridge U. Press 1979).  That metaphor
>assumes that meaning is essentially contained in the linguistic expression.  A
>logical consequence of this belief is that one can devise a set of principles
>for translating from one language into another without losing any of the
>semantic 'stuff' that a linguistic expression conveys.  The conduit metaphor
>is very powerful and useful as a means of illuminating the behavior of
>language, but, like all analogies, it breaks down.  Those who deal with real
>language to language translation know that there is no one-to-one match
>between expressions in one language and those in another.  An alternative view
>of linguistic communication is to assume that linguistic expressions merely
>help to shape the flow of mental pictures (alas, another metaphor :-) that
>constitute the end product of communication.  Therefore, there is no necessary
>one-to-one correspondence between linguistic expressions in one language and
>those in another.  The trick to translation is to construct expressions in the
>target language that evoke the same thoughts as those in the source language.
>And this may even be impossible without modification of the target language
>(i.e. the creation of new words to fit new experiences).  So I claim that the
>Chinese room problem rests on incorrect assumptions about the nature of
>language and understanding.

It seems to me your position is in fact very close to Searle's.  The problem
I have with his little parable is that he pretends the output from the
Chinese room is satisfactory (or rather lets us assume so).  I believe that
if the room does not "understand" Chinese, and he argues that it does not,
the output will not be satisfactory.  BTW, in his original illustration there
is no inter-language translation at all: it's Chinese in and Chinese out,
with the transformation rules written in English.

   "But the point of the [Chinese room] argument, I think has been lost
   in a lot of the subsequent literature developed around this, so I want
   to emphasize the point of it.  The point of the argument is not that
   somehow or other we have an "intuition" that I don't understand
   Chinese, that I find myself *inclined to say* that I don't understand
   Chinese but, who knows, perhaps I really do.  That is not the point.
   The point of the story is to remind us of a conceptual truth that we
   knew all along; namely, that there is a distinction between manipulating
   the syntactical elements of languages and actually understanding the
   language at a semantic level.  What is lost in the AI *simulation of*
   cognitive behavior is the distinction between syntax and semantics.

   Now the point of the story can be stated more generally.  A computer
   program, by definition, has to be defined purely syntactically.  It
   is defined in terms of certain formal operations performed by the
   machine.  That is what makes the digital computer such a powerful
   instrument.  One and the same hardware system can instantiate an
   indefinite number of different computer programs, and one and the same
   program can be run on different hardwares, because the program has to
   be defined purely formally.  But for that reason the formal simulation
   of language understanding will never by itself be the same as duplication.
   Why?  Because in the case of actually understanding a language, we have
   something more than a formal or syntactical level.  We have a semantics.
   We do not just shuffle uninterpreted symbols, we actually know what they
   mean."

   John Searle, "Minds and Brains Without Programs", Mindwaves,
   Colin Blakemore and Susan Greenfield, eds., 1987.

Your comments that "there is no necessary one-to-one correspondence between
linguistic expressions in one language and those in another" and "The trick
to translation is to construct expressions in the target language that evoke
the same thoughts as those in the source language" are really what prompted
this posting.  I have several times stated my belief that AI's "natural
language understanding" and "machine translation", in their present form as
symbol manipulation, are futile efforts.  The phrase 'computational linguistics'
betrays a deep misunderstanding of what language *is*, i.e. a means to
communicate *experience*.  "Understanding" is the subjective experience evoked
in a receiving mind by language, whether spoken, written or gestured.
Translation *requires* understanding, and understanding is missing (so far) in
computers because symbol systems, which are by design form without content,
have no experiences to evoke.