Xref: utzoo sci.lang:5266 comp.ai:4799
Path: utzoo!attcan!uunet!ginosko!brutus.cs.uiuc.edu!psuvax1!rutgers!eddie!uw-beaver!fluke!ssc-vax!bcsaic!rwojcik
From: rwojcik@bcsaic.UUCP (Rick Wojcik)
Newsgroups: sci.lang,comp.ai
Subject: Re: What's the Chinese room problem?
Message-ID: <15336@bcsaic.UUCP>
Date: 29 Sep 89 01:03:35 GMT
References: <822kimj@yvax.byu.edu>
Reply-To: rwojcik@bcsaic.UUCP (Rick Wojcik)
Organization: Boeing Computer Services AI Center, Seattle
Lines: 76


Celso Alvarez (CA) writes:
me>. . .  The trick to translation is to construct expressions in the
me>target language that evoke the same thoughts as those in the source
me>language. 

CA> Much more than thoughts are evoked by language.  How do you translate
CA> the signalling of identity, roles, and social relationships?

I think that such concepts have to be represented as thought structures, since
they have an impact on language structure.  But your question may be filed
under my general question: Just what do 'Chinese Room' debaters think a
translation is?  What criteria do you use to judge that a translation from one
language to another is successful?  My position is that there is no such thing
as translation in an absolute sense.  A seemingly trivial example is the
translation of expressions that refer to language-specific grammatical
structure.  Thus, there is no way to translate French 'tutoyer' directly into
English; you must rely on circumlocution.  It means roughly 'address someone
with the intimate 2nd person singular pronoun'.  A practical translator might
render the French equivalent of 'Don't tutoyer me' into English as 'Don't use
that tone of voice with me', or some such thing.  But it is difficult to say
what makes one such translation better than another, and people can get into
heated arguments over such questions.
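To make the point concrete, here is a toy sketch (in Python, with an invented
mini-lexicon; the French sentence 'Ne me tutoie pas' is supplied purely for
illustration) of why pure word-for-word symbol lookup breaks down: some
source-language symbols simply have no target-language counterpart.

```python
# Toy word-for-word "translator": nothing but a symbol-lookup table.
# The lexicon below is invented for illustration only.
FR_TO_EN = {
    "ne": "don't",
    "me": "me",
    "pas": "",          # negation particle with no separate English word
}

def translate_word_for_word(sentence):
    """Replace each French token via the table, or flag it as untranslatable."""
    out = []
    for word in sentence.lower().split():
        if word in FR_TO_EN:
            if FR_TO_EN[word]:
                out.append(FR_TO_EN[word])
        else:
            out.append("[?" + word + "?]")  # no one-to-one match exists
    return " ".join(out)

print(translate_word_for_word("Ne me tutoie pas"))
# -> don't me [?tutoie?]
```

The lookup leaves a gap at 'tutoie' (a form of 'tutoyer') that only a human,
resorting to circumlocution, can fill.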

N. Boubaki (NB) writes:
me>...Those who deal with real
me>language to language translation know that there is no one-to-one match
me>between expressions in one language and those in another.
NB> But this difficulty would affect the native Chinese speaker and the
NB> Chinese Room Demon equally.   That is one premise of Searle's
NB> argument - the "mechanical" system is presumed to be just as competent
NB> (not necessarily perfect) at translation as the "understanding" system.

I know, but I think that Searle, like most of us, has implicitly adopted the
conduit metaphor in his conceptualization of the problem.  He really believes
that there is some absolute sense whereby an expression in one language
corresponds to one in another.  This seems clear from his insistence that the
translation itself be 'mechanical'--in other words, symbol manipulation.
Those involved in translation understand that the translation process requires
editing and revision.  Who determines that the "mechanical" system is "just as
competent" if there is no mechanical basis for judging competence?  But that
is just what you would need to do in order to bring about translation: you need
to mechanize the ability to judge and revise.  That would be tantamount to
mechanizing the understanding process, since it is only by understanding
expressions in two different languages that you can judge their equivalence.

I want to be careful to distinguish modern Machine Translation efforts, which
do not attempt to automate the revision process (rather they attempt to
facilitate it), from an ideal MT system, which would require mechanized
understanding to do its job properly.  So I agree with you that Searle
ultimately begs the question.  The question is whether or not 'understanding'
is a mechanizable process.  He either assumes that it is not, or he doesn't
have a proper conception of the nature of translation.

Ray Allis (RA) writes:
RA>It seems to me your position is in fact very close to Searle's.  The problem
RA>I have with his little parable is that he pretends that the output from
RA>the Chinese room is satisfactory (or rather lets us assume so).  I believe
RA>that if the room does not "understand" Chinese, and he argues that it does
RA>not, the output will not be satisfactory...

From my above remarks, you should see that I am closer to your viewpoint than
Searle's.  In fact, I find myself largely in agreement with most of what you
said.  I would only quibble on the issue of whether or not modern NLP efforts,
including MT, are futile.  The pragmatic purpose of such work is to increase
human efficiency in language-intensive work on computers.  There are many good
things you can do without addressing the need for full language understanding.
MT (really Machine-Assisted Translation) can improve the output of a human
translator, even though the MAT system may produce some pretty bad
translations.  Our grammar-checking system is proving useful in the writing of
aircraft maintenance manuals.  But this takes us away from the philosophical
question of whether or not you can mechanize language understanding.
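The division of labor described above can be sketched in a few lines of Python
(all names and the mini-glossary are hypothetical, not a real MT system): the
machine emits a purely mechanical draft, and the human post-editor supplies the
judging and revising that, on my argument, cannot be mechanized without
mechanizing understanding.

```python
# Sketch of machine-assisted translation (MAT).  The glossary and both
# function names are invented for illustration.

GLOSSARY = {"ne": "not", "pas": "not", "ferme": "close",
            "la": "the", "porte": "door"}

def machine_draft(source):
    """Mechanical first pass: glossary lookup, no understanding involved."""
    return " ".join(GLOSSARY.get(w, w) for w in source.lower().split())

def post_edit(draft, corrections):
    """Human revision step: apply the editor's judged corrections."""
    for wrong, right in corrections.items():
        draft = draft.replace(wrong, right)
    return draft

draft = machine_draft("Ne ferme pas la porte")
print(draft)                                            # -> not close not the door
final = post_edit(draft, {"not close not": "don't close"})
print(final)                                            # -> don't close the door
```

The draft is bad, but it is faster for the human to fix it than to start from
scratch, which is the pragmatic payoff claimed for MAT.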

-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik