Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site sdamos.UUCP
Path: utzoo!linus!decvax!harpo!whuxlm!akgua!sdcsvax!sdamos!elman
From: elman@sdamos.UUCP (Jeff Elman)
Newsgroups: net.ai
Subject: Re: Sastric Sanskrit
Message-ID: <19@sdamos.UUCP>
Date: Tue, 23-Oct-84 01:46:05 EDT
Article-I.D.: sdamos.19
Posted: Tue Oct 23 01:46:05 1984
Date-Received: Fri, 19-Oct-84 05:36:57 EDT
References: <12975@sri-arpa.UUCP>
Organization: Phonetics Lab, UC San Diego
Lines: 77

Rick,

Thank you for taking the time to respond to the comments on your
original article.

I think this discussion reveals some very basic differences in the
assumptions one can make about how to approach the goal of designing
an intelligent natural language processor.  I'd like to address those
basic issues directly.  I think they're far more interesting than the
question of whether or not Sastric Sanskrit contained ambiguity.

At one point you say

    "Certainly ambiguity is a major impediment to designing
    an intelligent natural language processor.  It would be very desirable
    to work with a language that allows natural flexibility without
    ambiguity."

Whether or not ambiguity poses an obstacle to building a successful
natural language processor depends upon what your processor looks like.
Don't assume that all architectures have the same problems.

That is, I would agree whole-heartedly with you that language understanding
systems which are patterned after traditional machine-based parsers find
ambiguity to be a serious problem.  Such systems also have a lot of difficulty
with another, related problem: the enormous variability in the acoustic
waveforms that represent a given phoneme, syllable, or word.

I see both problems -- syntactic ambiguity and acoustic variability -- as
related, because both involve cases where the mapping from surface form to
meaning is complex, and where one has to take other factors into account.

I think it is extremely important to point out that in most cases, what
one might label as "ambiguous" utterances are -- in their context -- really
not at all ambiguous.  Similarly, the acoustic variability displayed
by (say) a bilabial stop in different phonetic environments does not prevent
listeners from recognizing that they heard a bilabial.   Human listeners
do very well at integrating contextual information into the language
understanding process.  (Of course, sometimes we do misunderstand each other.
But human performance is so much better than that of machine-based systems
that the comparison is beside the point.)
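
The point about context can be made concrete with a toy sketch.  The
following is purely my illustration (the sentence, the readings, and the
"context facts" are all hypothetical, not from any actual system): a classic
PP-attachment ambiguity has two readings, and a simple score over contextual
facts selects between them.

```python
# Toy illustration of context resolving a classic attachment ambiguity.
# "I saw the man with the telescope" has two readings; simple contextual
# facts (hypothetical) decide which reading wins.

READINGS = {
    "instrument": "saw(I, man) using telescope",   # PP attaches to the verb
    "modifier":   "saw(I, man-with-telescope)",    # PP attaches to the noun
}

def disambiguate(context_facts):
    """Score each reading against the available contextual facts."""
    scores = {reading: 0 for reading in READINGS}
    if "speaker_has_telescope" in context_facts:
        scores["instrument"] += 1
    if "man_has_telescope" in context_facts:
        scores["modifier"] += 1
    # Pick the best-supported reading.
    return max(scores, key=scores.get)

print(disambiguate({"speaker_has_telescope"}))  # -> instrument
print(disambiguate({"man_has_telescope"}))      # -> modifier
```

In isolation the string is ambiguous; given either context, the choice is
immediate -- which is the sense in which "ambiguous" utterances are, in
context, not really ambiguous at all.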

My conclusion about how to deal with ambiguity or variability is thus
different from yours.  You say

    "It would be very desirable to work with a language that 
    allows natural flexibility without ambiguity."


I say the alternative is to leave the language alone and work with a language
*processor* that is able to take advantage of contextual constraints and has
the kind of computational power needed to integrate information from large
numbers of sources.  Serial von Neumann machines do not have this kind of
power.  If you use them, then of course you will be forced into processing
only languages with a highly restricted syntax and a minimum of ambiguity.
There are many occasions where this kind of limitation is satisfactory, and
so that's fine.
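
To sketch what "integrating information from many sources" might look like,
here is a toy interactive-activation-style loop.  This is my own illustration,
not the implementation of any actual model: two candidate words receive
graded support from two hypothetical knowledge sources (ambiguous acoustics,
biasing context), and the candidates compete until one dominates.

```python
# Toy sketch of parallel constraint integration: every knowledge source
# contributes support to each candidate word on every iteration, and the
# candidates inhibit one another until a winner emerges.
# (Candidates, evidence values, and parameters are all hypothetical.)

candidates = ["beak", "peak"]

evidence = {
    "acoustic": {"beak": 0.5, "peak": 0.5},  # /b/ vs /p/ is ambiguous
    "context":  {"beak": 0.8, "peak": 0.2},  # "the bird's ..." favors "beak"
}

act = {w: 0.0 for w in candidates}
for _ in range(10):  # a few settling iterations
    for w in candidates:
        support = sum(source[w] for source in evidence.values())
        inhibition = sum(act[v] for v in candidates if v != w)
        act[w] = max(0.0, act[w] + 0.1 * (support - inhibition))

winner = max(act, key=act.get)
print(winner)  # -> beak
```

No single source decides the outcome; the answer falls out of all the
constraints acting at once -- which is exactly the kind of computation that
is awkward on a serial machine and natural on a parallel one.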

But I think it's more challenging to accept the ambiguity of natural
language as a given, and then to figure out how it is that people
(still the only really successful speech understanders around)
resolve that ambiguity.   My strong feeling is that this leads you
to investigate the sorts of highly interactive, parallel architectures
that are being studied here at UC San Diego, at CMU, at Brown, and at
other places.
 
Jeff Elman  
Phonetics Lab, Dept. of Linguistics, C-008
Univ. of Calif., San Diego, La Jolla, CA 92093
(619) 452-2536,  (619) 452-3600

UUCP:      ...ucbvax!sdcsvax!sdamos!elman
ARPAnet:   elman@nprdc.ARPA