Path: utzoo!attcan!uunet!husc6!bbn!rochester!ur-tut!sunybcs!boulder!ncar!noao!amethyst!kww
From: kww@amethyst.ma.arizona.edu (K Watkins)
Newsgroups: comp.ai
Subject: Language-related capabilities (was Re: Human-human communication)
Summary: Language enables distinguishing between model and reality
Message-ID: <700@amethyst.ma.arizona.edu>
Date: 31 May 88 23:22:27 GMT
References: <32403@linus.UUCP> <238@proxftl.UUCP>
Reply-To: watkins@rvax.ccit.arizona.edu (K Watkins)
Organization: Dept. of Math., Univ. of Arizona, Tucson AZ 85721
Lines: 38

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Name one thing that isn't expressible with language! :-)

>A dog might "know" something and not be able to describe it, but this is
>a shortcoming of the dog.  Humans have languages, natural and artificial,
>that let us manipulate and transmit knowledge.
>
>Does somebody out there want to discuss the difference between the dog's
>way of knowing (no language) and the human's way of knowing (using language)?

A dog's way of knowing leaves no room that I can see for distinguishing 
between the model of reality that the dog contemplates and the reality 
itself.  A human's way of knowing--once the human is a competent user of
language--definitely allows this distinction, thus enabling lies, fiction,
deliberate invention, and a host of other consequences, useful and hampering
alike, of recognizing that the model and the reality may diverge.

One aspect of this distinction, probably one of the most important, is that it
makes it easy to recognize that in any given situation there is much unknown 
but possibly relevant data...and to cope with that recognition without freaking
out.

It is also possible to use language to _refer_ to things which language cannot
adequately describe, since language users are aware of reality beyond the
linguistic model.  Some would say (pursue this in talk.philosophy, if at all) 
that language cannot adequately describe _anything_; but in more ordinary terms,
it is fairly common to hold the opinion that certain emotional states cannot be
adequately described in language...whence the common nonlinguistic 
"expression" of those states, as through a right hook or a tender kiss.

Question:  Is the difficulty of accurate linguistic expression of emotion at
all related to the idea that emotional beings and computers/computer programs
are mutually exclusive categories?

If so, why does the possibility of sensory input to computers make so much
more sense to the AI community than the possibility of emotional output?  Or
does that community see little value in such output?  In any case, I don't see
much evidence that anyone is trying to make such output possible.  Why not?