Xref: utzoo comp.ai:2772 talk.philosophy.misc:1665
Path: utzoo!utgpu!watmath!clyde!att!pacbell!ames!sgi!arisia!quintus!ok
From: ok@quintus.uucp (Richard A. O'Keefe)
Newsgroups: comp.ai,talk.philosophy.misc
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <792@quintus.UUCP>
Date: 2 Dec 88 07:59:10 GMT
References: <1976@crete.cs.glasgow.ac.uk> <2717@uhccux.uhcc.hawaii.edu>  <3ffb7cfc.14c3d@gtephx.UUCP>
Sender: news@quintus.UUCP
Reply-To: ok@quintus.UUCP (Richard A. O'Keefe)
Distribution: comp.ai
Organization: Quintus Computer Systems, Inc.
Lines: 53

In article <3ffb7cfc.14c3d@gtephx.UUCP> gibsong@gtephx.UUCP (Greggo) writes:
>Don't emotions enter into intelligence at all, or
>do they just "get in the way"?

Emotions have often been discussed in the AI literature.  See, for example,
Aaron Sloman's "You don't need a soft skin to have a warm heart."
Emotions have a large cognitive component; they aren't just physiological.
(C.S.Lewis in his essay "Transposition" pointed out that Samuel Pepys
reported the same physical sensations when seasick, when in love with his
wife, and on hearing some wind music, and in the latter case promptly
decided to practise the instrument.)  Considering the range of human
temperaments, the experience and expression of emotion probably aren't
necessary for "intelligence".  I wonder, though.  I have seen programs
which nauseated me, and they were bad programs, and I have seen programs
which brought tears of pleasure to my eyes, and they were good programs.
If emotions can be aroused by such "mathematical" things as programs,
and aroused *appropriately*, perhaps they are more important than I
think.  Such emotions certainly motivate me to write better programs.

>One of the prime foundations for
>intelligence would surely be "an awareness of self".

"Foundation" in what sense?  Let's be science fictional for a moment,
and imagine a sessile species, which every spring buds off a mobile
ramet.  The mobile ramet sends sense data to the sessile ramet, and
the sessile ramet sends commands to the mobile one.  The sessile
ramets live in a colony, and the mobile ones gather food and bring
it back, and otherwise tend the colony.  Every winter the mobile
ramets die and the sessile ones hibernate.  The mobile ramets are
"cheap" to make because they have just enough brain to maintain their
bodies and communicate with the sessile ones, which means that they
can be quite a bit smaller than a human being and still function
intelligently, because the brain is back in the sessile ramet.

Is it necessary for the sessile ramet to know which of the ones in the
colony is itself?  No, provided all the sessiles are maintained, it
doesn't much matter.  (It helps if physiological states of the sessiles
such as hunger and illness are obvious from the outside, wilting leaves
or something like that.)  These creatures would presumably be aware of
themselves *as mobiles*.

I was about to write that an intelligent entity would need access to its
own plans in order to criticise them before carrying them out, but even that
may not be so.  Imagine a humanoid robot which is *not* aware of its own
mental processes, but where information about those processes is visible
on a debugging panel at the back.  Two such robots could check each other
without being able to check themselves.
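
To make that concrete, here is a toy sketch (in Python; the class and
all the names are invented for illustration, nothing more than one way
the arrangement might look): each robot writes a trace of its planning
onto a panel it cannot read itself, while another robot can read that
panel and criticise the plan before it is carried out.

    # Hypothetical sketch of the "debugging panel" arrangement.
    class Robot:
        def __init__(self, name):
            self.name = name
            self._panel = []            # the panel at the back

        def plan(self, steps):
            # Planning leaves a trace on the panel as a side effect,
            # invisible to the planner itself.
            self._panel = list(steps)
            return steps

        def read_panel(self, other):
            # A robot can inspect another's panel, but has no way to
            # inspect its own: no awareness of self is required.
            assert other is not self, "cannot see one's own back panel"
            return list(other._panel)

        def criticise(self, trace):
            # A toy critic: object to any plan lacking a final check.
            if trace and trace[-1] == "verify":
                return "looks fine"
            return "plan never verifies its result"

    a, b = Robot("A"), Robot("B")
    a.plan(["fetch", "assemble"])       # A cannot see this trace itself
    print(b.criticise(b.read_panel(a))) # but B can, and objects

The point of the sketch is just that criticism of plans needs access
to *some* plan trace, not necessarily one's *own*.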

"An awareness of self" might be important to an intelligent organism,
but it might be a *consequence* of intelligence rather than a
*precondition* for it.  It is usually claimed that human babies have
to learn to distinguish self from non-self.  (How anyone can _know_
this I've often wondered.)