Path: utzoo!utgpu!watmath!clyde!att!rutgers!ukma!uflorida!novavax!maddoxt
From: maddoxt@novavax.UUCP (Thomas Maddox)
Newsgroups: comp.ai
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <834@novavax.UUCP>
Date: 3 Dec 88 20:58:38 GMT
References: <484@soleil.UUCP> <1654@hp-sdd.HP.COM> <1908@crete.cs.glasgow.ac.uk> <281@esosun.UUCP> <177@iisat.UUCP> <800@quintus.UUCP>
Reply-To: maddoxt@novavax.UUCP (Thomas Maddox)
Organization: Nova University, Fort Lauderdale, Florida
Lines: 83

In article <800@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>There's an interesting question here about human psychology:  which
>emotions are innate, and which emotions are culture-dependent.  Lakoff's
>book includes a list of apparently culture-independent emotional states
>(unfortunately I left W,F&DT at home, so I can't quote it), and I was
>surprised at how short it was. 

	From Lakoff, _Women, Fire and Dangerous Things_, p. 38:

In a major crosscultural study of facial gestures expressing emotion,
Ekman and his associates discovered that there were basic emotions
that seem to correlate universally with facial gestures:  happiness,
sadness, anger, fear, surprise, and interest.  Of all the subtle
emotions that people feel and have words and concepts for around the
world, only these have consistent correlates in facial expressions
across cultures.

	end-quote

	I agree that Lakoff's book is extremely interesting with
regard to key problems in AI, particularly in its replacement of what
he calls the "classical theory that categories are defined in terms 
of common properties of their members" with a new view ("experiential
realism" or "experientialism").  In his "Preface," Lakoff says,

The issue is this:

Do meaningful thought and reason concern merely the manipulation of
abstract symbols and their correspondence to an objective reality,
independent of any embodiment (except, perhaps, for limitations
imposed by the organism)?

Or do meaningful thought and reason essentially concern the nature of
the organism doing the thinking--including the nature of its body, its
interactions in its environment, its social character, and so on?

	end-quote

	Like Lakoff, I'm convinced that the second alternative
points in the correct direction.  As a science fiction writer who has
tried to present an artificial intelligence realistically, I saw from
the start that the *embodied* categories Lakoff speaks of had to be
presupposed in order to present a being I could consider intelligent.

	(By the way, I hope readers see that there is a difference between
Lakoff's view, which poses interesting questions for AI research, and 
the views of eminent anti-AI theorists such as Dreyfus and Weizenbaum 
[and vocal net anti-AI types such as Cockton].  Lakoff says (p. 338):

I should point out that the studies discussed in this volume do not in
any way contradict studies in artificial intelligence . . . in
general. . . . We shall discuss only computational approaches to the
study of mind.  Even there, our results by no means contradict all
such approaches.  For example, they do not contradict what have come
to be called "connectionist" theories, in which the role of the body
in cognition fits naturally. 

	end-quote) 

	Lakoff's work is especially interesting when set next to
a recent book by Terry Winograd and Fernando Flores, _Understanding
Computers and Cognition_.  They likewise reject the tradition which
sees reason as "the systematic manipulation of representations."
However, they work from a philosophical tradition very different from
the one employed in most AI research:  to wit, the Continental
tradition of hermeneutics and phenomenology that includes Heidegger
and Gadamer.  They also draw on "speech act" theory from Austin and
Searle and, in biology, on the work of Maturana.

	What these books (along with some essays of Daniel Dennett's)
represent to me is an attempt at coming to terms conceptually with the
high-level problems posed by AI.  The doctrinaire anti-AI group
continues to snipe from the sidelines, arguing (1) that it can't be
done, and (2) that even if it could, it shouldn't be; the workers who
are trying to create artificial intelligence (i.e., the makers of the
hardware and software) are quite often immersed entirely in their
particular problems and speak almost exclusively in the technicalities
appropriate to those problems.  Thus, the intelligent and approachable
work being done by Lakoff et al. serves us all:  this is one of the
characteristic problems of our time and one of our civilization's
greatest wagers, and those of us who are trying to understand it
(rather than deride or implement it) need a coherent universe of
discourse in which understanding might take place.