Xref: utzoo comp.society.futures:507 comp.ai:1660
Path: utzoo!yunexus!geac!geacrd!cbs
From: cbs@geacrd.UUCP (Chris Syed)
Newsgroups: comp.society.futures,comp.ai
Subject: Re: Social science gibber [Was Re:  Various Future of AI
Summary: Philosophy and AI
Keywords: Sociology, Philosophy, AI
Message-ID: <248@geacrd.UUCP>
Date: 9 May 88 14:56:25 GMT
Article-I.D.: geacrd.248
Posted: Mon May  9 10:56:25 1988
References: <457@novavax.UUCP> <503@dcl-csvax.comp.lancs.ac.uk>
Organization: Geac Computers, Toronto CANADA
Lines: 60

This is a comment upon parts of two recent submissions, one by
Simon Brooke and another from Jeff Dalton.

Brooke writes:

> AI has two major concerns: the nature of knowledge, and the nature of
> mind. These have been the central subject matter of philosophy since
> Aristotle, at any rate. The methods used by AI workers to address these
> problems include logic - again drawn from Philosophy. So to summarise:
> AI addresses philosophical problems using (among other things)
> philosophers tools. Or to put it differently, Philosophy plus hardware -
> plus a little computer science - equals what we now know as AI. The fact
> that some workers in the field don't know this is a shameful idictment on
> the standards of teaching in AI departments.

  If anyone doubts these claims, s/he might try reading something on Horn
  clause logic. And,
  as Brooke says, a dose of Thomas Kuhn seems called for. It is no accident
  that languages such as Prolog seem to appeal to philosophers.
  In fact, poking one's head into a Phil common room these days is much like
  trotting down to the Comp Sci dept. All them philosophers is talking 
  like programmers these days. And no wonder - at last they can simulate
  minds. Meanwhile, try Minsky's _The Society of Mind_ for a peek at the
  crossover from the other direction. By the by, it's relatively hard to 
  find a Phil student, even at the graduate level, who can claim much
  knowledge of Aristotle these days (quod absit)! Nevertheless, doesn't
  some AI research have more mundane concerns than the study of mind?
  Like how do we zap all those incoming warheads whilst avoiding wasting
  time on the drones? 
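  For the curious: the appeal of Horn clauses is that inference over them
  is mechanical. A minimal sketch in Python (the post itself has no code;
  the atoms and the forward-chaining approach below are illustrative, not
  how Prolog actually works internally - Prolog uses backward chaining
  with unification):

```python
# A Horn clause has at most one positive literal: a rule of the form
# "head :- body1, body2, ..." or a fact (a rule with an empty body).
# This naive forward chainer derives everything the rules entail.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose bodies are satisfied until fixpoint.

    facts: set of ground atoms (strings)
    rules: list of (head, body) pairs, body a list of atoms
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

# The classic syllogism, propositionalised:
facts = {"man(socrates)"}
rules = [("mortal(socrates)", ["man(socrates)"])]
print(forward_chain(facts, rules))
```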

Jeff Dalton writes:

> Speaking of outworn dogmas, AI seems to be plagued by behaviorists,
> or at least people who seem to think that having the right behavior
> is all that is of interest: hence the popularity of the Turing Test.

  I'm not sure that the Turing Test is quite in fashion these days, though
  there is a notion of a 'Total Turing Test' (Daniel C. Dennett, I think?).
  Behaviourism, I must admit, gives me an itch (positively reinforcing, I'm
  sure). But I wonder just what 'the right behaviour' _is_, anyway? It
  seems to me that children (from a Lockean 'tabula rasa' point of view),
  learn & react differently from adults (with all that emotional baggage
  they carry around). One aspect of _adult_ behaviour I'm not sure
  AI should try to mimic is our nasty propensity to fear admitting one's
  wrong. AI research offers Philosophy a way to strip out all the
  social and cultural surrounds and explore reasoning in a vacuum...
  to experiment upon artificial children. But adult humans cannot observe,
  judge, nor act without all that claptrap. As an Irishman from MIT once
  observed, "a unique excellence is always a tragic flaw". Maybe it
  depends on what you're after?
      
      {uunet!mnetor,yunexus,utgpu}                     o        ~
      !geac!geacrd!cbs (Chris Syed)    ~          \-----\---/   
      GEM: CHRIS:66                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   "There can be no virtue in obeying the law of gravity." - J.E.McTaggart.