Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!cs.utexas.edu!csd4.milw.wisc.edu!bionet!agate!shelby!lindy!news
From: GA.CJJ@forsythe.stanford.edu (Clifford Johnson)
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Message-ID: <4298@lindy.Stanford.EDU>
Date: 11 Aug 89 17:18:04 GMT
Sender: news@lindy.Stanford.EDU (News Service)
Distribution: usa
Lines: 51

Here's a footnote I wrote describing "AI" in a document re
nuclear "launch on warning" that only mentioned the term in
passing.  I'd be interested in criticism.  It does seem a rather
arbitrary term to me.

  Coined by John McCarthy at Dartmouth in the 1950s, the phrase
  "Artificial Intelligence" is longhand for computers.  Today's
  machines think.  For centuries, classical logicians have
  pragmatically defined thought as the processing of raw
  perceptions, comprising the trinity of categorization of
  perceptions (Apprehension); comparison of categories of
  perceptions (Judgment); and the drawing of inferences from
  connected comparisons (Reason).  AI signifies the performance
  of these definite functions by computers.

  AI is also a buzz-term that salesmen have applied to virtually
  all 1980s software, but which to data processing professionals
  especially
  connotes software built from large lists of axiomatic "IF x
  THEN y" rules of inference.  (Of course, all programs have some
  such rules, and, viewed at the machine level, are logically
  indistinguishable.)  The idiom "artificial intelligence" is
  curiously convoluted, being applied more often where the coded
  rules are rough and heuristic (i.e. guesses) than where they are
  precise and analytic (i.e. scientific).  The silly innuendo is
  that AI codifies intuitive expertise.
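
  To make the "IF x THEN y" picture concrete, here is a toy
  sketch in Python (the rule names are invented for illustration,
  not taken from any real system): a rule list applied by brute
  forward chaining.

    # Toy rule base: IF all conditions hold THEN add the conclusion.
    RULES = [
        ({"fever", "rash"}, "suspect_measles"),
        ({"suspect_measles"}, "isolate_patient"),
    ]

    def infer(facts):
        facts = set(facts)
        fired = True
        while fired:                        # keep firing rules until
            fired = False                   # no rule adds a new fact
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    fired = True
        return facts

    print(infer({"fever", "rash"}))
    # adds "suspect_measles", and from it "isolate_patient"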

  Contrariwise, most AI techniques amount to little more than
  brute trial-and-error facilitated by rule-of-thumb short-cuts.
  An analogy is jig-saw
  reconstruction, which proceeds by first separating pieces with
  corners and edges, and then crudely trying to find adjacent
  pairs by exhaustive color and shape matching trials.  This
  analogy should be extended by adding distortion to all pieces
  of the jig-saw, so that no fit is perfect, and by repainting
  some, removing others, and adding a few irrelevant pieces.  A
  most likely, or least unlikely, fit is sought.
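
  The jig-saw analogy can be played out in a few lines of Python
  (an invented illustration, not any actual AI program): a cheap
  rule of thumb prunes the candidates, exhaustive scoring trials
  pick the least unlikely fit, and distortion keeps any fit from
  being exact.

    import random

    def mismatch(a, b):
        # crude "color and shape" comparison, corrupted by the
        # distortion added to every piece
        return abs(a - b) + random.uniform(0.0, 0.5)

    def best_fit(piece, candidates):
        # rule-of-thumb short-cut: discard grossly different
        # pieces before the exhaustive matching trials
        plausible = [c for c in candidates if abs(c - piece) < 3.0]
        pool = plausible or candidates   # fall back if over-pruned
        return min(pool, key=lambda c: mismatch(piece, c))

    print(best_fit(5.0, [9.3, 5.4, 1.2, 7.7]))   # picks 5.4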

  Neural nets are computers programmed with an algorithm for
  tailoring their rules of thumb, based on statistical inference
  from a large number of sample observations for which the correct
  solution is known.  In effect, neural nets induce recurrent
  patterns from
  input observations.  They are limited in the patterns that they
  recognize, and are stumped by change.  Their programmed rules of
  thumb remain raw "IF... THEN" constructs, more complicated but
  no more profound.  Neural nets derive
  their conditional branchings from underlying rules of
  statistical inference, and cannot extrapolate beyond the
  fixations of their induction algorithm.  Like regular AI
  applications, they must select an optimal hypothesis from a
  simple, predefined set.
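
  As a minimal sketch of such induction (invented for
  illustration, not any particular product), a one-neuron "net"
  can tailor a weighted threshold rule from sample observations
  whose correct answers are known:

    # Teach a single neuron the AND pattern from labelled samples.
    SAMPLES = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, bias = [0.0, 0.0], 0.0

    for _ in range(20):                     # induction passes
        for x, target in SAMPLES:
            out = 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0
            err = target - out              # feedback from the
            w[0] += 0.1 * err * x[0]        # known solutions nudges
            w[1] += 0.1 * err * x[1]        # the rule of thumb
            bias += 0.1 * err

    print(1 if w[0]*1 + w[1]*1 + bias > 0 else 0)   # prints 1

  Replace the AND samples with XOR and the same loop never
  settles, which is the sort of fixed limitation complained of
  above.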

  Thus, all AI applications are largely probabilistic, as
  exemplified by medical diagnosis and missile
  attack warning.  In medical diagnosis, failure to use and heed
  a computer can be grounds for malpractice, yet software bugs
  have gruesome consequences.  Likewise, missile attack warning
  deters, yet puts us all at risk.
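
  By way of example only (the numbers and disease names below are
  invented, not from any real diagnostic system), the selection of
  a least unlikely hypothesis reduces to picking the best scorer
  from a fixed list:

    # Toy probabilistic diagnosis: score each fixed hypothesis by
    # prior rate times per-symptom likelihoods, report the best.
    HYPOTHESES = {
        # name: (prior, {symptom: P(symptom | hypothesis)})
        "flu":     (0.10, {"fever": 0.9, "rash": 0.1}),
        "measles": (0.02, {"fever": 0.8, "rash": 0.9}),
        "healthy": (0.88, {"fever": 0.1, "rash": 0.1}),
    }

    def diagnose(symptoms):
        def score(name):
            prior, likelihood = HYPOTHESES[name]
            p = prior
            for s in symptoms:
                p *= likelihood[s]          # naive independence
            return p
        return max(HYPOTHESES, key=score)   # least unlikely fit

    print(diagnose(["fever", "rash"]))      # "measles" wins here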