Path: utzoo!attcan!uunet!lll-winken!lll-tis!ames!mailrus!iuvax!pur-ee!a.cs.uiuc.edu!uxc.cso.uiuc.edu!uxe.cso.uiuc.edu!morgan
From: morgan@uxe.cso.uiuc.edu
Newsgroups: comp.ai
Subject: Re: does AI kill?
Message-ID: <46400009@uxe.cso.uiuc.edu>
Date: 13 Jul 88 15:23:00 GMT
References: <1376@daisy.UUCP>
Lines: 20
Nf-ID: #R:daisy.UUCP:1376:uxe.cso.uiuc.edu:46400009:000:1106
Nf-From: uxe.cso.uiuc.edu!morgan    Jul 13 10:23:00 1988


I think these questions are frivolous. First of all, there is nothing in
the article that says AI was involved. Second, even if there were, the
responsibility for using the information and firing the missile is the
captain's. The worst you could say is that some humans may have oversold
the captain, and maybe the whole navy, on the reliability of the information
the system provides. That might turn out historically to be related to
the penchant of some people in AI for grandiose exaggeration. But that's
a fact about human scientists.

And if you apply the reasoning behind these questions consistently,
you can find plenty of historical evidence that would justify substituting
'engineering' for 'AI' in the three questions at the end.
I take that to suggest that the reasoning is faulty.

Clearly the responsibility for the killing of the people in the Iranian
airliner falls on human Americans, not on some computer.

At the same time, one might plausibly read the Post article as
a good argument against any scheme that removes human judgment from the
decision process, like Reagan's lunatic fantasies of SDI.