Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!husc6!linus!mbunix!marsh
From: marsh@mitre-bedford.ARPA (Ralph J. Marshall)
Newsgroups: comp.ai
Subject: Re: Free Will & Self-Awareness
Message-ID: <31832@linus.UUCP>
Date: 13 May 88 13:49:42 GMT
References: <4134@super.upenn.edu> <3200014@uiucdcsm> <1484@pt.cs.cmu.edu> <1029@crete.cs.glasgow.ac.uk> <912@cresswell.quintus.UUCP> <5404@venera.isi.edu> <1115@crete.cs.glasgow.ac.uk> <17442@glacier.STANFORD.EDU> <31738@linus.UUCP>
Sender: news@linus.UUCP
Reply-To: marsh@mbunix (Ralph Marshall)
Organization: The MITRE Corporation, Bedford, Mass.
Lines: 24

In article <31738@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I was glad to see John Nagle bring up Asimov's 3 moral laws of robots.
>Perhaps the time has come to refine these just a bit, with the intent
>of shaping them into a more implementable rule-base.
>
>I propose the following variation on Asimov:
>
>     IV.   A robot may act to expand its powers of observation and
>           cognition, and may enlarge its knowledge base without limit.
>
I don't think I want the U.S. government "expanding its powers of
observation without limit," since I still think I am entitled to some
privacy.  I therefore certainly don't want some random robot, controlled
by and reporting to God knows whom, attempting to gain as much
information as it can possibly acquire.

On a different note, your change of wording from "human" to "sentient
being" is too vague for this type of rule.  While I agree that other
lifeforms we may encounter should be given the same respect we reserve
for other humans, I don't think we would ever want to choose a sentient
robot over a human in a life-or-death situation in which only one could
be saved.  (This was the rationale for sending Lt. Cmdr. Data into a
hostile situation _alone_ on a recent Star Trek episode, and I agreed
with it entirely.  Androids/robots/artificial persons are more
expendable than people.)