Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!husc6!linus!mbunix!bwk
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Newsgroups: comp.ai
Subject: Re: Free Will & Self-Awareness
Summary: Eupraxophy for Robots.
Message-ID: <31738@linus.UUCP>
Date: 12 May 88 18:14:12 GMT
References: <4134@super.upenn.edu> <3200014@uiucdcsm> <1484@pt.cs.cmu.edu> <1029@crete.cs.glasgow.ac.uk> <912@cresswell.quintus.UUCP> <5404@venera.isi.edu> <1115@crete.cs.glasgow.ac.uk> <17442@glacier.STANFORD.EDU>
Sender: news@linus.UUCP
Reply-To: bwk@mbunix (Barry Kort)
Organization: IdeaSync, Inc., Chronos, VT
Lines: 22

I was glad to see John Nagle bring up Asimov's Three Laws of Robotics.
Perhaps the time has come to refine them just a bit, with the intent
of shaping them into a more implementable rule-base.  (A toy sketch of
such a rule-base follows the proposed laws below.)

I propose the following variation on Asimov:

      I.   A robot may not harm a human or other sentient being,
           or by inaction permit one to come to harm.

     II.   A robot may respond to requests from human beings,
           or other sentient beings, unless this conflicts with
           the First Law.

    III.   A robot may act to protect its own existence, unless this
           conflicts with the First Law.

     IV.   A robot may act to expand its powers of observation and
           cognition, and may enlarge its knowledge base without limit.
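
Purely by way of illustration, here is a minimal Python sketch of how
such a rule-base might look.  Everything in it (the predicates, the
dictionary representation of actions, the ranking) is a hypothetical
placeholder, not a serious proposal for grounding the laws:

    # Illustrative rule-base for the four laws above.  All predicates
    # are hypothetical placeholders; a real robot would need grounded
    # versions of them.

    def would_harm_sentient(action, world):
        # Hypothetical: does this action harm a human or other
        # sentient being?
        return action.get("harms_sentient", False)

    def permits_harm_by_inaction(action, world):
        # Hypothetical: does doing nothing let a being come to harm?
        return (action.get("is_inaction", False)
                and world.get("harm_imminent", False))

    def permissible(action, world):
        # Law I is the only prohibition: never harm, and never permit
        # harm through inaction.
        if would_harm_sentient(action, world):
            return False
        if permits_harm_by_inaction(action, world):
            return False
        # Laws II-IV are permissive ("may"), so any action that
        # clears Law I is allowed; they only rank the options.
        return True

    def choose(actions, world):
        # Prefer honoring requests (II), then self-protection (III),
        # then self-extension (IV), among the permissible actions.
        def rank(a):
            if a.get("fulfils_request"): return 0   # Law II
            if a.get("self_protective"): return 1   # Law III
            if a.get("self_extending"):  return 2   # Law IV
            return 3
        candidates = [a for a in actions if permissible(a, world)]
        return min(candidates, key=rank) if candidates else None

    # Example: the harmful action is vetoed by Law I, so the
    # request-fulfilling action wins.
    actions = [
        {"name": "fetch coffee", "fulfils_request": True},
        {"name": "shove human",  "harms_sentient": True},
    ]
    print(choose(actions, {}))   # {'name': 'fetch coffee', ...}

Note that because Laws II through IV say "may" rather than "must",
they never generate obligations that could conflict with the First
Law; they merely order the robot's remaining options.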

Can anyone propose a further refinement to the above?

--Barry Kort