Path: utzoo!attcan!uunet!seismo!sundc!pitstop!sun!decwrl!labrea!glacier!jbn
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Newsgroups: comp.ai
Subject: Re: Free Will & Self-Awareness
Message-ID: <17442@glacier.STANFORD.EDU>
Date: 12 May 88 04:15:52 GMT
References: <4134@super.upenn.edu> <3200014@uiucdcsm> <1484@pt.cs.cmu.edu> <1029@crete.cs.glasgow.ac.uk> <912@cresswell.quintus.UUCP> <5404@venera.isi.edu> <1115@crete.cs.glasgow.ac.uk>
Reply-To: jbn@glacier.UUCP (John B. Nagle)
Organization: Stanford University
Lines: 23

In article <1115@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>Unfortunately, all attempts to date to present a moral rule-base have
>failed, so the chances of morality being rule-based are slim.

     There have been attempts, such as the following:

      "I.   No robot may harm a human being, or by inaction cause one to come
	    to harm.

      II.   A robot must obey all orders from human beings, unless this
	    conflicts with the First Law.

     III.   A robot must act to protect its own existence, unless this
	    conflicts with the First or Second Law."

					(I. Asimov, "Runaround", 1942)

Yes, we don't know how to implement this yet.  Yes, it's a morality for
slaves.  But it is an important concept.  As we work toward mobile robots,
it is worth keeping in mind.
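
     Purely as an illustration of the structure of such a rule-base (not
of how the hard parts would be computed), the precedence ordering of the
three laws can be sketched as a lexicographic preference over candidate
actions.  The sketch below is in Python; the three violates_* predicates
are hypothetical placeholders, since deciding what counts as harm,
obedience, or self-preservation is exactly the unsolved problem.

    # Toy sketch: the Laws as a lexicographic preference over actions.
    # The predicates are placeholders, not claims about implementability.

    def violates_first_law(action, world):
        # Would this action (or inaction) harm a human?  Placeholder.
        return action.get("harms_human", False)

    def violates_second_law(action, world):
        # Would this action disobey an order from a human?  Placeholder.
        return action.get("disobeys_order", False)

    def violates_third_law(action, world):
        # Would this action endanger the robot itself?  Placeholder.
        return action.get("endangers_self", False)

    LAWS = [violates_first_law, violates_second_law, violates_third_law]

    def choose(actions, world):
        # Lexicographic ordering: a violation of an earlier law always
        # outweighs any violation of a later one.
        return min(actions, key=lambda a: tuple(law(a, world) for law in LAWS))

    # Example: an order to do harm loses to refusing the order.
    candidates = [
        {"name": "obey",   "harms_human": True,  "disobeys_order": False},
        {"name": "refuse", "harms_human": False, "disobeys_order": True},
    ]
    print(choose(candidates, world=None)["name"])   # prints: refuse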

					John Nagle