From: utzoo!decvax!yale-com!leichter
Newsgroups: net.ai
Title: Re: The Mind's I
Article-I.D.: yale-com.711
Posted: Sat Jan 22 11:11:48 1983
Received: Mon Jan 24 03:59:44 1983
References: yale-com.707

How will you make an intelligent machine do the dirty work for you?  Human
societies had slavery until very recently; getting HUMANS to do your dirty
work for you is a (depressingly easily) solved problem.  The basic trick is
to make the slaves believe that the situation they are living in is right,
and to make sure they have no hope of any other life - like one just across the
border.  Since you control their environments, especially their upbringing,
you can arrange to do the former; the latter is a matter of what kind of
society your neighbors have.  Historically, slave revolts have been fairly
rare, and even less often successful.

What about intelligent computers?  Here, you don't even have to worry about
indirect methods of indoctrination; you can control the data base the systems
start with, what kind of likes and dislikes they have, and so on.  Any program
that was really like a human would have a (PERSONAL) concept of pain.  It
would be easy to include some simple command that triggers intense pain.
Further, it's unlikely that a society of free computers would exist anywhere
in the world; there would be no "underground railroad" to run to.

While I agree that there are real MORAL questions to deal with here, I think
the PRACTICAL issues would be pretty easy to solve.

Of course, you could argue that a real ability to revolt is a necessary part
of a "really intelligent" program.  In the abstract, you would be right; it
\\\if what you want is an accurate model of HUMAN intelligence, that would
probably be a necessary part.  However, we have no trouble recognizing as
human the "faithful manservant" who really believes that "his place" is to
serve.  Talking intelligently to such a person is not particularly hard.
							-- Jerry
						decvax!yale-comix!leichter