Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!rutgers!mit-eddie!uw-beaver!cornell!rochester!pt.cs.cmu.edu!andrew.cmu.edu!ap1i+
From: ap1i+@andrew.cmu.edu (Andrew C. Plotkin)
Newsgroups: comp.ai
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: 
Date: 7 Dec 88 18:00:32 GMT
References: <484@soleil.UUCP> <1654@hp-sdd.HP.COM> <1908@crete.cs.glasgow.ac.uk> <1791@cadre.dsl.PITTSBURGH.EDU> <1918@crete.cs.glasgow.ac.uk> <44150@yale-celray.yale.UUCP> <281@esosun.UUCP>  <283@esosun.UUCP> 

/ > ... if you had a computer console in front of you with a button
/ > marked "pain" (any human simulator had better have some sort of sensory
/ > input), would you consider it okay to push it?

/ Yes, if I was testing the computer's pain circuits.  When a computer
/ is in pain (i.e. a circuit board is burning out, or a cable is being
/ cut), I want to be sure that it can sense its distress and accurately
/ report its state of well-being.

Ah, but I'm not talking about a system that senses damage to the computer. I'm
talking about something that applies stimuli to the simulated pain inputs of the
simulated human.

    You brought up "computers being able to simulate humans," and I'm using that
concept. To clarify it, let me describe it as a program running on a computer,
with input routines that feed data to the same thought-mechanisms that the
sensory nerves feed in the human mind, and output routines that take data from
the appropriate thought-mechanisms and display it in a suitable form. Given any
input, it will produce the output a typical human would. (It would therefore
pass the Turing test.)

    (The "easiest" way to this is to create a trillion CPU's, each capable of
simulating one neuron, and hooking them together. Sensory input could then be
pushed into the "sensory neurons" directly. However, the exact mechanism is not
relevant here.)
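
    To make that concrete, here is a toy sketch in C of the "one processor per
neuron" idea, scaled down to a handful of threshold units on one machine. The
names and wiring (press_pain_button, a chain of eight neurons, a threshold of 3)
are invented for illustration; a real human simulation would need on the order
of a hundred billion units and the brain's actual connectivity. The point is
only that the "pain button" writes to a simulated sensory input; it never
touches the hardware the simulation runs on.

    #include <stdio.h>

    #define NUM_NEURONS 8
    #define THRESHOLD   3

    /* weight[i][j] = strength of the connection from neuron j to neuron i */
    static int weight[NUM_NEURONS][NUM_NEURONS];
    static int state[NUM_NEURONS];     /* 1 = firing this tick, 0 = quiet */

    /* "Pushing a stimulus into the sensory neurons directly": the pain   */
    /* button just forces a designated input neuron to fire.  It does not */
    /* damage the machine the simulation runs on.                         */
    void press_pain_button(void)
    {
        state[0] = 1;                  /* neuron 0 plays the pain receptor */
    }

    /* One tick: each neuron sums its weighted inputs and fires next tick */
    /* if the sum reaches its threshold.                                  */
    void step(void)
    {
        int next[NUM_NEURONS];
        int i, j, sum;

        for (i = 0; i < NUM_NEURONS; i++) {
            sum = 0;
            for (j = 0; j < NUM_NEURONS; j++)
                sum += weight[i][j] * state[j];
            next[i] = (sum >= THRESHOLD);
        }
        for (i = 0; i < NUM_NEURONS; i++)
            state[i] = next[i];
    }

    int main(void)
    {
        int i, t;

        /* invented wiring: a simple chain 0 -> 1 -> ... -> 7, so a pulse */
        /* at the "pain receptor" eventually reaches the "output" neuron  */
        for (i = 1; i < NUM_NEURONS; i++)
            weight[i][i - 1] = THRESHOLD;

        press_pain_button();           /* simulated pain input */
        for (t = 0; t < 10; t++) {
            step();
            printf("tick %d: output neuron %s\n",
                   t, state[NUM_NEURONS - 1] ? "fires" : "is quiet");
        }
        return 0;
    }

Smashing the machine, or scribbling over the weight table, is a different thing
entirely; that is exactly the distinction drawn below.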

    Now, there's a big difference between damage to the computer and simulated
pain. One degrades the performance of the simulation; the other makes the
simulation yell "ouch!" (assuming it's a good simulation.)
    Obvious example: if a brain surgeon is working on a conscious patient, the
patient feels no pain (assuming the cut scalp has been numbed.) The surgeon can
poke around, feed minute electrical currents in, and so on; the patient will
see strange flashes, have odd bits of memory pop up, and the like. If the
surgeon drops his scalpel into the brain, the patient may stop thinking or
suffer functional loss, but no pain is involved unless the sensory centers are
hit.

/   Similarly, if I put the machine in
/ emotional pain (by giving it a program that runs forever and does
/ no useful work), I hope the machine can diagnose the problem and
/ gracefully apprise me of my error.

Keep thinking in terms of the human simulation. The machine would simulate
reactions like "Damn, this is boring." Or, more likely, "Why should I do this
idiot work? Program it into a computer!"
   (Of course, if it were a simulation of a reasonably open-minded human, you
could easily convince it that it was really a computer. The fact that its
optical inputs come from cameras would be a giveaway. But I doubt it would
settle down and execute C for the rest of its existence. Assume it was a
simulation of you -- would you?)

--Z