Path: utzoo!attcan!uunet!seismo!esosun!jackson@freyja.css.gov
From: jackson@freyja.css.gov (Jerry Jackson)
Newsgroups: comp.ai
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <283@esosun.UUCP>
Date: 30 Nov 88 17:52:21 GMT
References: <484@soleil.UUCP> <1654@hp-sdd.HP.COM> <1908@crete.cs.glasgow.ac.uk> <1791@cadre.dsl.PITTSBURGH.EDU> <1918@crete.cs.glasgow.ac.uk> <44150@yale-celray.yale.UUCP> <281@esosun.UUCP>
Sender: news@esosun.UUCP
Reply-To: jackson@freyja.css.gov (Jerry Jackson)
Organization: SAIC, San Diego
Lines: 78
In-reply-to: ap1i+@andrew.cmu.edu (Andrew C. Plotkin)

In article , ap1i+@andrew (Andrew C. Plotkin) writes:

>/>goals, plans, desires for life than we. But they'll be no less
>/>intelligent, thinking, feeling beings than we, for it.
>/
>/ I can accept the needs, goals, and plans... but why does everyone
>/ assume that an intelligent machine would be a *feeling* being? I see
>/ no reason to assume that an IM would be capable of experiencing
>/ anything at all. This doesn't imply that it wouldn't be intelligent.
>
>Aren't feeling and emotion automatic byproducts of thought? I find it
>hard to imagine an entity capable of thought which would *not* have
>(for example) some negative reaction to a threat, involving some
>attempt to rapidly find alternatives, etc... which is to say, fear
>and excitement. Other emotional responses can be argued similarly.
>

I agree that any thinking entity would have a negative reaction to a
threat, involving some attempt to rapidly find alternatives. I just
don't see this as being "fear" and "excitement". Let me explain why
with an analogy:

Why does a person take aspirin? I don't believe that the following
goes on in his head: "I say, it appears that those neurons over there
are firing excessively. Perhaps I should interrupt their overly
enthusiastic behavior..." I claim it is more like: "Owww... that
really *hurts*. Gimme some aspirin... NOW!" Although the physical
effect of the aspirin might be to cut off some signal in the nervous
system, this has very little to do with a person's immediate
motivation for taking it. I claim that the signal and the pain are
two entirely different sorts of beasts.

>"really feeling emotion." I'll be glad to answer this point, if you
>can first prove to me that *you* "really feel" pain, sorrow, or
>pleasure, and don't just react mechanically...)

I've heard people (usually behaviorists) make this point, but I'm
never sure if they're serious (I didn't see a smiley :-). An attempt
to answer the riddle of subjective experience by denying its
existence seems somewhat pointless.

BTW: In a torture situation, I don't think I would have a hard time
convincing *anyone* that they can "really feel" pain. Would you agree
that torture is wrong? Why? :-)

>
>/ I would rather not have a
>/ machine that I would be afraid to turn off for fear of harming
>/ *someone*.
>
>If you don't want a "someone", you'd better stay out of AI research... :-)
>

I am definitely *not* an opponent of AI. I think it is very likely
that we will be able to create systems that are *operationally*
indistinguishable from humans doing the same tasks. I think this will
be a great thing. I do claim, however, that there is still likely to
be a difference between an intelligent machine (here referring to a
machine that models intelligent behavior in a functionalist sense,
not by physically copying the brain) and a human (or other animal).
>/ It does seem that our experience is rooted in some kind of
>/ electro-chemical phenomenon, but I think it is an incredible leap of
>/ faith to assume that logic circuits are all that is required :-).
>
>It's a point of faith, certainly, since we don't have more than one
>example. But I don't think it's unjustified, since nothing more than
>logic circuits has ever been observed. ("Logic circuits" is a bit of
>a misnomer, since neurons don't act like single standard logic gates.
>However, it looks possible to simulate them with electronics.)
>
>--Z

As I mentioned earlier, I believe the standard functionalist approach
to AI will bear fruit -- in fact, I think we will be able to generate
systems to perform any tasks we can think of... even simulate a
human! It seems unlikely that the same approach will generate
artificial *beings* with subjective experience, but this is just fine
with me. ;-)

--Jerry Jackson