Xref: utzoo comp.ai:2713 talk.philosophy.misc:1630
Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!husc6!yale!engelson
From: engelson@cs.yale.edu (Sean Philip Engelson)
Newsgroups: comp.ai,talk.philosophy.misc
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <44150@yale-celray.yale.UUCP>
Date: 27 Nov 88 23:43:22 GMT
References: <484@soleil.UUCP> <1654@hp-sdd.HP.COM> <1908@crete.cs.glasgow.ac.uk> <1791@cadre.dsl.PITTSBURGH.EDU> <1918@crete.cs.glasgow.ac.uk>
Sender: root@yale.UUCP
Reply-To: engelson@cs.yale.edu (Sean Philip Engelson)
Followup-To: comp.ai
Organization: Computer Science, Yale University, New Haven, CT 06520-2158
Lines: 49
In-reply-to: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)

In article <1918@crete.cs.glasgow.ac.uk>, gilbert@cs (Gilbert Cockton) writes:
>In article <1791@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>>In article <1908@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>>>
>>>Intelligence arises through socialisation.  
>>>
>>Why is this a good argument against the possibility of machine intelligence?
>Cos you can't take a computer, not even the just truly awesomest
>nooral network ever, to see the ducks, get it to throw them bread,
>etc, etc.

Yet, my good friend.  YET.

>Take a walk through your life.  Can you really see a machine going
>through that with an identical outcome?

Of course not.  Intelligent machines won't act much like humans at
all.  They will have different needs, different feelings, different
goals, plans, and desires for their lives than we do.  But they'll be
no less intelligent, thinking, feeling beings for it.

>If so, lay off the cyberpunk
>and get some fresh air with some good folk :-)

Perhaps you should lay off the mysticism and get some fresh
rationality with some good folk :-)

>-- 
>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>	gilbert@uk.ac.glasgow.cs !ukc!glasgow!gilbert

Sean

----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student
Yale Department of Computer Science
51 Prospect St.
New Haven, CT 06520
----------------------------------------------------------------------
The frame problem and the problem of formalizing our intuitions about
inductive relevance are, in every important respect, the same thing.
It is just as well, perhaps, that people working on the frame problem
in AI are unaware that this is so.  One imagines the expression of
horror that flickers across their CRT-illuminated faces as the awful
facts sink in.  What could they do but "down-tool" and become
philosophers?  One feels for them.  Just think of the cut in pay!
		-- Jerry Fodor
		(Modules, Frames, Fridgeons, Sleeping Dogs, and the
		 Music of the Spheres)