Path: utzoo!mnetor!uunet!husc6!bloom-beacon!mit-eddie!uw-beaver!cornell!rochester!pt.cs.cmu.edu!speech2.cs.cmu.edu!yamauchi
From: yamauchi@speech2.cs.cmu.edu (Brian Yamauchi)
Newsgroups: comp.ai
Subject: Re: Arguments against AI are arguments against human formalisms
Message-ID: <1630@pt.cs.cmu.edu>
Date: 7 May 88 02:14:26 GMT
References: <368693.880430.MINSKY@AI.AI.MIT.EDU> <1103@crete.cs.glasgow.ac.uk>
Sender: netnews@pt.cs.cmu.edu
Organization: Carnegie-Mellon University, CS/RI
Lines: 97

In article <1103@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) writes:
> >Cockton seems to be saying that humans do have free will, but that it is
> >totally impossible for AIs to ever have free will. I am curious as to
> >what he bases this belief upon other than "conflict with traditional
> >Western values".
>
> Isn't that enough? What's so special about academia that it should be
> allowed to support any intellectual activity without criticism from
> the society which supports it? Surely it is the duty of all academics
> to look to the social implications of their work? Having free will,
> they are not obliged to pursue lines of enquiry which are so
> controversial.

These are two completely separate issues. Sure, it's worthwhile to
consider the social consequences of having intelligent machines around,
and of course, the funding for AI research depends on what benefits are
anticipated by the government and the private sector.

This has nothing to do with whether it is possible for machines to have
free will. Reality does not depend on social consensus.
--------------------------------------------

Or do you believe that the sun revolved around the earth before
Copernicus? After all, the heliocentric view was both controversial and
in conflict with the social consensus.

In any case, since when is controversy a good reason for not doing
something?
Do you also condemn any political or social scientist who has espoused
controversial views?

> I have other arguments, which have popped up now and again in postings
> over the last few years:
>
> 1) Rule-based systems require fully formalised knowledge-bases.

This is a reasonable criticism of rule-based systems, but not necessarily
a fatal flaw.

> Conclusion, AI as a collection of mathematicians and computer
> scientists playing with machines, cannot formalise psychology where
> no convincing written account exists. Advances here will come from
> non-computational psychology first, as computational psychology has
> to follow in the wake of the real thing.

I am curious what sort of non-computational psychology you see as having
had great advances in recent years.

> [yes, I know about connectionism, but then you have to formalise the
> inputs.

For an intelligent robot (see below), you can take inputs directly from
the sensors.

> Furthermore, you don't know what a PDP network does know]

This is a broad overgeneralization. I would recommend reading Rumelhart &
McClelland's book. You can indeed discover what a PDP network has
learned, but for very large networks, the process of examining all of the
weights and activations becomes impractical. This, at least to me, is
suggestive of an analogy with human/animal brains with regard to the
complexity of the synapse/neuron interconnections (just suggestive, not
conclusive, by any means).

> AI depends on being able to use written language (physical symbol
> hypothesis) to represent the whole human and physical universe.

That depends on which variety of AI.....

> BTW, Robots aren't AI. Robots are robots.

And artificially intelligent robots are artificially intelligent robots.

> 3) The real world is social, not printed.

The real world is physical -- not social, not printed. Unless you
consider it to be subjective, in which case, if the physical world
doesn't objectively exist, then neither do the other people who inhabit
it.
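The point above about discovering what a network has learned is easy to
see on a toy case. Here is a minimal sketch (mine, in modern Python, not
anything from the original exchange): train a single threshold unit on
the OR function with the perceptron rule, then simply read off its
weights -- for a network this small, its "knowledge" is three inspectable
numbers.

```python
import random

def train_or_unit(epochs=200, lr=0.1, seed=0):
    """Train a single threshold unit on OR; return its learned weights and bias."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = rng.uniform(-0.5, 0.5)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_or_unit()
# The unit's entire "knowledge" of OR: two positive weights, each large
# enough on its own to overcome the negative bias.
print("weights:", w, "bias:", b)
```

With thousands of units the same inspection is possible in principle but
impractical in exactly the sense described above.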
> Anyway, you did ask. Hope this makes sense.

Well, you raise some valid criticisms of rule-based/logic-based/etc.
systems, but these don't preclude the idea of intelligent machines per
se.

Consider Hans Moravec's idea of building intelligence from the bottom up
(starting with simple robotic animals and working your way up to humans).
After all, suppose you could replace every neuron in a person's brain
with an electronic circuit that served exactly the same function, and
afterwards the individual acted like exactly the same person. Wouldn't
you still consider him to be intelligent?

So, if it is possible -- or at least conceivable -- in theory to build an
intelligent being of some type, the real question is how.

______________________________________________________________________________
Brian Yamauchi                      INTERNET: yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________