Path: utzoo!utgpu!water!watmath!clyde!bellcore!rutgers!njin!princeton!udel!burdvax!sdcrdcf!ucla-cs!maui!bjpt
From: bjpt@maui.cs.ucla.edu (Benjamin Thompson)
Newsgroups: comp.ai
Subject: Re: Who else isn't a science?
Message-ID: <13100@shemp.CS.UCLA.EDU>
Date: 3 Jun 88 20:22:32 GMT
References: <3c671fbe.44e6@apollo.uucp> <10510@agate.BERKELEY.EDU>
Sender: news@CS.UCLA.EDU
Reply-To: bjpt@cs.ucla.edu (Benjamin Thompson)
Organization: UCLA Computer Science Department
Lines: 14

In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world.  AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

This is symptomatic of a common fallacy.  Why should the way our brains
work be the only way "brains" can work?  Why shouldn't AI workers look
at weird and wonderful models?  Besides, we know essentially nothing about
how the brain really works anyway, so who can tell whether what they're
doing corresponds to (some part of) the brain?

Ben