Path: utzoo!attcan!uunet!husc6!mailrus!ames!pasteur!agate!garnet!weemba
From: weemba@garnet.berkeley.edu (Obnoxious Math Grad Student)
Newsgroups: comp.ai
Subject: Re: Who else isn't a science?
Message-ID: <11387@agate.BERKELEY.EDU>
Date: 26 Jun 88 18:17:37 GMT
References: <13100@shemp.CS.UCLA.EDU> <3c84f2a9.224b@apollo.uucp> <10785@agate.BERKELEY.EDU> <34227@linus.UUCP>
Sender: usenet@agate.BERKELEY.EDU
Reply-To: weemba@garnet.berkeley.edu (Obnoxious Math Grad Student)
Organization: Brahms Gang Posting Central
Lines: 62
In-reply-to: marsh@mitre-bedford.ARPA (Ralph J. Marshall)

In an article, now expired here, ???? asked me for references.  I find
this request strange, since at least one of my references was given in
the very article being replied to, although not spelled out as such.

Anyway, let me recommend the following works by neurophysiologists:

G M Edelman _Neural Darwinism: The Theory of Neuronal Group Selection_
(Basic Books, 1987)

C A Skarda and W J Freeman "How brains make chaos in order to make sense
of the world", _Behavioral and Brain Sciences_ (1987) 10:2, pp 161-195.

These researchers start by looking at *real* brains and *real* EEGs; they
work with what is known about *real* biological systems, and derive very
intriguing connectionist-like models.  To me, *this* is science.

GME rejects all the standard categories about the real world as the
starting point for anything.  He views the brain as--yes, a Society of
Mind--but in this case a *biological* society whose basic unit is the
neuronal group, one that develops by these neuronal groups evolving in
classical Darwinian competition with one another, as stimulated by their
environment.
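
For anyone who wants the flavor of "selection, not instruction" in code
rather than prose, here is a deliberately crude sketch.  It is NOT
Edelman's model (every name and number in it is invented for
illustration); it only shows the bare selectionist loop his theory is
built on: variation among neuronal groups, differential amplification by
the environment, and no teacher anywhere.

    # Cartoon only, not Edelman's model.  A "neuronal group" is reduced
    # to a vector of random connection strengths; groups whose responses
    # happen to suit a fixed environment get copied (with variation),
    # the rest are weeded out.
    import random

    GROUP_SIZE = 8     # inputs each group sees
    POPULATION = 50    # competing neuronal groups
    ROUNDS = 200       # rounds of environmental exposure
    REPLACED = 5       # weakest groups replaced per round

    def random_group():
        return [random.uniform(-1, 1) for _ in range(GROUP_SIZE)]

    def response(group, stimulus):
        # How strongly this group fires for a given stimulus.
        return sum(w * s for w, s in zip(group, stimulus))

    # A fixed environment of a few stimuli: it selects, it never instructs.
    environment = [[random.uniform(-1, 1) for _ in range(GROUP_SIZE)]
                   for _ in range(3)]

    def fitness(group):
        return sum(abs(response(group, stimulus)) for stimulus in environment)

    groups = [random_group() for _ in range(POPULATION)]
    for _ in range(ROUNDS):
        # Darwinian step: rank by fitness, replace the weakest with noisy
        # copies of the strongest.  No group is ever told a "right" answer.
        groups.sort(key=fitness)
        for i in range(REPLACED):
            groups[i] = [w + random.gauss(0, 0.05) for w in groups[-(i + 1)]]

    print("mean fitness after selection:",
          sum(fitness(g) for g in groups) / POPULATION)

Run it and the population's mean fitness climbs, even though nothing in
the loop ever tells any group what it was supposed to compute.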

CAS & WJF have developed a rudimentary chaotic model based on the study
of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
actual parameters that describe actual rabbit brains, and got chaotic,
EEG-like results.
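
For those without the BBS paper handy, here is the recipe in miniature.
The equations below are NOT Skarda & Freeman's; the textbook Lorenz
system stands in for the real olfactory-bulb ODEs.  The point is only the
workflow: write down coupled ODEs, fix the parameters, integrate, and
watch the trace refuse to settle into a fixed point or a clean cycle.

    # Illustration only: the standard Lorenz system (textbook parameters)
    # stands in for Skarda & Freeman's equations, just to show the recipe:
    # coupled ODEs, fixed parameters, numerical integration, and an
    # aperiodic "EEG-like" trace at the end.

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz equations, a standard chaotic ODE.
        x, y, z = state
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def rk4_trace(f, state, dt=0.01, steps=5000):
        # Crude fixed-step 4th-order Runge-Kutta; returns the x-component.
        trace = []
        for _ in range(steps):
            k1 = f(state)
            k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
            k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
            k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
            state = tuple(s + (dt / 6.0) * (a + 2 * b + 2 * c + d)
                          for s, a, b, c, d in zip(state, k1, k2, k3, k4))
            trace.append(state[0])
        return trace

    trace = rk4_trace(lorenz, (1.0, 1.0, 1.0))
    # The trace never settles into a fixed point or a clean limit cycle;
    # that aperiodicity is the qualitative point about the olfactory EEG.
    print(trace[:10])
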
------------------------------------------------------------------------
In article <34227@linus.UUCP>, marsh@mitre-bedford (Ralph J. Marshall) writes:
>	"The ability to learn or understand or to deal with new or
>	 trying situations."

>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Well, sure.  So what?  Everyone's in favor of apple pie.
------------------------------------------------------------------------
In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:

>Oh boy. Just wonderful. We have people who have never done AI arguing
>about whether or not it is a science [...]

We've also got, I think, a lot of people here who've never studied the
philosophy of science.  Join the crowd.

>May I also inform the above participants that a MAJORITY of AI
>research is centered around some of the following:

>[a list of topics]

Which sure sounded like programming/engineering to me.

>		   As it happens, I am doing simulations of animal
>behavior using Society of Mind theories. So I do lots of learning and
>knowledge acquisition.

Well good for you!  But are you doing SCIENCE?  As in:

If your simulations turn out to have only the slightest relevance to
ethology, is your advisor going to tell you to chuck everything and try
again?  I doubt it.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720