From: utzoo!watmath!cbostrum
Newsgroups: net.ai
Title: AI's level of operation
Article-I.D.: watmath.4393
Posted: Sun Jan 23 01:02:45 1983
Received: Sun Jan 23 01:47:16 1983

There is a controversy associated with all of the "special sciences" as
to whether there is a legitimate "level" of existence that is the object
of their study, or whether ultimately they are merely doing some sort
of physics, and that their science will only become fully understood or
legitimate when the ultimate reduction to physics is produced.
There is one classical chain which goes: physics, chemistry, biology,
psychology, social science (economics, polisci, etc.), where every element
in the chain is supposed to be reducible to the previous one. Without
disputing this, I would like to know where people think AI fits in, not
just into the chain, but in general.
A lot of talk previously went on in net.misc about making neural
models, watching them evolve, and this sort of thing. This would
imply that there is no significant "knowledge level", as Newell calls
it, and that there is no meat to taking what Dennett calls the
"intentional stance". Or perhaps it merely implies that those positions
are too difficult to get results with.
Personally, I at least hope (and presently believe) that both implications
are false. Surely there are significant things about intelligence we can
learn without going to the low-level brain stuff, and surely there must
be significant things we can learn on the "higher" level that would actually
be IMPOSSIBLE to learn on the "lower" level. It seems to me that AI is
predicated upon these optimistic beliefs.
What are others' thoughts about this?