Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!uwmcsd1!bbn!rochester!pt.cs.cmu.edu!cadre!pitt!cisunx!vangelde
From: vangelde@cisunx.UUCP (Timothy J Van)
Newsgroups: comp.ai
Subject: AI (GOFAI) and cognitive psychology
Message-ID: <11009@cisunx.UUCP>
Date: 13 Jul 88 04:48:34 GMT
Organization: Univ. of Pittsburgh, Comp & Info Sys
Lines: 25

What with the connectionist bandwagon, everyone seems to be getting a lot
clearer about just what AI is and what sort of a picture of cognition
it embodies.  The short story, of course, is that AI claims that thought
in general and intelligence in particular are the rule-governed manipulation
of symbols.  So AI is committed to symbolic representations with a
combinatorial syntax and formal rules defined over them.  The implementation
of those rules is computation.
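
For concreteness, here is a minimal, made-up sketch (my own toy example, not
drawn from any particular AI system) of what that picture amounts to:
symbol structures built by a combinatorial syntax, and a formal rule applied
purely by matching their shape, with no regard to what the symbols mean.

    # Toy illustration only: symbol structures with a combinatorial syntax,
    # and a formal rule applied by matching structure, not meaning.
    # A "symbol structure" is just a nested tuple, e.g. ("AND", "P", ("NOT", "Q")).

    def simplify_double_negation(expr):
        """Formal rule: ("NOT", ("NOT", X)) -> X, applied recursively."""
        if isinstance(expr, tuple):
            if expr[0] == "NOT" and isinstance(expr[1], tuple) and expr[1][0] == "NOT":
                return simplify_double_negation(expr[1][1])
            return tuple(simplify_double_negation(part) for part in expr)
        return expr  # atomic symbol, e.g. "P"

    print(simplify_double_negation(("AND", ("NOT", ("NOT", "P")), "Q")))
    # -> ("AND", "P", "Q")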

Supposedly, the standard or "classical" view in cognitive psychology is
committed to exactly the same picture in the case of human cognition, and 
so goes around devising models and experiments based on these commitments.


My question is: how much of cognitive psychology literally fits this kind
of characterization?  Some classics, for example the early Shepard and
Metzler experiments on image rotation, don't seem to fit the description
very closely at all.  Others, such as the SOAR system, often seem to 
remain pretty vague about exactly how much of their symbolic machinery
they are really attributing to the human cognizer.

So, to make my question a little more concrete: I'd be interested to hear
people's favorite examples of systems that REALLY DO FIT THE DESCRIPTION.
(Or any other interesting comments, of course.)

Tim van Gelder