Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!rutgers!ucla-cs!zen!ucbvax!SDCSVAX.UCSD.EDU!norman%ics
From: norman%ics@SDCSVAX.UCSD.EDU (Donald A. Norman)
Newsgroups: comp.ai.digest
Subject: Why AI is not a science
Message-ID: <8707031429.AA11064@sunl.ICS>
Date: Fri, 3-Jul-87 10:29:41 EDT
Article-I.D.: sunl.8707031429.AA11064
Posted: Fri Jul  3 10:29:41 1987
Date-Received: Tue, 7-Jul-87 01:28:52 EDT
Sender: usenet@ucbvax.BERKELEY.EDU
Distribution: world
Organization: Donald A. Norman, UCSD Institute for Cognitive Science
Lines: 75
Approved: ailist@stripe.sri.com


A private message to me in response to my recent AI List posting,
coupled with general observations, led me to realize why so many of us
otherwise friendly folks in the sciences that neighbor AI can be so
frustrated with AI's casual attitude toward theory: AI is not a science,
and its practitioners are woefully untutored in scientific method.

At the recent MIT conference on Foundations of AI, Nils Nilsson stated
that AI was not a science, that it had no empirical content, nor
claims to empirical content, that it said nothing of any empirical
value.  AI, stated Nilsson, was engineering.  No more, no less.  (And
with that statement he left to catch an airplane, stopping further
discussion.)  I objected to the statement, but now that I consider it
more deeply, I believe it to be correct and to reflect the
dissatisfaction people like me (i.e., "real scientists") feel with AI.
The problem is that most folks in AI think they are scientists and
think they have the competence to pronounce scientific theories about
almost any topic, but especially about psychology, neuroscience, or
language.  Note that perfectly sensible disciplines such as
mathematics and philosophy are also not sciences, at least not in the
normal interpretation of that word.  It is no crime not to be a
science.  The crime is to think you are one when you aren't.

AI worries a lot about methods and techniques, with many books and
articles devoted to these issues.  But by methods and techniques I
mean such topics as the representation of knowledge, logic,
programming, control structures, etc.  None of these methods concerns
content.  And there is the flaw: nobody in the field of
Artificial Intelligence speaks of what it means to study intelligence,
of what scientific methods are appropriate, what empirical methods are
relevant, what theories mean, and how they are to be tested.  All the
other sciences worry a lot about these issues, about methodology,
about the meaning of theory and what the appropriate data collection
methods might be.  AI is not a science in this sense of the word.
	Read any standard text on AI: Nilsson or Winston or Rich or
	even the multi-volume handbook.  Nothing on what it means to
	test a theory, to compare it with others, nothing on what
	constitutes evidence, or on how to conduct experiments.
	Look at any science and you will find lots of books on
	experimental method, on the evaluation of theory.  That is why
	statistics are so important in psychology or biology or
	physics, or why counterexamples are so important in
	linguistics.  Not a word on these issues in AI.
The result is that practitioners of AI have no experience with the
complexity of experimental data and no understanding of scientific
method.  They are content to argue their points through rhetoric,
example, and the demonstration of programs that mimic behavior thought
to be relevant.  Formal proof methods are used to describe the formal
power of systems, but that rigor in the mathematical analysis is not
matched by any similar rigor in the theoretical analysis and
evaluation of the content.

This is why other sciences think that folks in AI are off-the-wall,
uneducated in scientific methodology (the truth is that they are), and
completely incompetent at the doing of science, no matter how
brilliant at developing the mathematics of representation or
formal programming methods.  AI will contribute to the A, but will
not contribute to the I unless and until it becomes a science and
develops an appreciation for the experimental methods of science.  AI
might very well develop its own methods -- I am not trying to argue
that the existing methods of existing sciences are necessarily appropriate
-- but at the moment there is only clever argumentation and proof
through made-up example (the technical expression for this is "thought
experiment" or "Gedanken experiment").  Gedanken experiments are not
accepted methods in science: they are merely suggestive, a source
of ideas, not evidence in the end.

don norman

Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa    	{decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu	norman%sdics.ucsd.edu@RELAY.CS.NET