Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!utgpu!water!watmath!clyde!cbosgd!ucbvax!BRILLIG.UMD.EDU!hendler
From: hendler@BRILLIG.UMD.EDU.UUCP
Newsgroups: comp.ai.digest
Subject: Re:  AIList Digest   V5 #171
Message-ID: <8707062225.AA18518@brillig.umd.edu>
Date: Mon, 6-Jul-87 18:25:51 EDT
Article-I.D.: brillig.8707062225.AA18518
Posted: Mon Jul  6 18:25:51 1987
Date-Received: Sat, 11-Jul-87 13:44:48 EDT
Sender: daemon@ucbvax.BERKELEY.EDU
Distribution: world
Organization: The ARPA Internet
Lines: 20
Approved: ailist@stripe.sri.com

While I have some quibbles with Don N.'s long statement on AI viz (or
vs.) science, I think he gets close to what I have long felt is a key
point -- that the move toward formalism in AI, while important
in the change of AI from a pre-science (alchemy was Drew McDermott's
term) to a science, is not enough.  For a field to make that transition,
an experimental methodology is needed.  In AI we have the potential
to decide what counts as experimentation (with implementation being
an important consideration), but we have not really made any serious
strides in that direction.  When I publish work on planning and
claim ``my system makes better choices than system X,'' I cannot verify
this other than by showing some examples that my system handles that X
can't.  But of course, there is no way of establishing that X couldn't
handle examples mine can't, and so on.  Instead we can end up forming camps of
beliefs (the standard proof methodology in AI) and arguing -- sometimes
for the better, sometimes for the worse.
 While I have no solution for this, I think it is an important issue
for consideration, and I thank Don for provoking this discussion.

 -Jim Hendler