Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site water.UUCP
Path: utzoo!watmath!watnot!water!rggoebel
From: rggoebel@water.UUCP (Randy Goebel LPAIG)
Newsgroups: net.ai
Subject: Re: Carl E. Hewitt's ``Prolog will fail...so will LOGIC...''
Message-ID: <782@water.UUCP>
Date: Sat, 17-Aug-85 21:27:31 EDT
Article-I.D.: water.782
Posted: Sat Aug 17 21:27:31 1985
Date-Received: Tue, 20-Aug-85 05:44:08 EDT
References: <9955@ucbvax.ARPA>
Organization: U of Waterloo, Ontario
Lines: 65

> Prolog (like APL before it) will fail as the foundation for Artificial
> Intelligence because of competition with Lisp.  There are commercially
> viable Prolog implementations written in Lisp but not conversely.

Is there anyone around who claimed that APL would be the foundation for
AI?  Will they admit it?  I don't believe that the lack of commercially
viable LISP implementations written in Prolog has anything to do with
the question.  Functional programmers already know how BAD LISP is as a
functional programming language; in fact, there is now much research
directed at understanding how the virtues of logic and functional
programming can be combined (e.g., see Joseph Goguen's work).  It seems
that it is the LISP machine manufacturers who have felt the demand for
Prolog.  You don't suppose that SYMBOLICS would develop microcode for
Prolog if nobody would buy it?

> LOGIC as a PROGRAMMING Language will fail as the foundation for AI because:
>
> 1. Logical inference cannot be used to infer the decisions that need to be
>    taken in open systems because the decisions are not determined by
>    system inputs.

One can compute what is computable with ``LOGIC.''  The notion of
deduction demonstrated by the Socrates syllogism is the same one that
shows how sorting can be done axiomatically.
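The axiomatic view of sorting can be made concrete with a small sketch
(mine, not from the original exchange; Python stands in here for a
logic-programming notation): a list Ys is a sort of Xs iff Ys is a
permutation of Xs and Ys is ordered, and "computation" is a search for a
candidate satisfying those two axioms.

```python
from itertools import permutations

def ordered(ys):
    # Axiom 1: a list is ordered iff every adjacent pair is non-decreasing.
    return all(a <= b for a, b in zip(ys, ys[1:]))

def sort_by_deduction(xs):
    # Axiom 2: the sorted list is a permutation of the input.
    # Deduction as search: enumerate permutations until one satisfies
    # the ordering axiom -- the same declarative reading Prolog gives
    # to the classic permutation-sort program.
    for ys in permutations(xs):
        if ordered(ys):
            return list(ys)

print(sort_by_deduction([3, 1, 2]))  # -> [1, 2, 3]
```

This is hopelessly inefficient as an algorithm, but that is the point of
the declarative reading: the axioms say *what* a sorted list is, and any
procedure that satisfies them (efficient or not) counts as an
implementation.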
Saying that ``logical inference cannot be used...'' is either claiming
that what must be inferred in open systems is not computable, or is
based on some fundamental misconception about what logic is.  There has
been some work on combining meta and object level logics (e.g.,
Weyhrauch, Bowen and Kowalski) which suggests that all is not lost.
There has also been some work on viewing the external world as just
another knowledge base, with a human serving as transducer...surely
Hewitt is not suggesting this is misguided?

> 2. Logic does not cope well with the contradictory knowledge bases inherent
>    in open systems.  It leaves out counterarguments and debate.

My first reaction is to say ``neither do humans,'' followed by ``what
other method of reasoning about the world does?''  The use of logic is a
matter of expedience, not of promulgating dogma.  There is nothing about
logic that precludes one from positing a theory of counterargument or
debate, if one has such a theory.  There are even systems that use
deduction to construct consistent knowledge bases, and that attempt to
reason with multiple (possibly inconsistent) knowledge bases.  Nothing
about logic precludes postulating a theory of how to reason from
multiple inconsistent knowledge bases.

> 3. Taking action does not fit within the logic paradigm.

There is a lot left to learn about the semantics of destructive
assignment, and about a computational theory of action.  However, the
``logic paradigm,'' at least in AI, is to use logic as a methodology for
developing computational formalisms for reasoning about the world.
Briefly, it provides a way to talk about what our symbols mean, and a
way to talk about what our programs are supposed to do (so that we can
compare that with what they really do).  I think everyone should re-read
Pat Hayes' IJCAI '77 article entitled ``In defence of logic'' to help
balance Hewitt's opinion.

Randy Goebel
Logic Programming and Artificial Intelligence Group
University of Waterloo