Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site cvaxa.UUCP
Path: utzoo!watmath!clyde!burl!ulysses!unc!mcnc!philabs!cmcl2!seismo!mcvax!ukc!warwick!cvaxa!aarons
From: aarons@cvaxa.UUCP (Aaron Sloman)
Newsgroups: net.ai
Subject: Minsky's definition of AI
Message-ID: <162@cvaxa.UUCP>
Date: Tue, 29-Oct-85 05:33:05 EST
Article-I.D.: cvaxa.162
Posted: Tue Oct 29 05:33:05 1985
Date-Received: Sun, 3-Nov-85 05:31:50 EST
Organization: Univ of Sussex, Cognitive Studies, UK
Lines: 134
Xpath: warwick ubu

-- Do we need a good definition of AI? -----------------------

Marvin Minsky once defined Artificial Intelligence as '... the science of
making machines do things that would require intelligence if done by men'.
I don't know if he still likes this, but it is often quoted with approval,
even by at least one recent net-user.

A slightly different definition, similar in spirit but allowing for shifting
standards, is given in the textbook on AI by Elaine Rich (McGraw-Hill 1983):
'... the study of how to make computers do things at which, at the moment,
people are better.'

There are several problems with these definitions.

(a) They suggest that AI is primarily a branch of engineering concerned with
making machines do things (though Minsky's use of the word 'science' hints at
a study of general principles).

(b) Perhaps the main objection is their concern with WHAT is done rather than
HOW it is done. There are lots of things computers do which would require
intelligence if done by people but which have nothing to do with AI, because
there are unintelligent ways of getting them done if you have enough speed.
E.g. calculators can do complex sums which would require intelligence if done
by people. Even simple sums done by a very young child would be regarded as
an indication of high intelligence, though not if done by a simple mechanical
calculator. Was building calculators to go faster or be more accurate than
people once AI? For Rich, does it matter in what way people are currently
better?

(c) Much AI (e.g. work reported at IJCAI) is concerned with studying general
principles in a way that is neutral as to whether it is used for making new
machines or for explaining how existing systems (e.g. people or squirrels)
work. For instance, John McCarthy is said to have coined the term 'Artificial
Intelligence', but it is clear that his work is of this more general kind, as
is much of the work by Minsky and others at MIT. Many of those who use
computers in AI do so merely in order to test, refine, or demonstrate their
theories about how people do something, or, more profoundly, because only
with the aid of computational concepts can we hope to express theories with
rich enough explanatory power. (Which does not mean that present-day
computational concepts are sufficient.)

For these reasons, the 'Artificial' part of the name is a misnomer, and
'Cognitive Science' or 'Computational Cognitive Science' might have been
better names. But it is too late to change the name now, despite the British
Alvey Programme's use of "IKBS" (Intelligent Knowledge Based Systems) instead
of "AI".

-- Towards a better definition -------------------------------

Winston, in the second edition of his book on AI (Addison Wesley, 1984),
defines AI as 'the study of ideas that enable computers to be intelligent',
but quickly moves on to identify two goals:

    'to make computers more useful'
    'to understand the principles that make intelligence possible'.
His second goal captures the spirit of my complaint about the other
definitions. (I made similar points in 'The Computer Revolution in
Philosophy' (Harvester and Humanities Press, 1978; now out of print).)

All this assumes that we know what intelligence is: and indeed we can
recognise instances even when we cannot define it, as with many other general
concepts, like 'cause', 'mind', 'beauty', 'funniness'.

Can we hope to have a study of general principles concerning X without a
reasonably clear definition of X? Since almost any behaviour can be the
product either of an intelligent system (e.g. one using false or incomplete
beliefs, or bizarre motives) or of an unintelligent system (e.g. an
enormously fast computer using an enormously large look-up table), it is
important to define intelligence in terms of HOW the behaviour is produced.

-- To kick off discussion here is a suggestion ---------------

Intelligent systems are those which:

(A) are capable of using structured symbols (e.g. sentences or states of a
network; i.e. not just quantitative measures, like temperature or
concentration of blood sugar) in a variety of roles, including the
representation of facts (beliefs), instructions (motives, desires,
intentions, goals), plans, strategies, selection principles, etc.

(B) are capable of being productively lazy (i.e. able to use the information
expressed in the symbols in order to achieve goals with minimal effort).

Although it may not be obvious, various kinds of learning capabilities can be
derived from (B), which is why I have not included learning as part of the
definition, as some would do.

There are many aspects of (A) and (B) which need to be enlarged and
clarified, including the notion of 'effort' and how different sorts can be
minimised, relative to the system's current capabilities. For instance, there
are situations in which the intelligent (productively lazy) thing to do is to
develop an unintelligent but fast and reliable way to do something which has
to be done often. (E.g. learning multiplication tables; a toy sketch in the
postscript below makes this concrete.)

Given a suitable notion of what an intelligent system is, I would then define
AI as the study of principles relevant to explaining or designing actual and
possible intelligent systems, including the investigation of both general
design requirements and particular implementation tradeoffs. The reference to
'actual' systems includes the study of human and animal intelligence and its
underlying principles, and the reference to 'possible' systems covers
principles of engineering design for new intelligent systems, as well as
possible organisms that might develop one day.

The study of ranges of possibilities (what the limits and tradeoffs are, how
different possibilities are related, how they can be generated, etc.) is a
part of any theoretical understanding, and good AI MUST be theoretically
based. There is lots of bad AI -- what John McCarthy once referred to as the
'look Ma, no hands' variety.

The definition could be tied more closely to human and animal intelligence by
requiring the ability to cope with multiple motives in real time, with
resource constraints, in an environment which is partly friendly and partly
unfriendly. But probably (B) can be interpreted as including this as a
special case. More generally, it is necessary to say something about the
nature of the goals and the structure of the environment in which they are to
be achieved.

I've probably gone on too long for a net-wide discussion. Comments welcome.
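-- Postscript: a toy sketch of 'productive laziness' ----------

To make condition (B) and the multiplication-table example a little more
concrete, here is a minimal sketch, written in Python purely as a convenient
notation; the names 'slow_product' and 'lazy_product' are illustrative only
and not part of the proposed definition. The system works each answer out
laboriously when it must, but remembers answers it has needed before, so that
frequently repeated questions come to be handled by exactly the kind of fast,
unintelligent table look-up mentioned above.

    # A caricature of 'productive laziness': compute from first
    # principles when forced to, but cache the result so that a
    # recurring question is later answered by rote look-up.

    table = {}    # the growing look-up table ('rote memory')

    def slow_product(a, b):
        """Work out a * b the laborious way, by repeated addition."""
        total = 0
        for _ in range(b):
            total += a
        return total

    def lazy_product(a, b):
        """Answer from the table if possible; otherwise compute and remember."""
        if (a, b) not in table:
            table[(a, b)] = slow_product(a, b)
        return table[(a, b)]

    print(lazy_product(7, 8))   # first call: computed by repeated addition
    print(lazy_product(7, 8))   # later calls: mere retrieval -- fast, reliable,
                                # and, taken on its own, quite unintelligent

Of course the sketch leaves out everything that makes the question hard: the
symbols here represent only arithmetical facts, not beliefs, motives or
plans, and the 'laziness' is built in by hand rather than chosen by the
system itself. But it does show how an intelligent strategy can deliberately
produce an unintelligent but efficient mechanism.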
Aaron Sloman
-- 
Aaron Sloman, U of Sussex, Cognitive Studies, Brighton, BN1 9QN, England
uucp:       ...mcvax!ukc!cvaxa!aarons
arpa/janet: aarons%svga@uk.ac.ucl.cs OR aarons%svga@ucl-cs