Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!mimsy!oddjob!gargoyle!ihnp4!homxb!houdi!marty1
From: marty1@houdi.UUCP (M.BRILLIANT)
Newsgroups: comp.ai
Subject: Seeing-Eye robots
Message-ID: <1229@houdi.UUCP>
Date: Thu, 16-Jul-87 13:54:49 EDT
Article-I.D.: houdi.1229
Posted: Thu Jul 16 13:54:49 1987
Date-Received: Sat, 18-Jul-87 08:48:23 EDT
Organization: AT&T Bell Laboratories, Holmdel
Lines: 19
Keywords: cognition recognition symbol grounding meaning

Suppose one wanted to build a robot that does what a Seeing-Eye dog
does (that is, help a blind person get around), but communicates in
the blind person's own language instead of by pushing and pulling.

Clearly this robot does not have to imitate a human being.  But it
does have to recognize objects and associate them with the names that
humans use for them.  It also has to interpret certain situations in
its owner's terms: for instance, walking in one direction leads to
danger, and walking in another direction leads to the goal.

What problems will have to be solved to build such a robot?  Will its
hypothetical designers have to deal with the problem of mere
recognition, or the deeper problem of grounding symbols in meaning?
Could it be built by hardwiring sensors to a top-down symbolic
processor, or would it require a hybrid processor?

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1
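
[A toy sketch of the "hardwired sensors feeding a top-down symbolic
processor" option asked about above.  This is not from the original
post; every name in it (detect, advise, the label set) is an
illustrative assumption, and real perception is of course the hard,
unsolved part that the sketch simply stipulates.]

```python
# Toy sketch of a non-hybrid architecture: a hardwired perception layer
# emits discrete symbolic labels, and a purely symbolic layer phrases
# the situation in the owner's terms.  All names are hypothetical.

def detect(scene):
    """Stand-in for hardwired sensors: map raw scene features to symbols."""
    labels = []
    if scene.get("moving_object_closing"):
        labels.append("danger_ahead")
    if scene.get("curb_edge"):
        labels.append("curb")
    if scene.get("goal_visible"):
        labels.append("goal_ahead")
    return labels

def advise(labels):
    """Symbolic layer: turn detected symbols into advice in the
    owner's own language, most urgent condition first."""
    if "danger_ahead" in labels:
        return "Stop: something is approaching."
    if "curb" in labels:
        return "Curb ahead; step down."
    if "goal_ahead" in labels:
        return "The goal is straight ahead."
    return "Path is clear."

print(advise(detect({"moving_object_closing": True})))
# -> Stop: something is approaching.
```

The seam between the two functions is exactly where the symbol-grounding
question bites: here the labels mean something only because the
programmer hardwired them, which is what a hybrid design would try to
avoid.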