Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!ut-sally!im4u!rutgers!princeton!mind!harnad
From: harnad@mind.UUCP (Stevan Harnad)
Newsgroups: comp.ai,comp.cog-eng
Subject: Re: The symbol grounding problem
Message-ID: <976@mind.UUCP>
Date: Sun, 5-Jul-87 01:29:02 EDT
Article-I.D.: mind.976
Posted: Sun Jul  5 01:29:02 1987
Date-Received: Sun, 5-Jul-87 05:37:31 EDT
References: <764@mind.UUCP> <768@mind.UUCP> <770@mind.UUCP> <6174@diamond.BBN.COM> <605@gec-mi-at.co.uk>
Organization: Cognitive Science, Princeton University
Lines: 24
Summary: Grounding is not just hooking peripherals to a computer
Xref: mnetor comp.ai:624 comp.cog-eng:188

In Article 184 of comp.cog-eng, adam@gec-mi-at.co.uk (Adam Quantrill)
of Marconi Instruments Ltd., St. Albans, UK, writes:

>	It seems to me that the Symbol Grounding problem is a red herring.
>	If I took a partially self-learning program and data (P & D) that had 
>	learnt from a computer with 'sense organs', and ran it on a computer
>	without, would the program's output become symbolically ungrounded?...
>	[or] if I myself wrote P & D without running it on a computer at all?

This begs two of the central questions that have been raised in
this discussion: (1) Can one speak of grounding in a toy device (i.e.,
a device whose performance capacities fall short of those needed to
pass the Total Turing Test, or TTT)? (2) Could the TTT be passed by a
mere symbol-manipulating module connected to transducers and
effectors? If a device that could pass the TTT were cut off from its
transducers, it would be like the philosophers' "brain in a vat," and
a brain in a vat is not obviously a digital computer running programs.
-- 

Stevan Harnad                                  (609) 921-7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU