Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!ut-sally!im4u!rutgers!ames!ptsfa!ihnp4!twitch!homxb!houdi!marty1
From: marty1@houdi.UUCP (M.BRILLIANT)
Newsgroups: comp.ai,comp.cog-eng
Subject: Re: The symbol grounding problem
Message-ID: <1206@houdi.UUCP>
Date: Sat, 4-Jul-87 22:47:25 EDT
Article-I.D.: houdi.1206
Posted: Sat Jul  4 22:47:25 1987
Date-Received: Sun, 5-Jul-87 06:36:35 EDT
References: <764@mind.UUCP> <768@mind.UUCP> <770@mind.UUCP> <6174@diamond.BBN.COM> <605@gec-mi-at.co.uk>
Organization: AT&T Bell Laboratories, Holmdel
Lines: 51
Keywords: symbols, grounding, red, herring, blind
Summary: A seeing-eye robot would need symbol grounding.
Xref: mnetor comp.ai:626 comp.cog-eng:190

In article <605@gec-mi-at.co.uk>, adam@gec-mi-at.co.uk (Adam Quantrill) writes:
> It seems to me that the Symbol Grounding problem is a red herring.

As one who was drawn into a problem that is not my own, let me
try answering that disinterestedly.  To begin with, a "red
herring" is something drawn across the trail that distracts the
pursuer from the real goal.  Would Adam tell us what his real
goal is? 

Actually, my own real goal, from which I was distracted by the
symbol grounding problem, was an expert system that would (like
Adam's last example) ground its symbols only in terminal I/O. 
But that's a red herring in the symbol grounding problem.

> .....  If I took a partially self-learning program and data (P & D) that had
> learnt from a computer with 'sense organs', and ran it on a computer without,
> would the program's output become symbolically ungrounded?

No, because the symbolic data were learned from sensory data to
begin with - like a sighted person who became blind.

> Similarly, if I myself wrote P & D without running it on a computer at all,
> [and came] up with identical P & D by analysis.  Does that make the original
> P & D running on the computer with 'sense organs' symbolically ungrounded?

No, as long as the original program learned its symbolic data
from its own sensory data, not by having them defined by a
person in terms of his or her sensory data.

> A computer can always interact via the keyboard & terminal screen, (if those
> are the only 'sense organs'), grounding its internal symbols via people who
> react to the output, and provide further stimulus.

That's less challenging and less useful than true symbol
grounding.  One problem that requires symbol grounding (more
useful and less ambitious than the Total Turing Test) is a
seeing-eye robot: a machine with artificial vision that could
guide a blind person by giving and taking verbal instructions.
It might use a Braille keyboard instead of speech, but the
"terminal I/O" must be "grounded" in visual data from, and
constructive interaction with, the tangible world.  The robot
could learn words for its visual data by talking to people who
could see, but it would still have to relate the verbal symbols
to visual data, and give meaning to the symbols in terms of its
ultimate goal (keeping the blind person out of trouble).

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1