Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!tut.cis.ohio-state.edu!gem.mps.ohio-state.edu!csd4.csd.uwm.edu!bionet!agate!shelby!lindy!news
From: GA.CJJ@forsythe.stanford.edu (Clifford Johnson)
Newsgroups: comp.ai
Subject: Re: Are neural nets stumped by change?
Message-ID: <4356@lindy.Stanford.EDU>
Date: 15 Aug 89 17:28:31 GMT
Sender: news@lindy.Stanford.EDU (News Service)
Distribution: usa
Lines: 49

In article ,
jk3k+@andrew.cmu.edu (Joe Keane) writes:
>In article (Clifford Johnson) writes:
>>  In adapting to change, they [NNs] only do so according to
>>statistical/numerical rules that are bounded by their (implicit
>>or explicit) preprogrammed characterizations and
>>parameterizations of their inputs.
>
>Some neural networks have carefully hand-crafted topologies.  But if you use a
>standard topology and training algorithm in a new domain, where is the
>``preprogramming''?

That's why I was careful to state "implicit or explicit" re the
"preprogramming."  Whatever the topology, a definite set of
distribution functions is implied.  True, convergence of
recognition outputs to fit a very wide range of inputs may
be engineered, but convergence takes time.  It proceeds in
steps determined by the topology, and assumes a constant
sampling space.  The lack of constancy, i.e. change, is what
stumps it.
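The point that convergence proceeds in steps and lags behind a change can be sketched with a toy online learner (hypothetical illustration, not any particular net): the estimator tracks a statistic of its inputs, and when the input distribution jumps, it re-converges only gradually, at a rate fixed by its step size.

```python
# Hypothetical sketch: an online learner re-converges only gradually after
# a change in the input distribution; adaptation speed is bounded by the
# fixed step size, analogous to a topology fixed in advance.

def track_mean(stream, lr=0.1):
    """Stochastic approximation of the input mean, one step per sample."""
    estimate = 0.0
    history = []
    for x in stream:
        estimate += lr * (x - estimate)  # gradient step toward the sample
        history.append(estimate)
    return history

# The source mean jumps from 1.0 to 5.0 halfway through the stream.
stream = [1.0] * 50 + [5.0] * 50
hist = track_mean(stream)
print(round(hist[49], 3))  # converged near 1.0 under the old regime
print(round(hist[54], 3))  # still far from 5.0 just after the change
print(round(hist[99], 3))  # near 5.0 only after many more steps
```

The lag between the change at step 50 and eventual re-convergence is the "takes time" in the argument above.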

>Similarly, with a standard topology, you aren't giving it
>any ``parameterization''; it learns them all by itself.

Yes and no.  The parameter-space is basically bounded by the
topology.  You can't, for example, have more degrees of freedom
learned by the system than exist in its topology.  And again, a
change in the external or real parameters is only relearned
over time, which is my main point.
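The claim that the parameter space is bounded by the topology can be made concrete with a small sketch (illustrative only; the layer sizes are invented): for a fully connected feed-forward net, the number of trainable degrees of freedom is fixed entirely by the topology, before any data is seen.

```python
# Hypothetical sketch: the trainable parameter count of a fully connected
# feed-forward net is determined by its topology alone, before training.

def degrees_of_freedom(layer_sizes):
    """Total weights plus biases implied by a feed-forward topology."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

# A 64-16-10 topology: learning adjusts these parameters but can never
# add degrees of freedom beyond them.
print(degrees_of_freedom([64, 16, 10]))  # 64*16+16 + 16*10+10 = 1210
```

Whatever the training data, the learned parameters live inside this fixed-dimensional space.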

>>Thus, a change in the basic
>>*type* of pattern is beyond their cognition.
>
>This doesn't follow.  It may seem intuitive to you, but i think it's false.
>Fill in some more steps and i'll tell you where i think the problem is.

If a neural-net optical character reader is suddenly confronted
with Chinese characters, it isn't going to learn to read
them if its only classification choices are Arabic.
Continued training might result in a systematic many-to-one
translation of Chinese characters into their "closest" Arabic
equivalents, "closest" being dependent on the aforesaid net's
topological design.

Yes, a better net might be built that includes the capability to
develop further classifications (again, this takes time), but
it wouldn't have a clue as to what the new patterns "mean" in
terms of decision-making that was originally defined only in
Arabic terms.