Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!cs.utexas.edu!csd4.csd.uwm.edu!bionet!agate!shelby!lindy!news
From: GA.CJJ@forsythe.stanford.edu (Clifford Johnson)
Newsgroups: comp.ai
Subject: Are neural nets stumped by change?
Message-ID: <4331@lindy.Stanford.EDU>
Date: 14 Aug 89 17:40:28 GMT
Sender: news@lindy.Stanford.EDU (News Service)
Lines: 45

CJ> [Neural nets] are limited in the patterns that they recognize,
CJ> and are stumped by change.
LS>                                      *flame bit set*
LS> Go read about Adaptive Resonance Theory (ART) before making sweeping
LS> and false generalisations of this nature!

CJ> I would have thought stochastic convergence theory more relevant
CJ> than resonance theory.

LS> I refer to "stumped by change", which admittedly is rather
LS> inexact in itself. I am not familiar with "stochastic convergence",
LS> although perhaps there is another name for it?

In my original message I did clarify this somewhat.  The point is
that neural nets in essence automate Bayesian types of induction
algorithms.  In adapting to change, they do so only according to
statistical/numerical rules that are bounded by their (implicit
or explicit) preprogrammed characterizations and
parameterizations of their inputs.  Thus, first, a change in the
basic *type* of pattern is beyond their cognition.  Second, a
change in the parameters of patterns they can adaptively
recognize is implemented only over the time it takes for them to
make enough mistakes that the earlier statistics are in effect
overwritten.  Third, I do not dispute that the
characterizations/parameterizations of neural nets are complex
enough to provide for "differences" (which could be called
"changes") in individual sets of observations (e.g. differently
shaped letters in optical readers).
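To make the second point concrete, here is a crude sketch (in
Python, purely for illustration; the online nearest-mean rule and
the size of the shift are assumptions of mine, not a model of any
particular net):

    # Sketch: an online classifier that estimates each class mean
    # from its observations.  When the true means shift, the
    # decision boundary lags until enough new samples have diluted
    # the earlier statistics.
    import random

    means = {0: 0.0, 1: 0.0}    # running estimates of class means
    counts = {0: 0, 1: 0}

    def update(label, x):
        counts[label] += 1
        means[label] += (x - means[label]) / counts[label]

    def classify(x):
        # nearest-mean rule
        return 0 if abs(x - means[0]) <= abs(x - means[1]) else 1

    random.seed(0)
    # Phase 1: class 0 centred at -2, class 1 at +2.
    for _ in range(500):
        label = random.randint(0, 1)
        update(label, random.gauss(-2 if label == 0 else 2, 1))

    # Phase 2: the classes swap sides -- a change in the
    # parameters of the pattern.
    errors = 0
    for _ in range(500):
        label = random.randint(0, 1)
        x = random.gauss(2 if label == 0 else -2, 1)
        errors += classify(x) != label
        update(label, x)
    print("errors in the 500 trials after the change:", errors)

The error count stays high long after the change, because the old
statistics must in effect be overwritten before the rule catches
up.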

Stochastic convergence addresses the rate at which statistical
induction algorithms converge to their asymptotic solutions, if
and when they do.  Given stable input distributions (i.e. no
changes!), convergence has been shown for all but the most
pathological kinds of input - but in many cases the process
nevertheless takes many observations.  Adaptation does not even
begin until after the first mistaken classification.
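By way of illustration only (this is not Van Ryzin's
construction; the Gaussian classes and the plug-in threshold are
assumptions of mine), here is a sketch of how slowly an estimated
decision boundary settles toward its asymptotic value when the
input distribution is held fixed:

    # Sketch: estimate the boundary between N(-1,1) and N(+1,1)
    # as the midpoint of the two sample means, and watch the
    # error in the estimate shrink roughly like 1/sqrt(n).
    import random, statistics

    random.seed(1)
    for n in (10, 100, 1000, 10000):
        gaps = []
        for _ in range(100):                # independent runs
            xs0 = [random.gauss(-1, 1) for _ in range(n)]
            xs1 = [random.gauss(+1, 1) for _ in range(n)]
            est = (statistics.mean(xs0) + statistics.mean(xs1)) / 2
            gaps.append(abs(est - 0.0))     # optimal boundary is 0
        print(n, "samples per class -> mean boundary error:",
              round(statistics.mean(gaps), 4))

Even with no change at all in the inputs, squeezing the last bit
of error out of the estimate takes orders of magnitude more
observations.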

Thanks for the reference.  The kind of stochastic convergence
that applies to generic neural net methodologies was worked on by
J. Van Ryzin in the 1960s.  Incidentally, the asymptotic result
is not an approach to certainty of pattern recognition, but an
approach to the minimum attainable probability of
misclassification.  See, e.g., J. Van Ryzin, "Repetitive Play in
Finite Statistical Games with Unknown Distributions," Annals of
Mathematical Statistics, Vol. 37, No. 4, Aug. 1966.
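To illustrate that last point (again with assumed overlapping
Gaussian classes, not anything taken from the paper): even the
exactly optimal rule keeps misclassifying at the minimum rate.

    # Sketch: with classes N(-1,1) and N(+1,1) and equal priors,
    # the best possible rule (threshold at 0) still errs at the
    # Bayes rate Phi(-1), about 0.159.  Asymptotically, adaptive
    # rules approach this floor, not zero error.
    import random
    from math import erf, sqrt

    bayes_risk = 0.5 * (1 + erf(-1 / sqrt(2)))   # Phi(-1)

    random.seed(2)
    errors, trials = 0, 100000
    for _ in range(trials):
        label = random.randint(0, 1)
        x = random.gauss(-1 if label == 0 else 1, 1)
        errors += (0 if x <= 0 else 1) != label
    print("Bayes risk:", round(bayes_risk, 4),
          " observed error of the optimal rule:",
          round(errors / trials, 4))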