Xref: utzoo comp.ai:4624 comp.ai.neural-nets:840
Path: utzoo!attcan!uunet!tut.cis.ohio-state.edu!purdue!gatech!mcnc!thorin!coggins!coggins
From: coggins@coggins.cs.unc.edu (Dr. James Coggins)
Newsgroups: comp.ai,comp.ai.neural-nets
Subject: Re: Connectionism, a paradigm shift?
Message-ID: <9143@thorin.cs.unc.edu>
Date: 13 Aug 89 14:34:01 GMT
References: <24241@iuvax.cs.indiana.edu> <568@berlioz.nsc.com>
Sender: news@thorin.cs.unc.edu
Reply-To: coggins@cs.unc.edu (Dr. James Coggins)
Organization: University Of North Carolina, Chapel Hill
Lines: 118

In article <568@berlioz.nsc.com> andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) writes:
>I think you should crosspost this to comp.ai.neural-nets, whose members seem
>to exhibit the usual healthy cynicism of a comp.. group; not a pack of
>zealots by any means! 

Thank you, I'm sure.

>I think that what is required to save the field from the "hype seesaw"
>is a healthy rate of generation of solid new theoretical results.

>There is a tremendous amount of high-quality work going on, bolstered by
>the application of formal mathematical techniques.

>It seems to me that this truly sets NN research apart from the much
>more "hand-waving" stuff that I encountered when looking at conventional
>AI, when expert systems were on the rise in the early- and mid-80s.
>Here one found tree traversal stuff and Bayesian statistical variations,
>definitions of "frames" and the like; the ad hoc component was significant.
>(although fuzzy set theory has to some extent set some of this on a more 
>formal footing, I have to agree).
>
>Andrew Palfreyman	There's a good time coming, be it ever so far away,
>andrew@berlioz.nsc.com	That's what I says to myself, says I, 

I'm afraid that the theoretical foundation you appreciate is actually
inherited (or bastardized, depending on your point of view) from 
the statistical pattern recognition studies of ten to twenty years
ago.  Sure, there is a theory base, but it is ready-made; much of it
did not arise inherently from NNs but is being REdiscovered there.

"...only be sure please always to call it RESEARCH!"
                   from Lobachevsky by Tom Lehrer

I have been impressed with the confirmation provided by this newsgroup
that the majority of researchers in this area really are disgusted at
the publicity-mongering, money-grubbing approach of too many
well-placed (and well-heeled) labs, researchers, writers, companies,
seminar sellers, and the like.  NNs might make a significant
contribution by enabling highly parallel implementations of many
kinds of processes, if the science-fiction futurist brain-theory
dabblers would shut up and let the real researchers develop the field
in a careful, disciplined way, without having to run interference
against massively inflated expectations of the work. 

A few months ago I posted to comp.ai.neural-nets the document
reproduced below.  I guess it was too hot for the newsgroup, but I did
receive 13 e-mail replies: 8 firmly supportive, 4 asking for more
pointers to statistical pattern recognition, which I gladly supplied
(But note: Is the scholarship in the NN field really so weak that NN
researchers are unaware of twenty years of research in statistical
pattern recognition? The evidence says yes!), and one sharply critical
but easy to refute (a True Believer who went down in flames). 

I posted the document below in the spirit of my other "Outrageous
Discussion Papers" that I have been circulating to carefully selected
audiences to provoke thought and comment and encourage skepticism.  I
have one flaming the use of rule-based expert systems in medical
applications, one arguing that edges are an inadequate foundation for
vision, one arguing that automatic identification of organs in CT
scans is an unworthy task of little practical value, one that is a
manifesto for my approach to computer vision research, and the neural
net one below.  If you are interested, e-mail me, but I'm leaving now
for a three-week vacation, so don't expect my usual rapid response. 

---------------------------------------------
My assessment of the neural net area is as follows:
(consider these Six Theses nailed to the church door)

1. NNs are a parallel implementation technique that shows promise for
making perceptual processes run in real time.

2. There is nothing in the NN work that is fundamentally new except
as a fast implementation.  Their ability to learn incrementally from
a series of samples is nice but not new.  The way they learn and make
decisions is decades old: it first arose in communication theory and
was further developed in statistical pattern recognition (see the
sketch below).
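
To make the point concrete, here is a minimal sketch in Python (my
own illustration, with invented names; it is not from the NN
literature or from anyone's posted code): the "delta rule" used to
train a linear neural unit is, symbol for symbol, the LMS
adaptive-filter update Widrow and Hoff published in 1960 for
communication engineering.

# Sketch (invented for illustration): training a linear unit
# y = w.x + b with the "delta rule" -- which is identical to the
# LMS update of Widrow & Hoff (1960) from communication theory.

def lms_train(samples, lr=0.1, epochs=100):
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # unit output
            err = target - y                              # prediction error
            # LMS / delta rule: step down the gradient of squared error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Two linearly separable classes labeled -1 and +1.
data = [([0.0, 0.2], -1), ([0.3, 0.1], -1),
        ([1.0, 0.9], +1), ([0.8, 1.1], +1)]
w, b = lms_train(data)
for x, t in data:
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    print(x, t, +1 if score > 0 else -1)

Relabel "delta rule" as "LMS" and the neural-network novelty
evaporates.  That is Thesis 2 in one screenful.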

3. The claims that NNs are fundamentally new are founded on ignorance
of statistical pattern recognition or on simplistic views of the
nature of statistical pattern recognition.  I have heard supposedly
competent people working in NNs claim that statistical pattern
recognition is based on assumptions of Gaussian distributions, which
are not required in NNs, and that NNs are therefore fundamentally
different.  This is ridiculous.  Statistical pattern recognition is
not bound to Gaussians, and NNs do, most assuredly, incorporate
distributional assumptions in their decision criteria (see the sketch
below).
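
Two concrete points, the first as a minimal sketch of my own
(invented for illustration, not from any NN paper): k-nearest-neighbor
classification is a classical nonparametric technique from
statistical pattern recognition that assumes no Gaussian, or any
other parametric, form at all.  And in the other direction, a sigmoid
unit's output is exactly the Bayes posterior probability for two
Gaussian classes with equal covariance matrices, so the supposedly
assumption-free network is quietly making a distributional commitment
of its own.  The k-NN sketch, in Python:

# Sketch (invented for illustration): k-nearest-neighbor
# classification, a nonparametric statistical pattern recognition
# method with no Gaussian assumption anywhere.

import math
from collections import Counter

def knn_classify(train, x, k=3):
    # Sort training samples by Euclidean distance to x,
    # then take a majority vote among the k nearest.
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [([0.1, 0.0], 'A'), ([0.2, 0.3], 'A'), ([0.5, 0.4], 'A'),
         ([0.9, 1.0], 'B'), ([1.1, 0.8], 'B')]
print(knn_classify(train, [0.95, 0.9]))   # prints 'B'

No Gaussians required, and it has been in the statistical pattern
recognition literature for decades.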

4. A more cynical view that I do not fully embrace says that the main
function of "Neural Networks" is as a label for money.  It is a flag
you wave to attract money dispensed by people who are interested in
the engineering of real-time perceptual processing but who are
ignorant of statistical pattern recognition, and therefore of the
neural net field's lack of substance.

5. Neural nets raise many engineering questions but little new
science.  Much of the excitement they have generated rests on
uncritical acceptance of "neat" demos and on ignorance of prior work.
As such, the area resembles a religion more than a science.  

6. The "popularity" of neural net research is a consequence of the
miserable mathematical backgrounds of computer science students (and
some professors!).  You don't need to know any math to be a hacker, but
you have to know math and statistics to work in statistical pattern
recognition.  Thus, generations of computer science students are
susceptible to hoodwinking by neat demos based on simple mathematical
and statistical techniques that incorporate some engineering hacks
that can be tweaked forever.  They'll think they are accomplishing
something by their endless tweaking because they don't know enough
math and statistics to tell what's really going on.

---------------------------------------------------------------------
Dr. James M. Coggins          coggins@cs.unc.edu
Computer Science Department   A neuromorphic minimum distance classifier!
UNC-Chapel Hill               Big freaking hairy deal.
Chapel Hill, NC 27599-3175                -Garfield the Cat
and NASA Center of Excellence in Space Data and Information Science
---------------------------------------------------------------------