Xref: utzoo comp.ai:2788 comp.ai.neural-nets:357
Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!bloom-beacon!husc6!endor!reiter
From: reiter@endor.harvard.edu (Ehud Reiter)
Newsgroups: comp.ai,comp.ai.neural-nets
Subject: Back-propagation question
Message-ID: <766@husc6.harvard.edu>
Date: 5 Dec 88 17:23:18 GMT
Sender: news@husc6.harvard.edu
Reply-To: reiter@harvard.harvard.edu (Ehud Reiter)
Organization: Aiken Computation Lab Harvard, Cambridge, MA
Lines: 19

Is anyone aware of any empirical comparisons of back-propagation to
other algorithms for learning classifications from examples (e.g.
decision trees, exemplar learning)?  The only such article I've seen is
Stanfill & Waltz's article in the Dec 86 CACM, which claims that
"memory-based reasoning" (a.k.a. exemplar learning) does better than
back-prop at learning word pronunciations.  I'd be very interested in
finding articles which look at other learning tasks, or articles which
compare back-prop to decision-tree learners.

The question I'm interested in is whether there is any evidence that
back-prop has better performance than other algorithms for learning
classifications from examples.  This is a pure engineering question -
I'm interested in what works best on a computer, not in what people do.

Thanks.

					Ehud Reiter
					reiter@harvard	(ARPA,BITNET,UUCP)
					reiter@harvard.harvard.EDU  (new ARPA)