Path: utzoo!utgpu!attcan!uunet!seismo!sundc!pitstop!sun!amdcad!ames!ncar!noao!asuvax!nud!sunburn!gtx!al
From: al@gtx.com (Alan Filipski)
Newsgroups: comp.ai.neural-nets
Subject: Parsimonious error metric
Message-ID: <765@gtx.com>
Date: 28 Sep 88 20:46:59 GMT
Reply-To: al@gtx.UUCP (Alan Filipski)
Organization: GTX Corporation, Phoenix
Lines: 19

At the ICNN earlier this year in San Diego, Rumelhart gave a talk in which
he discussed the brilliant idea of incorporating a measure of the
size/complexity of a net into the error criterion being minimized.  The
back-prop procedure would thus tend to seek out smaller net configurations
as well as more accurate ones.  I thought this was the most memorable talk
of the conference.

Unfortunately, I did not copy down his formulas for the updating rules
under this criterion.  I thought I could look them up later -- but alas,
they do not seem to be in the proceedings.  Can anyone give a reference to
a paper that covers this technique and discusses the results of his
experiments?

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
( Alan Filipski, GTX Corp, 8836 N. 23rd Avenue, Phoenix, Arizona 85021, USA )
( {allegra,decvax,hplabs,amdahl,nsc}!sun!sunburn!gtx!al       (602)870-1696 )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
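
P.S.  To make the question concrete: the general idea, as I understood it,
is to minimize a total cost of the form E_total = E_data + lambda * C(w),
where C(w) measures the size/complexity of the net, so that back-prop's
gradient step pushes unneeded weights toward zero.  The sketch below is
only my guess at the general shape of such a criterion (a weight-decay /
weight-elimination style penalty); it is NOT Rumelhart's actual formulas
from the talk -- those are exactly what I am asking for.  The names,
lambda, and w0 are my own placeholders.

    import numpy as np

    def penalized_cost(weights, data_error, lam=0.01, w0=1.0):
        # Total cost = data error + lambda * complexity term.
        # The term w^2 / (w0^2 + w^2) saturates near 1 for large weights,
        # so the pressure falls mostly on small weights, pruning them.
        complexity = np.sum(weights**2 / (w0**2 + weights**2))
        return data_error + lam * complexity

    def penalty_gradient(weights, lam=0.01, w0=1.0):
        # Gradient of the complexity term w.r.t. each weight; this would be
        # added to the usual back-prop gradient of the data error before
        # taking the weight-update step.
        return lam * 2.0 * weights * w0**2 / (w0**2 + weights**2)**2

Under this kind of penalty the update rule is the ordinary back-prop rule
plus the extra gradient term above, but again, whether this matches what
was presented in San Diego is precisely what I'd like a reference for.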