Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!bloom-beacon!bu-cs!purdue!decwrl!labrea!Portia!aluko
From: aluko@Portia.Stanford.EDU (Stephen Goldschmidt)
Newsgroups: comp.ai.neural-nets
Subject: Re: Learning arbitrary transfer functio
Message-ID: <4259@Portia.Stanford.EDU>
Date: 30 Nov 88 18:33:20 GMT
References: <399@uvaee.ee.virginia.EDU> <163400002@inmet> <5572@sdcsvax.UCSD.EDU>
Reply-To: aluko@Portia.stanford.edu (Stephen Goldschmidt)
Organization: Stanford University
Lines: 49

In article <5572@sdcsvax.UCSD.EDU> you write:

>All of this aside, I must note that the original article was misinterpreted.
>That was unfortunate, as I was theorizing on ways to improve generalized
>learning of non-linear mathematical relationships for data outside
>of the training domain... results in this area were usually fairly dismal
>in the experiments which I conducted.

I have done considerable work in modeling non-linear functions with
a program called ASPN (Algorithm for Synthesis of Polynomial Networks)
which I helped to develop at Barron Associates Inc. during 1986.
My experience was that polynomial functions (which is what ASPN
ultimately produces, though in the form of a network) are excellent 
for interpolation under certain conditions, but fail miserably
at extrapolation.  Part of the art is to configure your problem
so that the network is never asked to extrapolate.

An example:
  Suppose you want to predict the output of a first-order linear system
  of the form y'(t) = y(t) - b, where b is a constant parameter.

  If you train your network to model y as a function of (t, b, y(0))
  for t < 2 and then evaluate the network at t = 3, you are asking it
  to extrapolate to values of t it has never seen before.  This is too
  much to ask of an economist, let alone a computer! :-)

  If, instead, you model y(t) as a function of y(t-1) and y(t-2),
  the network should discover that
        y(t) = (1+e)*y(t-1) - e*y(t-2)
  which is not only an easier function to model, but also does not
  require explicit knowledge of b.
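
  (A quick derivation, assuming a unit time step: the exact solution of
  y'(t) = y(t) - b satisfies y(t) - b = e*(y(t-1) - b), where e is the
  base of the natural logarithm.  Thus y(t) = e*y(t-1) + (1-e)*b and
  likewise y(t-1) = e*y(t-2) + (1-e)*b; subtracting the second from the
  first eliminates b and gives the two-lag form above.)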

  When you evaluate it on t=3, the network is not going to try to
  extrapolate (assuming that your input values of y(t-1) and y(t-2) 
  are in the range of the values used in training the network).

  Thus, it is often possible to turn an extrapolation problem into
  an interpolation problem.
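
To make the reformulation concrete, here is a minimal sketch in Python
(plain least squares standing in for ASPN; the trajectory() helper and
the parameter ranges are made up for illustration).  It fits y(t) as a
linear function of y(t-1) and y(t-2) on trajectories sampled at unit
steps up to t = 2, and then predicts y(3) for a new trajectory from its
two most recent samples:

import numpy as np

# Exact solution of y'(t) = y(t) - b:  y(t) = b + (y(0) - b) * exp(t)
def trajectory(y0, b, ts):
    return b + (y0 - b) * np.exp(ts)

rng = np.random.default_rng(0)
inputs, targets = [], []
for _ in range(200):
    y0, b = rng.uniform(-1.0, 1.0, size=2)            # illustrative ranges
    y = trajectory(y0, b, np.array([0.0, 1.0, 2.0]))  # samples at t <= 2 only
    inputs.append([y[1], y[0]])                       # features: y(t-1), y(t-2)
    targets.append(y[2])                              # target:   y(t)

coef, *_ = np.linalg.lstsq(np.array(inputs), np.array(targets), rcond=None)
print(coef)                                   # close to [1+e, -e]

# Predict y(3) for a new trajectory from its lagged values y(2) and y(1);
# these lags lie inside the range of lags seen in training, so the model
# is interpolating even though t = 3 itself was never seen.
y0, b = 0.3, 0.1
y1, y2 = trajectory(y0, b, np.array([1.0, 2.0]))
print(coef[0] * y2 + coef[1] * y1)            # learned one-step prediction
print(trajectory(y0, b, np.array([3.0]))[0])  # true y(3), for comparison

The fitted coefficients match the two-lag formula above, and the
one-step prediction at t = 3 never needs b explicitly.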


       Stephen R. Goldschmidt
        aluko@portia.stanford.edu
  The opinions herein are my own.