Path: utzoo!utgpu!watmath!att!tut.cis.ohio-state.edu!gem.mps.ohio-state.edu!ginosko!aplcen!jhunix!ins_atge
From: ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Newsgroups: comp.ai
Subject: Re: free will
Message-ID: <2310@jhunix.HCF.JHU.EDU>
Date: 17 Aug 89 14:49:41 GMT
References: <896@orbit.UUCP>
Reply-To: ins_atge@jhunix.UUCP (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Lines: 40

In article <896@orbit.UUCP> philo@pnet51.cts.com (Scott Burke) writes:

> 
>  I'm sure that QM and chaos both play a part in the behavior of the human
>brain -- but I hardly hold out any hopes of it playing the role that many
>people want to make it fill, that of savior for the doctrine of free will.    
>A case in point, the above.

Let's look at the dreaded "QM+Chaos" from a computational angle:
1)  The brain is clearly a massively parallel non-linear system,
    and we should expect it to behave in a chaotic regime.
    Several neural network learning algorithms treat the
    net as a dynamical system which must be trained so that its
    output approaches a desired attractor [1].  By understanding a
    net as a dynamical system, we can figure out how to change the
    weights to achieve that output.
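As a toy illustration of that view (my own sketch, *not* Pineda's
algorithm from [1]; the weights and biases are invented for
illustration), a small recurrent net can be iterated as a dynamical
system until its state settles onto a fixed-point attractor:

```python
import math

def relax(W, b, x, steps=200):
    """Iterate x <- tanh(W x + b) until the state stops changing."""
    for _ in range(steps):
        x_new = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
                 for row, bi in zip(W, b)]
        if max(abs(a - c) for a, c in zip(x_new, x)) < 1e-9:
            break                      # state has settled onto an attractor
        x = x_new
    return x

W = [[0.0, 0.5], [0.5, 0.0]]   # small symmetric weights -> stable dynamics
b = [0.1, -0.1]
attractor = relax(W, b, [0.9, -0.9])
# The settled state is a fixed point: applying the update leaves it
# (essentially) unchanged.
```

Training in this picture means adjusting W and b so the attractor the
net falls into is the output you want.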

2)  It is possible that "random" noise and QM noise are exploited
    by some learning procedures, and possibly by decision procedures,
    in the brain.  A learning algorithm may randomly sample the
    weight space around the current weight point to determine
    how the weights should be changed to achieve the desired learning.
    A good example of "random" noise used in a learning algorithm is
    simulated annealing [2].
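A minimal sketch of simulated annealing (generic, one-parameter
annealing rather than the Boltzmann-machine procedure of [2]; the
double-well "energy" below is invented for illustration): random noise
proposes moves, and moves that *worsen* the energy are still accepted
with probability exp(-dE/T), letting the search climb out of shallow
local minima while the temperature T cools.

```python
import math, random

def energy(x):
    # Double well: shallow minimum near x = -1, deep minimum near x = +2.
    return (x + 1)**2 * (x - 2)**2 - 0.5 * x

def anneal(x=-1.0, T=2.0, cooling=0.999, steps=5000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        x_new = x + rng.gauss(0, 0.3)            # random perturbation
        dE = energy(x_new) - energy(x)
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = x_new                            # accept (possibly uphill)
        T *= cooling                             # cool slowly
    return x

x_final = anneal()   # started in the shallow well, ends near a minimum
```

At high T the noise dominates and nearly everything is accepted; as T
drops the dynamics freeze into one of the wells.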

3)  I personally doubt that "random" noise or "QM" holds the seed of
    knowledge (I think that's a metaphysical consideration); rather,
    they merely provide tools for achieving learning.  The actual
    knowledge comes from the ability of a brain circuit to achieve the
    desired output based upon current environmental and brain states
    (the "learning algorithm").  There are probably many possible "local
    minima" which a brain circuit can arrive at during any decision
    process, and the ultimate choice among those acceptable decisions
    may be "made" by noise effects.

[1]  F.J. Pineda, "Dynamics and architecture for neural computation,"
	Journal of Complexity, Vol. 4, pp. 216-245, Sept. 1988.
[2]  G.E. Hinton and T.J. Sejnowski, "Learning and Relearning in
	Boltzmann Machines," Parallel Distributed Processing, Vol. 1,
	pp. 282-317, Rumelhart et al., eds., MIT Press, 1986.