Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10 5/3/83; site umcp-cs.UUCP
Path: utzoo!watmath!clyde!burl!ulysses!harpo!seismo!rlgvax!cvl!umcp-cs!speaker
From: speaker@umcp-cs.UUCP
Newsgroups: net.religion
Subject: Re: Quantum mechanics and free will... - (nf)
Message-ID: <5921@umcp-cs.UUCP>
Date: Wed, 14-Mar-84 18:16:12 EST
Article-I.D.: umcp-cs.5921
Posted: Wed Mar 14 18:16:12 1984
Date-Received: Thu, 15-Mar-84 07:19:29 EST
References: <2739@fortune.UUCP>
Organization: Univ. of Maryland, Computer Science Dept.
Lines: 73


		"STABLE computing environment"?
		
	Maybe you misunderstood or I wasn't clear enough. The uncertainty problem
	with synchronizers is NOT a matter of bad design, it is inherent in ANY
	digital system that must communicate with an "outside" (asynchronous)
	world.

I didn't say it WAS bad design!

My point was that the functionality of a computer is totally deterministic,
because the underlying software is deterministic.  I specified a STABLE
computing environment precisely to exclude this hardware-oriented
random-crash stuff.  Computing devices (in their cleanest sense) DO NOT
rely on randomness.

	There is no way (even theoretically) to avoid it.

Turing machines are clearly deterministic and do not
rely on randomness for their operation.  You'll also have a hard
time convincing us that a DFA (a computing model) is in any way
non-deterministic.  And that goes not only for devices
on paper... but for devices that function in everyday life.  You will NOT
find a synchronizer in the definition of the Turing machine.
Nor will you find a synchronizer in a cash register.
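
In fact, you can write the whole thing down.  Here's a tiny sketch
(mine, invented purely for illustration) of a DFA that accepts binary
strings containing an even number of 1s.  The transition table gives
exactly ONE next state for every (state, symbol) pair, so the same
input follows the same path every single time:

#include <stdio.h>

int main(void)
{
    /* delta[state][symbol]: exactly one next state per pair. */
    static const int delta[2][2] = {
        { 0, 1 },  /* state 0 (even 1s so far): '0' stays, '1' -> state 1 */
        { 1, 0 }   /* state 1 (odd 1s so far):  '0' stays, '1' -> state 0 */
    };
    const char *input = "1101";  /* made-up example string */
    const char *p;
    int state = 0;

    for (p = input; *p != '\0'; p++)
        state = delta[state][*p - '0'];

    printf("\"%s\" -> %s\n", input,
           state == 0 ? "accept (even number of 1s)"
                      : "reject (odd number of 1s)");
    return 0;
}

No synchronizer anywhere in THAT definition.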

You claim that ALL digital devices rely on randomness, but does your
hand (the first digital computer) rely on randomness or synchronizers?
Of course not... because the implementation is far above the quantum
level.

	"making random decisions...hot tea on the CPU".

	The problem is not that a "random" decision gets made. A random decision
	could be tolerated and is in fact expected (that's why the synchronizer
	is there). It's that NO decisions (or sometimes MULTIPLE "decisions") get
	made, and the logic then does any number of non-deterministic things, like
	execute NO code, multiple codes, mixtures of codes, or worse. The microscopic
	quantum effects can and do cause macroscopic system crashes.

No, no, no... my point is that functional decisions cannot be made by
introducing randomness into the implementation (i.e. the hardware).
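
Not that anyone is denying the hardware effect itself.  If you want
numbers on it, the textbook figure of merit for a synchronizer is
MTBF = exp(t_r/tau) / (T_w * f_clk * f_data): allow more settling
time t_r before sampling and failures get exponentially rarer, but
the MTBF never becomes infinite.  A toy calculation -- every device
constant below is invented for illustration only:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double tau    = 1.0e-9;   /* flip-flop resolution time constant (s) */
    double T_w    = 1.0e-10;  /* metastability window (s)               */
    double f_clk  = 1.0e7;    /* 10 MHz clock                           */
    double f_data = 1.0e5;    /* asynchronous event rate (events/s)     */
    double t_r;

    /* Longer settling time -> exponentially rarer failures,
       but never a zero failure rate. */
    for (t_r = 10.0e-9; t_r <= 51.0e-9; t_r += 10.0e-9) {
        double mtbf = exp(t_r / tau) / (T_w * f_clk * f_data);
        printf("t_r = %2.0f ns:  MTBF = %.3g seconds\n",
               t_r * 1.0e9, mtbf);
    }
    return 0;
}

Which is exactly why I put the synchronizer BELOW the level at which
the software's functionality is defined.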

	As far as human thought goes, again I am not talking about "random", but
	"non-deterministic" (which is why I mentioned the "S. Cat"). Since neurons
	are subject to the same problems as any other synchronizers, no matter
	how complete our model of the brain becomes, we will not be able to
	predict its behaviour completely, since the completeness of our model
	is in fact limited by quantum effects. Such effects ARE significant at the
	macro level wherever binary decisions (neuron firings) are made from either
	asynchronous digital inputs (other neurons) or analog inputs (perceptions,
	hormone levels, sugar level, etc.).

This says that neurons (and other objects) will display non-deterministic
behavior because they are subject to non-deterministic quantum events.
That's like saying a cannonball will "fall up" once every thousand
years or so.

Neurons are not comparable to semiconductor devices, because a
semiconductor's behavior depends much more directly on the atomic
structure of its crystal.  In that kind of material you might very
well expect small-scale effects to aggregate.

Neurons involve more complex chemical reactions... not processes tied
only to the atomic structure of the material.  Small-scale
quantum effects will probably be totally overshadowed by the
larger chemical reactions.

Besides... I AGREED with you on that point (assuming that neurons
DO fire non-deterministically).
-- 

				Debbie does Daleks
				- Speaker