Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site calgary.UUCP
Path: utzoo!utcsri!ubc-vision!alberta!calgary!cleary
From: cleary@calgary.UUCP (John Cleary)
Newsgroups: net.philosophy,net.math
Subject: Re: Sc--nce Attack (really on minds and computers)
Message-ID: <475@calgary.UUCP>
Date: Sat, 26-Oct-85 00:32:27 EDT
Article-I.D.: calgary.475
Posted: Sat Oct 26 00:32:27 1985
Date-Received: Sat, 26-Oct-85 05:39:59 EDT
References: <299@umich.UUCP> <10699@ucbvax.ARPA> <10700@ucbvax.ARPA> <10702@ucbvax.ARPA> <1925@pyuxd.UUCP>
Organization: University of Calgary, Calgary, Alberta
Lines: 121

> > [Yes, this is exactly the point. Exhibit the Turing machine that
> > is claimed to be equivalent to the human mind, and the human mind
> > can reason about the system in ways impossible within the system.
> > Thus we contradict the assumption that the machine was equivalent
> > to the mind.]
This is a very crucial point in this discussion I think.  This is only true
IF we assume that the human mind that is doing the reasoning is not itself
part of the Turing machine being exhibited.  The problem is that the
physical boundary around a human is most unclear.  The wiggling of an electron
on Alpha Centauri might via changes in gravitation affect the firing of one of
my neurons and so alter my behaviour.  From this (extreme) example we have to
include the whole universe in the description of the human.  That is, anything
which can affect us (and so is observable by us) must be included in a complete
description of our behaviour.  The set of all things observable by us (or
potentially observable by us) can validly be called the whole universe.
Unfortunately the whole universe includes all entities that can observe us
and hence reason about us (remember Heisenberg, if it can observe you then
it can affect you).
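For reference, the quoted argument at the top is essentially the halting-problem
diagonalization.  A minimal sketch in Python (the function names are mine, purely
illustrative, and `halts` stands for any claimed total decider):

```python
# Sketch of the diagonal argument the quoted passage relies on:
# if a fixed machine H could decide halting for every machine, we
# could build a machine D that H cannot answer correctly -- provided
# D is *outside* the system H is required to describe.

def make_diagonal(halts):
    """Given a claimed halting decider halts(prog, arg) -> bool,
    build the program D that defeats it."""
    def d(prog):
        if halts(prog, prog):   # if the decider says prog(prog) halts...
            while True:         # ...then loop forever,
                pass
        return None             # ...otherwise halt at once.
    return d

# Run D on its own description: whatever answer halts(d, d) gives,
# d(d)'s actual behaviour contradicts it, so no total decider exists.
```

The point in the text is that this contradiction only bites if the reasoner
(here, the code calling `halts`) sits outside the machine being reasoned about;
if the reasoner is part of the exhibited system, no contradiction follows.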

The interesting thing about digital computers is that we confuse two things,
the actual physical machine and its abstract description.  The physical machine
just like a human needs the whole universe included in it to describe it.
The abstraction (what is described in the manuals) is an approximation only.
It is probably unclear from the abstract description what happens when a high
energy gamma ray passes through the CPU chip.  So I agree with those who
say a digital computer AS DESCRIBED BY A FORMAL SYSTEM cannot have the same
status as a human.  However, there is no reason we know of at the moment why
a physical system cannot; indeed, since the description of the physical computer
includes the whole universe and the humans in it, it already has the same
status as the human.

This then raises some fascinating questions:

	1) Church's thesis that all computers are equivalent to a Turing
	   machine.  This is actually a PHYSICAL law (like the law of
	   gravitation), potentially subject to a physical experiment.  It is
	   conceivable, for example, that some of the peculiar effects of quantum
	   mechanics could allow calculations faster than any possible Turing machine.

	2) Is the entire universe a Turing machine?

	3) Is it conceivable that anything that is part of the universe could
	   verify or refute 2)?

I am also struck by the similarity to the conclusions of some philosophers
in the Eastern tradition, that we are all intimately connected
with the whole universe.


> > I originally asked whether anyone disputed my claim that the human
> > mind is not equivalent to a turing machine. After all the negative
> > response, I would like to change my question to:
> > 
> > *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> >  NOT EQUIVALENT TO A TURING MACHINE?*
See above.  I think this is a question for the physicists, and potentially
subject to physical experiment.

> 		"OK, but could a digital computer think?"
> 
> 	    If by "digital computer" we mean anything at all that has a level
> 	of description where it can be correctly described as the instantiation
> 	of a computer program, then again the answer is, of course, yes, since
> 	we are the instantiations of any number of computer programs, and we
> 	can think.

No, I disagree; here he talks about the abstract machine, not the physical one.

> 
> 		"But could something think, understand, and so on *solely*
> 		 in virtue of being a computer with the right sort of program?
> 		 Could instantiating a program, the right program of course,
> 		 by itself be a sufficient condition of understanding?"
> 
> 	    This I think is the right question to ask, though it is usually
> 	confused with one of the earlier questions, and the answer to it is no.
> 
> 		"Why not?"
> 
> 	    Because the formal symbol manipulations themselves don't have
> 	any intentionality...
I agree.
> ...  If and when such machines come about, their causal powers will
> derive not from the substances they are made of, *but* *from* *their* *design*
> *and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
> we will know they have those causal powers is by talking to them and listening
> carefully to what they have to say."

This is a fascinating argument, but incorrect I think.  Certainly in humans much
of their abilities come from their experience of the world, from learning and
adaptation.  That is, much of their state and behaviour is a result of their
experience, not their genes.  I suspect any really interesting computer will be
similar: much of its behaviour will be a result not of its original programming
but of its subsequent experience of the world.  Unfortunately, again, to describe
the machines that result we must describe not only their original programming
but all their later possible experiences.  But they can potentially be 
affected by anything in the universe.

The problem with the current state of computing, robotics and AI is that most
computers have little or no interaction with the real world.  They have no 
bodies.  Hence they can to a very good approximation be described by some formal
system.  Thus many people have a gut feeling that computers are fundamentally
different from humans.  In their guise as formal systems I think this is indeed
true.

I think there is also a practical lesson for AI here.  To get really 
interesting behaviour we need open machines which get a lot of experience of 
the real world.  Unfortunately we aren't going to be able to formalize or
predict the result. But it will be interesting.

Sorry about the length of this, but the question seemed too fascinating to
leave alone.

John G. Cleary, Dept. Computer Science, The University of Calgary, 
2500 University Dr., N.W. Calgary, Alberta, CANADA T2N 1N4.
Ph. (403)220-6087
Usenet: ...{ubc-vision,ihnp4}!alberta!calgary!cleary
        ...nrl-css!calgary!cleary
CRNET (Canadian Research Net): cleary@calgary
ARPA:  cleary.calgary.ubc@csnet-relay