Path: utzoo!mnetor!uunet!yale!dvm
From: dvm@yale.UUCP (Drew McDermott)
Newsgroups: comp.ai
Subject: Free Will
Message-ID: <28705@yale-celray.yale.UUCP>
Date: 9 May 88 14:35:53 GMT
Organization: Yale University, New Haven, CT
Lines: 54
Keywords: philosophy


My contribution to the free-will discussion:

Suppose we have a robot that models the world temporally, and uses
its model to predict what will happen (and possibly for other purposes).
It uses Qualitative Physics or circumscription, or, most likely, various
undiscovered methods, to generate predictions.  Now suppose it is in a
situation that includes various objects, including an object it calls R,
which it knows denotes itself.  For concreteness, assume it believes
a situation to obtain in which R is standing next to B, a bomb with a
lit fuse.  It runs its causal model, and predicts that B will explode,
and destroy R.
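
To make the setup concrete, here is a toy version of such a causal model,
written as a few forward-chaining rules in Python.  Everything in it (the
rule format, the fact names, the predict function) is an illustrative
sketch of one way such a predictor might work, not a claim about how any
actual robot is built.

    # Toy causal model: each rule maps facts already believed about the
    # situation to a predicted consequence.
    rules = [
        (lambda facts: "fuse(B) is lit" in facts,   "explodes(B)"),
        (lambda facts: "explodes(B)" in facts and
                       "next-to(R, B)" in facts,    "destroyed(R)"),
    ]

    def predict(situation):
        """Run the rules to a fixed point and return every predicted fact."""
        facts = set(situation)
        changed = True
        while changed:
            changed = False
            for condition, consequence in rules:
                if condition(facts) and consequence not in facts:
                    facts.add(consequence)
                    changed = True
        return facts

    print(predict({"fuse(B) is lit", "next-to(R, B)"}))
    # predicts explodes(B) and destroyed(R): the naive conclusion,
    # reached without asking what R itself will do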

Well, actually it should not make this prediction, because R will be
destroyed only if it doesn't roll away quickly.  So, what will R do?  The
robot could apply various devices for making causal prediction, but they
will all come up against the fact that some of the causal antecedents of R's
behavior *are situated in the very causal analysis box* that is trying to
analyze them.  The robot might believe that R is a robot, and hence that
a good way to predict R's behavior is to simulate it on a faster CPU, but
this strategy is in vain, because the robot it would be simulating is itself.
No matter how fast it simulates R, it will eventually reach the point where
R looks for a faster CPU, and it won't be able to run that inner simulation
fast enough.  Or it might try inspecting R's listing, but eventually it
will come to the part of the listing that says "inspect R's listing."
The strongest conclusion it can reach is that "If R doesn't roll away,
it will be destroyed; if it does roll away, it won't be."  And then of
course this conclusion causes R to roll away.
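
The regress can be sketched in the same toy style.  The depth cutoff and
the wording of the conditional are invented for illustration; the point
is only that predicting R means re-running the predictor itself, so the
best the model can return is the conditional, and returning the
conditional is exactly what settles R's behavior.

    def predict_R(situation, depth=0, limit=10):
        """Predict what R will do by simulating R.  Simulating R means
        re-running this very prediction, so the recursion never bottoms
        out; it stops only when the model gives up."""
        if depth < limit:
            return predict_R(situation, depth + 1, limit)
        # The strongest non-circular conclusion is a conditional.
        return ("if R rolls away, R survives; "
                "if R does not roll away, destroyed(R)")

    conclusion = predict_R({"fuse(B) is lit", "next-to(R, B)"})
    # Merely drawing that conclusion is what causes R to act:
    action = "roll away" if "R rolls away, R survives" in conclusion else "stay put"
    print(conclusion)
    print(action)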

Hence any system that is sophisticated enough to model situations that its own
physical realization takes part in must flag the symbol describing that
realization as a singularity with respect to causality.  There is simply
no point in trying to think about that part of the universe using causal
models.  The part so infected actually has fuzzy boundaries.  If R is
standing next to a precious art object, the art object's motion is also
subject to the singularity (since R might decide to pick it up before
fleeing).  For that matter, B might be involved (R could throw it), or
it might not be, if the reasoner can convince itself that attempts to
move B would not work.  But all this is a digression.  The basic point
is that robots with this kind of structure simply can't help but think of
themselves as immune from causality in this sense.  I don't mean that they
must understand this argument, but that evolution must make sure that their
causal-modeling systems include the "exempt" flag on the symbols denoting
themselves.  Even after a reasoner has become sophisticated about physical
causality, his model of situations involving himself continues to have this
feature.  That's why the idea of free will is so compelling.  It has nothing
to do with the sort of defense mechanism that Minsky has proposed.
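
One way to picture the "exempt" flag, again as a hypothetical sketch with
invented names rather than a description of any real system: the predictor
checks whether a queried fact depends on a symbol flagged as denoting the
reasoner itself, and if it does, it hands back the alternatives instead of
a prediction, leaving the choice outside the causal model.

    EXEMPT = {"R"}   # the symbol the robot knows denotes itself

    def depends_on_exempt(fact):
        """Crude test: does this fact mention a causally exempt symbol?"""
        return any(symbol in fact for symbol in EXEMPT)

    def causal_forecast(fact, situation):
        if depends_on_exempt(fact):
            # Singularity: report the alternatives, do not predict.
            return ["R rolls away -> R survives",
                    "R stays put  -> destroyed(R)"]
        # Ordinary facts still get ordinary causal treatment.
        if fact == "explodes(B)" and "fuse(B) is lit" in situation:
            return "predicted"
        return "not predicted"

    situation = {"fuse(B) is lit", "next-to(R, B)"}
    print(causal_forecast("explodes(B)", situation))    # predicted
    print(causal_forecast("destroyed(R)", situation))   # the conditional pair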

I would rather not phrase the conclusion as "People don't really have
free will," but rather as "Free will has turned out to be possession of
this kind of causal modeler."  So people and some mammals really do have
free will.  It's just not as mysterious as one might think.

                       -- Drew McDermott