Path: utzoo!attcan!uunet!husc6!mailrus!caen.engin.umich.edu!brian
From: brian@caen.engin.umich.edu (Brian Holtz)
Newsgroups: comp.ai
Subject: Re: DVM's request for definitions
Keywords: philosophy, free will
Message-ID: <898@maize.engin.umich.edu>
Date: 31 May 88 15:27:11 GMT
References: <29049@yale-celray.yale.UUCP> <894@maize.engin.umich.edu> <1020@cresswell.quintus.UUCP>
Organization: caen
Lines: 89

In article <1020@cresswell.quintus.UUCP>, ok@quintus.UUCP (Richard A. O'Keefe) writes:

> In article <894@maize.engin.umich.edu>, brian@caen.engin.umich.edu (Brian
> Holtz) writes:
> > 3. volition: the ability to identify significant sets of options and to
> > predict one's future choices among them, in the absence of any evidence
> > that any other agent is able to predict those choices.
> >
> > There are a lot of implications to replacing free will with my notion of
> > volition, but I will just mention three.
> >
> > - If my operationalization is a truly transparent one, then it is easy
> > to see that volition (and now-dethroned free will) is incompatible with
> > an omniscient god. Also, anyone who could not predict his behavior as
> > well as someone else could predict it would no longer be considered to
> > have volition.

[following excerpts not in any particular order]

> For me, "to have free will" means something like "to act in accord with
> my own nature". If I'm a garrulous twit, people will be able to predict
> pretty confidently that I'll act like a garrulous twit (even though I
> may not realise this), but since I will then be behaving as I wish I
> will correctly claim free will.

Recall that my definition of free will ("the ability to make at least
some choices that are neither uncaused nor completely determined by
physical forces") left little room for it to exist. Your definition
(though I doubt you will appreciate being held to it this strictly)
leaves too much room: doesn't a falling rock, or the average computer
program, "act in accord with [its] own nature"?

> One thing I thought AI people were taught was "beware of the homunculus".
> As soon as we start identifying parts of our mental activity as external
> to "ourselves" we're getting into homunculus territory.

I agree that homunculi are to be avoided; that is why I relegated "the
ability to make at least some choices that are neither uncaused nor
completely determined by *external* physical forces" to being a
definition not of free will but of "self-determination". The free will
you are angling for sounds a lot like what I call self-determination,
and I would welcome any effort to sharpen that definition so as to
avoid the externality/internality trap. Until someone comes up with a
definition of free will better than yours or mine, I think the best
course is to define free will out of existence and take my "volition"
as the operationalized designated hitter for free will in our ethics.

> What has free will to do with prediction? Presumably a dog is not
> self conscious or engaged in predicting its activities, but does that
> mean that a dog cannot have free will?

Free will has nothing to do with prediction; volition does. The
question of whether a dog has free will is a simple one under either
your definition *or* mine: by my definition, nothing has free will; by
yours, it seems to me that everything does. (Again, feel free to
refine your definition if I've misconstrued it.)
A dog would seem to have self-determination as I've defined it, but
you and I agree that my definition's reliance on the
externality/internality distinction makes that a suspect
categorization. A dog clearly would not have volition, since it cannot
make predictions about itself. And since volition is the predicate I
propose we use in ethics, we are happily exempt from extending ethical
personhood to dogs.

> "no longer considered to have volition ..."  I've just been reading a
> book called "Predictable Pairing" (sorry, I've forgotten the author's
> name), and if he's right it seems to me that a great many people do
> not have volition in this sense. If we met Hoyle's "Black Cloud", and
> it with its enormous computational capacity were to predict our
> actions better than we did, would that mean that we didn't have
> volition any longer, or that we had never had it?

A very good question. It would mean that we no longer had volition,
but that we had had it before. My notion of volition is contingent,
because it depends on "the absence of any evidence that any other
agent is able to predict" our choices.

What is attractive to me about volition is that it would be very
useful in answering ethical questions about the "free will" (in the
generic ethical sense) of arbitrary candidates for personhood: if your
AI system could demonstrate volition as defined, it would have met one
of the necessary conditions for personhood. What is unnerving to me
about my notion of volition is how contingent it is: if Hoyle's "Black
Cloud" or some prescient god could foresee my behavior better than I
could, I would reluctantly conclude that I do not have even an
operational semblance of free will. That conclusion would be familiar
to anyone who asserts (as I do) that the religious doctrine of
predestination is inconsistent with belief in free will. I won't lose
any sleep over this, though; Hoyle's "Black Cloud" would most likely
need analytical techniques so invasive that little of me would be left
to rue my loss of volition.
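
Since "volition" is meant to be an operational test, it may help to
spell it out as a predicate. The sketch below is only an illustration
under my own assumptions -- the Agent class, the accuracy figures, and
the better-predictor comparison are all invented for this post, not
part of any real system -- but it makes the contingency explicit: the
predicate flips the moment evidence of a better predictor turns up.

    # Toy sketch of the volition test as defined above. All names and
    # numbers are hypothetical stand-ins, not anyone's real system.

    class Agent:
        def __init__(self, name, options_identified, self_accuracy):
            self.name = name
            # significant sets of options the agent can identify
            self.options_identified = options_identified
            # how well the agent predicts its own future choices
            self.self_accuracy = self_accuracy

    def has_volition(agent, others_accuracy):
        """others_accuracy maps each other agent's name to the accuracy
        with which it is known to predict *this* agent's choices."""
        if not agent.options_identified:
            return False  # cannot identify a significant set of options
        # "in the absence of any evidence that any other agent is able
        # to predict those choices" (better than the agent itself can)
        return all(acc <= agent.self_accuracy
                   for acc in others_accuracy.values())

    me = Agent("brian", ["reply", "lurk"], self_accuracy=0.7)
    print(has_volition(me, {}))                     # True: volition
    print(has_volition(me, {"Black Cloud": 0.99}))  # False: volition lost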