Xref: utzoo comp.ai:2009 sci.philosophy.tech:670
Path: utzoo!attcan!uunet!lll-winken!lll-tis!helios.ee.lbl.gov!pasteur!agate!ig!uwmcsd1!mailrus!tut.cis.ohio-state.edu!cs.utexas.edu!sdcrdcf!markb
From: markb@sdcrdcf.UUCP (Mark Biggar)
Newsgroups: comp.ai,sci.philosophy.tech
Subject: Re: How to dispose of the free will issue (long)
Keywords: free will architecture terminology
Message-ID: <5384@sdcrdcf.UUCP>
Date: 8 Jul 88 16:18:37 GMT
References: <483@cvaxa.sussex.ac.uk> <794@l.cc.purdue.edu> <488@aiva.ed.ac.uk>
Reply-To: markb@sdcrdcf.UUCP (Mark Biggar)
Organization: Unisys - System Development Group, Santa Monica
Lines: 17

In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton,E26 SB x206E,,2295119) writes:
>In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>Whether or not we have free will, we should behave as if we do,
>>because if we don't, it doesn't matter.
>If that is true -- if it doesn't matter -- then we will do just as well
>to behave as if we do not have free will.

Not so: believing in free will is a no-lose situation, while believing
that you don't have free will is a no-win situation.  In the first case
either you're right or it doesn't matter; in the second case either
you're wrong or it doesn't matter.  Game theory (assuming you put more
value on being right than on being wrong; if it doesn't matter, there
are no values anyway) says that believing and acting as if you have
free will is the strategy with the highest expected return.

Mark Biggar
{allegra,burdvax,cbosgd,hplabs,ihnp4,akgua,sdcsvax}!sdcrdcf!markb
markb@rdcf.sm.unisys.com
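
P.S.  To make the expected-return claim concrete, here is a short Python
sketch under an assumed payoff matrix.  The numeric payoffs are hypothetical;
only their ordering (being right is worth more than being wrong, and the two
"doesn't matter" outcomes are worth the same) reflects the argument above.

    # Hypothetical payoff matrix for the no-lose / no-win argument.
    # Rows: what you believe about free will; columns: how the world is.
    # The numbers are made up for illustration; only their ordering matters.
    payoffs = {
        ("believe",    "free will exists"): 1.0,   # you are right
        ("believe",    "no free will"):     0.0,   # it doesn't matter
        ("disbelieve", "free will exists"): -1.0,  # you are wrong
        ("disbelieve", "no free will"):     0.0,   # it doesn't matter
    }

    def expected_return(belief, p_free_will):
        """Expected payoff of a belief, given the probability that free will exists."""
        return (p_free_will * payoffs[(belief, "free will exists")]
                + (1 - p_free_will) * payoffs[(belief, "no free will")])

    # For any nonzero probability of free will, "believe" dominates "disbelieve".
    for p in (0.1, 0.5, 0.9):
        print(p, expected_return("believe", p), expected_return("disbelieve", p))

Whatever nonzero probability you assign to free will existing, the "believe"
row has the higher expected return, which is the no-lose / no-win point above.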