Path: utzoo!mnetor!uunet!husc6!bloom-beacon!mit-eddie!uw-beaver!cornell!rochester!pt.cs.cmu.edu!speech2.cs.cmu.edu!yamauchi
From: yamauchi@speech2.cs.cmu.edu (Brian Yamauchi)
Newsgroups: comp.ai
Subject: Re: Free Will & Self-Awareness
Message-ID: <1631@pt.cs.cmu.edu>
Date: 7 May 88 02:46:11 GMT
References: <1029@crete.cs.glasgow.ac.uk> <4134@super.upenn.edu> <1099@crete.cs.glasgow.ac.uk>
Sender: netnews@pt.cs.cmu.edu
Organization: Carnegie-Mellon University, CS/RI
Lines: 66

In article <1099@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
> >    Are you saying that AI research will be stopped because when it ignores
> >free will, it is immoral and people will take action against it?
> Research IS stopped for ethical reasons, especially in Medicine and
> Psychology.  I could envisage pressure on institutions to limit its AI
> work to something which squares with our ideals of humanity.

I can envisage pressure on institutions to limit work on sociology and
psychology to that which is compatible with orthodox Christianity.
That doesn't mean that this is a good idea.

> If the
> US military were not using technology which was way beyond the
> capability of its not-too-bright recruits, then most of the funding
> would dry up anyway.  With the Pentagon's reported concentration on
> more short-term research, they may no longer be able to indulge their
> belief in the possibility of intelligent weaponry.

Weapons are getting smarter all the time.  Maybe soon we won't need the
not-too-bright recruits.....

> >    When has a 'doctrine' (which, by the way, is nothing of the sort with
> >respect to free will) any such relationship to what is possible?
> From this, I can only conclude that your understanding of social
> processes is non-existent.  Behaviour is not classified as deviant
> because it is impossible, but because it is undesirable.

From this, I can only conclude that either you didn't understand the
question or I didn't understand the answer.  What do the labels that society
places on certain actions have to do with whether any action is
theoretically possible?  Anti-nuke activists may make it practically
impossible to build nuclear power plants -- they cannot make it physically
impossible to split atoms.

> The question is, do most people WANT a computational model of human
> behaviour?  In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.

100% public funding?????  Haven't you ever heard of Bell Labs, IBM Watson
Research Center, etc?  I don't know how it is in the U.K., but in the U.S.
the major CS research universities are actively funded by large grants from
corporate sponsors.  I suppose there is a more cooperative atmosphere here --
in fact, many of the universities here pride themselves on their close
interactions with the private research community.

Admittedly, too much of all research is dependent on government funds, but
that's another issue....

>  Everyone is free to study what they want, but public
> funding of a distasteful and dubious activity does not follow from
> this freedom.   If funding were reduced, AI would join fringe areas such as
> astrology, futurology and palmistry.  Public funding and institutional support
> for departments implies a legitimacy to AI which is not deserved.

A modest proposal: how about a cease-fire in the name-calling war?  The
social scientists can stop calling AI researchers crackpots, and the AI
researchers can stop calling social scientists idiots.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________