Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!nrl-cmf!ames!umd5!uvaarpa!mcnc!gatech!purdue!decwrl!hplabs!sdcrdcf!trwrb!aero!venera.isi.edu!smoliar
From: smoliar@vaxa.isi.edu (Stephen Smoliar)
Newsgroups: comp.ai
Subject: Re: Free Will & Self-Awareness
Message-ID: <5474@venera.isi.edu>
Date: 10 May 88 00:56:01 GMT
References: <1029@crete.cs.glasgow.ac.uk> <4134@super.upenn.edu> <3200014@uiucdcsm> <1484@pt.cs.cmu.edu> <5100@pucc.Princeton.EDU> <1099@crete.cs.glasgow.ac.uk>
Sender: news@venera.isi.edu
Reply-To: smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
Organization: USC-Information Sciences Institute
Lines: 90

In article <1099@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>Research IS stopped for ethical reasons, especially in Medicine and
>Psychology.  I could envisage pressure on institutions to limit its AI
>work to something which squares with our ideals of humanity.

Just WHOSE ideals of humanity did you have in mind?  I would not be surprised
at the proposition that humanity, taken as a single collective, would not be
able to agree on any single ideal;  that would just strike me as another
manifestation of human nature . . . a quality for which the study of artificial
intelligence can develop great respect.  Back when I was a callow freshman, I
was taught to identify Socrates with the maxim, "Know thyself."  As an
individual who has always been concerned with matters of the mind, I can
think of no higher ideal to which I might aspire than to know what it is
that allows me to know;  and I regard artificial intelligence as an
excellent scientific approach to the pursuit of this ideal . . . one which
enables me to test flights of my imagination with concrete experimentation.
Perhaps Gilbert Cockton would be kind enough to let us know what it is that
he sees in artificial intelligence research that does not square with his
personal ideals of humanity (whatever they may be);  and I hope he does not
confuse the sort of brute force engineering which goes into such endeavours
as "smart weapons" with scientific research.

>If the
>US military were not using technology which was way beyond the
>capability of its not-too-bright recruits, then most of the funding
>would dry up anyway.  With the Pentagon's reported concentration on
>more short-term research, they may no longer be able to indulge their
>belief in the possibility of intelligent weaponry.
>
Which do you want to debate, ethics or funding?  The two have a long history
of being immiscible.  The attitude which our Department of Defense takes
towards truly basic research is variable.  Right now, times are hard (but
then they don't appear to be prosperous in most of Europe either).  We
happen to have an administration that is more interested in guns than brains.
We have survived such periods before, and I anticipate that we shall survive
this one.  However, a wholesale condemnation of funding on grounds of
ethics doesn't gain very much other than a lot of bad feeling.  Fortunately,
we have benefited from the fat years:  the technology has become affordable
enough that some of us can pursue more abstract
studies of artificial intelligence with cheaper resources than ever before.
Anyone who REALLY doesn't want to take what he feels is "dirty" money can
function with much smaller grants from "cleaner" sources (or even, perhaps,
work out of his garage).

>
>The question is, do most people WANT a computational model of human
>behaviour?

Since when do "most people" determine the agenda of any scientific inquiry?
Did "most people" care whether or not this planet was the center of the
cosmos?  The people who cared the most were navigators, and all they cared
about was the accuracy of their charts.  The people who seemed to care the
most about Darwin were the ones who were most obsessed with the fundamentalist
interpretation of scripture.  This may offend sociological ideals;  but
science IS, by its very nature, an elite profession.  A scientist who lets
"most people" set the course of his inquiry might do well to consider the
law or the church as an alternative profession.

>  Everyone is free to study what they want, but public
>funding of a distasteful and dubious activity does not follow from
>this freedom.

And who is to be the arbiter of taste?  I can imagine an ardent Zionist who
might find the study of German history, literature, or music to be distasteful
to an extreme.  (I can remember when it was impossible to hear Richard Wagner
or Richard Strauss in concert in Israel.)  I can imagine political scientists
who might find the study of hunter-gatherer cultures to be distasteful for
having no impact on their personal view of the world.  I have about as much
respect for such tastes as I have for anyone who would classify artificial
intelligence research as "a distasteful and dubious activity."

>   If funding were reduced, AI would join fringe areas such as
>astrology, futurology and palmistry.  Public funding and institutional support
>for departments implies a legitimacy to AI which is not deserved.

Of course, those "fringe areas" do not get their funding from the government.
They get it through their own private enterprise, by which they convince
those "most people" cited above to part with hard-earned dollars (after the
taxman has taken his cut).  Unfortunately, scientific research doesn't "sell"
quite so well, because it is an arduous process with no quick delivery.
Gilbert Cockton still has not made it clear, on scientific grounds at any
rate, why AI does not deserve this so-called "legitimacy."  In a subsequent
article, he has attempted to fall back on what I like to call the
what-a-piece-of-work-is-man line of argument.  Unfortunately, this
approach is emotional, not scientific.  That he has to draw upon emotions
can only be because he cannot muster scientific arguments to make his
case.  Fortunately, those of us who wish to pursue a scientific research
agenda need not be deterred by such thundering.  We can devote our attention
to the progress we make in our laboratories.