Path: utzoo!utgpu!water!watmath!clyde!bellcore!tness7!killer!osu-cis!tut.cis.ohio-state.edu!mandrill!gatech!purdue!decwrl!ucbvax!BEAVER.CS.WASHINGTON.EDU!uw-nsr!uw-warp!dennis
From: uw-nsr!uw-warp!dennis@BEAVER.CS.WASHINGTON.EDU
Newsgroups: comp.society.futures
Subject: (none)
Message-ID: <8806030846.AA26207@beaver.cs.washington.edu>
Date: 3 Jun 88 08:46:35 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: The Internet
Lines: 125

To: uw-nsr!uw-beaver!info-futures@bu-cs.bu.edu
CC: cpac!todd
Subject: Re: The future of AI
Warning:  This is a fairly long and pedantic reply to a common position.

Doug Thompson (watmath!isishq!doug) believes it unlikely that the
human mind can be modelled scientifically:
 
    First, science and mathematics presume repeatability and
    predictability.  The ancient idea of human free will appears to be
    at odds with both.  Humans appear to be unpredictable.  

Just because an idea (including the idea of human free will) is
ancient doesn't mean that it is correct.  Also, just because
something appears to be unpredictable does not mean that it is.  To a
16th-century peasant, eclipses appeared to be unpredictable.  To
anybody who doesn't know what random number generator created a string
of digits, the next digit is unpredictable.  (Human) brains are
arguably the most complex structures known, so it is reasonable to
think that it will be extremely difficult, but not impossible, to
predict their behavior.  To put things into some kind of historical
perspective, I think we've done fairly well, since it's only been a
few centuries since we (Western civilization) have even known that
blood circulates, less time since we've known about the existence of
cells, and a scant 30 years since we've known about DNA.  We're just
now figuring out other cell mechanisms such as mitochondria and inter-
and intra-cell communication.
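
[As an aside on that random-number point, here is a small
illustrative C program.  The generator is just a textbook linear
congruential generator with standard constants; nothing here comes
from Doug's article.  Every digit it prints is completely determined
by the seed, yet to someone who doesn't know the generator, the
sequence looks unpredictable.]

    /* lcgdigits.c -- a textbook linear congruential generator.      */
    /* Every digit printed is completely determined by the seed,     */
    /* but without knowing the generator the output looks random.    */
    #include <stdio.h>

    static unsigned long state = 12345UL;        /* the "hidden" seed */

    static int next_digit(void)
    {
        /* constants of a common LCG (as in Numerical Recipes) */
        state = state * 1664525UL + 1013904223UL;
        return (int)((state >> 16) % 10);        /* one decimal digit */
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 20; i++)
            printf("%d", next_digit());          /* deterministic,    */
        printf("\n");                            /* yet looks random  */
        return 0;
    }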

    AI wants to build machines that can perform tasks or make decisions as 
    well as humans. I think though, that human reason and decision making is 
    not mechanical, it is a-mathematical, a-scientific and a-rational.

Although AI researchers do want to build such machines, I think that
may not be the most difficult part.  There is another (scientific, not
engineering) goal of AI: to create models of the mind, preferably at
the symbolic rather than the physical level.

If Doug is right, and thinking is non-mechanical and wholly
unpredictable, then AI is doomed to fail.  However, I disagree with
that idea.  We will be able to find out for sure some time within the
next hundred years, give or take 90, via the following process.

One possible path to the AI goal Doug mentioned (building machines that
think like humans) will be to build functionally identical copies of a
brain by replacing every individual neuron with identically
functioning nanomachines.  These nanomachines will be able to produce
a description of their configuration, which we can then use to stamp
out as many copies of that brain as we wish.  In addition, the copies
could be run at much higher speeds than neurons, producing machines
that function just like brains, but much, much faster.  [For a much
better description of this (and other even wilder) possibilities, see
K. Eric Drexler's "Engines of Creation."  I really can't do the idea
justice here.  After you've read that, you might want to subscribe to
the news group sci.nanotech.]

Of course, if it turns out that thinking relies on something other
than the physical processes in a brain (most likely electro-chemical
reactions of neurons), then this approach will not work.  I am
optimistic about this approach, though, because I know of no
scientific evidence that thinking involves anything other than
physical processes, and as I mention later, there is scientific
evidence that thinking does involve physical processes.

One unsatisfying aspect of this approach is that it might turn out to
be easier to manufacture functionally identical copies of minds via
nanotechnology than to model the mind on a symbolic level.  This is
analogous to the ability of a photocopy machine to produce copies of a
page without reading it or understanding it on any symbolic level.  We
certainly produced photocopy machines before we produced machines that
can even scan text and convert it to ASCII symbols.

    Can you give a machine values, passion, emotions, and sympathies??
    Maybe, but whose values, whose passions? Yours? Mine? Adolph
    Hitler's?  All are "human".

I think that yes, machines such as I just described will come with
values, passion, emotions, and sympathies, all built in.  They could
be yours or mine, or any human's, but not Adolph Hitler's (unless he's
hiding out in Argentina or his brain has otherwise been preserved in
an undamaged state).  Unfortunately, it might be very difficult to
model these things at a level that satisfies us (symbolic rather than
physical) and that we can understand.  Maybe the super-minds we create
will be able to think up (and understand) a symbolic model.

    My hunch is that human thought is really dependent on dimensions
    of the universe which science (as we currently understand it) is
    not yet capable of fathoming.

I have yet to see any scientific (that is, repeatable) evidence that
this is the case.  I know of some evidence that this is not the case,
particularly the "mappability" of the brain.  When electrically
stimulated, certain regions of the brain reliably produce the same
sensation, trigger the same thoughts, or recall the same memories.

    I think that science cannot begin to explain the forces which move
    a person to believe or have faith. We can do some statistics on
    some of them, but I think we shall never be able to build a
    computer like Martin Luther, or Jesus Christ, or Moses.

On the contrary, science can begin to explain at least some of these
forces.  Someone could test my hypothesis (if they haven't already)
that exposure to people who "believe or have faith" causes
additional cases of people "believing."  I suspect that living in Utah
is strongly correlated with "believing" in the Church of Latter Day
Saints.  That seems to be a completely separate issue from whether or
not we will be able to build computers like particular people.
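
[As an aside, here is what "doing some statistics" on such a
hypothesis might look like: a rough sketch in C of a 2x2 chi-square
test of the Utah/belief association.  The counts are invented purely
for illustration; they are not real survey data.]

    /* faithtest.c -- a 2x2 chi-square test of the (hypothetical)    */
    /* Utah-residence vs. belief association.  All counts below are  */
    /* made up purely for illustration.                               */
    #include <stdio.h>

    int main(void)
    {
        /* rows: Utah resident yes/no;  cols: "believer" yes/no */
        double n[2][2] = { { 700.0, 300.0 },
                           { 200.0, 800.0 } };
        double row[2], col[2], total = 0.0, chi2 = 0.0;
        int i, j;

        for (i = 0; i < 2; i++) {
            row[i] = n[i][0] + n[i][1];
            total += row[i];
        }
        for (j = 0; j < 2; j++)
            col[j] = n[0][j] + n[1][j];

        for (i = 0; i < 2; i++)
            for (j = 0; j < 2; j++) {
                double expected = row[i] * col[j] / total;
                double d = n[i][j] - expected;
                chi2 += d * d / expected;        /* sum of (O-E)^2/E */
            }

        /* with 1 degree of freedom, a chi-square much larger than   */
        /* 3.84 means the association is very unlikely to be chance  */
        printf("chi-square = %.1f\n", chi2);
        return 0;
    }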

It is true that we will never be able to build copies of those
particular individuals since we have lost the patterns of neural
connections that were their brains.  However, I think we will be able
to build machines "like" them.

    . . . I suspect there is a part of the human being which is
    strongly connected to a super-natural reality which science has
    yet to get a grip on.

I suspect otherwise.  There are certainly even very basic physical and
chemical things about humans that science does not yet describe, but I
don't think we should despair of the possibility of explaining them, at
least until we can prove otherwise.

Dennis.
-------
arpa:   uw-nsr!uw-warp!dennis@beaver.cs.washington.edu
usenet: {ihnp4|decvax|...}uw-beaver!uw-nsr!uw-warp!dennis