Xref: utzoo comp.ai:2729 talk.philosophy.misc:1642
Path: utzoo!utgpu!watmath!clyde!att!rutgers!mailrus!nrl-cmf!ames!sgi!arisia!quintus!ok
From: ok@quintus.uucp (Richard A. O'Keefe)
Newsgroups: comp.ai,talk.philosophy.misc
Subject: Re: Artificial Intelligence and Brain Cancer
Message-ID: <763@quintus.UUCP>
Date: 29 Nov 88 11:05:11 GMT
References: <506@soleil.UUCP>
Sender: news@quintus.UUCP
Reply-To: ok@quintus.UUCP (Richard A. O'Keefe)
Organization: Quintus Computer Systems, Inc.
Lines: 86

In article <506@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Assume the universe is deterministic
I cannot assume that the universe is deterministic and also accept
Quantum Mechanics.

>consider the following paragraph from "Beyond Einstein" by Kaku/Trainer 1987:

>"As an example, think of a cancer researcher using molecular biology to
> probe the interior of cell nuclei.  If a physicist tells him, quite
> correctly, that the fundamental laws governing the atoms in a DNA molecule
> are completely understood, he will find this information true but
> useless in the quest to conquer cancer.  The cure for cancer involves
> studying the laws of cell biology, which involve trillions upon trillions
> of atoms, too large a problem for any modern computer to solve.  Quantum
> mechanics serves only to illuminate the larger rules governing molecular
> chemistry, but it would take a computer too long to solve the Schrodinger
> equation to make any useful statements about DNA molecules and cancer."

Well, that there is such a thing as THE cure for cancer is almost certainly
false, and even if there were a unique cure, presumably studying the laws
of cell biology would be involved in *finding* the cure, rather than in the
cure itself.  As a strict matter of historical fact, Quantum Mechanics has
illuminated some of the problems rather directly (NMR is a quantum-mechanical
phenomenon; it was the study of QM which suggested that biological molecules
might be vulnerable to microwave radiation, and which showed how to predict
which frequencies might be most relevant).  Also, judging only from this
paragraph, Kaku/Trainer are guilty of reductionism themselves.  (I'll leave
it to other readers to explain how.)

>Using this as an analogy, and assuming Kaku/Trainer were not talking
>about brain cancer, how big a computer is big enough for intelligence
>to evolve?

I really don't understand how that paragraph serves as an analogy for
intelligence.  Finding a cure for cancer may be a very complex process
involving many levels of explanation, but _causing_ it is something a
tiny little virus can manage easily (one containing millions, rather
than trillions, of atoms).  A flatworm can learn; it is not stretching
language too far to say that it is "intelligent" to _some_ degree.  We
can already model that much.  With Connection Machines and the like, we
might be able to do reasonable simulations of insects.  How much
intelligence do you want?  Another point:  the size of computer needed
to support a single ``intelligent'' program and the size of computer
needed to support an ``evolving'' population in which intelligence emerges
are two very different sizes.

>Can someone give me references to any articles that make "intelligent" guesses
>about how much computing power is necessary for creating artificial
>intelligence?  How many tera-bytes of memory?  How many MIPS?  Knowing the
>recent rates of technological development, how many years before we have
>machines powerful enough?

There was an article in CACM this year which included estimates of how
much memory capacity &c humans have.  The answer to the last question
is "well within a century".

>Am I wasting my time on weekends trying to create artificial intelligence
>on my home computer?

Yes, unless you enjoy that sort of thing.

>In a previous article someone made reference to what I meant by "know"
>in my statement "know how to solve problems".  If you don't KNOW what
>KNOW "means", then you don't KNOW anything.  I "mean", we have to start
>somewhere, or we can't have a science.  Without duality, science has no
>meaning.

Did the slave boy "KNOW" how to solve that geometry problem before
Socrates asked him?  If I possess all the information required to solve
a puzzle, am able to perform each of the steps in the solution, but would
not live long enough to perform all of them, do I "KNOW" how to solve it?
If I BELIEVE that I have a method which will solve the puzzle in my
lifetime, but my reason for believing it is wrong, although my method is
in fact correct, do I "KNOW" how to solve it?  If I have no idea how to
solve such puzzles myself, but have a friend who always helps me solve
them, so that when presented with such a puzzle I never fail to obtain
a solution, do I "KNOW" how to solve it?  How about if the "friend" is
a computer?  How about if the "computer" is a set of rules in a book which
I cannot remember all at once, but can follow?  According to one
dictionary definition, it would be sufficient if I could recognise a
solution when I saw one.

If we "KNEW" what "KNOW" "means", we wouldn't need philosophy.

If I understood what "without duality, science has no meaning", I might
agree with it.  For now, I can only wonder at it.