Xref: utzoo comp.ai:2720 talk.philosophy.misc:1636
Path: utzoo!attcan!uunet!husc6!ukma!rutgers!soleil!peru
From: peru@soleil.UUCP (Dave Peru)
Newsgroups: comp.ai,talk.philosophy.misc
Subject: Artificial Intelligence and Brain Cancer
Message-ID: <506@soleil.UUCP>
Date: 28 Nov 88 17:08:23 GMT
Organization: GE Solid State, Somerville, NJ
Lines: 62

I know AI research is building smart tools for smart people, but for a
moment let's get back to the idea of TRUE Artificial Intelligence.

Whatever your definition of intelligence, I mean the idea of building a
"Terminator" or a "Commander Data".

Assume the universe is deterministic, and consider the following
paragraph from "Beyond Einstein" by Kaku and Trainer (1987):

"As an example, think of a cancer researcher using molecular biology to
 probe the interior of cell nuclei.  If a physicist tells him, quite
 correctly, that the fundamental laws governing the atoms in a DNA molecule
 are completely understood, he will find this information true but
 useless in the quest to conquer cancer.  The cure for cancer involves
 studying the laws of cell biology, which involve trillions upon trillions
 of atoms, too large a problem for any modern computer to solve.  Quantum
 mechanics serves only to illuminate the larger rules governing molecular
 chemistry, but it would take a computer too long to solve the Schrodinger
 equation to make any useful statements about DNA molecules and cancer."

Using this as an analogy (and assuming Kaku and Trainer were not talking
about brain cancer), how big a computer is big enough for intelligence
to evolve?

Can someone give me references to any articles that make "intelligent"
guesses about how much computing power is necessary for creating
artificial intelligence?  How many terabytes of memory?  How many MIPS?
Given the recent rates of technological development, how many years
before we have machines powerful enough?

Am I wasting my time on weekends trying to create artificial intelligence
on my home computer?  Should I buy another 2 megabytes of memory?  :-)

In a previous article someone questioned what I meant by "know" in my
statement "know how to solve problems".  If you don't KNOW what KNOW
"means", then you don't KNOW anything.  I "mean", we have to start
somewhere, or we can't have a science.  Without duality, science has no
meaning.

Do you remember the scene from the movie "The Terminator" where Arnold
uses a razor blade to cut out his damaged eye?  Pretty good hand-eye
coordination for a machine.

How many of you out there were rooting for the Terminator?

I love the effect the idea of "Artificial Intelligence" has on society.
With an army of "Commander Data" androids, why would any corporation keep
any human workers at all?  Of course, after a few years, a bright,
hard-working, bottom-line, lean-and-mean android will become CEO.
Irrational, silly human beings are so inefficient; I like working seven
days a week, 24 hours a day.

Does anyone have references to any studies on myths and misconceptions
the population may have about AI research?  I'm sure I'm not the only one
who watches sci-fi movies.  Maybe teenagers think there's no point in
studying since androids are just around the corner; maybe they're right!
With all the money banks pump into AI research, I thought we would have
an "intelligent" stockbroker last year.

Please post responses to "talk.philosophy.misc" or send me email.

P.S. In reference to "Assume the universe is deterministic", I think the
     universe is analog and cannot be described digitally.