Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!bloom-beacon!uceng.UUCP!dmocsny
From: dmocsny@uceng.UUCP (daniel mocsny)
Newsgroups: comp.ai.digest
Subject: Re: Grand Challenges
Message-ID: <19880927032326.1.NICK@INTERLAKEN.LCS.MIT.EDU>
Date: 27 Sep 88 03:23:00 GMT
Sender: daemon@bloom-beacon.MIT.EDU
Organization: The Internet
Lines: 86
Approved: ailist@ai.ai.mit.edu

---- Forwarded Message Follows ----
Date: 23 Sep 88 13:39:57 GMT
From: ndcheg!uceng!dmocsny@iuvax.cs.indiana.edu  (daniel mocsny)
Organization: Univ. of Cincinnati, College of Engg.
Subject: Re: Grand Challenges
Message-Id: <266@uceng.UC.EDU>
References: <123@feedme.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
[ goals for computer science ]

> 2) Build a machine which can read a chapter of a physics text and
> then answer the questions at the end.  At least this one can be
> done by some humans!
> 
> While I'm sure some interesting results would come from attempting
> such projects, these sorts of things could probably be done sooner
> by tossing out ethical considerations and cloning humanoids.

A machine that could digest a physics text and then answer questions
about the material would be of astronomical value. Sure, humanoids can
do this after a fashion, but they have at least three drawbacks: 

(1) Some are much better than others, and the really good ones are
rare and thus expensive,
(2) None are immortal or particularly speedy (which limits the amount of
useful knowledge you can pack into one individual),
(3) No matter how much the previous humanoids learn, the next one
still has to start from scratch.

We spend billions of dollars piling up research results. The result,
which we call ``human knowledge,'' we inscribe on paper sheets and
stack in libraries. ``Human knowledge'' is hardly monolithic. Instead
we partition it arbitrarily and assign high-priced specialists to each
piece. As a result, ``human knowledge'' is hardly available in any
sort of general, meaningful sense. To find all the previous work
relevant to a new problem is often quite an arduous task, especially
when it spans several disciplines (as it does with increasing 
frequency). I submit that our failure to provide ourselves with
transparent, simple access to human knowledge stands as one of the
leading impediments to human progress. We can't provide such access
with a system that dates back to the days of square-rigged ships.

In my own field (chemical process design) we had a problem (synthesizing
heat recovery networks in process plants) that occupied scores of
researchers from 1970 to 1985. Lots of people tried all sorts of approaches
and eventually (after who knows how many grants, etc.) someone spotted
important analogies with problems from Operations Research work of the
1950s. We did have to develop some additional theory, but we could
have saved a decade or so with a machine that ``knew'' the literature.

Another example of an industrially significant problem in my field is
this: given a target molecule and a list of available precursors,
along with whatever data you can scrape together on possible chemical
reactions, find the best sequence of reactions to yield the target
from the precursors. Chemists call this the design of chemical syntheses,
and chemical engineers call it the reaction path synthesis problem. Since
no general method exists to accurately predict the success of a chemical
reaction, one must use experimental data. And the chemical literature
contains references to literally millions of compounds and reactions, with
more appearing every day. Researchers have constructed successful programs
to solve these types of problems, but they suffer from a big drawback: no
such program embodies enough knowledge of chemistry to be really useful.
The programs have elaborate methods to represent reaction data, but their
knowledge bases must be hand-coded. Due to the chaos in the literature, no
general method of compiling reaction data automatically has worked yet. Here
the literature contains information of enormous potential value, yet it
remains effectively useless.
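To make the shape of the problem concrete, here is a minimal sketch (in
Python). The data layout, the names, and the score field are invented for
illustration only; a real synthesis-design program embodies vastly more
chemistry and typically works backward from the target molecule. Given
machine-readable reaction records, finding a route from precursors to a
target reduces to a search over the set of compounds you can make.

# Toy sketch of the reaction-path search. The Reaction layout and the
# example data below are hypothetical, not from any real program.
from collections import namedtuple

Reaction = namedtuple("Reaction", ["reactants", "product", "score"])

def find_route(target, precursors, reactions):
    """Forward chaining: grow the set of makeable compounds until the
    target appears, remembering which reaction produced each new
    compound, then walk back to list the reactions actually used."""
    have = set(precursors)
    made_by = {}                    # compound -> reaction that made it
    changed = True
    while changed and target not in have:
        changed = False
        for rxn in reactions:
            if rxn.product not in have and set(rxn.reactants) <= have:
                have.add(rxn.product)
                made_by[rxn.product] = rxn
                changed = True
    if target not in have:
        return None                 # no route from these precursors
    route, todo, seen = [], [target], set()
    while todo:
        c = todo.pop()
        if c in made_by and c not in seen:
            seen.add(c)
            route.append(made_by[c])
            todo.extend(made_by[c].reactants)
    return list(reversed(route))

if __name__ == "__main__":
    rxns = [Reaction(("A", "B"), "C", 0.8),   # hypothetical reactions
            Reaction(("C", "D"), "E", 0.6)]
    print(find_route("E", {"A", "B", "D"}, rxns))

The search itself is the easy part. The hard part is filling the reaction
list with trustworthy data, which is exactly the literature problem
described above.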

If someone handed me a machine that could digest all (or at least
large subsets) of the technical literature and then answer any
question that was answerable from the literature, I could become a
wealthy man in short order. I doubt that many of us can imagine how
valuable such a device would be. I hope to live to see such a thing.

Dan Mocsny