Path: utzoo!attcan!uunet!husc6!purdue!gatech!hubcap!gerald
From: gerald@umb.umb.edu (Gerald Ostheimer)
Newsgroups: comp.parallel
Subject: Re: parallel numerical algorithms
Summary: check out the MIT dataflow project
Message-ID: <1776@hubcap.UUCP>
Date: 31 May 88 17:26:48 GMT
Sender: fpst@hubcap.UUCP
Lines: 76
Approved: parallel@hubcap.clemson.edu

In article <1772@hubcap.UUCP> noao!mcdsun!asuvax!asuvax!nelan@HANDIES.UCAR.EDU
(George Nelan) writes:

>Perhaps, just perhaps, maybe someday, somewhere, someplace, and sometime,
>someone will invent something like this:
>
>An infinite, w.r.t. the universe of discourse defined by the data dependencies
>of a particular program, MIMD ultra-fine grained side-effect free PARALLEL
>machine for IMPLICITLY parallel programs.  I guess no side-effects => purely
>functional programs, huh?  Also, it looks like deadlock & synchronization
>(consistency) constraints => normal order (lazy) evaluation must be the
>computation model of choice [I have some references why this is so --
>sufficient interest => I'll post & discuss]; for computational power,
>be sure to allow for higher-order functions too.

Someday may be closer than you think.
You should take a look at the work on the tagged-token dataflow machine of
MIT's Computation Structures Group under Arvind. (For reasons unknown to me,
their work has not yet received any attention in this newsgroup.)
Their combined language/architecture approach meets the following of your
criteria:
 o  implicit parallelism defined purely by data dependencies
 o  MIMD: full-powered non-von CPU's; instruction scheduling is nifty: no
    sequential program storage, but rather the result of one instruction (a
    token) 'enables' another instruction, possibly across CPU's; enabled
    instructions are maintained (in haphazard order) in a pipeline inside
    each CPU
 o  side-effect free: to a large degree -- there is actually a limited form of
    side-effects which, however, does not affect determinacy: their language
    Id (pronounced 'Idd') offers something called I-structures. I-structures
    are, intuitively, arrays whose fields can be initialized exactly once, but
    possibly by side-effect. These deal quite nicely with some serious problems
    of pure functional languages (no exorbitant storage requirements, no
    extensive copying), and they also make translating those 'dusty' Fortran
    programs easier.
 o  normal order evaluation
 o  higher order functions
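
To make the I-structure idea concrete, here is a toy sketch of how I read
their semantics (my own illustration, not the actual Id implementation):
each slot is written at most once, a read of an empty slot is deferred
until the write arrives, and a second write is an error -- which is why
determinacy is preserved despite the side-effect.

```python
class IStructure:
    EMPTY = object()

    def __init__(self, n):
        self.slots = [self.EMPTY] * n
        self.deferred = [[] for _ in range(n)]   # readers suspended per slot

    def write(self, i, value):
        if self.slots[i] is not self.EMPTY:
            raise RuntimeError("I-structure slot written twice")
        self.slots[i] = value
        for resume in self.deferred[i]:          # wake any suspended readers
            resume(value)
        self.deferred[i].clear()

    def read(self, i, consumer):
        if self.slots[i] is self.EMPTY:
            self.deferred[i].append(consumer)    # suspend until the write
        else:
            consumer(self.slots[i])

got = []
a = IStructure(2)
a.read(0, got.append)     # read before write: reader is suspended
a.write(0, 42)            # the write wakes the suspended reader
a.read(0, got.append)     # read after write: satisfied immediately
print(got)                # [42, 42]
```

Note that producer and consumer can run in either order without changing
the result, which is the point: the value a reader sees is fixed by the
single write, not by scheduling.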

 One of the strengths of this architecture is that it can deal very gracefully
 with network latency and memory latency. As long as the processor pipelines
 are full, all CPU's keep working concurrently. (This is probably why 
 network topology seems not to be an overly important topic in their
 literature.)
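
The scheduling discipline sketched above can be illustrated with a toy
simulator (again my own sketch, not the MIT design): an instruction becomes
enabled once tokens have arrived on all of its input ports, and enabled
instructions sit in a queue -- the "pipeline" -- in no particular order.
Any independent instructions can therefore fire concurrently or in any
interleaving without affecting the result.

```python
from collections import deque

class Instr:
    def __init__(self, name, op, n_inputs, dests):
        self.name, self.op, self.n_inputs = name, op, n_inputs
        self.dests = dests      # downstream (instruction, input-port) pairs
        self.waiting = {}       # port -> token value received so far

def run(instrs, initial_tokens):
    enabled = deque()           # stand-in for the per-CPU pipeline

    def deliver(instr, port, value):
        instr.waiting[port] = value
        if len(instr.waiting) == instr.n_inputs:
            enabled.append(instr)        # all operands present: fire-able

    for instr, port, value in initial_tokens:
        deliver(instr, port, value)
    results = {}
    while enabled:
        i = enabled.popleft()
        args = [i.waiting[p] for p in sorted(i.waiting)]
        out = i.op(*args)                # execute; the result is a token
        results[i.name] = out
        for dest, port in i.dests:       # token 'enables' its consumers
            deliver(dest, port, out)
    return results

# (a+b) * (c+d): the two additions are independent and may fire in any order
mul  = Instr("mul",  lambda x, y: x * y, 2, [])
add1 = Instr("add1", lambda x, y: x + y, 2, [(mul, 0)])
add2 = Instr("add2", lambda x, y: x + y, 2, [(mul, 1)])
print(run([add1, add2, mul],
          [(add1, 0, 1), (add1, 1, 2), (add2, 0, 3), (add2, 1, 4)]))
# mul's result is (1+2)*(3+4) = 21
```

There is no program counter anywhere in this sketch; the data dependencies
alone determine what runs next, which is what lets a real machine overlap
useful work with memory and network latency.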

 A (possibly) surprising problem that turns up is that there can be too
 much parallelism in a program, which can overflow the pipelines and choke the
 machine.

 There are of course more problems.
 I for one never quite understood how the program is distributed over the
 CPU's. This must happen dynamically (when calling functions or entering
 loops, for example) if parallelism is to be exploited.

 Unfortunately I can't give references to any widely available, comprehensive
 publications on their work, probably because so far their work has been
 confined to simulations on a network of TI Explorers, and the development of
 a 1000-processor machine is just beginning.

 Of the papers I have available, two seem to give a reasonable overview:

 "Future Scientific Programming on Parallel Machines" by Arvind and Ekanadham
 (the latter of IBM, Yorktown Heights), Computation Structures Group Memo 272,
 and also to appear (appeared?) in Proc. of the Int. Conf. on Supercomputing,
 Athens, 1987.

 "Executing a Program on the MIT Tagged-Token Dataflow Architecture" by Arvind
 and Nikhil, CSG Memo 271, and also to appear (appeared?) Proc. Parallel
 Architectures and Languages, Europe, Eindhoven 1987

 Not being a member of their team, I take no responsibility for the accuracy
 of the above and gladly welcome any corrections/clarifications.

-- 
Gerald				"When I use a word, it means
		 just what I choose it to mean -
				 neither more nor less"
				 -Humpty Dumpty, Through the Looking Glass