Path: utzoo!attcan2!uunet!husc6!bbn!gatech!hubcap!fpst
From: fpst@hubcap.UUCP (Steve Stevenson)
Newsgroups: comp.parallel
Subject: Re: parallel numerical algorithms
Message-ID: <1794@hubcap.UUCP>
Date: 1 Jun 88 20:26:31 GMT
Organization: Clemson University, Clemson, SC
Lines: 50
Approved: parallel@hubcap.clemson.edu

In article <1772@hubcap.UUCP> George Nelan writes:
....
>Perhaps, just perhaps, maybe someday, somewhere, someplace, and sometime,
>someone will invent something like this:
>
>An infinite, w.r.t. the universe of discourse defined by the data dependencies
>of a particular program, MIMD ultra-fine grained side-effect free PARALLEL
>machine for IMPLICITLY parallel programs.  I guess no side-effects => purely
>functional programs, huh?  Also, it looks like deadlock & synchronization
>(consistency) constraints => normal order (lazy) evaluation must be the
>computation model of choice [I have some references why this is so --
>sufficient interest => I'll post & discuss]; for computational power,
>be sure to allow for higher-order functions too.

What you describe sounds pretty much like a dataflow machine to me.
Experimental machines of this kind have been built (at Manchester, UK, for
instance), but they don't seem to have been a success. For some reason there
is always a lot of talk about them and their principles of operation before
they are built, but once the hardware exists you never get to hear anything
about the resulting performance. Maybe the statistics are too embarrassing,
who knows?  Another indication of the practical problems with dataflow
machines is that despite the fact that the concept has been around for a
long time (since the seventies), there are still no commercial manufacturers
out there who have tried to build and sell one for profit.

The main problem with dataflow machines is that they use an extremely
general scheme to handle fine-grain parallelism, which causes gargantuan
overhead. Much of the fine-grain parallelism in a program typically has a
data-independent structure, which means that the dependencies can be
analyzed *at compile time* (as opposed to dataflow mechanisms, which
discover them at run time) and the instructions can be scheduled in advance
in a way that executes as efficiently as possible on the hardware at hand.
Unless dataflow architects find ways of incorporating this, I don't think
dataflow architectures will ever be cost-effective.

IEEE Computer, February 1982, is a special issue on dataflow. It is a good
introduction for readers previously unfamiliar with the field. It is
especially valuable since it contains not only articles by authors favorable
to dataflow but also an article that argues that dataflow is no good:
Gajski, Padua, Kuck, "A Second Opinion on Data Flow Machines and Languages".
I think their critique of dataflow architectures still holds water (although
I don't share their opinion on data flow *languages*, but that's another
issue).

Bjorn Lisper
-- 
Steve Stevenson                            fpst@hubcap.clemson.edu
(aka D. E. Stevenson),                     fpst@clemson.csnet
Department of Computer Science,            comp.parallel
Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell