Path: utzoo!attcan!uunet!husc6!bloom-beacon!gatech!hubcap!Bjorn
From: lisper-bjorn@YALE.ARPA (Bjorn Lisper)
Newsgroups: comp.parallel
Subject: Re: parallel numerical algorithms
Message-ID: <1793@hubcap.UUCP>
Date: 1 Jun 88 18:36:33 GMT
Sender: fpst@hubcap.UUCP
Lines: 39
Approved: parallel@hubcap.clemson.edu

In article <1784@hubcap.UUCP> Larry Yaffe writes:
>(Gerald Ostheimer writes about the MIT tagged-token dataflow machine and
>the language Id...)

>    I'd like to hear more about how this language avoids excessive copying
>& wasteful memory usage.  How do standard tasks like matrix multiplication, 
>binary tree insertions, hash table management, etc. work?

Id is a definitional language. Thus it doesn't specify memory management
explicitly: its variables stand for values rather than memory locations,
and its statements stand for events, defining new values from previously
defined ones. Unlike in an imperative language, the textual order of the
statements does not imply a strict execution order; the only order
required is the partial one defined by the data dependencies between
statements. It is thus very much up to the compiler to find a schedule
for the events such that memory is not unduly wasted. (Remember that
memory is merely a means of communicating values between events.)
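
To make this concrete -- since the question mentioned matrix
multiplication -- here is a minimal sketch in Haskell (not Id, but
definitional in the same sense; the function name is mine):

    import Data.List (transpose)

    -- Each element of the result is *defined* from a row of x and a
    -- column of y; nothing is said about where the elements are
    -- stored or in what order they are computed.
    matMul :: [[Double]] -> [[Double]] -> [[Double]]
    matMul x y = [ [ sum (zipWith (*) row col) | col <- transpose y ]
                 | row <- x ]

The data dependencies alone constrain the schedule: every element of
the result could in principle be computed in parallel.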

Definitional languages have been proposed as programming languages for
dataflow machines, but they are perfectly general. Nothing prevents them
from being used on other architectures as well (including the
traditional von Neumann one).

An imperative language lets (indeed, demands that!) the programmer
specify how memory management is done, by stating explicitly which
variable (= memory location) is to store the value resulting from each
assignment. So for a given architecture the programmer can be smart and
write the assignment statements in a way that uses memory efficiently.
For a definitional language the compiler has to be as smart as the
programmer to do an equally good job. My guess is that these languages
will not be widely successful until that is the case, and I'm not
convinced that this point has been reached yet. But when the compiler
techniques for them have been developed this far (and I do think it can
be done), they will offer additional benefits such as portability
between different architectures (both serial and parallel) and ease of
program verification.
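
As a small illustration of the gap (again a Haskell sketch, not Id):
updating one element of an array is, definitionally, the construction
of a new array, and a naive implementation copies all n elements.

    import Data.Array

    -- "update a i v" denotes a new array differing from a in one slot.
    update :: Array Int Double -> Int -> Double -> Array Int Double
    update a i v = a // [(i, v)]

An imperative programmer would simply write a[i] := v, reusing the
memory; a definitional compiler may generate exactly the same code, but
only after proving that the old value of a is dead at that point.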

So the answer to your question is: it's up to the compiler.

Bjorn Lisper