Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!rutgers!sri-spam!ames!ucbcad!ucbvax!ucbarpa.Berkeley.EDU!baden
From: baden@ucbarpa.Berkeley.EDU (Scott B. Baden)
Newsgroups: comp.arch
Subject: Re: parallel computing
Message-ID: <19684@ucbvax.BERKELEY.EDU>
Date: Mon, 13-Jul-87 02:28:16 EDT
Article-I.D.: ucbvax.19684
Posted: Mon Jul 13 02:28:16 1987
Date-Received: Tue, 14-Jul-87 00:47:00 EDT
References: <8270@amdahl.amdahl.com> 
Sender: usenet@ucbvax.BERKELEY.EDU
Reply-To: baden@lbl-csam.arpa.UUCP (Scott Baden [CSR/Math])
Distribution: world
Organization: University of California, Berkeley
Lines: 122


I also agree with David DiNucci... help *is* coming.
I just filed my dissertation this spring;
the subject was a programming discipline that can help
the programmer write somewhat portable software.
My approach is to provide a virtual machine-- an abstract
local-memory multiprocessor-- and some simple VM operations
whose semantics are insensitive both to the application
and to various aspects of the underlying system running
the VM (e.g. whether or not memory is shared).
The approach isn't universal, but I believe
that it can make multiprocessors more attractive than
they have been in the past for many interesting problems
in mathematical physics and engineering.
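To give the flavor of what I mean, here is a rough sketch of
such an interface in C.  The names and signatures below are
invented for this posting (they are *not* my VM's actual
operations), but the semantics are the point: nothing in them
tells the application whether two virtual processors share
physical memory.

	/* A virtual processor is just an integer id. */
	typedef int vm_proc;

	vm_proc vm_self(void);       /* id of the calling processor  */
	int     vm_nprocs(void);     /* number of virtual processors */

	/*
	 * Blocking, buffered point-to-point transfer.  On a
	 * shared-memory machine this can be a copy through a
	 * common buffer; on a hypercube it maps onto the native
	 * send/receive.  The program sees the same semantics
	 * either way.  (Sends are assumed buffered, so a pair of
	 * processors may each send before receiving.)
	 */
	void vm_send(vm_proc dest, int tag, const void *buf, int nbytes);
	void vm_recv(vm_proc src,  int tag, void *buf, int nbytes);

	void vm_barrier(void);       /* global synchronization       */

	/* Typical use: trade boundary strips with two neighbors. */
	void exchange(vm_proc left, vm_proc right,
	              double *edge, double *halo, int n)
	{
	    vm_send(right, 0, edge, n * (int)sizeof(double));
	    vm_recv(left,  0, halo, n * (int)sizeof(double));
	}

A code written against operations like these never mentions
the interconnect, which is what lets it move between the two
machines described below.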
I tried my approach on a large (= "real world")
fluids problem, running on two very different architectures--
the Cray X-MP/416 and an Intel hypercube.
The codes were not identical, but differed primarily in 
mundane ways-- (1) the Cray supports vector-mode arithmetic
so inner loops had to be re-worked in order to vectorize;
(2) the Cray had more memory than I could use, but I didn't
really have enough memory on the cube (with 32 nodes).
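To show the sort of rework I mean by (1), here is an
illustrative fragment (in C for compactness; it is made up for
this posting, not lifted from my code).  A scalar running sum
is a recurrence that keeps the compiler from vectorizing the
loop it sits in; splitting the loop cures it:

	/*
	 * Original form: the running sum s is a scalar recurrence,
	 * so the whole loop runs in scalar mode:
	 *
	 *     for (i = 0; i < n; i++) {
	 *         y[i] = a * x[i] + b;
	 *         s = s + y[i];
	 *     }
	 *
	 * Reworked form: split ("fission") the loop.  The first
	 * loop is purely element-wise and vectorizes; the second
	 * is a sum reduction the compiler can also run in vector
	 * mode.
	 */
	double scaled_sum(int n, double a, double b,
	                  const double *x, double *y)
	{
	    int i;
	    double s = 0.0;

	    for (i = 0; i < n; i++)      /* vector loop */
	        y[i] = a * x[i] + b;

	    for (i = 0; i < n; i++)      /* reduction   */
	        s = s + y[i];

	    return s;
	}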


I agree with Henry Spencer's remarks that parallelism
suffers from an image problem, but I can't see that
architecture will provide all the answers.  My own
results suggest that the programmer is better off if he can
remain as aloof as possible from the way that the processors
are strapped together, whether through shared memory, message
passing or whatever, and that he need not necessarily
pay a heavy performance penalty for keeping his distance
from the innards of the machine.  In short, software is needed
to insulate the programmer from novel developments in
parallel architecture.  When a programmer wants
to use a new machine he shouldn't have to rewrite his code, or
he will resist the innovation.  The field is still too young
for anyone to commit to one kind of machine.
Perhaps someday architectures will become standardized,
but I think it will be a while before that happens
(and I'm not so sure that it ever will).


As an aside: I've found that many of the problems I encountered
in writing multiprocessor software were mundane:
lack of a good debugger, system bugs, lack of application
libraries, and so on.  In short, these machines haven't been around
for long, and are harder to use than the more mature
uniprocessor systems.  Many of the problems have nothing
to do with the introduction of parallelism but rather with
the newness of the machines themselves.

Comments?

Scott Baden	baden@lbl-csam.arpa   ...!ucbvax!baden
					(will be forwarded to lbl)