Path: utzoo!dciem!nrcaer!scs!spl1!lll-winken!lll-tis!ames!necntc!ima!think!ephraim
From: ephraim@think.COM (ephraim vishniac)
Newsgroups: comp.lang.misc
Subject: Re: What makes a language "easy" to program in?
Message-ID: <21532@think.UUCP>
Date: 2 Jun 88 16:10:58 GMT
Article-I.D.: think.21532
References:  <10216@sol.ARPA>
Sender: usenet@think.UUCP
Reply-To: ephraim@vidar.think.com.UUCP (ephraim vishniac)
Distribution: comp
Organization: Thinking Machines Corporation, Cambridge, MA
Lines: 41

In article <1802@hubcap.UUCP> baldwin@cs.rochester.edu (Douglas Baldwin) writes:
>	Having thought a fair bit about parallel programming and why
>it's hard to do and how it ought to be done, here're my comments (for
>what they're worth).

>	Coordinating a multiprocessor to do something IS harder than
>getting a uniprocessor to do it - someone has to indicate how the
>overall application is broken up into processes, how and when those
>processes communicate, how they have to be synchronized in order to
>avoid a whole host of problems (deadlock, starvation, violation of
>data dependencies in accessing data, etc. etc. etc.). When
>programmers have to describe all of this explicitly, in addition to
>just describing the basic function they want computed, it's no wonder
>that they have a hard time. This suggests that IMPLICIT parallelism
>(i.e., parallelism detected and managed by a compiler or run-time
>system) is preferable to explicit parallelism.

The area that Baldwin seems to be ignoring in this discussion is SIMD
machines.  His initial statement ("Coordinating a multiprocessor is
harder...") is true, but his consequent ("someone has to indicate how
the overall application is broken into processes...") is parochial.

In the parallel system I use (a Connection Machine), there's no such
problem as dividing the application into processes, hence no
interprocess communication, synchronization, etc.  Instead (yes, there
is a catch!), the hard part is deciding on the division of data among
the processors.  When coding the application, most of the parallelism
is implicit: all operations occur on all of the currently selected
processors.  In the language I use (C*), there are only slight
extensions to conventional C syntax.  The body of a typical C*
function can be understood as operating on a single set of data even
though it operates on an arbitrary number.

Sorry if the above sounds too much like an ad for the CM.  I'm not one
of the designers of the system or the language, just an in-house user
who thinks the designers did some smart things.

Ephraim Vishniac					  ephraim@think.com
Thinking Machines Corporation / 245 First Street / Cambridge, MA 02142-1214

     On two occasions I have been asked, "Pray, Mr. Babbage, if you put
     into the machine wrong figures, will the right answers come out?"
     I am not able rightly to apprehend the kind of confusion of ideas
     that could provoke such a question.