Path: utzoo!utgpu!water!watmath!clyde!bellcore!rutgers!gatech!hubcap!mrspock
From: mrspock@hubcap.UUCP (Steve Benz)
Newsgroups: comp.lang.misc
Subject: Re: What makes a language "easy" to program in?
Message-ID: <1818@hubcap.UUCP>
Date: 5 Jun 88 20:53:07 GMT
References: <10216@sol.ARPA>
Organization: Clemson University, Clemson, SC
Lines: 59

 ( For brevity, I use the term 'parallel' to mean MIMD and MISD
   architectures.  Hopefully the SIMD crowd will see their way
   clear to forgive the slight.  )

From article <10216@sol.ARPA>, by baldwin@cs.rochester.edu (Douglas Baldwin):
> [It's harder to write programs for parallel computers than sequential
>  computers]... This suggests that IMPLICIT parallelism (i.e., parallelism
> detected and managed by a compiler or run-time system) is preferable to
> explicit parallelism.

  There is only a fine hair of difference between
Baldwin's "explicit" parallelism and "implicit" parallelism.
"Explicit" parallel constructs mean something like "do A in parallel
with B."  "Implicit" parallel constructs mean "do A in parallel with
B or do them in sequence -- I don't care."

  While implicit parallelism makes things easier on the programmer,
it makes things harder on the "optimizer" (excuse my very liberal use
of the term: the "optimizer" is the unit that has to determine the
best way to balance load across the processing elements).

> [imperative programming languages...]
> (i.e., the whole Algol family and its close relatives,
> object-oriented languages) [are] inherently sequential, and any attempt to
> extract parallelism from these languages fights a losing battle
> against this sequentiality.

  Just as with any other sort of optimization, there are ways to make
things easier on the optimizer.  I'll demonstrate my point by analogy:
consider these two for loops (written in C):

{	int i, a[10];
	for (i = 0; i < 10; i++) a[i] = 0;
}

	--==*==--

{	int *p, a[10];
	for (p = a + 10; p != a; *--p = 0)
		;
}

  I feel certain that the second for loop would run at least as fast
as the first on a PDP-11.  Some optimizers would pick up on the
fact that the first solution wasn't that great and would output
something more closely resembling the second for loop.

  This carries over to parallel systems as well.  On such
architectures, algorithms that are written in a manner more suited
to parallel architectures will run faster than those which are more
suited to sequential architectures.

  In the world of sequential processing, there are a wide variety
of programming languages to choose from.  Each is best suited to
a particular set of problems.  No one language is ideal in all
situations.  I think the same will be true for parallel languages.
Imperative languages will find their niche in parallel processing,
just as functional and logic languages will.

				- Steve Benz