Path: utzoo!utgpu!water!watmath!clyde!att!alberta!auvax!rwa
From: rwa@auvax.UUCP (Ross Alexander)
Newsgroups: comp.arch
Subject: Re: Compiler complexity (was: VAX Always Uses Fewer Instructions)
Summary: that particular paper was pretty biased...
Keywords: RISC CISC
Message-ID: <674@auvax.UUCP>
Date: 26 Jun 88 01:04:41 GMT
References: <6921@cit-vax.Caltech.Edu> <28200161@urbsdc> <10595@sol.ARPA> <20345@beta.lanl.gov>
Organization: Athabasca U., Alberta, Canada
Lines: 59

In article <20345@beta.lanl.gov>, hwe@beta.lanl.gov (Skip Egdorf) writes:
> In article <20338@beta.lanl.gov>, jlg@beta.lanl.gov (Jim Giles) writes:
> As I seem to like to dredge up ancient history in this forum, these
> thoughts sent me to my dusty closet of 'olde stuff' to fetch volume
> 1 (I also have V2) of a collection of papers produced by DEC called
> "PDP-10 Applications in Science".
> I think that I got this sometime in 1970, though most of the papers
> seem to be from around 1967-1968. The third paper in the book is
> "Selecting and Evaluating a Medium Scale Computer System"
> "A Study Undertaken for Louisiana State University at New Orleans"
> by Jon A. Stewart, Director, LSUNO Computer Research Center
> The paper is a summary of an in-depth comparison of three major
> systems of the time; the SDS Sigma-5, the IBM 360/44, and the
> DEC PDP-10. The paper is very fine at bringing back the memories
> of the good old days [...]
> 					Skip Egdorf
> 					hwe@lanl.gov

Without prejudice to either Skip E.  or Jon Stewart, I remember that
particular paper well enough; I read it in 1974 but the tone and
conclusions stick with me.  I believe the main weakness of the
evaluation is that it rested on such a small (published) sample.  A
small and somewhat artificial fragment of FORTRAN IV (written in a
FORTRAN II style) was pushed through each machine's compiler; the
generated code was then examined in a fashion that would nowadays
be considered naive.

I might add that the compilers were as naive as the analysis;
compiler technology was quite immature in 196[789], and the generated
code has a very student-compiler-project look about it :-).  For
instance, two 16-bit IBM rr-class instructions (quite RISCy in type)
are scored as poorer than a single 36-bit PDP instruction
accomplishing the same result by way of a
register-indirect-with-autoincrement construction; never mind that
the two rr instructions total only 32 bits against the PDP's 36, so
simply counting instructions flatters the PDP.  As well, the IBM
compiler makes no attempt to avoid repeated evaluation of common
subexpressions, nor does it do any strength reduction.  One tellingly
correct and important point was that the PDP was considerably less
expensive than the IBM for the same power :-)
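
(For the curious: here is a tiny sketch, in C rather than the paper's
FORTRAN and entirely my own invention, of the two optimizations the
IBM compiler skipped.  The first function is roughly what a naive
compiler of that era emitted code for; the second is what
common-subexpression elimination plus strength reduction make of it.)

	void before(long *a, long *b, long *c, int n)
	{
	    int i;
	    for (i = 0; i < n; i++)
	        /* b[i] + c[i] is evaluated twice, and every
	           subscript costs an index-times-element-size
	           multiply to form the address */
	        a[i] = (b[i] + c[i]) * ((b[i] + c[i]) + 1);
	}

	void after(long *a, long *b, long *c, int n)
	{
	    long *ap = a, *bp = b, *cp = c, *end = a + n;

	    while (ap < end) {
	        long t = *bp++ + *cp++;  /* CSE: the sum, computed once */
	        *ap++ = t * (t + 1);     /* pointers bump by one element:
	                                    strength reduction of i*size */
	    }
	}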

I myself am neutral; I've programmed both 360's and PDP-11's (similar
in spirit, if not in implementation detail, to the 10's) in assembler, doing
moderately-sized pieces of code (~5,000 lines).  The 360
architecture, though ugly, can be used in the style of a RISC machine
if one just ignores all the ss-class instructions (such gewgaws as
the decimal or editing instructions); I try to avoid using IBM's ugly
linkage conventions too :-).  The 11 code is much more esthetic to
look at, but I don't think it's much denser or faster for a given
hardware approach than the equivalent 360 binary.  

To sum up, that paper wasn't very different in tone from a lot of
the flaming we see here on the USENET :-) and it's sort of odd to
see it quoted as part of `the literature'.  But it's also sort of fun
to see it come up again :-)

--
Ross Alexander, who plays around with computers at Athabasca
University, Athabasca, Alberta   -  alberta!auvax!rwa

"Assembly of Japanese bicycle require great peace of mind."