Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!ames!umd5!decuac!felix!martin
From: martin@felix.UUCP (Martin McKendry)
Newsgroups: comp.arch
Subject: Re: RISC vs CISC on Low-End Processors
Keywords: RISC, real-time
Message-ID: <40395@felix.UUCP>
Date: 1 Jun 88 22:45:32 GMT
References: <1521@pt.cs.cmu.edu> <1532@pt.cs.cmu.edu> <476@pcrat.UUCP> <9561@sol.ARPA> <1658@pt.cs.cmu.edu> <1035@astroatc.UUCP> <10074@sol.ARPA>
Sender: daemon@felix.UUCP
Reply-To: martin@felix.UUCP (Martin McKendry)
Organization: FileNet Corp., Costa Mesa, CA
Lines: 48

In article <10074@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>
>In article <1035@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>>People claim stack machines can give you fast execution and dense code.
>>I have two arguments against this:  (From "The Case Against Stack-Oriented
>>    Instruction Sets", G. Myers @ IBM, SIGARCH CAN, August 77, and other stuff.)
>
>The proposal for stack machines was in the context of low-end processors.  One
>key feature of such a machine is that there is low bandwidth to memory.
>
>>Code Size: Several studies yield overwhelming evidence that almost all code
>>    takes one of these three forms: a=b  a=a+b  a=b+c  (+ is an operation)
>
>You forgot a[i] and p->a, which are compiled as expressions but do often appear
>in the "overwhelming evidence" cited above.  I think that the Burroughs stack
>machines compiled to half the size of their contemporaries.

I spent almost the entirety of 1986 working out of Burroughs World
Headquarters in Detroit, with the express purpose of evaluating such
claims as this and others, in the great 'stack machine' (Burroughs-style)
vs. current technology debate.  Rest assured that, whether or not the claim
of smaller code size was true at some time in the past (1958?), it is not
true today.  We did experiments comparing code size against IBM 360,
VAX, and MIPS instructions, among others.  In most cases we compared
Fortran, 'scientific' (Dhrystone-style) code, and Cobol.  In no case
that I recall did the Burroughs instructions win by any margin (if at
all).  Of course, the ad hoc, recursive-descent, non-optimizing
Burroughs compilers might have had something to do with it.
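
To make the density argument concrete, here is the kind of thing the
comparison boils down to, as a rough C sketch only; the 'instructions'
in the comments are invented and are not actual Burroughs, 360, VAX,
or MIPS encodings:

#include <stdio.h>

int main(void)
{
    int frame[3] = { 0, 2, 3 };       /* locals a, b, c              */
    int stack[8];
    int sp = 0;

    /* zero-address stack machine: four instructions for a = b + c,
       but each names at most one operand (a frame offset), which is
       where the density claim comes from                            */
    stack[sp++] = frame[1];           /* PUSH b                      */
    stack[sp++] = frame[2];           /* PUSH c                      */
    --sp; stack[sp - 1] += stack[sp]; /* ADD  (pop two, push sum)    */
    frame[0] = stack[--sp];           /* POP  a                      */

    /* three-address register machine: one instruction, but it must
       encode three operands                                         */
    int a = 0, b = 2, c = 3;
    a = b + c;                        /* ADD a, b, c                 */

    printf("%d %d\n", frame[0], a);   /* both print 5                */
    return 0;
}

Which side wins in bytes depends entirely on how those instructions
encode, which is why we measured rather than argued.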

In fact, if there was ever anything that the particular stack
architecture did better, the advantage was lost by the start of
the 1970s.  I suspect the only thing it ever did better was 'virtual
memory'.  But this came at huge cost, because the 'descriptors' (pointers
to segments/pages) contained the page-presence bits, so you could not
optimize references through them.  (The hardware wouldn't let you anyway.)
This led to indirection chains of great length that software could not
optimize away.  Once IBM came along with their paging, Burroughs machines
were slower, took more code space, and cost more to build than competitive
machines.
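
To spell out the descriptor problem, here is a rough C model of it;
the field layout and the helper machinery are invented for
illustration and are not the real Burroughs descriptor format:

#include <stdint.h>
#include <stdio.h>

/* Invented layout: the point is only that the presence bit lives in
   the descriptor itself, alongside the address.                     */
typedef struct {
    uint32_t present : 1;    /* segment currently in memory?         */
    uint32_t index   : 31;   /* which segment (stand-in for address) */
} descriptor;

/* Toy 'memory' and fault handler standing in for the real machinery. */
static int segments[2][4] = { { 10, 11, 12, 13 }, { 20, 21, 22, 23 } };

static void bring_in(descriptor *d)
{
    d->present = 1;          /* pretend we just paged the segment in */
}

static int fetch(descriptor *d, int offset)
{
    /* Every reference repeats this test, because the segment may
       have been thrown out since the last reference; a chain of
       descriptor -> descriptor -> data repeats it at every link, so
       software cannot hoist or cache the dereference.               */
    if (!d->present)
        bring_in(d);
    return segments[d->index][offset];
}

int main(void)
{
    descriptor d = { 0, 1 };          /* absent, names segment 1     */
    printf("%d\n", fetch(&d, 2));     /* faults it in, prints 22     */
    return 0;
}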

--
Martin S. McKendry;    FileNet Corp;	{hplabs,trwrb}!felix!martin
Strictly my opinion; all of it