Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!purdue!bu-cs!dartvax!eleazar.dartmouth.edu!jskuskin
From: jskuskin@eleazar.dartmouth.edu (Jeffrey Kuskin)
Newsgroups: comp.arch
Subject: Pipeline Interlock
Message-ID: <15904@dartvax.Dartmouth.EDU>
Date: 2 Oct 89 23:50:34 GMT
Sender: news@dartvax.Dartmouth.EDU
Reply-To: jskuskin@eleazar.dartmouth.edu (Jeffrey Kuskin)
Distribution: na
Organization: Dartmouth College, Hanover, NH
Lines: 32


I was recently reading a description of the Stanford MIPS RISC
processor (1982/3 vintage).  The description notes that this 
processor had a 5-stage pipeline but had NO pipeline interlock
logic.  Instead, all potential code for the processor had to be
pre-processed by a program called a "reorganizer" which examined
the code and removed all pipeline dependencies, either by code
reorganization or by insertion of NOPs. 
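
For the curious, here's a rough sketch (in Python, with an assumed
instruction syntax -- this is NOT the actual Stanford tool) of the
simplest thing such a reorganizer might do: insert a NOP after a load
whenever the very next instruction reads the loaded register, covering
a one-cycle load-use hazard.

```python
# Toy sketch of a NOP-insertion pass for a one-cycle load-delay hazard.
# The three-operand assembly syntax and the hazard model are assumptions
# for illustration only, not the real Stanford MIPS reorganizer.

def dest_reg(insn):
    """Destination register, e.g. 'r1' in 'lw r1, 0(r2)'."""
    _op, rest = insn.split(None, 1)
    return rest.split(",")[0].strip()

def src_regs(insn):
    """Source registers: every operand after the destination."""
    _op, rest = insn.split(None, 1)
    regs = []
    for field in rest.split(",")[1:]:
        field = field.strip()
        # pull 'r2' out of a memory operand like '0(r2)'
        if "(" in field:
            field = field[field.index("(") + 1 : field.index(")")]
        if field.startswith("r"):
            regs.append(field)
    return regs

def insert_nops(code):
    """Insert a NOP after each load whose result the next insn uses."""
    out = []
    for i, insn in enumerate(code):
        out.append(insn)
        if insn.startswith("lw") and i + 1 < len(code):
            if dest_reg(insn) in src_regs(code[i + 1]):
                out.append("nop")
    return out

code = ["lw r1, 0(r2)",
        "add r3, r1, r4",   # reads r1 right after the load -> hazard
        "sub r5, r6, r7"]
# insert_nops(code) pads the hazard with a NOP; a smarter reorganizer
# would instead try to move an independent insn (like the sub) into
# the delay slot.
```

A real reorganizer would of course prefer scheduling an independent
instruction into the slot over burning it on a NOP, since every NOP
costs a cycle and an instruction-memory word.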
 
The rationale behind the decision to eliminate pipeline interlock
logic was, as you might guess, speed and (die) space.  

So...my question:  most (all?) of the current RISC processors
(29000, 88000, 80860, etc.) include pipeline interlock logic.
Could someone familiar with these processors comment on the
implications of this decision:

    -- How much of the total chip area is occupied by
       pipeline interlock logic?

    -- If the interlock logic were eliminated, could the 
       processor cycle time be decreased?  By how much?
 
    -- How long (in person-weeks/years, whatever) did it take
       to design the interlock logic?  How long did it take to
       design the entire chip?


 
   **  Jeff Kuskin,  Dartmouth College
 
   **  E-Mail:  jskuskin@eleazar.dartmouth.edu