Path: utzoo!mnetor!uunet!lll-winken!lll-lcc!pyramid!decwrl!sgi!baskett
From: baskett@baskett
Newsgroups: comp.arch
Subject: Re: Why is SPARC so slow?
Message-ID: <8885@sgi.SGI.COM>
Date: 11 Dec 87 19:52:47 GMT
References: <8809@sgi.SGI.COM> <6964@apple.UUCP>
Sender: daemon@sgi.SGI.COM
Organization: Silicon Graphics Inc, Mountain View, CA
Lines: 37

In article <6964@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> ...
> >The separate instruction and data cache only run
> >at single cycle rates but they run a half cycle out of phase with each
> >other so it all works out.  (Pretty slick, don't you think?)
> 
> Yes, I do think it is pretty slick, but I also think this is a liability
> at clock speeds higher than 16 MHz (and maybe even at 16 MHz).  I am sure,
> though, that MIPS has a plan to fix this problem.  It sure seems like the
> way to go at 8 MHz.  Preventing bus crashes (i.e. meeting real-world
> timing constraints) can be a problem.

The 16 MHz MIPS parts we have work fine.  If the half-cycle cache timing
ever does become a problem at higher clock rates, the fix is pretty
obvious, too.

> I am sure one of their chief concerns was future ECL implementation.

I have an ECL implementation of an experimental RISC processor (a board)
in my office.  My experience with the team that designed and built it
(a great group of people at DEC's Western Research Lab, by the way)
tells me that the MIPS architecture is more suitable for ECL implementation
than the SPARC architecture.  (see next comment)

> by choosing register windows (which lets them vary the number of registers,
> in window increments, for a given implementation) and a very simple
> definition otherwise, SUN simply did the best they could to make future
> implementation easy.

It may have been the best they could do, but it looks like a mistake to me.
In higher-performance technologies the speed of register access becomes
more and more critical, so about the only thing you can do with register
windows is to scale them down.  And as the number of windows goes down,
the small gain you might have had goes away and procedure call
overhead goes up.  Attacking the procedure call overhead problem at
compile time rather than at run time is a more scalable approach.
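
To make the compile-time point concrete, here is a minimal sketch in C
(the example, names, and numbers are mine, not anything from MIPS or Sun):
a compiler that inlines small procedures, or allocates registers across
calls, removes the per-call register traffic that windows try to hide at
run time, and that benefit does not shrink if a faster implementation has
to cut the number of windows.

    /* Hypothetical example of procedure call overhead handled at
     * compile time.  Without interprocedural work, each call to
     * scale() needs the usual save/restore (or, with windows, a
     * window shift, spilling to memory once the windows run out). */

    static int scale(int x)              /* small leaf procedure */
    {
        return 3 * x + 1;
    }

    int sum_scaled(const int *a, int n)
    {
        int i;
        int total = 0;

        for (i = 0; i < n; i++) {
            /* If the compiler inlines scale() here, the call, its
             * register saves, and any window shift disappear; total,
             * a, i, and n stay in registers for the whole loop. */
            total += scale(a[i]);
        }
        return total;
    }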

Forest Baskett
Silicon Graphics Computer Systems