Path: utzoo!utgpu!water!watmath!orchid!atbowler
From: atbowler@orchid.waterloo.edu (Alan T. Bowler [SDG])
Newsgroups: comp.arch
Subject: Re: Wirth's challenge (was Re: RISC
Message-ID: <12181@orchid.waterloo.edu>
Date: 17 Dec 87 01:09:08 GMT
References: <6901@apple.UUCP> <28200075@ccvaxa>
Reply-To: atbowler@orchid.waterloo.edu (Alan T. Bowler [SDG])
Organization: U. of Waterloo, Ontario
Lines: 67

In article <28200075@ccvaxa> aglew@ccvaxa.UUCP writes:
>
>As for COBOL support, well... I think we are about to
>pass the point where a scientific computer will do better
>at COBOL support than a business computer.

"About to pass the point"?  The places that used to run service
bureaus with CDC 6600's knew this years ago.  Once you get over
the fixation that your problem is so different that the hardware
designer has to tailor an instruction just for you, you realize
that what you want is something that does some basic functions
fast and lets the programmer construct the other stuff.  The
design problem is to choose the right basic operations. 

The DG Nova, PDP-8, and the CDC 6600 all gave very impressive
performances on commercial applications even though many people
claimed they were not "designed" for this type of application.
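
To make that concrete, here is a sketch (mine, not code from any of
those machines) of packed decimal addition built out of nothing but
shifts, masks, adds, and compares -- the kind of routine a run-time
library can supply when the hardware has no decimal instructions:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: add two packed-decimal (BCD) numbers, 16 digits per
 * 64-bit word, using ordinary integer and logical operations.
 * Decimal carry out of the top digit is simply dropped. */
uint64_t bcd_add(uint64_t a, uint64_t b)
{
    uint64_t result = 0;
    unsigned carry = 0;
    int i;

    for (i = 0; i < 16; i++) {
        unsigned da = (unsigned)(a >> (4 * i)) & 0xF;  /* digit i of a */
        unsigned db = (unsigned)(b >> (4 * i)) & 0xF;  /* digit i of b */
        unsigned d  = da + db + carry;
        carry = (d >= 10);
        if (carry)
            d -= 10;
        result |= (uint64_t)d << (4 * i);
    }
    return result;
}

int main(void)
{
    /* 1234 + 8877 = 10111, operands and result written as packed BCD */
    printf("%llx\n", (unsigned long long)bcd_add(0x1234, 0x8877));
    return 0;
}

Unrolled or done a word at a time, sequences like this are what let
a fast "scientific" machine hold its own on decimal work.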

Basically this should be the "RISC" argument, but that term
seems to have been co-opted for a very narrow range of hardware
design strategies.  In particular I have seen statements that
RISC must mean
  - 1 microcode cycle per clock cycle.
       Why assume a synchronous (clocked) hardware implementation?
       There have been a number of successful machines with
       asynchronous CPUs.
  - 1 microinstruction per instruction
       Why assume microcode at all?  Microcode is certainly a
       valuable hardware design technique, but again it is
       not mandatory.
  - register windows with "general purpose" registers.
       A neat idea, but again, special purpose register architectures
       have done impressive things in the past.  I've often wondered
       whether most of the performance gains quoted for the "RISC"
       machines come from the decision to use the registers for
       passing the first few arguments, and whether similar gains
       could be had on other machines by making the compiler pass
       the first few arguments in registers instead of expecting
       the callee to preserve its registers (a sketch of the
       contrast follows below).
I'm not saying any of these are bad ideas.  They clearly aren't.
It just seems that a lot of discussion is going on with assumptions
that all computers are implemented with a particular methodology,
or must have a certain architectural feature.
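
On the argument-passing point, here is a hypothetical contrast; the
instruction sequences in the comments are for an invented load/store
machine, not any real architecture or ABI:

#include <stdio.h>

/* A small leaf routine.  With the first couple of arguments arriving
 * in scratch registers it need never touch memory:
 *
 *   register-argument convention      stack args, callee saves
 *   ----------------------------      ------------------------
 *     add   r1,r1,r2                    save  r5,r6      ; preserve
 *     ret                               load  r5,4(sp)   ; fetch args
 *                                       load  r6,8(sp)
 *                                       add   r5,r5,r6
 *                                       restore r5,r6
 *                                       ret
 */
static long add2(long a, long b)
{
    return a + b;
}

int main(void)
{
    /* call site: "move r1,#3 ; move r2,#4 ; jsr add2"  versus
     * "push #4 ; push #3 ; jsr add2 ; pop 2 words" */
    printf("%ld\n", add2(3, 4));
    return 0;
}

If the win is mostly in the convention, it is not tied to any
particular register file organization.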

Those pushing the simple and fast approach must also be aware
of why machines acquire the specialized fancy instructions,
such as packed decimal.  Given that one has an existing implementation
of an architecture, there will always be some commercially
important applications that the machine is "poor" at (as defined
by some customer with money).  The engineer can go back to the
drawing board and re-engineer the whole machine to make it faster,
but often the practical answer is to add a few opcodes and some
hardware to do the job.  (I am using the term "extra hardware"
loosely; it could mean nothing more than a few extra gates on the
CPU chip.)  Sometimes no extra hardware is needed at all, just an
addition to the microcode (if microcode is used in the
implementation).  Of course, when the next total re-engineering
does occur, tradeoffs will be made, and the new datapath layout
may mean that some of those instructions can't be implemented
with the previous technique.  So they may be done with a long,
slow microcode sequence, and it may well be that on the new
machine the sequence applications used before the feature was
added does the job faster than the feature itself.  The reason
the feature was added was valid: it made the machine significantly
faster.  The reason for keeping it is also valid: it preserves
object code compatibility.