Path: utzoo!utgpu!attcan!uunet!mcvax!cernvax!hjm
From: hjm@cernvax.UUCP (hjm)
Newsgroups: comp.arch
Subject: Re: using (ugh!  yetch!) assembler
Message-ID: <788@cernvax.UUCP>
Date: 8 Aug 88 13:26:26 GMT
References: <6341@bloom-beacon.MIT.EDU> <60859@sun.uucp> <474@m3.mfci.UUCP> <37014@linus.UUCP> <2948@utastro.UUCP> <4365@cbmvax.UUCP>
Reply-To: hjm@cernvax.UUCP (Hubert Matthews)
Organization: CERN European Laboratory for Particle Physics, CH-1211 Geneva, Switzerland
Lines: 25

(I apologise if you've seen this posting before, but rn and vi junked it up
last time and I'm not sure what was happening...)

If one considers a computing system to be a conglomeration of software and
hardware all the way from the application program right down to the physical
hardware, then one can move from the most portable and least machine-specific
part (the program written in an HLL) to the least portable and most machine-
specific part (the hardware), defining portability in terms of the effort
required to move that part to a different environment.

As this progression is made (from portable to non-portable), the performance
of each level increases:
some things are in hardware because it goes quicker like that.
(If the performance does not increase, then the loss of portability
is not worth it and neither is the cost.)  Once more, the well-known
engineering tradeoff rears its head: portability v. performance; speed v.
ease of implementation.

So, put it in hardware, put it in firmware, put it in assembler or put it in
an HLL.  Do whatever is right in your system, given the inevitable
trade-offs, because there is *no* single correct solution for all systems.
Use whatever tools are necessary for you to meet your specs, but use them
well.  Horses for courses, not dogma.

	Hubert Matthews