Path: utzoo!attcan!uunet!mcvax!cernvax!hjm
From: hjm@cernvax.UUCP (hjm)
Newsgroups: comp.arch
Subject: Re: Today's dumb question...
Summary: What use is multiprocessing anyway?
Message-ID: <674@cernvax.UUCP>
Date: 9 May 88 16:15:32 GMT
References: <503@xios.XIOS.UUCP> <2676@pdn.UUCP>
Reply-To: hjm@cernvax.UUCP (Hubert Matthews)
Organization: CERN European Laboratory for Particle Physics, CH-1211 Geneva, Switzerland
Lines: 62


Dear All,

     I see the thorny subjects of RISC v. CISC and scalar v. vector have reared
their ugly heads again, but in a different guise - multiprocessing!

     Allow me to point out some of the ENGINEERING issues involved:

	- the cost of a computing system is primarily a function of size, 
	  weight and the number of chips or pins;

	- to go really fast and to be efficient, the hardware should be simple;

     So what am I trying to point out?  Merely that a large amount of hardware
in present-day machines is there because of difficulties in software.  For
example, take the commonplace case of your local UNIX or VMS box.  Inside
these beasts is a *lot* of hardware to keep one user away from his fellow
hackers.  An equally large amount of hardware is provided for the demand-paged
virtual memory system.  Add to that a healthy(?) helping of cache chippery and
what do you get?  Yes, a machine built upon boards the size of a small squash
court!  None of this hardware is simple, and it applies to both the
uniprocessor and the multiprocessor cases.

     Now, add in the magic multiprocessor devices and all hell breaks loose on
the hardware front (not to mention the software - groan).  Everyone's favourite
trick seems to be finding ever more complicated ways of getting large numbers
of CPUs to talk to the memory all at once.  Just imagine an ever-increasing
number of waiters trying to get in and out of the same kitchen all at once
through one door, and you can see the mess.  OK, let's increase the number of
doors ... in hardware terms this means separating the memory into several
banks which can be accessed simultaneously, thereby increasing the effective
bandwidth of the memory.  Isn't this really an admission that shared memory is
not necessary?  Surely the highest bandwidth is achieved when each processor
has its own memory which it shares with no one else?  It also makes the
hardware a lot smaller.

     To summarise all of this in a few points:

	- virtual memory is useful only when an application won't fit in
	  physical memory.  But memory is cheap, so with lots of Mbytes
	  who needs it, especially if the program is well written?

	- multi-user machines are too complicated to be both fast and simple.

	- shared memory is not necessary; it's a software issue that shouldn't
	  be solved in hardware.

     For example, 10 MIPS of computation with 4 MB of ECC RAM can be placed on
a single 4" x 6" Eurocard.  Add multi-user support, virtual memory or multiple
CPUs and the board looks like a football pitch in comparison.  Guess which is
cheaper as well!

Remember,

  S I M P L E    =   F A S T   =   E F F I C I E N T   =   C H E A P.

------------------------------------------------------------------------------

	Hubert Matthews (software junkie, surprisingly enough)

------------------------------------------------------------------------------

#include