Path: utzoo!utgpu!water!watmath!clyde!rutgers!sri-spam!ames!amelia!orville.nas.nasa.gov!fouts
From: fouts@orville.nas.nasa.gov (Marty Fouts)
Newsgroups: comp.arch
Subject: Re: Single tasking the wave of the future?
Message-ID: <18@amelia.nas.nasa.gov>
Date: 10 Dec 87 16:55:14 GMT
References: <201@PT.CS.CMU.EDU> <388@sdcjove.CAM.UNISYS.COM> <988@edge.UUCP> <1227@sugar.UUCP> <151@sdeggo.UUCP> <1423@cuuxb.ATT.COM> <439@xyzzy.UUCP> <440@xyzzy.UUCP> <36083@sun.uucp>
Sender: news@amelia.nas.nasa.gov
Reply-To: fouts@orville.nas.nasa.gov (Marty Fouts)
Lines: 23

In article <36083@sun.uucp> ram@sun.UUCP (Renu Raman, Sun Microsystems) writes:
>
>    Digressing away from the tasking issue a bit - how long will 
>    uni-processor machines keep parallel processors at bay?  Looks
>    like saturation of HW technology is nowhere near.
>

Well, as Gene Amdahl would say, it's not that simple.  Parallel
processing, at least in the sense of throwing multiple processors at a
single problem, is currently difficult to use.  Software is hard to
write, easy to get wrong, and nearly impossible to debug.  As long as
this stays true, parallel processor machines will have a burden beyond
price/performance to overcome before they are accepted.
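
To make the "easy to get wrong" point concrete, here is a minimal
sketch of the classic shared-counter race.  It is written in modern C
with POSIX threads, which is purely illustrative (not anything we run
here, and the names bump/counter are made up): two threads increment
one variable with no locking, and the printed total is usually short
of 2000000 and changes from run to run -- exactly the kind of bug
that rarely reproduces under a debugger.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    /* unsynchronized read-modify-write: this is the bug */
    static void *bump(void *arg)
    {
        long i;
        (void)arg;
        for (i = 0; i < 1000000; i++)
            counter++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("expected 2000000, got %ld\n", counter);
        return 0;
    }

Build it with "cc -pthread" and run it a few times; the wandering
answer is the whole demonstration.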

As far as parallel processors for traditional multiprocessing go, the
extra burden that multiple compute engines place on i/o facilities
like disk drives is such that they don't make much sense except in
very high end machines.  I still remember the VAX 11/782 (a dual
processor 11/780 used as a master/slave system), which ran i/o bound
workloads slower than a single 11/780 because it introduced processor
control overhead without scaling up i/o performance.
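
The arithmetic behind that is just Amdahl's law with an overhead term
tacked on.  The sketch below uses made-up numbers (an assumed 90% i/o
wait fraction and an assumed 8% coordination cost -- not measurements
of the 782) to show how a second processor can come out slower than
one once the workload is mostly i/o wait.

    #include <stdio.h>

    int main(void)
    {
        /* illustrative assumptions, not measured figures */
        double io_fraction = 0.90;  /* fraction of time waiting on i/o */
        double overhead    = 0.08;  /* master/slave coordination cost  */
        int    cpus        = 2;

        /* Amdahl-style speedup with a flat overhead term added on */
        double speedup = 1.0 /
            (io_fraction + (1.0 - io_fraction) / cpus + overhead);
        printf("effective speedup with %d cpus: %.2f\n", cpus, speedup);
        return 0;
    }

With those numbers it prints 0.97: the pair is slower than a single
processor, which matches what we saw on the 782.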

Saturation of usable HW technology is close.