Path: utzoo!mnetor!uunet!husc6!think!ames!necntc!necis!encore!fay
From: fay@encore.UUCP (Peter Fay)
Newsgroups: comp.arch
Subject: Re: Single tasking the wave of the future?
Message-ID: <2341@encore.UUCP>
Date: 11 Dec 87 23:49:11 GMT
References: <201@PT.CS.CMU.EDU> <388@sdcjove.CAM.UNISYS.COM> <988@edge.UUCP> <1227@sugar.UUCP> <151@sdeggo.UUCP> <1423@cuuxb.ATT.COM> <439@xyzzy.UUCP> <440@xyzzy.UUCP> <36083@sun.uucp> <18@amelia.nas.nasa.gov>
Reply-To: fay@encore.UUCP (Peter Fay)
Organization: Encore Computer Corp, Marlboro, MA
Lines: 77

In article <18@amelia.nas.nasa.gov> fouts@orville.nas.nasa.gov (Marty Fouts) writes:
>In article <36083@sun.uucp> ram@sun.UUCP (Renu Raman, Sun Microsystems) writes:
>>
>>   Digressing away from the tasking issue a bit - how long will
>>   uni-processor machines keep parallel processors at bay?  Looks
>>   like saturation of HW techology is nowhere near.
>>
>
>Well, as Gene Amdahl would say, it's not that simple.  Parallel
>processing, at least in the sense of throwing multiple processors at a
>single problem is currently difficult to use.

In regard to general-purpose multiprocessors:

Every fork() (or thread_create() in Mach) in every program can get
scheduled on a different cpu (that includes every shell per user,
daemon, background task, ...).  Also, don't forget all those kernel
processes (or tasks, threads) running on different cpus (pagers,
signal handlers, ...).  How difficult is it when the O.S. does it
transparently?  (The first sketch near the end of this article shows
what I mean.)

And then there are more sophisticated mechanisms ("micro-tasking",
gang scheduling, vectorizing Fortran compilers) available to any user
who wants more capability.

>...Software is hard to
>write, easy to get wrong and nearly impossible to debug...

Writing software which exploits the FULL parallelism of a machine MAY
be hard to do in CERTAIN cases.  Otherwise the (user) software is
pretty much identical.  Debugging is a whole other soapbox, but my
experience is that debugging coding errors is not much more difficult
than on a uniprocessor.  What is hard (or "impossible" with current
tools) is detecting race conditions and bottlenecks - i.e. CONCEPTUAL
errors (the second sketch near the end shows the kind of thing I
mean).  This is one of the many time lags in parallel software tool
development, not an inherent defect in the architecture.  Race
conditions are not a common occurrence for users to debug.

> ...As long as
>this stays true, parallel processor machines will have a burden beyond
>price/performance to overcome before they are accepted.
>

Since this is not true, the only thing to overcome is common
prejudice.  The price/performance advantage (for small multiprocessor
minis and up) is huge.

>As far as parallel processors for traditional multiprocessing, the
>extra burden that multiple compute engines add to i/o facilities like
>disk drives is such that they don't make much sense except for in very
>high end machines...

I/O can be a bottleneck for PC's, minis and Crays alike.  The solution
is to parallelize I/O, which is what multiprocessors do.
(General-purpose multis are NOT "very high-end" -- they run from $70K
to $750K.)

> ...I still remember the PDP 11/782 (dual processor
>11/780 used as a master/slave system) which ran i/o bound workloads
>slower than an 11/780 because of the introduction of processor control
>overhead without upscaling i/o performance.

That was a long time ago in multiprocessor history.  All I/O was done
by one CPU (master/slave) - sequential, not parallel.  It was NOT a
symmetric multi like those of today.

>Saturation of usable HW technology is close.

What does this cliche mean?
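To make the "transparent" point above concrete, here is a trivial
sketch of my own (nothing Encore- or Mach-specific is assumed beyond a
plain Unix fork()): the programmer writes an ordinary fork(), and on a
symmetric multi the kernel is free to dispatch parent and child onto
different CPUs with no change whatsoever to the source.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();        /* one process becomes two */

        if (pid == 0) {
            /* child: the scheduler may run this on any idle CPU */
            printf("child  pid %d\n", (int) getpid());
            _exit(0);
        } else if (pid > 0) {
            /* parent: keeps running, possibly on a different CPU */
            printf("parent pid %d\n", (int) getpid());
            wait(NULL);
        } else {
            perror("fork");        /* fork failed */
            return 1;
        }
        return 0;
    }

The same source runs unchanged on a uniprocessor; the multi simply has
more places to put the two processes.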
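And a second sketch, for the "conceptual error" kind of bug.  I am
assuming POSIX-style threads here purely for concreteness (read
thread_create() for pthread_create() if you prefer); the program
contains no coding error a conventional debugger will flag, yet the
unprotected increment is a race, and on a multiprocessor the final
count usually comes up short.

    #include <stdio.h>
    #include <pthread.h>

    #define NLOOP 100000

    /* shared and unprotected; volatile only keeps the increments
       visible in this demo -- it does NOT make them atomic */
    static volatile long counter;

    static void *bump(void *arg)
    {
        int i;

        (void) arg;
        for (i = 0; i < NLOOP; i++)
            counter++;        /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* expected 200000; concurrent increments interleave and
           lose updates */
        printf("counter = %ld (expected %d)\n", counter, 2 * NLOOP);
        return 0;
    }

No compiler and no breakpoint will point at the missing lock; that is
the tool gap I am talking about, not "software is impossible to write".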
By the way, I don't fault people for not understanding multis (e.g.,
from past exposure or school).  It takes some time for common
perceptions to catch up with current reality.
--
peter fay
fay@multimax.arpa
{allegra|compass|decvax|ihnp4|linus|necis|pur-ee|talcott}!encore!fay