Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site talcott.UUCP
Path: utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!think!harvard!talcott!tmb
From: tmb@talcott.UUCP (Thomas M. Breuel)
Newsgroups: net.micro.68k,net.micro.16k
Subject: Re: Re: PDP11s vs the micros
Message-ID: <489@talcott.UUCP>
Date: Sat, 17-Aug-85 05:48:54 EDT
Article-I.D.: talcott.489
Posted: Sat Aug 17 05:48:54 1985
Date-Received: Tue, 20-Aug-85 08:31:35 EDT
References: <1617@hao.UUCP> <847@mako.UUCP> <2422@sun.uucp> <2607@sun.uucp> <492@oakhill.UUCP>
Organization: Harvard University
Lines: 47
Xref: watmath net.micro.68k:1064 net.micro.16k:368

In article <492@oakhill.UUCP>, davet@oakhill.UUCP (Dave Trissel) writes:
|>Motorola obviously :-) views its 68020 line primarily as a way to sell
|>memory chips. Between the incredible pile of trash it heaves onto the
|>stack when you take a page fault, and the huge internal state of the
|>68881 FPU that has to be shoveled in and out every time you context-switch
|>(what's the betting Motorola's next FPU chip has DMA? :-), the memory
|>market is clearly what they're aiming at. That and the cache market.
|What you don't realize is the amazing performance we can get because of the
|"incredible pile of trash" we heave on the stack.
|
|The crux of the problem is that chips which have to back up and redo
|instructions pay a nasty penalty in pipeline design. Consider the following
|generic microprocessor code sequence:
|
|	MOVE	something to memory
|	SHIFT	Reg by immediate
|	MUL	Reg to Reg
|	etc.
|
|The MC68020 executes the MOVE and the bus unit schedules a write cycle. Then
|the execution unit/pipeline happily continues executing the instruction
|stream without regard to the final status of the write. Even if the write
|fails (bus errors) there could be several more instructions executed (in fact
|any amount until one is hit which requires the bus again).

I find this argument amusing. You just generated a page fault. That means a
context switch, the disk driver, housekeeping, ... . Compared to all of that,
the overhead of your instruction restart is going to be negligible no matter
how inefficiently you do it.

In addition, I tend not to believe that what you gain in pipeline performance
makes up for the time required to push that much state onto the stack. The
gain you describe applies to writes only anyhow, since if you take a page
fault on a read (which is probably the more common case) you have to wait
for the page to be brought in no matter what.

Finally, the thought of having a page fault pending while the CPU happily
executes more instructions before the fault is serviced somehow worries me.
It may play havoc with simple-minded process synchronisation techniques.

Altogether, I don't buy the claim that the 68020 gets 'amazing performance'
because it pushes of the order of 20 longwords onto the stack every time it
takes a page fault.

Thomas.
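
P.S.  To put some admittedly made-up numbers behind the claim that the
restart frame is in the noise, here is a back-of-the-envelope sketch in C.
The cycle counts, the clock rate, and the 20-longword frame size below are
my own assumptions, not Motorola's figures; substitute your own and the
conclusion hardly changes.

	#include <stdio.h>

	int main(void)
	{
	    /* Assumed figures -- not from any data sheet.                 */
	    double cycles_per_longword = 6.0;      /* one stacked write    */
	    double frame_longwords     = 20.0;     /* the "pile of trash"  */
	    double kernel_cycles       = 5000.0;   /* trap, switch, VM     */
	    double disk_cycles         = 240000.0; /* ~30 ms at 8 MHz      */

	    double frame_cycles = cycles_per_longword * frame_longwords;
	    double total        = frame_cycles + kernel_cycles + disk_cycles;

	    printf("stack frame: %.0f cycles, %.3f%% of the whole fault\n",
	           frame_cycles, 100.0 * frame_cycles / total);
	    return 0;
	}

Even if every figure above is off by a factor of two in Motorola's favour,
the stacked frame stays well under one percent of the cost of servicing the
fault.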
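
P.P.S.  By 'simple-minded synchronisation' I mean code like the C sketch
below. The names and the scenario are mine, purely for illustration, and a
single program obviously cannot demonstrate the hazard; it only shows the
kind of code that quietly assumes a store has completed before the
instructions after it run.

	#include <stdio.h>

	/* Imagine another processor or a DMA device polling these.        */
	volatile int data;
	volatile int ready;

	void produce(int value)
	{
	    data  = value;          /* store the payload                   */
	    ready = 1;              /* store the flag                      */
	    /* Simple-minded assumption: both stores have hit memory by
	     * now.  If the write to `ready' can fault after the CPU has
	     * gone on to later instructions, an observer polling `ready'
	     * will not see the flag until the pending fault has actually
	     * been serviced, however far this code has run on past it.    */
	}

	int main(void)
	{
	    produce(42);
	    printf("data=%d ready=%d\n", data, ready);
	    return 0;
	}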