Path: utzoo!utgpu!water!watmath!clyde!bellcore!rutgers!mailrus!husc6!purdue!decwrl!nsc!stevew
From: stevew@nsc.nsc.com (Steve Wilson)
Newsgroups: comp.arch
Subject: Re: VM needed for rapid startup
Keywords: paging virtual-memory speed
Message-ID: <5135@nsc.nsc.com>
Date: 31 May 88 16:27:55 GMT
References: <463@cvaxa.sussex.ac.uk> <19322@beta.UUCP> <5129@nsc.nsc.com> <19496@beta.UUCP>
Reply-To: stevew@nsc.UUCP (Steve Wilson)
Organization: National Semiconductor, Sunnyvale
Lines: 49

In article <19496@beta.UUCP> jlg@beta.UUCP (Jim Giles) writes:
>In article <5129@nsc.nsc.com>, stevew@nsc.nsc.com (Steve Wilson) writes:
>> [ Lotsa stuff deleted here]
>
>This is true for many programs, but the 'working set' idea only works
>for the code image of most scientific programs.  The data image is often
>updated completely for each 'time-step' of a simulation, for example.
>Since data is MUCH larger than code these days, this could lead to
>constant page faulting.

Yes, you've reached the point of thrashing, which is DEATH to any
application.  I would suggest that you've reached the point where the
application has grown beyond the capabilities of the hardware in use.
(For concreteness, a small sketch of that per-time-step sweep appears
at the end of this post.)

For every machine design there is an "ideal" application set that the
system architect has in mind.  At some point, applications are going to
exceed this ideal, and the inherent limits of the design will surface.
As an example, I can always find an application which is just a bit
larger than the cache of the machine I'm running on.

>This problem is an example of the original statement I made on this subject:
>when the ratio of CPU speed to disk speed is high, virtual memory is not
>as attractive.  The problem is not solved by such schemes as having larger
>page sizes or loading more than one page during page fault resolution - the
>issue still arises when the CPU gets faster in the next machine generation.
>And both these solutions make a page fault even more of a delay for the
>program.
>

Please consider that the ratio of CPU speed to disk speed has ALWAYS been
this way.  Manufacturers have gone to extremes over the years to defeat
this limit, or at least minimize it!  As an example, look at the HEAD-PER-
TRACK disks used by Burroughs on their medium systems (B2000 series).
These disk drives are large and have a small storage capacity, yet they
reduce the latency of disk I/O to the rotational latency of the drive.

>The basic issue here is that perhaps there are types of programming
>environments that don't benefit from virtual memory - especially if large
>memory machines are available.

I'll certainly concede that there are problems which will benefit from
a custom hardware/software environment.  However, in the world of
multi-programming systems, VM, as applied to the "typical" applications
that a given environment is crafted for, is beneficial.

Steve Wilson
National Semiconductor

[ Universal disclaimer goes here ]
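
P.S.  Here is a rough C sketch (mine, not Jim's code; NPOINTS is just a
placeholder you'd pick to exceed physical memory) of the per-time-step
sweep being discussed.  Because every element is written on every step,
the working set is the whole data image, and a paged system has to
re-fault all of it on each iteration:

    /* Sketch only: a simulation whose data image is swept completely
     * on every time-step.  If NPOINTS * sizeof(double) exceeds physical
     * memory, each sweep touches every data page again, so the "working
     * set" is the entire array and the program faults continuously.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define NPOINTS  (64L * 1024 * 1024)  /* assumed size; pick it larger
                                             than physical memory         */
    #define NSTEPS   100

    int main(void)
    {
        double *field = malloc(NPOINTS * sizeof(double));
        long i, step;

        if (field == NULL) {
            fprintf(stderr, "not enough address space\n");
            return 1;
        }
        for (i = 0; i < NPOINTS; i++)
            field[i] = 1.0;

        /* Every time-step updates every element: there is no locality
         * across steps, so VM re-faults the whole data image each pass. */
        for (step = 0; step < NSTEPS; step++)
            for (i = 0; i < NPOINTS; i++)
                field[i] = 0.5 * (field[i] + 1.0);

        printf("%f\n", field[0]);
        free(field);
        return 0;
    }

Note that bigger pages or multi-page fault handling only change how the
traffic is packaged; the same number of bytes still has to move every
step, which I take to be Jim's point.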