Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/5/84; site polaris.UUCP
Path: utzoo!linus!philabs!polaris!herbie
From: herbie@polaris.UUCP (Herb Chong)
Newsgroups: net.arch
Subject: Virtual memory and stuff
Message-ID: <232@polaris.UUCP>
Date: Sun, 27-Oct-85 12:34:17 EST
Article-I.D.: polaris.232
Posted: Sun Oct 27 12:34:17 1985
Date-Received: Sun, 3-Nov-85 14:18:22 EST
Distribution: net
Organization: IBM TJ Watson RC
Lines: 145

>From: dvadura@watdaisy.UUCP (Dennis Vadura)
>Perhaps the question to ask is do we need disk paging?
>With large memories becoming available rolling pages out to disk may become
>unnecessary, but the concept of virtual memory and its associated attributes
>is probably still useful.

it depends upon the ratio of

        number of users * average virtual memory each
        ---------------------------------------------
                 total real memory available

if this ratio is small, then no disk paging is required.  if this ratio
is large, then disk paging is required.  a 32M processor running 400
users at 2M each must page to disk (that's 800M of virtual memory, a
ratio of 25).  a 256M processor running the same users will still page,
but not nearly as much (the ratio drops to about 3).  remember that the
processors with these large memories are usually very fast and support
many users.  if only a few users are supported, then paging is
unnecessary.

>From: rcd@opus.UUCP (Dick Dunn)
>First response: How near?  1 Mbit chips are real but not quite big-time
>commercial stuff yet (that means: not CHEAP yet), but suppose that they
>are.  256 Mb = 256*8 = 2K of these chips, which is a fair space-heater in
>any technology.  In larger machines, maybe yes; we're a few years away in
>small machines.

IBM has at least two announced models of CPU available with 256M real,
though it's treated in a weird fashion: it is not as fast as the usual
main memory.  for those of you who remember the s/360's with core, some
models had a small (64K?) amount of fast core and the rest was slow
core.  the same thing is true these days.

>VM sets the "hard limit" of
>a process address space independently of the actual physical memory on the
>machine, so you don't have to go out and buy more memory to run a program
>with a large address space--it just runs slower.  (Yes, in some cases it
>runs intolerably slower.  If that happens, go buy more memory, obviously.)

the original reason for VM was what dick says.  remember, though, that
today's main reason is to support a lot of users with medium maximum
memory requirements but small working sets.  why use expensive main
memory for idle pages when you can put a dispatchable user in there
instead?  of course, there has to be enough CPU power around to make
processes dispatchable often enough.

>Third response: Decrease program startup?  (Tentative.)  If you insist on
>everything being in physical memory, you gotta load the whole program
>before you start execution.  Might take a long time--the case of interest
>is where a program has gobs of seldom-used code.  The counter to this
>response is that if a program has poor locality of reference--which is
>common during startup!--the VM paging behavior is essentially to load a
>large part of the program but in random order, which can make it take
>longer than loading all of it sequentially.

it turns out that for many programs (observation only, no proof) a
single block read of the binary is faster than page faulting it in.  it
depends on how smart your program loader/process starter is and on your
disk hardware.  a rough way to convince yourself is sketched below.
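here's a quick sketch in C of the comparison, assuming a unix-flavored
system with pread() and clock_gettime(); the file name and the 4K
"page" size are made up for illustration, and this only imitates the
I/O pattern of demand paging, it is not a real pager:

/* compare one sequential read of a file against touching the same
 * file one 4K "page" at a time in scrambled order, the way a
 * demand-paged startup with poor locality of reference would.
 * note the OS buffer cache: run against a cold cache, or the second
 * pass just hits memory and the comparison is meaningless. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <time.h>

#define PAGE 4096

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "a.out";  /* hypothetical binary */
    struct stat st;
    int fd = open(path, O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0) { perror(path); return 1; }

    long npages = (st.st_size + PAGE - 1) / PAGE;
    char *buf = malloc((size_t)npages * PAGE);
    long *order = malloc(npages * sizeof *order);
    if (!buf || !order) return 1;

    /* strategy 1: one big sequential read, as a smart loader would do */
    double t0 = now();
    ssize_t n = read(fd, buf, st.st_size);
    double seq = now() - t0;

    /* strategy 2: the same pages in scrambled order (Fisher-Yates shuffle) */
    for (long i = 0; i < npages; i++) order[i] = i;
    for (long i = npages - 1; i > 0; i--) {
        long j = rand() % (i + 1), t = order[i];
        order[i] = order[j]; order[j] = t;
    }
    t0 = now();
    for (long i = 0; i < npages; i++)
        pread(fd, buf + order[i] * PAGE, PAGE, order[i] * PAGE);
    double rnd = now() - t0;

    printf("sequential: %.4fs  scattered: %.4fs  (%ld pages, %zd bytes)\n",
           seq, rnd, npages, n);
    free(order); free(buf); close(fd);
    return 0;
}

the sequential case wins mostly because it avoids seeks; that's the
effect the "smart loader" remark above is pointing at.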
>Fourth response: Maybe VM is appropriate to a certain balance of process
>size, memory cost and size, and backing store cost/speed.  You could argue
>that larger machines are now outside the domain of that particular set of
>tradeoffs.  Smaller machines are not.

larger machines tend to run many users.  an IBM 3084 processor with
256M would be supporting about 500 users online (an order-of-magnitude
figure), so you still need a lot of paging.  also, what about
memory-mapped filesystems?  you certainly don't want a large file in
memory all the time unless the ratio i mentioned above will support it.

>From: brooks@lll-crg.ARpA (Eugene D. Brooks III)
>I'm sorry I was not precise enough.  The question was meant to be do we need
>disk paging?  The much needed firewall protection and address space sharing
>for programs in a multiprocessor can be provided by a simple {base,limit}
>segmentation scheme.  One of course needs several sets of such registers
>to establish the several segments, code, static data, stack, shared static
>data, ... that one needs in a program.  Do we really need the page oriented
>virtual memory systems that occur in today's micros and minicomputers?  If
>we have more than enough physical memory, do we need the overhead associated
>with the page mapping hardware?  It is difficult to make such hardware operate
>at supercomputer speeds and poses severe difficulties for non-bus-oriented
>architectures (large N multiprocessors).

if you are supporting many users or many processes, then you will still
need disk paging.  (a toy version of the {base,limit} translation he
describes is sketched below.)
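for concreteness, here is a minimal sketch in C of the {base,limit}
scheme; the register layout and the names are invented for
illustration, not taken from any real machine:

/* per-process segment registers: code, static data, stack, shared
 * static data.  translation is a bounds check plus an add -- no page
 * tables, no TLB -- which is the speed appeal for supercomputers. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

enum { SEG_CODE, SEG_DATA, SEG_STACK, SEG_SHARED, NSEGS };

struct segreg {
    uint32_t base;      /* physical address where the segment starts */
    uint32_t limit;     /* segment length in bytes */
    bool     writable;  /* the "firewall protection" bit */
};

/* translate (segment, offset); returns false on a protection fault */
static bool translate(const struct segreg segs[NSEGS],
                      int seg, uint32_t off, bool write, uint32_t *phys)
{
    if (seg < 0 || seg >= NSEGS) return false;       /* no such segment */
    if (off >= segs[seg].limit) return false;        /* out of bounds */
    if (write && !segs[seg].writable) return false;  /* e.g. a store into code */
    *phys = segs[seg].base + off;                    /* one add */
    return true;
}

int main(void)
{
    struct segreg segs[NSEGS] = {
        [SEG_CODE]   = { 0x10000, 0x8000, false },
        [SEG_DATA]   = { 0x20000, 0x4000, true  },
        [SEG_STACK]  = { 0x30000, 0x2000, true  },
        [SEG_SHARED] = { 0x40000, 0x1000, true  },  /* shared between processes */
    };
    uint32_t phys = 0;
    printf("data+0x100, write -> %s (phys 0x%x)\n",
           translate(segs, SEG_DATA, 0x100, true, &phys) ? "ok" : "fault",
           (unsigned)phys);
    printf("code+0x10, write  -> %s\n",
           translate(segs, SEG_CODE, 0x10, true, &phys) ? "ok" : "fault");
    printf("stack+0x9000      -> %s\n",
           translate(segs, SEG_STACK, 0x9000, false, &phys) ? "ok" : "fault");
    return 0;
}

the catch, and the reason paging won on general-purpose machines, is
that each segment must be physically contiguous, so a fragmented memory
can refuse an allocation it could have satisfied in scattered pages.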
>From: chuck@dartvax.UUCP (Chuck Simmons)
>Another reason for virtual memory is that segmented architectures can
>make programming easier.  For example, some programs want to have multiple
>stacks (eg our PL1 compiler).  By setting up each stack as a segment,
>the compiler programmer can easily and efficiently allocate storage on
>any stack.  Our current pl1 compiler, written before we had a segmented
>architecture, must spend a lot of time worrying about stack collisions.
>When one of these occurs, it must shuffle the stacks around in core to
>move them away from each other.

doesn't this make interprocess communication messy when you have to
talk about data in segments here and there?

>From: franka@mmintl.UUCP (Frank Adams)
>What leads either of you to believe that 256M will be enough to run your
>programs?  Memory used by programs expands to use the space available.
>There was a time, not so long ago, when 256K was a lot of memory, and
>people didn't understand how any program could use more than 16M.  If
>memory becomes sufficiently cheap, there are time/space tradeoffs which
>can be made to use large blocks of it.

if you go into large commercial DP shops, you will find that to be a
major concern of theirs.  large terminal networks require massive
amounts of memory to keep track of them all.  large database systems
will try to keep as much data in memory as possible to speed up
transaction processing.  there is never enough memory.

>Or, for a second response, if memory becomes cheap enough, what do you
>need *disks* for?  You will need a hardware solution to preserve memory
>in the event of system/power crashes, of course.

you could always use non-volatile semiconductor memory with a
sufficient battery backup to keep it alive until you could put it onto
tape (1/2 8-)).

>From: rentsch@unc.UUCP (Tim Rentsch)
>It is interesting to note that 10 years ago or so, all large systems
>had virtual memory whereas small systems did not.

i don't think "all" is right; i think "more than 50%" would be more
accurate.

>Now the largest systems (e.g., Cray 2) do not have virtual memory,
>whereas it is more and more common for small systems ("microprocessors",
>and I use the term in quotes) to have virtual memory.

remember that a Cray-2 is meant to run only a few programs at a time.
the available real memory per process is much higher than for a
general-purpose computer system.  the Cray is hardly representative of
the "typical" computer system.

Herb Chong...

I'm still user-friendly -- I don't byte, I nybble....

New net address --
VNET,BITNET,NETNORTH,EARN: HERBIE AT YKTVMH
UUCP: {allegra|cbosgd|cmcl2|decvax|ihnp4|seismo}!philabs!polaris!herbie
CSNET: herbie.yktvmh@ibm-sj.csnet
ARPA: herbie.yktvmh.ibm-sj.csnet@csnet-relay.arpa