Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!henry
From: henry@utzoo.UUCP (Henry Spencer)
Newsgroups: net.arch
Subject: Re: Why Virtual Memory
Message-ID: <6086@utzoo.UUCP>
Date: Sat, 26-Oct-85 21:26:59 EDT
Article-I.D.: utzoo.6086
Posted: Sat Oct 26 21:26:59 1985
Date-Received: Sat, 26-Oct-85 21:26:59 EDT
References: <480@seismo.CSS.GOV>, <384@unc.unc.UUCP>
Organization: U of Toronto Zoology
Lines: 23

> It is interesting to note that 10 years ago or so, all large systems
> had virtual memory whereas small systems did not.
> 
> Now the largest systems (e.g., Cray 2) do not have virtual memory,
> whereas it is more and more common for small systems...
> to have virtual memory.

Virtual memory has always meant some speed penalty, although clever design
can minimize it.  Even 10-year-old big machines run more quickly with
address translation switched off, as witness IP Sharp [a big APL timesharing
firm], which runs its monster Amdahl unmapped and sees about a 15% speed
improvement as a result.  (They can get away with this because they run no
directly-executable user code.)  Machines specializing in absolute maximum
speed generally will not use virtual memory, and hence are often built
without it.  Machines running more general applications will have it if
they can afford it, which nowadays means they almost always do.
The pattern is not a wheel of reincarnation; it's gradual diffusion of the
technology downward, coupled with falling memory prices and the realization
that "real memory for real performance" dictates avoiding virtual memory
when speed totally dominates design.
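
To see roughly why translation costs something even with clever hardware,
here is a back-of-envelope sketch in C.  The numbers (TLB hit rate, miss
penalty) are purely illustrative assumptions, not measurements from any
particular machine; the point is only that every memory reference pays an
expected translation tax, which running unmapped avoids entirely.

/* Illustrative only: assumed TLB hit rate and miss penalty, not real
   measurements.  Effective access time = base cycle + expected cost of
   walking the page tables on a TLB miss. */
#include <stdio.h>

int main(void)
{
    double cycle    = 1.0;   /* base memory access, arbitrary units */
    double hit_rate = 0.98;  /* assumed TLB hit rate                */
    double miss_pen = 10.0;  /* assumed extra cycles per TLB miss   */

    double mapped = cycle + (1.0 - hit_rate) * miss_pen;

    printf("unmapped: %.2f units per reference\n", cycle);
    printf("mapped:   %.2f units per reference\n", mapped);
    printf("overhead: %.0f%%\n", (mapped / cycle - 1.0) * 100.0);
    return 0;
}

With those assumed numbers the mapped case costs about 20% more per
reference, the same order of magnitude as the unmapped-Amdahl figure above.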
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry