Path: utzoo!attcan!uunet!husc6!purdue!decwrl!nsc!stevew
From: stevew@nsc.nsc.com (Steve Wilson)
Newsgroups: comp.arch
Subject: Re: VM needed for rapid startup
Message-ID: <5144@nsc.nsc.com>
Date: 6 Jun 88 18:12:25 GMT
References: <19730@beta.UUCP> <4332@killer.UUCP>
Reply-To: stevew@nsc.UUCP (Steve Wilson)
Organization: National Semiconductor, Sunnyvale
Lines: 33

In article <4332@killer.UUCP> elg@killer.UUCP (Eric Green) writes:
> [5 levels of redirection removed]
>
>Clarification: I was not addressing virtual memory vs. no virtual memory. I
>was addressing the "problem" of ratio of CPU speed to disk speed.
>
>The stereotypical example of a program that does not run well under VM is the
>large scientific array processing program. Such a program marches from one end
>of a huge array to the other, and by the time the last pages are accessed, the
>first pages have already been paged out of RAM (meaning that the next pass
>results in the whole array being paged back into RAM).
>

What you're talking about is certainly a problem, but I would argue it has
more to do with HOW the VM is implemented on a specific processor than
with whether VM itself interferes.

If a "typical" scientific application requires 64Mb of array space to
fit the entire data set of the problem into physical memory AND
such memory is present, then the correct solution is to implement the
VM hardware in such a way that all 64Mb can be mapped at the same
time.  Thus, paging is prevented in the "typical" case.

Now the real trick is to define what the "typical" scientific application is....
If your response is that such an animal's attributes can't be defined, then
I'm not sure there is a place to begin making other design trade-offs
beyond this single issue.


Steve Wilson
National Semiconductor

[Universal disclaimer goes here!]