Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site ames.UUCP
Path: utzoo!linus!philabs!cmcl2!seismo!hao!ames!eugene
From: eugene@ames.UUCP (Eugene Miya)
Newsgroups: net.arch
Subject: Re: Virtual memory and stuff
Message-ID: <1235@ames.UUCP>
Date: Sat, 2-Nov-85 18:14:35 EST
Article-I.D.: ames.1235
Posted: Sat Nov  2 18:14:35 1985
Date-Received: Tue, 5-Nov-85 05:34:38 EST
References: <232@polaris.UUCP>
Distribution: net
Organization: NASA-Ames Research Center, Mtn. View, CA
Lines: 63

I mailed my initial response directly to Eugene Brooks, and I am
surprised that there is so much flaming on this topic.
I think the "applications fill the available memory" argument is
the correct one.  The problem with these massive memory machines
comes with Unix.  Consider, in the not too distant future, a machine with
more memory than the C-2 (which borders on this problem).  For argument's
sake, let's say 2-4 gigawords of physical memory.  Consider two
user processes, each of which takes up slightly more than 50% of physical
memory.  Let's suppose process 1's time is up and it must get swapped out.
Well, how long does it take to swap 2+ GWs [16 GBs]?  Think about this
at IBM channel speeds and we are talking whole seconds to swap, by which
time, with an unmodified scheduler, it's p2's turn to get swapped out, but
p1 is still being swapped out.  The problem is (as others mentioned)
the slow spindle rate of the disks.  Some manufacturers have adopted
disk striping techniques to write bits out in parallel to several disks
simultaneously, but we are reaching the limits of this.
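As a rough back-of-the-envelope check, the swap time is just bytes moved divided by aggregate transfer rate.  The channel rates below are assumptions picked only to show the scaling, not measured figures:

```python
# Rough swap-time estimate.  The channel rates are assumptions
# chosen to illustrate the scaling, not measured figures.

def swap_time_seconds(gigabytes, mb_per_second):
    """Time to move `gigabytes` of memory at `mb_per_second`."""
    return (gigabytes * 1024) / mb_per_second

memory_gb = 16  # 2 gigawords at 8 bytes/word

# A single ~3 MB/s channel is hopeless: thousands of seconds.
print(swap_time_seconds(memory_gb, 3))

# Even striping across, say, 32 such channels in parallel still
# leaves a swap far longer than any scheduler time slice.
print(swap_time_seconds(memory_gb, 3 * 32))
```

Whatever rate you assume, the point stands: the swap takes longer than the time slice, so the scheduler thrashes.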

Solution? Partial swapping [or paging].  There are major I/O problems
coming.  The concept of staging memories and Extended Core, as seen in
older IBM and CDC architectures and in newer ex-Denelcor and Cray (SSD)
designs, places too much of the memory management burden on the users of
large memory machines.

> IBM has at least two announced models of CPU's available with 256M real,
> though it's treated in a weird fashion.

Not quite in the same league.  The C-2 is 256 Mega WORDs; those IBMs are
BYTES: they differ by a factor of 8, practically an order of magnitude.
As you pointed out in another article scale-up is a problem.
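To make the words-vs-bytes comparison concrete (a sketch; the Cray-2's 64-bit, 8-byte word is the only assumption):

```python
# Comparing memory sizes quoted in words vs. bytes.
# Assumes the Cray-2's 64-bit (8-byte) word.

cray2_words = 256 * 2**20        # 256 megawords
cray2_bytes = cray2_words * 8    # = 2 gigabytes

ibm_bytes = 256 * 2**20          # 256 megabytes

print(cray2_bytes // ibm_bytes)  # factor of 8
```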

> 
> >Third response:  Decrease program startup?  (Tentative.)

I think this is valid, but many here at Ames won't agree.
 
> there is never enough memory.

RIGHT!
 
> remember that a Cray-2 is meant to be used to run only a few programs
> at a time.  the available real memory per process is much higher
> than for a general purpose computer system.  the Cray is hardly
> representative of the "typical" computer system.
> 
> Herb Chong...
 
The first statement above is not true, in spite of the first argument
regarding virtual memory I mentioned.  That was just a sample case.
Consider, as a Cray rep pointed out at CUG in Montreal, that this is
not a 256 MW machine (because of power-of-2 truncation); it is closer to
268 MW (decimal): an extra 12 megawords, or 96 megabytes, of memory to
support interactive work for how many users?  Much larger and faster than
any VAX.  If we want to
think typical, that's okay, but I think thinking like that will keep
IBM behind Cray in performance (nothing personal).  Would you submit
your favorite card deck to an IBM PC to keep "thruput" high? ;-)
[This latter comment was an early IBM justification against interactive
computing.  I also once saw an ad by Univac against "user-friendly"
computing for the same reasons.]
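The power-of-2 truncation arithmetic above can be checked directly (the 8-byte word is again an assumption; the figures round to the ~12 MW / ~96 MB cited):

```python
# "256 MW" binary is really 2**28 words, which is about
# 268 million words when read in decimal.

words_binary = 2**28                # 268,435,456 words
words_decimal_quote = 256_000_000   # "256 MW" read as decimal

extra_words = words_binary - words_decimal_quote
extra_bytes = extra_words * 8       # 8-byte Cray words (assumption)

print(extra_words)  # about 12.4 million words
print(extra_bytes)  # about 99 million bytes, roughly the cited 96 MB
```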

From the Rock of Ages Home for Retired Hackers:
--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb