Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!ll-xn!ames!sdcsvax!celerity!ps
From: ps@celerity.UUCP (Pat Shanahan)
Newsgroups: comp.arch
Subject: Re: Phys vs Virtual Addr Caches
Message-ID: <231@celerity.UUCP>
Date: Mon, 20-Jul-87 12:12:41 EDT
Article-I.D.: celerity.231
Posted: Mon Jul 20 12:12:41 1987
Date-Received: Tue, 21-Jul-87 05:30:51 EDT
References:  <1762@encore.UUCP>
Reply-To: ps@celerity.UUCP (Pat Shanahan)
Organization: Celerity Computing, San Diego, Ca.
Lines: 33

In article <1762@encore.UUCP> corbin@encore.UUCP (Steve Corbin) writes:
...
>
>
>The `Segmented Address Space` architecture of Prime systems solves the
>problem of multiple cached entries of the same data and doesn't require
>the address map identifier in the cache.  It works as follows:
>
>	A specific number of segments in the virtual space are used for
>	sharing and are common to the address space of every process in
>	the system.  For example, if segment 1000 is a shared segment,
>	then each process's virtual segment 1000 will map to the same
>	physical segment in memory.  Thus sharing is achieved, duplicate
>	cached entries of the same data are avoided, and the mapping for
>	the shared data is maintained in one table.
>
...
>Stephen Corbin
>{ihnp4, allegra, linus} ! encore ! corbin
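
If I've understood the scheme correctly, translation would go something
like the sketch below (the bit widths, the SHARED_BASE constant, and the
table names are my own inventions for illustration, not Prime's actual
design).  Since every process resolves a shared segment through the same
system-wide table, a shared datum has exactly one virtual address, and a
virtual-address cache never ends up holding two copies of it:

#define SEG_BITS    10                  /* assumed 10-bit segment number    */
#define OFF_BITS    22                  /* assumed 22-bit offset in segment */
#define SHARED_BASE 0x300               /* segments >= this are shared      */
#define NSEGS       (1 << SEG_BITS)

unsigned long shared_map[NSEGS];        /* one table, common to everyone    */

struct process {
    unsigned long private_map[NSEGS];   /* one table per process            */
};

unsigned long
translate(struct process *p, unsigned long vaddr)
{
    unsigned long seg = vaddr >> OFF_BITS;
    unsigned long off = vaddr & ((1UL << OFF_BITS) - 1);
    unsigned long phys_seg;

    if (seg >= SHARED_BASE)
        phys_seg = shared_map[seg];     /* identical in every process   */
    else
        phys_seg = p->private_map[seg]; /* private, per-process mapping */

    return (phys_seg << OFF_BITS) | off;
}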

I'm curious about this scheme.  How would one use it to implement, for
example, System V shared memory?  The shared memory interfaces seem to
allow processes to attach the same block of shared memory at different
addresses, and different processes to use the same virtual address for
different blocks of shared memory.
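
For instance (a sketch in C, error checking omitted; the key, the size,
and the fixed attach address are made-up values):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

char
attach_twice(void)
{
    int id = shmget((key_t)0x1234, 4096, IPC_CREAT | 0666);

    /* let the system pick the first attach address ... */
    char *a = shmat(id, (char *)0, 0);

    /* ... then attach the same segment again at a (suitably aligned)
     * address of our own choosing.  Nothing stops a different process
     * from attaching some *other* segment at the address "a" got.
     */
    char *b = shmat(id, (char *)0x400000, 0);

    a[0] = 'x';     /* a store through one mapping ...              */
    return b[0];    /* ... had better show up through the other one */
}

If shared virtual segment numbers have to be the same in every address
space, I don't see how either of those cases is handled.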

-- 
	ps
	(Pat Shanahan)
	uucp : {decvax!ucbvax || ihnp4 || philabs}!sdcsvax!celerity!ps
	arpa : sdcsvax!celerity!ps@nosc