Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!mcnc!gatech!amdcad!tim
From: tim@amdcad.AMD.COM (Tim Olson)
Newsgroups: comp.arch
Subject: Re: Phys vs Virtual Addr Caches
Message-ID: <17540@amdcad.AMD.COM>
Date: Thu, 16-Jul-87 12:10:38 EDT
Article-I.D.: amdcad.17540
Posted: Thu Jul 16 12:10:38 1987
Date-Received: Sat, 18-Jul-87 07:21:48 EDT
References: <3904@spool.WISC.EDU>
Reply-To: tim@amdcad.UUCP (Tim Olson)
Organization: Advanced Micro Devices, Inc., Sunnyvale, Ca.
Lines: 28

In article <3904@spool.WISC.EDU> lm@cottage.WISC.EDU (Larry McVoy) writes:
+-----
| Here's a question.  Why do people build their caches to respond to physical
| addresses instead of virtual addresses?  Another way to state the question
| is: why not put the VM -> PM translation logic next to (in parallel with)
| the data cache, rather than before it?
+-----

The potential benefit of this (assuming an external MMU) is a decrease
in the latency from the virtual address becoming valid to the start of
the cache access.  However, there are also problems:

	1)	Cache tags must include a process-id field (more RAM for
		the tags, larger tag comparators), or the cache must be
		flushed on every context switch (very expensive for
		large caches); see the sketch after this list.

	2)	It is very hard to provide cache consistency in a
		multiprocessor (or even uniprocessor + I/O, but less so)
		environment, because invalidation and snoop traffic
		arrives with physical addresses; it basically requires
		a reverse mapping from physical address -> virtual
		address to find the affected cache lines.
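
To make the cost in (1) concrete: below is a rough C sketch (all names
and sizes are invented here, not taken from any real design) of a
direct-mapped, virtually-tagged cache whose tags carry a process-id,
together with the flush-on-switch alternative.

    /* Sketch only: direct-mapped, virtually-tagged cache whose tags
     * carry a process-id so lines from different processes can
     * coexist.  Names and sizes are made up for illustration.
     */
    #include <stdint.h>

    #define LINE_SHIFT  4                   /* 16-byte lines */
    #define INDEX_BITS  10                  /* 1024 lines    */
    #define NLINES      (1 << INDEX_BITS)

    struct vline {
        uint32_t vtag;      /* virtual tag                            */
        uint8_t  pid;       /* process-id: extra tag RAM and a wider
                               comparator on every lookup             */
        uint8_t  valid;
        uint8_t  data[1 << LINE_SHIFT];
    };

    static struct vline vcache[NLINES];

    /* Hit only if BOTH the virtual tag and the current pid match. */
    static int vcache_hit(uint32_t vaddr, uint8_t cur_pid)
    {
        uint32_t index = (vaddr >> LINE_SHIFT) & (NLINES - 1);
        uint32_t vtag  = vaddr >> (LINE_SHIFT + INDEX_BITS);
        struct vline *l = &vcache[index];

        return l->valid && l->vtag == vtag && l->pid == cur_pid;
    }

    /* The alternative to the pid field: flush the whole cache on
     * every context switch (simple, but costly for a large cache). */
    static void vcache_flush(void)
    {
        int i;
        for (i = 0; i < NLINES; i++)
            vcache[i].valid = 0;
    }

Either way there is a price: wider tag RAM and comparators, or a full
flush on every context switch.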

All in all, if you can hide the address translation time in a pipeline
stage, you are probably better off using physical caches.
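
And, for comparison, a sketch of the physically-tagged arrangement
argued for above.  Again the names are invented, and the identity-map
tlb_translate() just stands in for a real TLB/MMU.  With 4K pages and
a cache this small, the index bits all come from the untranslated page
offset, so the cache can be indexed while the TLB translates; that
overlap is one common way the translation time gets hidden in the
pipeline.

    /* Sketch of the physically-tagged case: translate first, then
     * index and tag-compare with the physical address.  No pid field
     * and no flush on context switch are needed.
     */
    #include <stdint.h>

    #define PAGE_SHIFT  12                  /* 4K pages      */
    #define LINE_SHIFT  4                   /* 16-byte lines */
    #define INDEX_BITS  8                   /* 256 lines: LINE_SHIFT +
                                               INDEX_BITS == PAGE_SHIFT,
                                               so the index is formed
                                               from the untranslated
                                               page offset while the
                                               TLB works              */
    #define NLINES      (1 << INDEX_BITS)

    struct pline {
        uint32_t ptag;                      /* physical tag */
        uint8_t  valid;
        uint8_t  data[1 << LINE_SHIFT];
    };

    static struct pline pcache[NLINES];

    /* Stand-in for the MMU/TLB: pretend virtual == physical. */
    static uint32_t tlb_translate(uint32_t vaddr)
    {
        return vaddr;
    }

    static int pcache_hit(uint32_t vaddr)
    {
        uint32_t paddr = tlb_translate(vaddr);  /* translation stage  */
        uint32_t index = (paddr >> LINE_SHIFT) & (NLINES - 1);
        uint32_t ptag  = paddr >> (LINE_SHIFT + INDEX_BITS);
        struct pline *l = &pcache[index];

        return l->valid && l->ptag == ptag;     /* cache-access stage */
    }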

	-- Tim Olson
	Advanced Micro Devices
	(tim@amdcad.amd.com)