Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site mips.UUCP
Path: utzoo!watmath!clyde!bonnie!akgua!whuxlm!harpo!decvax!decwrl!Glacier!mips!mash
From: mash@mips.UUCP (John Mashey)
Newsgroups: net.arch
Subject: Re: Re: Cache Revisited
Message-ID: <170@mips.UUCP>
Date: Wed, 21-Aug-85 04:56:53 EDT
Article-I.D.: mips.170
Posted: Wed Aug 21 04:56:53 1985
Date-Received: Sat, 24-Aug-85 15:09:59 EDT
Distribution: net
Organization: MIPS Computer Systems, Mountain View, CA
Lines: 21

This is a response to a question from huguet@LOCUS.UCLA.EDU [sorry, mail
kept bouncing] about an earlier assertion of mine:

> f) Use of optimizing compilers that put things in registers, often driving
> the hit rate down [yes, down], although the speed is improved and there are
> fewer total memory references.

I don't know of any published numbers to back this up.  The effect has
been seen in [unpublished] simulations; it might be a good topic for research.
It does make sense, at least for data cache (instruction cache effects may
vary wildly).  The better an optimizer is, the more likely it is to put
frequently-used variables in registers, thus reducing the number of
references that are likely to be cache hits.  Consider the ultimate
case: a smart compiler and a machine with many registers, such that
most code sequences fetch a variable just once, so that most data references
are cache misses.  Passing arguments in registers also drives the hit rate down.
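
To make the effect concrete, here is a tiny C fragment (purely illustrative,
not from any real measurement; assume an ordinary data cache):

	int sum_array(int *a, int n)
	{
		int i, sum = 0;			/* candidates for registers */
		for (i = 0; i < n; i++)
			sum += a[i];		/* a[i] touched once: mostly misses */
		return sum;
	}

With a naive compiler, "i" and "sum" live in memory and are re-read on every
iteration; after the first touch those references are nearly all hits, so the
hit rate looks good.  A register allocator keeps "i" and "sum" in registers,
removing exactly the references that were hits.  Most of what remains is the
once-touched a[i] traffic, so the measured hit rate drops even though total
references and run time both go down.
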
-- 
-john mashey
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash
DDD:  	415-960-1200
USPS: 	MIPS Computer Systems, 1330 Charleston Rd, Mtn View, CA 94043