Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site masscomp.UUCP
Path: utzoo!watmath!clyde!bonnie!masscomp!carter
From: carter@masscomp.UUCP (Jeff Carter)
Newsgroups: net.arch
Subject: Re: 11/08/85 Dhrystone Benchmark Results
Message-ID: <831@masscomp.UUCP>
Date: Wed, 13-Nov-85 12:29:27 EST
Article-I.D.: masscomp.831
Posted: Wed Nov 13 12:29:27 1985
Date-Received: Thu, 14-Nov-85 00:52:31 EST
References: <1129@hou2h.UUCP> <643@cornell.UUCP>
Reply-To: carter@masscomp.UUCP (Jeff Carter)
Distribution: net.arch
Organization: Masscomp - Westford, MA
Lines: 45
Summary:

In article <643@cornell.UUCP> jqj@cornell.UUCP (J Q Johnson) writes:
>that remains in my mind with respect to Dhrystone benchmarks is the effect
>of cache.  Has anyone looked at Dhrystone benchmarks with this in mind?
>How much does a typical cache architecture (say a 4K 2-way associative
>cache, or the onboard cache on a 68020) effect Dhrystone performance?
>
I ran the Dhrystone for the MASSCOMP 5000-series machines, all of which
are based on the 68020.  The primary differences between the CPU modules
are the cache architecture and the Translation Buffer.  The results for
these machines, extracted from Richardson's article:

* MC 5400	68020-16.67MHz	RTU V3.0	cc (V4.0)	3952	4054
* MC 5600/5700	68020-16.67MHz	RTU V3.0	cc (V4.0)	4504	4746

The MC-5400 uses a cache with the following characteristics:
	8 KByte size
	Direct-Mapped
	Write-Through
	8 Byte Block Size
	Cache by Process Virtual Address

The MC-5600 and MC-5700 use a cache with the following characteristics:
	8 KByte size
	2-Way Associative
	Write-Through
	8 Byte Block Size
	Cache by Physical Address

Of course, both use the '020 internal instruction cache.  There are
several other vendors using '020s with different cache architectures
represented in the results; it is informative to examine these.

Both systems have zero wait states on a read cache hit, and zero wait
states on writes (regardless of cache hit or miss).  The translation
buffer is quite different in these two systems, but given the small
program size, I doubt it has any effect.

The effect of the 2-way cache is (probably) to cache the instructions in
one cache bank and the data in the other bank (not really, but it serves
as a nice model).  In a direct-mapped cache, the instructions and data
tend to kick each other out as the program loops.

Jeff Carter
MASSCOMP
1 Technology Park
Westford, MA  01886
...!{ihnp4|decvax|allegra}!masscomp!carter
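
P.S.  As a rough illustration of the conflict-miss argument above, here is
a toy model of the two organizations in C.  It only does the set-index
arithmetic ((address / block size) mod number of sets); the addresses are
made up for the example and the code is not a model of the actual MASSCOMP
hardware, just of the 8 KByte / 8-byte-block geometry described above.

	#include <stdio.h>

	#define CACHE_SIZE	8192	/* 8 KByte total */
	#define BLOCK_SIZE	8	/* 8-byte blocks */

	/* Direct-mapped: 1024 sets, one block per set. */
	#define DM_SETS		(CACHE_SIZE / BLOCK_SIZE)

	/* 2-way associative: 512 sets, two blocks per set. */
	#define SA_SETS		(CACHE_SIZE / (2 * BLOCK_SIZE))

	int main(void)
	{
	    unsigned long inst_addr = 0x10000UL;		/* instruction fetch (made up) */
	    unsigned long data_addr = inst_addr + CACHE_SIZE;	/* data reference, 8K away */

	    /* Same direct-mapped index, so the two references evict
	       each other on every pass through the loop (conflict miss). */
	    printf("direct-mapped: inst set %lu, data set %lu -- thrash\n",
		(inst_addr / BLOCK_SIZE) % DM_SETS,
		(data_addr / BLOCK_SIZE) % DM_SETS);

	    /* Same set in the 2-way cache too, but both blocks can sit
	       in the set at once, so the loop hits after the first pass. */
	    printf("2-way: inst set %lu, data set %lu -- both fit\n",
		(inst_addr / BLOCK_SIZE) % SA_SETS,
		(data_addr / BLOCK_SIZE) % SA_SETS);

	    return 0;
	}

The point of the toy is just that two references whose addresses collide
on the set index thrash a direct-mapped cache but coexist in a 2-way one;
whether Dhrystone's code and data actually collide depends on where the
linker and the stack happen to put them.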