Path: utzoo!attcan!uunet!steinmetz!davidsen
From: davidsen@steinmetz.ge.com (William E. Davidsen Jr)
Newsgroups: comp.arch
Subject: Re: RISC bashing at USENIX
Message-ID: <11504@steinmetz.ge.com>
Date: 12 Jul 88 18:17:30 GMT
References: <6888@ico.ISC.COM> <11496@steinmetz.ge.com> <6965@ico.ISC.COM>
Reply-To: davidsen@crdos1.UUCP (bill davidsen)
Organization: General Electric CRD, Schenectady, NY
Lines: 30

In article <6965@ico.ISC.COM> rcd@ico.ISC.COM (Dick Dunn) writes:

| >   We have looked at the NN benchmarks for a number of machines (I
| > obviously can't say which ones), and my personal reaction is that they
| > are reasonable and valid for business applications...
| 
| OK, so which benchmarks are the good ones?  Note that the one that EE Times
| gave such prominent coverage was one of the simplest--a loop with just 4
| calculations (+-*/) on 16-bit integers, running 1 to 15 copies at a time.

  The decision is yours... NN gives the result of the test and what it
measures. I don't disagree that considering (any) one benchmark as an
indicator is probably a waste, but with a selection of results you can
compare two (or more) machines in those areas which apply to your
situation.

  I have a UNIX benchmark suite which I have run on a number of machines
for my personal edification. It measures some raw performance numbers
such as the speed of arithmetic for all data types, transcendental
functions, test and branch for int and float, disk access and transfer
times for large and small files, speed of bit fiddling such as Gray to
binary, etc. Then I measure speed of compile, performance under
multitasking load, speed of pipes and system calls, and a few other
things. The *one* thing I measure which consistently represents the
overall performance of the machine is the real time to run the entire
benchmark suite.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me