Path: utzoo!attcan!uunet!mcvax!ukc!dcl-cs!aber-cs!pfw
From: pfw@aber-cs.UUCP (Paul Warren)
Newsgroups: comp.lang.ada
Subject: Re: performance benchmarking
Message-ID: <1086@aber-cs.UUCP>
Date: 10 Aug 89 08:22:19 GMT
References: <275@ccu.UManitoba.CA>
Organization: UCW,Aberystwyth,WALES,UK
Lines: 35

In article <275@ccu.UManitoba.CA>, roseman@ccu.UManitoba.CA (roseman) writes:
> The problem with that is it's almost impossible to get any accurate
> measurements that way - you've got all the little Unix daemons popping in
> and out and using up some time.  We have tests which vary from 0 usecs
> to almost 4 usecs per iteration, which is most unacceptable!
> 
> What can you do to correct things?  Run tests 25 times (say) and take
> the best?  The average?  Increase the iteration count to some ridiculous
> amount to try to compensate?

One good way of avoiding all the little daemons is to suspend
"cron", which is responsible for starting them periodically.
It also helps to run the tests during periods of low use, especially
if your machine is part of a network.

Large iteration counts also help.  How are you measuring the times?
Are you using the Unix command "time", or CALENDAR.CLOCK, or some
other means?  When measuring times under Unix, you can learn quite a
lot by comparing the user CPU time, the system CPU time and the
elapsed time.
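
For what it's worth, here is a minimal sketch of the CALENDAR.CLOCK
approach (the procedure name and the iteration count are placeholders
of mine, not anybody's real harness).  Remember that CALENDAR.CLOCK
gives elapsed time, not CPU time, so the daemons still show up in the
numbers; timing an empty loop first at least lets you subtract the
loop overhead:

with CALENDAR; use CALENDAR;
with TEXT_IO;  use TEXT_IO;

procedure TIME_LOOP is
   package FLT_IO is new FLOAT_IO (FLOAT);
   ITERATIONS : constant := 100_000;
   START, STOP        : TIME;
   OVERHEAD, MEASURED : DURATION;
begin
   -- Time an empty loop first, to estimate the loop overhead.
   START := CLOCK;
   for I in 1 .. ITERATIONS loop
      null;
   end loop;
   STOP     := CLOCK;
   OVERHEAD := STOP - START;

   -- Time the same loop around the code under test.
   START := CLOCK;
   for I in 1 .. ITERATIONS loop
      null;   -- replace with the statements being measured
   end loop;
   STOP     := CLOCK;
   MEASURED := STOP - START;

   -- Convert to FLOAT before dividing, so the answer is not
   -- quantised to DURATION'SMALL.
   PUT ("Net seconds per iteration: ");
   FLT_IO.PUT (FLOAT (MEASURED - OVERHEAD) / FLOAT (ITERATIONS));
   NEW_LINE;
end TIME_LOOP;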

A colleague of mine wrote a package for timing portions of code.
You declare a marker for each fragment, and at the appropriate
points make one call to start recording the time and another to
stop.  We used it to measure "tools" written on top of CAIS, and
both the "tools" and the CAIS implementation were heavily
instrumented with this package.  If anyone is interested, I'll
post it; a sketch of the idea follows.
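
I won't reproduce his code here, but a hypothetical package along
the same lines might look like this (the names and the marker range
are my own invention, not my colleague's code):

package TIMING is
   -- One MARKER per code fragment to be measured.
   MAX_MARKERS : constant := 50;
   type MARKER is range 1 .. MAX_MARKERS;

   procedure START   (M : in MARKER);
   procedure STOP    (M : in MARKER);
   function  ELAPSED (M : MARKER) return DURATION;
end TIMING;

with CALENDAR; use CALENDAR;

package body TIMING is
   STARTED     : array (MARKER) of TIME;
   ACCUMULATED : array (MARKER) of DURATION := (others => 0.0);

   procedure START (M : in MARKER) is
   begin
      STARTED (M) := CLOCK;
   end START;

   procedure STOP (M : in MARKER) is
   begin
      -- Accumulate, so a fragment inside a loop is summed over
      -- all of its executions.
      ACCUMULATED (M) := ACCUMULATED (M) + (CLOCK - STARTED (M));
   end STOP;

   function ELAPSED (M : MARKER) return DURATION is
   begin
      return ACCUMULATED (M);
   end ELAPSED;
end TIMING;

A tool then brackets each fragment with TIMING.START (N) and
TIMING.STOP (N), and reports TIMING.ELAPSED (N) at the end of the
run.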



-- 
Paul Warren,				tel +44 970 622439
Computer Science Department,		pfw%cs.aber.ac.uk@uunet.uu.net (ARPA)
University College of Wales,		pfw@uk.ac.aber.cs (JANET)
Aberystwyth, Dyfed, United Kingdom. SY23 3BZ.