Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!osupyr.mast.ohio-state.edu!vkr
From: vkr@osupyr.mast.ohio-state.edu (Vidhyanath K. Rao)
Newsgroups: comp.sys.amiga
Subject: Re: priorities (was Re: (none))
Message-ID: <635@osupyr.mast.ohio-state.edu>
Date: 25 Jun 88 21:05:19 GMT
References: <1814@van-bc.UUCP> <128@quintus.UUCP> <4601@killer.UUCP>
Organization: Ohio State Math-Stats Dept
Lines: 23

In article <4601@killer.UUCP>, rgj@killer.UUCP (Randy Jouett) writes:

> In article <128@quintus.UUCP> pds@quintus.UUCP (Peter Schachte) writes:
>> Maybe this is a naive question, but why is it a mistake to make a clock
>> program run at priority 20?[...]  Who misses a couple of milliseconds
>> every second?

> [...] it does not require a high process priority for accuracy.
The question is how often the clock program wakes up. Any program that
must do something at regular intervals is supposed to use timer.device
alarms. If that is the case, the arguments above are correct.

The norm for such programs on 'lesser' machines is to put something in
the interrupt server chain for the interrupt that occurs every 1/60th of
a second. Whether a clock programmer is a twit depends on this. These
types of programs on the C64 actually were written by twits: they screw
up the vectors and the server chains.

I haven't seen the source to any of these clock programs. So the comments
above are not directed against anybody in particular.

Finally, if the code executed takes only 2-3 ms, the priority is irrelevant.
How do you set a clock to 2 ms accuracy? Do you have a cesium clock or the like?