Path: utzoo!mnetor!uunet!husc6!im4u!ut-sally!nather
From: nather@ut-sally.UUCP (Ed Nather)
Newsgroups: comp.arch
Subject: Re: RISC != real-time control
Message-ID: <11545@ut-sally.UUCP>
Date: 9 May 88 15:06:51 GMT
References: <1534@pt.cs.cmu.edu> <832@swlabs.UUCP>
Organization: U. Texas CS Dept., Austin, Texas
Lines: 33
Summary: critical timing

In article <832@swlabs.UUCP>, jack@swlabs.UUCP (Jack Bonn) writes:
> From article <1534@pt.cs.cmu.edu>, by schmitz@FAS.RI.CMU.EDU (Donald Schmitz):
> > In article <1521@pt.cs.cmu.edu> koopman@A.GP.CS.CMU.EDU (Philip Koopman) writes:
> > 
> > This may be straying somewhat from the original point, but what sort of
> > applications really have such exact timing deadlines?  
> 
> We had a 2.5 MHz Z-80 and a periodic interrupt whose period was 1 msec.
> Doesn't leave much time for background processing.
> 

Our data acquisition system for time-series analysis of variable stars also had
1 msec interrupts, imposed on a Nova minicomputer with a register-to-register
add time of about 5 usec.  If your interrupt routine chews up 100 usec, you
still have 90% of the CPU left to do "background" processing (I always thought
of it as "foreground," because it's what the user sees -- keyboard response,
display, etc.).
That meant keeping the interrupt routine short in the worst case, and allowing
ONLY the timing interrupt -- all other I/O was polled or DMA.  That let us
specify the worst-case condition -- everything active all at once -- and verify
we'd never lose an interrupt.  Losing one would have been a disaster: we'd get
data that looked fine but was actually wrong.  Not as dramatic as slinging
molten glass at someone, of course, but still awful.
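
For concreteness, here's a rough sketch of that structure -- in C, not the
Nova code we actually ran, and with made-up names for the device and display
routines.  The timing interrupt does the bare minimum and hands a flag to the
foreground loop; everything else is polled, so the worst-case path through
the interrupt routine is easy to bound:

/* Sketch only: the device and display routines are hypothetical stand-ins. */
volatile unsigned sample;         /* latest reading, written only by the ISR  */
volatile int sample_ready;        /* set by the ISR, cleared by foreground    */
volatile int overrun;             /* set if a sample was never consumed       */

static unsigned read_photon_counter(void) { return 0; } /* stand-in device read */
static void record_sample(unsigned s) { (void)s; }      /* stand-in data store  */
static void poll_keyboard(void) { }                     /* stand-in polled I/O  */
static void update_display(void) { }                    /* stand-in polled I/O  */

/* 1 msec timer interrupt: do the absolute minimum and get out, so the
   worst-case path stays well under a 100 usec budget (10% of the CPU). */
void timer_isr(void)
{
    if (sample_ready)             /* previous sample never consumed --        */
        overrun = 1;              /* better to flag it than record bad data   */
    sample = read_photon_counter();
    sample_ready = 1;
}

/* Foreground loop: everything except the timer is polled (or DMA), so the
   timer interrupt is the only asynchronous event you have to budget for.  */
int main(void)
{
    for (;;) {
        if (sample_ready) {
            record_sample(sample);
            sample_ready = 0;
        }
        poll_keyboard();          /* keyboard response the user sees          */
        update_display();         /* display update the user sees             */
    }
}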

I suspect time-critical software design will become more and more common as
computers get faster, just because you can consider software control where
only hardware was fast enough before.


-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU