Path: utzoo!utgpu!water!watmath!clyde!ima!spdcc!ftp!dab
From: dab@ftp.COM (Dave Bridgham)
Newsgroups: comp.sys.ibm.pc
Subject: Re: Turbo C Debugger (really: profiling under DOS vs. Unix)
Message-ID: <222@ftp.COM>
Date: 7 Jul 88 15:35:19 GMT
References: <11590@agate.BERKELEY.EDU> <1803@akgua.ATT.COM> <12338@mimsy.UUCP>
Organization: FTP Software Inc., Cambridge, MA
Lines: 20
In-reply-to: jds@mimsy.UUCP's message of 6 Jul 88 20:07:33 GMT
Posting-Front-End: GNU Emacs 18.47.2 of Thu Aug 13 1987 on ftp (berkeley-unix)

In article <12338@mimsy.UUCP> jds@mimsy.UUCP (James da Silva) writes:

   The lack of multitasking and protection allow you to get very accurate
   results free of any distortions.  The PC's clock chip keeps track of time in
   1 microsecond increments.  This counter can be read (for example) at the
   entry and exit of each C function.

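For context, the entry/exit scheme described above would look something
like this (a hypothetical sketch, not anyone's actual profiler;
read_counter() is a stub standing in for reading the timer chip's
channel 0, and the subtraction order assumes a counter that counts
down):

```c
#include <stdint.h>

static uint16_t fake_ticks = 0xFFFF;  /* stand-in for the hardware counter */
static uint16_t read_counter(void) { return fake_ticks; }

static uint32_t total_ticks = 0;  /* ticks accumulated inside profiled code */

void profiled_function(void)
{
    uint16_t start = read_counter();   /* read at entry */
    fake_ticks -= 1234;                /* stand-in for the body taking time */
    uint16_t end = read_counter();     /* read at exit */
    /* The counter counts DOWN, so elapsed ticks are start - end,
     * taken mod 2^16 so a single wrap between reads still works. */
    total_ticks += (uint16_t)(start - end);
}
```

As the rest of this article explains, though, the raw reading isn't
usable as-is.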
While the timer chip does have a 16-bit counter that ticks about once
a microsecond, there's a problem: the chip has a bug.  When you read
that counter, what you get is the value shifted left one bit.  In
other words, you lose the most significant bit.  I kludged around this
problem by comparing the current value of the counter with the value I
read on the previous call.  If it's less and I haven't taken a timer
interrupt in between, then I set the high bit on the value I return.
This of course isn't guaranteed to work, but if the routine is called
often enough (at least once every 27 milliseconds) then it should stay
accurate.  If anyone knows a way around this bug in the chip, I'd
really like to hear about it.
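
Roughly, the bookkeeping looks like this (an untested sketch, not the
code I'm actually running; it assumes the counter counts down from
0xFFFF, reloads on the timer interrupt, and that the raw read really
is the true value shifted left one bit, and it takes the raw reading
as a parameter so the port I/O can be left out):

```c
#include <stdint.h>

/* State carried between calls: the last raw reading and the current
 * guess at the lost high bit of the true counter value. */
static uint16_t prev_raw = 0xFFFE;   /* raw value just after a reload */
static int high_bit = 1;             /* counter starts in its upper half */

/* raw is the (shifted) hardware reading; timer_interrupt_seen is
 * nonzero if a timer interrupt occurred since the previous call. */
uint16_t recover_count(uint16_t raw, int timer_interrupt_seen)
{
    if (timer_interrupt_seen) {
        /* Counter reloaded to 0xFFFF, so it's back in the upper half. */
        high_bit = 1;
    } else if (raw > prev_raw) {
        /* The counter counts down, so the raw value can only increase
         * when the shifted reading wrapped: the true count just
         * crossed from the upper half into the lower half. */
        high_bit = 0;
    }
    prev_raw = raw;
    return (uint16_t)((raw >> 1) | (high_bit ? 0x8000 : 0));
}
```

The guess goes wrong if more than half a timer period (those 27
milliseconds) passes between calls, since a whole wrap of the shifted
value can then slip by unseen.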

						David Bridgham