Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!linus!decvax!harpo!seismo!hao!hplabs!sri-unix!drockwel@bbn-vax
From: drockwel@bbn-vax@sri-unix.UUCP
Newsgroups: net.unix-wizards
Subject: Re: Why is the "real time" so much greater than the "cpu time"
Message-ID: <3142@sri-arpa.UUCP>
Date: Fri, 15-Jul-83 23:54:00 EDT
Article-I.D.: sri-arpa.3142
Posted: Fri Jul 15 23:54:00 1983
Date-Received: Sun, 17-Jul-83 17:02:16 EDT
Lines: 14

From: Dennis Rockwell

Your question indicates a basic misapprehension:  both the "user time"
and the "sys time" reported by the time command are CPU times.  The
user time is the time spent executing your own program's code, and the
sys time is the time the operating system spent supporting your
program (executing system calls on its behalf, for instance).  Thus,
if your "user time" is close to the "real time", your system is pretty
well tuned, at least for CPU-bound processes.
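
To make the distinction concrete, here is a minimal C sketch of the
same bookkeeping using the times(2) interface (sysconf(_SC_CLK_TCK) is
the modern POSIX spelling; on a V7-era system the HZ constant serves
the same purpose).  The busy_loop workload and its loop bound are just
illustrative stand-ins for your program:

#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

/* Hypothetical CPU-bound workload: pure user-mode arithmetic. */
static void busy_loop(void)
{
    volatile long n = 0;
    for (long i = 0; i < 100000000L; i++)
        n += i;
}

int main(void)
{
    struct tms start, end;
    clock_t real_start, real_end;
    long ticks = sysconf(_SC_CLK_TCK);   /* clock ticks per second */

    real_start = times(&start);          /* return value is real time in ticks */
    busy_loop();
    real_end = times(&end);

    printf("user: %.2fs  sys: %.2fs  real: %.2fs\n",
           (double)(end.tms_utime - start.tms_utime) / ticks,
           (double)(end.tms_stime - start.tms_stime) / ticks,
           (double)(real_end - real_start) / ticks);
    return 0;
}

On a lightly loaded machine the user figure should land close to the
real figure, which is exactly the "well tuned" case described above.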

Of course, the terms I used above are subject to quantization errors.
To be more accurate, the CPU times are accumulated by sampling:  each
time the clock ticks, the whole tick is charged to whatever process
had control of the CPU at that instant.  IO time gets added in (as sys
time); also, if the clock ticks while the system is servicing a device
interrupt, that tick gets charged to you, even if the device that
interrupted has nothing to do with you.
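
You can watch the quantization directly.  The sketch below (same
caveats as above; the loop bound is an arbitrary, roughly sub-tick
amount of work) runs a short burst several times and prints how many
whole ticks were charged to each run -- depending on where the clock
interrupts happen to land, a given run is charged zero ticks or one:

#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

int main(void)
{
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    printf("clock ticks per second: %ld\n", ticks_per_sec);

    for (int run = 0; run < 5; run++) {
        struct tms before, after;
        volatile long n = 0;

        times(&before);
        for (long i = 0; i < 2000000L; i++)  /* sub-tick burst of user work */
            n += i;
        times(&after);

        /* Whole ticks charged to this run's user time. */
        printf("run %d: charged %ld user tick(s)\n",
               run, (long)(after.tms_utime - before.tms_utime));
    }
    return 0;
}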