Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10 beta 3/9/83; site desint.UUCP
Path: utzoo!linus!philabs!cmcl2!seismo!hao!hplabs!sdcrdcf!trwrb!desint!geoff
From: geoff@desint.UUCP (Geoff Kuenning)
Newsgroups: net.unix-wizards
Subject: Load control and intelligence in schedulers
Message-ID: <151@desint.UUCP>
Date: Thu, 11-Oct-84 00:05:09 EDT
Article-I.D.: desint.151
Posted: Thu Oct 11 00:05:09 1984
Date-Received: Sat, 13-Oct-84 01:25:41 EDT
References: <2342@sdcc3.UUCP>
Organization: his home computer, Thousand Oaks, CA
Lines: 50

(Discussing the UCSD load-control mechanism)

>The real advantage to this approach is
>that kernel based approaches can not easily distinguish between a vi and a
>compile, causing interactive jobs to become unuseable.
>		Keith Muller

Gee, when I was in college (early 70's) our big CDC 6500 ran a "kernel"
scheduler that did a real good job at exactly that, on a dynamic basis
(i.e., a big vi operation like a huge global substitution ran at
"background" priority).  The scheduler had multiple layers:  an input
queue for batch jobs, a "pool" of 40 potentially runnable jobs, and 7
"control points" (read: partitions) for jobs actually in memory and
available for CPU usage.  (The limit of 7 processes in memory would
have been far too small had we not had slowish core for swapping,
especially since 3 were permanently occupied by system processes.)

Borrowing an idea that worked really well at Purdue, most processes ran
under a fairly standard priority-adjustment scheme, where I/O improved
priority and CPU usage decreased it.  However, any job that blocked for
*terminal* I/O got a big, short-term boost in priority when that I/O
completed.  (How long the boost should last depends on CPU speed--I
think we used a few CPU seconds.  The idea is to pick a number larger
than what an editor usually needs before it reads more from the
terminal, but smaller than the amount of time taken by your typical
compile.)  Once this limit expired, the process's priority dropped
drastically and became subject to the standard scheduling algorithms.
(There is a rough sketch of this boost logic at the end of this
article.)

The other trick was to have a scheduler that was smart about picking
the 40 potentially-runnable jobs and about bringing the 7 into memory.
The biggest improvement in a Unix system (where it is hard to control
the number of potentially-runnable jobs without something like the UCSD
load-control system) would come from tuning the swapping scheduler
better.  A swap takes a large amount of time; you want to make that
time pay off by picking a process that will stay out for a long time,
so that the time spent swapping is small by comparison.  In addition,
you would like to pick a process that is consuming a lot of the
resource you need--memory, I/O, or CPU--which requires better
per-process statistics (especially on I/O rates) than most Unixes keep.
(Again, see the second sketch at the end.)

Even the best scheduler cannot be perfect.  Ours had operator commands
to change process priorities and to lock them into or out of memory.
Many is the time I have seen a good operator clear up a thrashing
system by either forcing an offending process to completion or swapping
it out until the load level had dropped.
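For the curious, here is a rough sketch in C of the sort of boost logic
I mean.  Every name in it (the proc fields, the constants, the function
names) is made up for illustration--this is not the actual CDC code,
and it is not how 4.2 lays out its proc structure either:

	/*
	 * Sketch of a terminal-I/O priority boost.  All names and
	 * numbers are invented for illustration.
	 */
	#define HZ		60		/* clock ticks per second */
	#define BOOST_PRI	100		/* priority right after tty input */
	#define BOOST_TICKS	(3 * HZ)	/* boost lasts ~3 CPU seconds */
	#define BASE_PRI	50		/* normal starting priority */

	struct proc {
		int	p_pri;		/* current priority; bigger runs sooner */
		int	p_boost;	/* CPU ticks of tty boost remaining */
		int	p_cpu;		/* decaying measure of recent CPU use */
	};

	/* Call when a read from the controlling terminal completes. */
	void
	ttyboost(struct proc *p)
	{
		p->p_pri = BOOST_PRI;
		p->p_boost = BOOST_TICKS;
	}

	/* Ordinary (non-terminal) I/O completion gets a modest bump only. */
	void
	iodone(struct proc *p)
	{
		if (p->p_boost == 0 && p->p_pri < BASE_PRI)
			p->p_pri++;
	}

	/* Call on every clock tick charged to the running process. */
	void
	schedtick(struct proc *p)
	{
		p->p_cpu++;
		if (p->p_boost && --p->p_boost == 0)
			p->p_pri = BASE_PRI - p->p_cpu;	/* back to normal decay */
	}

The whole point is the two time scales:  a tty wakeup gets a large but
strictly time-limited edge, so an editor feels instant at the keyboard
while a huge global substitute burns through its boost and decays to
background along with the compiles.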
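And here is one way the swap-out decision could score its candidates.
Again, all the fields and weights are hypothetical--half of these
statistics are exactly the ones most Unixes don't keep, which was my
point:

	/*
	 * Sketch of swap-out victim selection.  Fields and weights are
	 * hypothetical; the decision wants per-process statistics
	 * (especially I/O rates) that stock Unix mostly lacks.
	 */
	#define NPROC	64

	struct pstat {
		int	inuse;		/* slot occupied? */
		int	locked;		/* operator said: don't touch */
		long	rss;		/* resident pages (cost to swap) */
		long	iorate;		/* recent I/O operations/second */
		long	cpushare;	/* recent percent of CPU */
		long	sleeptime;	/* seconds since last runnable */
	};

	struct pstat ptab[NPROC];

	/*
	 * Pick the process whose eviction pays best:  one likely to
	 * stay out a long time, and one hogging whichever resource is
	 * currently scarce (0 = memory, 1 = I/O, 2 = CPU).
	 */
	int
	pickswap(int scarce)
	{
		int i, victim = -1;
		long score, best = 0;

		for (i = 0; i < NPROC; i++) {
			if (!ptab[i].inuse || ptab[i].locked)
				continue;
			score = 10 * ptab[i].sleeptime;	/* will stay out a while */
			switch (scarce) {
			case 0:	score += ptab[i].rss;		break;
			case 1:	score += ptab[i].iorate;	break;
			case 2:	score += ptab[i].cpushare;	break;
			}
			score -= ptab[i].rss / 4;	/* big images cost more to move */
			if (victim < 0 || score > best) {
				best = score;
				victim = i;
			}
		}
		return victim;		/* -1 if nobody is swappable */
	}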
Now if we could only package good old Toshio and ship him with each 4.2
system...:-)
-- 
	Geoff Kuenning
	First Systems Corporation
	...!ihnp4!trwrb!desint!geoff