Xref: utzoo news.admin:2830 news.software.b:1431
Path: utzoo!utgpu!water!watmath!clyde!att!pacbell!lll-tis!helios.ee.lbl.gov!nosc!ucsd!ucsdhub!esosun!seismo!uunet!van-bc!sl
From: sl@van-bc.UUCP (pri=-10 Stuart Lynne)
Newsgroups: news.admin,news.software.b
Subject: Re: a "nice" rnews...
Keywords: cpu hog, high load average
Message-ID: <1822@van-bc.UUCP>
Date: 25 Jun 88 05:48:09 GMT
References: <1751@tolerant.UUCP>
Reply-To: sl@van-bc.UUCP (Stuart Lynne)
Organization: Wimsey Associates, Vancouver, BC.
Lines: 41

In article <1751@tolerant.UUCP> jane@tolerant.UUCP (Jane Medefesser) writes:
>Is there a way I can "nice" rnews to a lower priority? I thought
>about renaming rnews to nrnews then making rnews a shell script
>that does a "nice -5 nrnews" but I suspect it's not that simple.

van-bc is a small 68010 box at 10 MHz, so I can sympathize.

I run news with SPOOLNEWS defined, so rnews just spools incoming batches
for processing later. This means that nothing ever takes too long when it
runs from uuxqt.

cron starts up the unbatching script every fifteen minutes:

	15,30,45,0 * * * * /bin/su - news -c "nice -20 /local/lib/news/newshourly"

This keeps news unbatching well in the background. Newshourly also checks
whether a copy is already running, so there is never more than one.
Batching is done in a similar fashion, and the batching script likewise
checks that no unbatching is going on.
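The guts of newshourly are roughly as follows. The paths and lock file
name are local inventions, and the lock handling is simplified (no
stale-lock recovery); rnews -U is what 2.11 B news uses to process input
spooled under SPOOLNEWS:

	#! /bin/sh
	# newshourly - unbatch news spooled by rnews under SPOOLNEWS
	LOCK=/local/lib/news/LOCK.unbatch	# hypothetical lock file
	if [ -f $LOCK ]				# another copy still running?
	then
		exit 0
	fi
	echo $$ > $LOCK
	trap "rm -f $LOCK" 0 1 2 15		# clean up on exit or signal
	/local/lib/news/rnews -U		# process everything uuxqt spooled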

To keep uucico from having too many problems I have all uucp logins
start up with pri=-5 (that decreases their nice value, increasing their
priority).
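On a 4BSD-derived system, login will honor a pri= prefix on the gecos
field in /etc/passwd; check your login(1) before relying on it. A made-up
entry (everything but the gecos field is invented) looks like:

	nuucp:PASSWD:6:6:pri=-5 uucp login:/usr/spool/uucppublic:/usr/lib/uucp/uucico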

Another thing to look out for is a scrambled free list. News creates
large numbers of small files which are deleted and reallocated
repeatedly, a great way to end up with a random free list, which is
death on performance. Running fsck -s or -S every once in a while helps
a lot.
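For example, with the news filesystem unmounted (the device names here
are made up; substitute your own):

	umount /dev/usr2		# hypothetical news filesystem
	fsck -s /dev/rusr2		# rebuild the free list in order
	mount /dev/usr2 /usr/spool/news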

To preserve disk space with multiple outgoing feeds you can trick news
into creating one outgoing batch file, and then trick uucp into sending
it to the remote systems by forging links to the file rather than
creating copies. Once the links are made, delete the original file. Then
as uucico delivers the files it removes the links; when the last site
gets its copy, the data just quietly disappears. This saves disk space
*and* having to batch the same crud up more than once. My only overhead
to add another full feed site is the cpu cycles to deliver it.
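In outline the trick looks something like this. The spool layout and the
D. file name are forged by hand, and the naming scheme varies between
uucp versions, so treat it as a sketch rather than something to run
blind:

	BATCH=/usr/spool/batchnews/togo		# hypothetical single batch file
	for site in sitea siteb sitec
	do
		# link, don't copy: every site's D. file shares the same blocks
		ln $BATCH /usr/spool/uucp/D.${site}X1234
		# ... a matching C. file (and X. file so rnews runs on the
		# far end) has to be forged here as well ...
	done
	rm $BATCH	# the links keep the data alive; the last delivery frees it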


-- 
Stuart.Lynne@wimsey.bc.ca {ubc-cs,uunet}!van-bc!sl     Vancouver,BC,604-937-7532