Path: utzoo!utgpu!water!watmath!clyde!mcdchg!chinet!les
From: les@chinet.UUCP (Leslie Mikesell)
Newsgroups: comp.unix.questions
Subject: Re: Rename bug?
Message-ID: <5748@chinet.UUCP>
Date: 3 Jun 88 21:10:10 GMT
References: <9312@eddie.MIT.EDU> <467@aiva.ed.ac.uk> <9341@eddie.MIT.EDU> <2144@rpp386.UUCP> <5702@chinet.UUCP>  <5730@chinet.UUCP> <22180@labrea.Stanford.EDU>
Reply-To: les@chinet.UUCP (Leslie Mikesell)
Organization: Chinet - Public Access Unix
Lines: 23

In article <22180@labrea.Stanford.EDU> karish@denali.stanford.edu (Chuck Karish) writes:
>
>    In article <5730@chinet.UUCP> les@chinet.UUCP (Leslie Mikesell) writes:
>	This leaves an interval between unlink("xx") and link("TMP4653","xx")
>	when an attempt to open("xx") will fail.  I work with a system that

>Why not have each program that uses the shared files lock them
>while they're in use?
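
(To spell out the sequence quoted above: the update writes the new copy
under a temporary name and then swaps the links.  Roughly this -- a
sketch only; swap_in is a made-up name and error handling is trimmed:

	/*
	 * Replace "xx" with the copy already written as "TMP4653".
	 * There is no atomic rename(2) here, so an open("xx") that
	 * lands between the unlink() and the link() fails with ENOENT.
	 */
	swap_in()
	{
		if (unlink("xx") == -1)		/* old name vanishes */
			return (-1);
		/* <-- the window: "xx" does not exist here */
		if (link("TMP4653", "xx") == -1)
			return (-1);
		return (unlink("TMP4653"));	/* drop the temp name */
	}

4.3BSD's rename(2) does the replacement atomically, which is why the
window doesn't exist there; System V currently has no equivalent call.)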

Well, some of the programs are "cat", "pg", "compress", "uucp" and a print
spooler as well as my interactive access program.  But the real reason
I didn't consider locks is that the 10-minute updates cannot be deferred
until no one happens to be reading the previous copy.  I suppose I could
make a copy each time someone reads a file, which would limit the time the
file needs to be locked, but that would cause a real performance problem.
This is currently running on a 3B2 (actually two of them, with the files
duplicated over RFS), and the updating machine stays pretty busy now.
Trying the open() twice seemed like the only reasonable thing to do,
although it does take a bit longer to respond to invalid requests.
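
For what it's worth, the double try amounts to something like this (a
sketch; open_retry and the one-second pause are just illustrative
choices, not anything magic):

	#include <fcntl.h>
	#include <errno.h>

	int
	open_retry(name)	/* try twice before giving up */
	char *name;
	{
		int fd;

		if ((fd = open(name, O_RDONLY)) >= 0)
			return (fd);
		if (errno != ENOENT)	/* real failure, not the window */
			return (-1);
		sleep(1);		/* let the link() half finish */
		return (open(name, O_RDONLY));
	}

An open that still fails the second time is treated as a genuinely
invalid request, which is where the slower response comes from.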

How do other people deal with files that are frequently updated? 

Les Mikesell