Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.3 alpha 4/15/85; site ucbvax.ARPA
Path: utzoo!watmath!clyde!burl!ulysses!ucbvax!info-vax
From: info-vax@ucbvax.ARPA
Newsgroups: fa.info-vax
Subject: Re: compressing disk space
Message-ID: <8725@ucbvax.ARPA>
Date: Wed, 3-Jul-85 04:06:53 EDT
Article-I.D.: ucbvax.8725
Posted: Wed Jul  3 04:06:53 1985
Date-Received: Thu, 4-Jul-85 04:11:01 EDT
Sender: daemon@ucbvax.ARPA
Organization: University of California at Berkeley
Lines: 51

From: Kevin Carosso

> VMS has always had problems with fragmented disk packs.

This isn't necessarily true. As with everything else in life, it depends on the situation... I have a fairly large configuration, and never perform file-system backup/restore cycles. I keep an eye on my disk fragmentation with the REPORT=DISK function of the SPM utility, so I know I'm not really fragmenting. I do, however, take some precautions that may not be feasible for all sites.

I guess I should stress that I get off so easy because I am not crunched for disk space. Our system seems to hover right around 75% full, without too much bouncing up and down. There is a steady upward trend, but we have been adding disks slowly over the last few years to keep things in check.

Because I have the disk space, I have been rather liberal with cluster factors on my disks. My system disk, an RP07, is clustered at 10. According to SPM, I'm using about 91% of the allocated space with real data. The rest is wasted on the cluster size. This is not unreasonable, and seems to keep the fragmentation down and performance up. My user disks are Eagles, and I have them clustered at 5. Due to the nature of user-type files, however, I generally only see something like 71% of allocated space going to actual data. This could be unacceptable in many situations but, again, since I have the space to spare I'd rather use a little of it to make my job easier and keep file-system performance consistently good.
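The tradeoff above is just rounding arithmetic: a cluster factor of N means every file's allocation gets rounded up to a multiple of N blocks, so lots of small files waste proportionally more space than a few big ones. A quick back-of-the-envelope sketch (the file sizes here are made up for illustration, and this only models the rounding, not real Files-11 allocation):

```python
def allocated_blocks(size, cluster):
    """Blocks actually allocated: file size rounded up to a cluster multiple."""
    return -(-size // cluster) * cluster  # ceiling division

def utilization(file_sizes, cluster):
    """Fraction of allocated space holding real data."""
    data = sum(file_sizes)
    allocated = sum(allocated_blocks(s, cluster) for s in file_sizes)
    return data / allocated

# A pile of small "user-type" files (hypothetical sizes, in blocks):
small_files = [3, 7, 12, 2, 9] * 200

print(round(utilization(small_files, 1), 2))   # -> 1.0  (no rounding waste)
print(round(utilization(small_files, 5), 2))   # -> 0.73
print(round(utilization(small_files, 10), 2))  # -> 0.55
```

With these made-up sizes, a cluster factor of 5 lands right around the 71% figure mentioned above, and the same files at cluster 1 waste nothing — which is exactly why the disk-poor sites cluster at 1 and pay for it in fragmentation instead.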
Before I get too many flames from disk-poor system managers, I gotta say that I've been there too (at school, of course, where else?) and certainly understand the trials and tribulations of weekly disk r&r's when the cluster size is 1 and you're lucky if ya got 1000 free blocks out there....

Also, there was a time when I had several 100000-block database files out there that were fragmented as hell... That made for much excitement during the 4.0 upgrade.... (gggggrrrrr!!!!!) That, more than anything I've experienced, points out the simple fact that fragmentation is also a function of the file sizes you've got out there. While I wouldn't consider my disks fragmented under normal use, they sure behaved differently from the point of view of the 10000- to 100000+-block file... (Oracle now lives on a different disk, with its databases allocated contiguously before anything else went on the disk!)

	/Kevin Carosso      engvax!kvc @ CIT-VAX.ARPA
	 Hughes Aircraft Co.