Path: utzoo!utgpu!watmath!clyde!att!rutgers!mailrus!ames!pasteur!agate!saturn!rick@seismo.CSS.GOV
From: rick@seismo.CSS.GOV (Rick Adams)
Newsgroups: comp.os.research
Subject: Re: OSF and operating system research, and other topics
Message-ID: <5598@saturn.ucsc.edu>
Date: 28 Nov 88 21:13:39 GMT
Sender: usenet@saturn.ucsc.edu
Organization: Center for Seismic Studies, Arlington, VA
Lines: 37
Approved: comp-os-research@jupiter.ucsc.edu


> Essentially the idea is that anything fsck can do, the file system does
> automatically.  It's adding "incrementally", i.e. not locking the whole
> disk while you fix things up, that's hard.

Moving fsck into the filesystem code is only renaming fsck, not getting
rid of it.

What's so horrible about the current BSD filesystem? It already has
duplicate copies of the superblock. It can rebuild the free-block
bitmap if necessary, so you can argue that the bitmap, too, is only a
performance win.
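To make the rebuild idea concrete, here's a toy sketch of reconstructing
a free-block bitmap from the blocks the inodes claim to own. This is a
model of the technique, not the real FFS/fsck code; the block count and
the inode table below are made-up examples.

```python
NBLOCKS = 16  # total data blocks on our toy disk (arbitrary)

# Each toy "inode" lists the data blocks it owns.
inodes = {
    1: [2, 3, 7],       # a small file
    2: [4, 5, 6, 10],   # another file
}

def rebuild_free_bitmap(inodes, nblocks):
    """Mark every block referenced by some inode as in use;
    every other block must be free."""
    in_use = [False] * nblocks
    for blocks in inodes.values():
        for b in blocks:
            in_use[b] = True
    # The free bitmap is just the complement of the in-use map.
    return [not used for used in in_use]

free = rebuild_free_bitmap(inodes, NBLOCKS)
print([i for i, f in enumerate(free) if not f])  # blocks in use: [2, 3, 4, 5, 6, 7, 10]
```

Since the bitmap can always be regenerated this way from the inodes,
losing it costs you a scan of the disk, not any data.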

What about the cost/performance tradeoffs of these great 'database
techniques'?  I'm not willing to shadow every disk drive I have. Buying
30 extra gigabytes of disk to ensure filesystem consistency is not very
reasonable.

To use Andy Tanenbaum's example, "What happens if there is an
earthquake and your entire computer room falls into a fissure and is
suddenly relocated to the center of the earth?" I suspect you lose
big.  (Tanenbaum discusses distributed filesystems as a possible
solution to this.)

What's the price? That question is totally passed over in the name of
fixing something that is not necessarily broken in the first place.
(Note I'm only talking about the BSD filesystem; the Sys5 filesystem
can be considered broken if you wish.)

For example, I'm not willing to give up the huge performance gain of
having lots of disk blocks cached in memory for the infinitesimal
increase in disk stability.
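The tradeoff is easy to measure for yourself. Here's a rough sketch
(the block size, write count, and file names are my own arbitrary
choices) comparing ordinary buffered writes against per-write
synchronous writes via O_SYNC, which approximates what a
crash-paranoid filesystem would impose on every update:

```python
import os
import tempfile
import time

BLOCK = b"x" * 4096   # one 4 KB block of dummy data
NWRITES = 200         # number of blocks to write (arbitrary)

def write_blocks(path, sync_each):
    """Write NWRITES blocks to path; if sync_each is set, open with
    O_SYNC so every write is forced to stable storage before returning."""
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    if sync_each:
        flags |= os.O_SYNC  # POSIX; available on Linux and most Unixes
    fd = os.open(path, flags, 0o600)
    t0 = time.time()
    for _ in range(NWRITES):
        os.write(fd, BLOCK)
    os.close(fd)
    return time.time() - t0

with tempfile.TemporaryDirectory() as d:
    cached = write_blocks(os.path.join(d, "cached"), sync_each=False)
    synced = write_blocks(os.path.join(d, "synced"), sync_each=True)
    print(f"buffered: {cached:.4f}s   O_SYNC: {synced:.4f}s")
```

On spinning disks of the era the synchronous run is typically orders of
magnitude slower, which is the performance you'd be trading for that
small gain in stability.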

The OSF "announcement" clearly wins the prize for buzzwords per
square inch, but what is it really saying?

---rick