Path: utzoo!utgpu!watmath!clyde!att!rutgers!apple!bionet!agate!saturn!shapiro@iznogoud.inria.fr
From: shapiro@iznogoud.inria.fr (Marc Shapiro)
Newsgroups: comp.os.research
Subject: Re: OSF and operating system research, and other topics
Message-ID: <5623@saturn.ucsc.edu>
Date: 2 Dec 88 12:07:18 GMT
Sender: usenet@saturn.ucsc.edu
Lines: 67
Approved: comp-os-research@jupiter.ucsc.edu

In article <5598@saturn.ucsc.edu> rick@seismo.CSS.GOV (Rick Adams) writes:
>Moving fsck into the filesystem code is only renaming fsck, not getting
>rid of it.
1) fsck is very slow for large systems.  My Sun server has a mere
   gigabyte attached to it and rebooting takes ages.
2) getting rid of fsck is not the only advantage of doing updates
   atomically.

>What's so horrible about the current BSD filesystem?
   It's not bad (except that it's too complex).  OSF proposes a
   filesystem where the size of any partition can be changed online.
   I think that's a *big* win.

>What about the cost/performance tradeoffs of these great 'database
>techniques'?
   This of course is the big question.  These techniques have been
   around for a while now, and I expect we (i.e. the comp.os.research
   community) now know how to implement them right.  A write-ahead
   log lets you do atomic updates without duplicating all the data
   on disk: you duplicate new data, in the log, only for the short
   period during which the outcome of the transaction is uncertain;
   then the log space can be re-used.  However, you then lose the
   benefit of shadow disks: that even a head crash on a single disk
   doesn't destroy your data.

   Using a write-ahead log shouldn't necessarily slow you down w.r.t.
   asynchronous updates, because updates are spooled to the log.  Only
   the commit record needs to be written synchronously.

> I'm not willing to shadow every disk drive I have. Buying
>30 extra gigabytes of disk to insure filesystem consistency is not very
>reasonable.
   If I understood correctly, the OSF proposal is to update filesystem
   *metadata* (superblocks, inode tables, and directories) atomically;
   not user data.

>To use Andy Tanenbaum's example, "What happens if there is an
>earthquake and your entire computer room falls into a fissure and is
>suddenly relocated to the center of the earth?". I suspect you lose
>big.
   I just checked the fsck man page; I didn't find the option to deal
   with this kind of situation.  (:-)

> (Tanenbaum discusses distributed filesystems as a possible
>solution to this)
   You're saying that you must duplicate all your data onto two disks
   (or other media) in two places far enough apart that no single
   earthquake will swallow them both.  You were talking about the
   cost?

>The OSF "announcement" clearly wins the prize for buzzwords per
>square inch, but what is it really saying?
   I guess we will find out when their kernel becomes available.  If
   they deliver what they promise, and the performance is not a lot
   worse than existing Unixes on comparable configurations, then I
   think we should applaud, and demand to have access to the sources
   to play with.

						Marc Shapiro

INRIA, B.P. 105, 78153 Le Chesnay Cedex, France.  Tel.: +33 (1) 39-63-53-25
e-mail: shapiro@sor.inria.fr or: ...!mcvax!inria!shapiro
