Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!brutus.cs.uiuc.edu!wuarchive!texbell!vector!attctc!chasm
From: chasm@attctc.Dallas.TX.US (Charles Marslett)
Newsgroups: comp.os.minix
Subject: Re: Disk performance under Minix
Summary: DOS uses a different delayed write cache
Message-ID: <9040@attctc.Dallas.TX.US>
Date: 19 Aug 89 02:25:47 GMT
References: <21290@louie.udel.EDU> <18613@princeton.Princeton.EDU> <2150@netcom.UUCP>
Organization: The Unix(R) Connection, Dallas, Texas
Lines: 50

In article <2150@netcom.UUCP>, hinton@netcom.UUCP (Greg Hinton) writes:
> In article <18613@princeton.Princeton.EDU> nfs@notecnirp.UUCP (Norbert Schlenker) writes:
> Considering that DOS uses a write-through cache -- i.e. writes are NEVER
> delayed -- I don't see how MINIX's delayed-write cache could possibly
> contribute to a slowdown in output.  All other things being equal, doesn't
> a delayed-write cache guarantee higher throughput?

Note that DOS uses a write-through cache for the data -- not for the FAT
(which is its equivalent of inodes).  As a result, the FAT is normally
written back to disk only when the internal cache buffer is needed for
another sector (on a write), when the current block of the FAT contains
no more unused clusters (on a read), when the "other" floppy is accessed
(on a single-floppy system), or when the file is closed (the most common
case).  So if BUFFERS is not set too low, output of a sequential file to
the disk will be very nearly as fast as the hardware allows.
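
To make the two policies concrete, here is a small C sketch of the idea
(the names and structure are invented for illustration -- this is not
actual DOS source): data sectors are written through immediately, while
the one cached FAT sector is only flushed when its buffer is needed for
another sector or when the file is closed.

#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512

static unsigned char fat_buf[SECTOR_SIZE]; /* single cached FAT sector */
static int fat_sector = -1;                /* which sector is cached */
static int fat_dirty = 0;                  /* delayed write pending? */

static void disk_write(int sector, const void *buf)
{
    (void)buf;                  /* stand-in for a real driver call */
    printf("disk write: sector %d\n", sector);
}

static void fat_flush(void)
{
    if (fat_dirty) {
        disk_write(fat_sector, fat_buf);
        fat_dirty = 0;
    }
}

/* Update a FAT entry: delayed write -- only the cache is touched. */
static void fat_update(int sector, int offset, unsigned char value)
{
    if (sector != fat_sector) {
        fat_flush();            /* buffer needed for another sector */
        fat_sector = sector;
        memset(fat_buf, 0, sizeof fat_buf); /* real code would read it */
    }
    fat_buf[offset] = value;
    fat_dirty = 1;              /* written later, typically on close */
}

/* Write file data: write-through -- goes straight to the disk. */
static void data_write(int sector, const void *buf)
{
    disk_write(sector, buf);
}

static void file_close(void)
{
    fat_flush();                /* the "fix up the disk" step */
}

int main(void)
{
    unsigned char block[SECTOR_SIZE] = {0};

    data_write(100, block);     /* hits the disk immediately */
    fat_update(1, 4, 0x65);     /* only dirties the cached sector */
    data_write(101, block);
    fat_update(1, 5, 0xFF);     /* same sector: still no FAT I/O */
    file_close();               /* now the FAT sector goes out */
    return 0;
}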

This is equivalent to caching all inode writes in a delayed-write buffer
while writing all the data blocks straight through the cache -- just
about the inverse of the Minix philosophy.  It would be rather dangerous
except for the "intelligence" built into it: a file close operation
"fixes up" the disk, so that if the system crashes the only messed-up
data on the disk belongs to the files currently open for output, and in
most cases the effect of a crash is simply that no clusters are
allocated and the data written to those files is lost.  Some additional
code in the MS-DOS file system tries not to reuse clusters that were
recently freed, so even some of the potentially dangerous mixes of
operations can still be defused.  [At the expense of a bit of allocation
complexity ;^]
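
One way such an allocator could work -- again an invented illustration,
not the real MS-DOS code -- is to remember the last few freed cluster
numbers in a small ring and have the allocator skip them unless nothing
else is free:

#include <stdio.h>

#define NCLUSTERS 64
#define RECENT    8            /* how many freed clusters to remember */

static int fat[NCLUSTERS];     /* 0 = free, nonzero = allocated */
static int recent[RECENT];     /* ring of recently freed clusters */
static int recent_head = 0;

static int recently_freed(int c)
{
    for (int i = 0; i < RECENT; i++)
        if (recent[i] == c)
            return 1;
    return 0;
}

static void free_cluster(int c)
{
    fat[c] = 0;
    recent[recent_head] = c;
    recent_head = (recent_head + 1) % RECENT;
}

static int alloc_cluster(void)
{
    int fallback = -1;

    for (int c = 2; c < NCLUSTERS; c++) { /* 0 and 1 are reserved */
        if (fat[c] != 0)
            continue;
        if (recently_freed(c)) {
            if (fallback < 0)
                fallback = c;  /* use only if nothing better exists */
            continue;
        }
        return c;
    }
    return fallback;           /* disk nearly full: reuse after all */
}

int main(void)
{
    int a = alloc_cluster();
    fat[a] = 1;
    free_cluster(a);           /* a is now "recently freed" */
    int b = alloc_cluster();   /* allocator skips over a */
    printf("first=%d, after free we got=%d\n", a, b);
    return 0;
}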

> In article <2131@ditsyda.oz> evans@ditsyda.oz (Bruce Evans) writes:
> >(Remove the
> >WRITE_IMMED's from all but the super block and map blocks in fs/buf.h.)
> >It had an immediate huge effect on the time for "rm *" in a big directory.
>    . . . .
> >I also changed the cache flushing method so all dirty blocks on a device
> >are flushed whenever one needs to be flushed.
> 
> Bruce, do you have actual timings to show how much performance is increased?
> Does it approach DOS' performance?

It actually should, except for the possibly greater fragmentation of a
freelist-based file system as opposed to a map-based one.
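
For anyone who wants the flavor of the change Bruce describes, here is
a rough sketch of the flush-every-dirty-block-on-the-device idea (the
buffer table and names below are invented, not his actual patch; the
real Minix cache code differs):

#include <stdio.h>

#define NR_BUFS 20

struct buf {
    int b_dev;      /* device the block belongs to */
    int b_blocknr;  /* block number on that device */
    int b_dirty;    /* needs writing back? */
};

static struct buf bufs[NR_BUFS];

static void rw_block(struct buf *bp)
{
    /* stand-in for the real device I/O routine */
    printf("writing block %d on dev %d\n", bp->b_blocknr, bp->b_dev);
    bp->b_dirty = 0;
}

/* Instead of writing just the one block that triggered the flush,
 * sweep the whole cache and write every dirty block on the same
 * device.  Sorting by block number first would cut seek time more.
 */
static void flushall(int dev)
{
    for (int i = 0; i < NR_BUFS; i++)
        if (bufs[i].b_dev == dev && bufs[i].b_dirty)
            rw_block(&bufs[i]);
}

int main(void)
{
    bufs[0] = (struct buf){ 1, 7, 1 };
    bufs[1] = (struct buf){ 1, 3, 1 };
    bufs[2] = (struct buf){ 2, 5, 1 };  /* other device: untouched */
    flushall(1);
    return 0;
}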

Charles Marslett
STB Systems, Inc.   <-- apply all standard disclaimers
chasm@attctc.dallas.tx.us