Path: utzoo!utgpu!water!watmath!clyde!rutgers!rochester!cornell!batcomputer!pyramid!voder!cullsj!gupta
From: gupta@cullsj.UUCP (Yogesh Gupta)
Newsgroups: comp.os.misc
Subject: Re: Contiguous files; extent based file systems
Summary: Random access to data in large files
Message-ID: <177@cullsj.UUCP>
Date: 17 Dec 87 22:02:25 GMT
References: <561@amethyst.ma.arizona.edu> <3228@tut.cis.ohio-state.edu> <9828@mimsy.UUCP>
Distribution: na
Organization: Cullinet Software, San Jose, CA
Lines: 14

In article <9828@mimsy.UUCP>, chris@mimsy.UUCP (Chris Torek) writes:
> Personally, I have never cared whether my files were contiguous.  All
> I care about is that they be reasonably fast to access.

I definitely agree.  However, I find that if I create a 100MB file under
Unix (BSD 4.2, System V Rel 1), the overhead of randomly accessing various
parts of it is too high (due to the indirect inode structures).  Any
comments?  What if I know that my file will be on the order of 100MB and
then declare an extent size of 1MB (a wastage of 0.5%, on average) in an
extent-based file system?
-- 
Yogesh Gupta                 | If you think my company will let me
Cullinet Software, Inc.      | speak for them, you must be joking.