Xref: utzoo unix-pc.general:1783 comp.sys.att:4862
Path: utzoo!utgpu!watmath!clyde!att!osu-cis!n8emr!uncle!jbm
From: jbm@uncle.UUCP (John B. Milton)
Newsgroups: unix-pc.general,comp.sys.att
Subject: Re: Large files on the Unix PC
Keywords: unixpc, unix, files
Message-ID: <432@uncle.UUCP>
Date: 3 Dec 88 07:22:41 GMT
References: <5466@rphroy.UUCP>
Reply-To: jbm@uncle.UUCP (John B. Milton)
Organization: U.N.C.L.E.
Lines: 32

In article <5466@rphroy.UUCP> tkacik@rphroy.UUCP (Tom Tkacik) writes:
>The documentation for the UnixPC says that there is a 1 Meg limit on the
>size of files.  I have used files that are larger than this.
>The largest was about 1.4Meg.  Does anyone know what the real limit is?
>Am I playing with fate by having files of this size?
>I thought that Unix files were able to be much larger than this.
>What is it about the UnixPC that makes a limit like this?

The real limit on file sizes has to do with the longest list of blocks your
file system can create.

Things that affect this:
 The size of your blocks (512)
 How many bytes are used to number the blocks (3)

 blocks listed in the i-node          (13)
 blocks listed in the first indirect  (170)
 blocks listed in the second indirect (170*170=28900)
 blocks listed in the third indirect  (170*170*170=4913000)
For a total of 4942083 blocks of 512 bytes=2530346496 bytes
The number of bytes in the file is also saved in the i-node as a 32 bit number.

The default limit (ulimit(2)) for users on the UNIXpc is 2147483647.
I HAVE created 2 gig files on the UNIXpc, so I know it works.

Because of the indirect blocks, the du(1) of a file is larger than the size of
the file: once a file exceeds 13 blocks it allocates an indirect block, and
du(1) counts those blocks too.

John
-- 
John Bly Milton IV, jbm@uncle.UUCP, n8emr!uncle!jbm@osu-cis.cis.ohio-state.edu
(614) h:294-4823, w:764-2933;  Got any good 74LS503 circuits?