Path: utzoo!mnetor!uunet!mcvax!ukc!its63b!aiva!richard
From: richard@aiva.ed.ac.uk (Richard Tobin)
Newsgroups: comp.lang.c
Subject: Re: stdio error detection
Message-ID: <206@aiva.ed.ac.uk>
Date: 1 Dec 87 13:14:13 GMT
References: <289@cresswell.quintus.UUCP> <6748@brl-smoke.ARPA> <290@cresswell.quintus.UUCP>
Reply-To: richard@uk.ac.ed.aiva (Richard Tobin)
Organization: AI Applications Institute, Edinburgh University
Lines: 25
Keywords: errno fclose fopen stdio errors

In article <290@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe)
points out some problems in determining what went wrong when a standard I/O
call fails.

>One of the things that bothers me about fclose() is this:
>	suppose f is a valid pointer to an open output stdio stream,
>	but that fclose(f) returns EOF -- perhaps because of a write
>	error when flushing remaining buffered output, or perhaps
>	because of an unlucky interrupt (check man 2 close).
>	IS f CLOSED?
>When this is inside a loop processing a couple of hundred files, one
>after the other, if f is *not* closed I can run out of streams despite
>having taken care to close everything as soon as possible.

On a Unix system (I don't know about others) you could use open()
followed by fdopen() instead of fopen().  Then you'd know the underlying
file descriptor, so you could call close() after a failed fclose() just
to be sure.  (There are also other unportable ways to get the file
descriptor associated with a stream.)  That will stop you running out
of file descriptors, at least.

-- 
Richard Tobin,                         JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,             ARPA:  R.Tobin%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.                  UUCP:  ...!ukc!ed.ac.uk!R.Tobin