Path: utzoo!attcan!uunet!husc6!think!ames!mailrus!tut.cis.ohio-state.edu!rutgers!uwvax!umn-d-ub!umn-cs!randy
From: randy@umn-cs.CS.UMN.EDU (Randy Orrison)
Newsgroups: comp.sources.bugs
Subject: NULL again (was Re: Patch #2 to Pcomm v1.1)
Message-ID: <7972@umn-cs.CS.UMN.EDU>
Date: 28 Sep 88 03:11:33 GMT
References: <7782@bcsaic.UUCP> <25069@teknowledge-vaxc.ARPA> <12246@steinmetz.ge.com>
Reply-To: randy@cctb.mn.org (Randy Orrison)
Organization: Chemical Computer Thinking Battery, St. Paul, MN
Lines: 231

In article <12246@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) wrote:
| Bear in mind that NULL is not always zero, but rather that zero cast
|to a pointer type is always NULL.

No.  The computer's internal representation of a null pointer may not
be the same as the internal representation of the integer 0.  However,
'NULL' can always be #defined as '0'; see below for the reasons for
this.  Don't confuse the convenient symbol 'NULL' with the internal
representation of a pointer that doesn't point to anything.

| Comparing NULL with data types other
|than pointers may (a) produce slow code or (b) produce code which
|doesn't work correctly.  I would suggest that:
|
|        if (lock_path != NULL && *lock_path != '\0')
|
|is easier to read and will avoid having the char->int->pointer
|conversion done at runtime.

Points (a) and (b) are both wrong.  Explicitly typing out 'NULL' and
'\0' should NOT affect the code generated (in either speed or
accuracy), since leaving them out implies comparison to 0, which is a
constant, and so any type conversion can be done at compile time.
(Note that buggy compilers may get this wrong, but then... they're
buggy.)  This, however, doesn't argue with the fact that the latter
version (quoted above) is easier to read.  I heartily agree.
Here, for the doubting, are the definitive articles on the NULL
subject, with due thanks to Chris Torek:

:::::::::::::: save/216 ::::::::::::::
Path: mimsy!chris
From: chris@mimsy.UUCP (Chris Torek)
Newsgroups: comp.lang.c
Subject: Why NULL is 0
Summary: you have seen this before, but this one is for reference
Message-ID: <10576@mimsy.UUCP>
Date: 9 Mar 88 02:26:10 GMT
References: <2550049@hpisod2.HP.COM> <7412@brl-smoke.ARPA> <3351@chinet.UUCP> <10574@mimsy.UUCP>
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 73

(You may wish to save this, keeping it handy to show to anyone who
claims `#define NULL 0 is wrong, it should be #define NULL (char *)0'.
I intend to do so, at any rate.)

Let us begin by postulating the existence of a machine and a compiler
for that machine.  This machine, which I will call a `Prime', or
sometimes `PR1ME', for obscure reasons such as the fact that it
exists, has two kinds of pointers.  `Character pointers', or objects
of type (char *), are 48 bits wide.  All other pointers, such as
(int *) and (double *), are 32 bits wide.

Now suppose we have the following C code:

        main()
        {
                f1(NULL);       /* wrong */
                f2(NULL);       /* wrong */
                exit(0);
        }

        f1(cp) char *cp; { if (cp != NULL) *cp = 'a'; }
        f2(dp) double *dp; { if (dp != NULL) *dp = 2.2; }

There are two lines marked `wrong'.  Now suppose we were to define
NULL as 0.  Clearly both calls are then wrong: both pass `(int)0',
when the first should be a 48 bit (char *) nil pointer and the second
a 32 bit (double *) nil pointer.

Someone claims we can fix that by defining NULL as (char *)0.  Suppose
we do.  Then the first call is correct, but the second now passes a
48 bit (char *) nil pointer instead of a 32 bit (double *) nil
pointer.  So much for that solution.

Ah, I hear another.  We should define NULL as (void *)0.  Suppose we
do.  Then at least one call is not correct, because one should pass a
32 bit value and one a 48 bit value.
If (void *) is 48 bits, the second is wrong; if it is 32 bits, the
first is wrong.

Obviously there is no solution.  Or is there?  Suppose we change the
calls themselves, rather than the definition of NULL:

        main()
        {
                f1((char *)0);
                f2((double *)0);
                exit(0);
        }

Now both calls are correct, because the first passes a 48 bit (char *)
nil pointer, and the second a 32 bit (double *) nil pointer.  And if
we define NULL with

        #define NULL 0

we can then replace the two `0's with `NULL's:

        main()
        {
                f1((char *)NULL);
                f2((double *)NULL);
                exit(0);
        }

The preprocessor changes both NULLs to 0s, and the code remains
correct.  On a machine such as the hypothetical `Prime', there is no
single definition of NULL that will make uncasted, un-prototyped
arguments correct in all cases.  The C language provides a reasonable
means of making the arguments correct, but it is not via `#define'.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris@mimsy.umd.edu      Path: uunet!mimsy!chris

:::::::::::::: save/234 ::::::::::::::
Path: mimsy!chris
From: chris@mimsy.UUCP (Chris Torek)
Newsgroups: comp.lang.c
Subject: Re: NULL etc.
Message-ID: <12290@mimsy.UUCP>
Date: 2 Jul 88 20:36:44 GMT
References: <6966@cup.portal.com> <3458@rpp386.UUCP>
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 91

>In article <6966@cup.portal.com> Paul_L_Schauble@cup.portal.com asks:
>>is #define NULL (char *)0 really portable??

C's untyped nil pointer, which MUST be given a type before it can be
used correctly, is written as `0' (and `0L', and possibly using
constant integer expressions, depending on whose definition you use;
but `0' suffices and must work).  After it has been given a type
(`(char *)0') it becomes a nil pointer of that type.  Once it has a
type (if we ignore some fine points in the dpANS, many of which are
unlikely to be implemented in current C compilers) it may not be used
as a nil pointer of another type.
Hence (char *)0 is a nil pointer to char, and as such may not be used
as a nil pointer to int, or a nil pointer to struct tcurts, or indeed
as anything other than a pointer to char.  It may work---indeed, it is
more likely to work than to fail---but it is incorrect and unportable,
and should (and does in PCC) draw at least a warning from the
compiler.

There are only two ways that the untyped nil pointer can acquire a
type, namely assignment and comparison.  Casts are a special case of
assignment, as are arguments to functions that have prototypes in
scope.  Where this causes the most trouble is in arguments to
functions that do not have prototypes in scope, or for which the
prototype does not specify a type for that argument: e.g., execl():

        f()
        {
                void execl(char *, ...);

                execl("prog", "prog", "arg1", "arg2", ___);
        }

The only correct way to fill in the blank is with (char *)0 (or
possibly (char *)0L and similar tricks; outside of obfuscated C
contests, these tricks are not worth considering).

The dpANS has at present one more instance of an `untyped' nil
pointer, namely `(void *)0'.  The differences between using `0' and
`(void *)0' as a `generic nil' are, first, that while 0 is also an
integer constant, (void *)0 is not, and second, that (void *)0 is
also a typed nil pointer (ouch!---more below).

Suppose that NULL is defined as either `0' or `(void *)0'---one of
the two untyped nil pointers---but that we do not know which one.
Which of the following calls are correct?

        /* definitions before the fragments (note lack of prototypes) */
        void f1(cp) char *cp; { }
        void f2(ip) int *ip; { }
        void f3(vp) void *vp; { }
        ...
        f1(NULL);               /* call 1 */
        f1((char *)NULL);       /* call 2 */
        f2(NULL);               /* call 3 */
        f2((int *)NULL);        /* call 4 */
        f3(NULL);               /* call 5 */
        f3((void *)NULL);       /* call 6 */

It is easy to see that calls 2, 4, and 6 (which cast their arguments
and hence provide types) are correct.  The surprise is that while
calls 1, 3, and 5 are all wrong if NULL is defined as `0', calls 1
and 5 are both correct, or at least will both work, if NULL is
defined as `(void *)0'.  Call 3 is wrong in any case.  We can get
away with `f1((void *)0)' only because of a technicality: the dpANS
says that (void *) and (char *) must have the same representation
(which more or less means `must be the same type'), and because
(void *) is a v