Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!husc6!yale!wald-david
From: wald-david@CS.YALE.EDU (david wald)
Newsgroups: comp.lang.c
Subject: Re: pointers, tests, casts
Message-ID: <44803@yale-celray.yale.UUCP>
Date: 5 Dec 88 00:22:10 GMT
References: <11130@dartvax.Dartmouth.EDU> <44100016@hcx3> <9038@smoke.BRL.MIL>
Sender: root@yale.UUCP
Reply-To: wald-david@CS.YALE.EDU (david wald)
Organization: Yale University Computer Science Dept, New Haven CT  06520-2158
Lines: 26

In article <9038@smoke.BRL.MIL> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) ) writes:
>In article <44100016@hcx3> shirono@hcx3.SSD.HARRIS.COM writes:
>>Even in the K&R1 days, the only valid definition of NULL has been
>>#define NULL 0
>
>True of pre-ANSI C, but an ANSI C implementation can use either that
>definition or
>#define NULL ((void*)0)
>I recommend the former even for ANSI C implementations.  The added
>complexity buys just one thing, which is possible type mismatch
>checking, but I don't think that is significant enough to justify
>the change.

I may be sorry in the morning for asking this, but:

Isn't the latter generally preferable, given its possible use as an
argument to a function with no prototype in scope?  Further, isn't the
former dangerous in that case, since there is no guarantee that an int 0
and a null pointer have the same size or representation?



============================================================================
David Wald                                              wald-david@yale.UUCP
                                                       waldave@yalevm.bitnet
============================================================================