Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site hocda.UUCP
Path: utzoo!watmath!clyde!floyd!harpo!ulysses!mhuxl!houxm!hocda!hom
From: hom@hocda.UUCP (H.MORRIS)
Newsgroups: net.lang.c
Subject: Re: "sizeof(int) < sizeof(char *): a defense"
Message-ID: <390@hocda.UUCP>
Date: Thu, 22-Mar-84 15:53:39 EST
Article-I.D.: hocda.390
Posted: Thu Mar 22 15:53:39 1984
Date-Received: Fri, 23-Mar-84 21:04:41 EST
References: <664@sun.uucp>
Organization: Bell Labs, Holmdel
Lines: 18

That reminds me of why I finally got un-pissed-off at people who
write 68000 compilers that way; namely, the part of the spec that
says every "integerish" data type like char, etc. gets widened
to int when passed to a function or when arithmetic is performed
on it.  Until then my attitude was "if you want it to run fast,
declare a `short'".  So I can see the point of having 16-bit ints
and 32-bit pointers.  I did some coercing back and forth between
integer and pointer types, and to make it portable among a class
of "reasonable" machines, I used the following type:
#ifdef mc68	/* or mc68000? */
typedef unsigned long	PTRasINT;
#else
typedef unsigned	PTRasINT;
#endif
Thus, for instance, in subtracting pointers that might differ by more
than 16 bits' worth, you can say
	diff = (PTRasINT)s - (PTRasINT)t;
Hal Morris ...hocda!hom