Path: utzoo!attcan!utgpu!watmath!att!dptg!rutgers!cs.utexas.edu!uunet!mcvax!ukc!axion!tigger!raph
From: raph@tigger.planet.bt.co.uk (Raphael Mankin)
Newsgroups: comp.lang.c
Subject: Re^2: Turbo C 2.0 vs MSC 5.1
Message-ID: <527@tigger.planet.bt.co.uk>
Date: 28 Jul 89 08:17:43 GMT
References: <644@octopus.UUCP> <3607@cps3xx.UUCP> <7368@cg-atla.UUCP>
Organization: RT5111, BTRL, Martlesham Heath, England
Lines: 26


MSC 'make' has a completely different logic from Unix 'make'. The
differences, though, are not something to go into here.

MSC implements a strange logic in multiplication. If you do something like

	int	i, j;
	long	l;
	...
	l = i*j;

MSC will compute i*j as 32 bits, discard the upper 16 bits, and then sign
extend the low 16 bits back to 32 bits. If you want to avoid the loss of
precision you have to use a cast to force a 32-by-32 multiplication, e.g.
	l = i * (long)j;

What do other compilers do in the way of preserving or losing
arithmetic precision?

In a Coral 66 compiler that I wrote some 17 years ago I went to great
lengths to preserve arithmetic precision, including transforming
things like
		a/b/c/d/e
into
		a/(b*c*d*e)
and re-ordering factors so as to do division as late as possible.