Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!csd4.milw.wisc.edu!cs.utexas.edu!usc!bloom-beacon!bu-cs!buengc!bph
From: bph@buengc.BU.EDU (Blair P. Houghton)
Newsgroups: comp.misc
Subject: Re: IEEE floating point format
Summary: This is redirected from comp.lang.c.
Message-ID: <3707@buengc.BU.EDU>
Date: 11 Aug 89 17:55:45 GMT
References: <2170002@hpldsla.HP.COM> <9697@alice.UUCP> <3554@buengc.BU.EDU> <9725@alice.UUCP> <3591@buengc.BU.EDU> <152@servio.UUCP>
Reply-To: bph@buengc.bu.edu (Blair P. Houghton)
Followup-To: comp.misc
Organization: Boston Univ. Col. of Eng.
Lines: 60

In article <152@servio.UUCP> penneyj@servio.UUCP (D. Jason Penney) writes:
>In article <3591@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>>Next question:  do C compilers (math libraries, I expect I should mean)
>>on IEEE-FP-implementing machines generally limit doubles to normalized
>>numbers, or do they blithely allow precision to waft away in the name
>>of a slight increase in the number-range?
>
>This is an interesting question.  The early drafts of IEEE P754 had a
>"warning mode" -- When "warning mode" was set, an operation with
>normal operands that produced a subnormal result 
>("subnormal" is the preferred term instead of "denormalized" now, by the way),
>an exception was signalled.

Ulp!  You mean it does it absolutely silently now?  No provision at all
for a portable hardware (or software) guarantee that an implementation
will always perk up when the bits start to disappear??  I'm less impressed.

>It was eventually removed because 1) Checking for this condition
>was expensive, and 2) it did not seem to be very useful.

Checking for this condition requires but an n-input OR'ing of the
bits of the exponent (an all-zero exponent field is what marks a
subnormal).  I can't imagine they consider that expensive at all in
relation to the expense of reimplementing hardware to go from
non-subnormalizing to subnormalizing arithmetic in the first place.

Sometimes I wonder at standards committees' ability to rationalize
the tweaks in the face of the earthquake that is their existence...

>I won't give a full discussion of the benefit of gradual underflow, but
>note that with truncating underflow, it is possible to have two floating 
>point values X and Y such that X != Y and yet (X - Y) == 0.0, 
>thus vitiating such precautions as,
>
>if (X == Y)
>  perror("zero divide");
>else
>  something = 1.0 / (X - Y);
>
>[Example thanks to Professor Kahan...]

People tell me Donald Knuth likes it, too, for this reason.
I find it a bit retentive, myself.  The "two numbers" in question
fall into the range where their LSBs dangle off the bottom of the
exponent range, which for IEEE doubles is down in the neighborhood
of -1022.

Further, there are many more pairs where ( X == Y ) exactly, and one
is foolish ever to expect to divide by ( X - Y ) without first
checking for a zero divisor.  Therefore, by implementing
subnormalization you remove the spurious ( X - Y ) == 0 results for
a small portion of the pairs, but you do not escape the expense of
coding the check-before-you-divide precaution.

				--Blair
				  "But, I'm not a God of computing,
				   as is Knuth (and, I presume, Kahan,
				   though I don't know the name), so
				   feel free to discard my opinions
				   without regard."