Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!tut.cis.ohio-state.edu!pt.cs.cmu.edu!MATHOM.GANDALF.CS.CMU.EDU!lindsay
From: lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay)
Newsgroups: comp.arch
Subject: Re: hardware complex arithmetic support
Message-ID: <5906@pt.cs.cmu.edu>
Date: 18 Aug 89 15:26:40 GMT
References:  <1672@crdgw1.crd.ge.com> <4781@freja.diku.dk>
Organization: Carnegie-Mellon University, CS/RI
Lines: 28

In article <4781@freja.diku.dk> njk@freja.diku.dk (Niels Jørgen Kruse) writes:
>As far as I can tell, the main advantage of hardware support
>for complex arithmetic is the greater encoding density allowed
>by a dedicated storage format for complex numbers.
>
>Consider that it is meaningless from a numerical viewpoint to
>represent one component of a complex number with greater
>accuracy than the other.
>
>This means that a dedicated storage format need only have *one*
>exponent. 

On a general-purpose machine, a special format has less value, since
some conventional floating point format must also be supported.
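
For concreteness, the suggested format might pack two mantissas against
one shared exponent, something like this (all names and field widths are
invented for illustration, and assume 32-bit ints; this is nobody's
actual format):

    #include <math.h>

    /* Hypothetical 64-bit complex: one shared exponent, two mantissas. */
    struct packed_complex {
        unsigned exp     : 8;    /* shared biased exponent           */
        unsigned sign_re : 1;
        unsigned sign_im : 1;
        unsigned mant_re : 27;   /* both fractions are interpreted   */
        unsigned mant_im : 27;   /* against the same exponent        */
    };

    /* Unpack into two ordinary doubles. */
    void unpack_complex(struct packed_complex c, double *re, double *im)
    {
        int e = (int)c.exp - 128;                     /* remove bias */
        *re = (c.sign_re ? -1.0 : 1.0) * ldexp(c.mant_re / 134217728.0, e);
        *im = (c.sign_im ? -1.0 : 1.0) * ldexp(c.mant_im / 134217728.0, e);
    }

The component nearer zero simply carries fewer significant bits, which
is the point of the original observation.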

However, on special-purpose machines, there is a long history of this
sort of thing. In particular, many FFT boxes have been built with
"block floating point". This takes your suggestion - a single
exponent per complex number - and extends it, so that there is a
single exponent for an entire data vector.
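
In software terms, the conversion into block floating point might look
roughly like this (the names and the 16-bit mantissa size are mine, not
any particular box's):

    #include <math.h>

    /* Convert n floats to 16-bit mantissas sharing one block exponent.
     * The exponent is taken from the largest element, so the small
     * elements give up low-order bits -- the cost of one exponent.
     */
    int to_block_fp(const float *x, short *mant, int n)
    {
        int i, e, block_exp = -128;
        for (i = 0; i < n; i++) {
            frexp(x[i], &e);              /* exponent of this element */
            if (x[i] != 0.0 && e > block_exp)
                block_exp = e;
        }
        for (i = 0; i < n; i++)           /* scale everything into 16 bits */
            mant[i] = (short)ldexp(x[i], 15 - block_exp);
        return block_exp;                 /* one exponent for the whole vector */
    }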

I believe that this worked adequately for FFTs, where the data is
relatively homogeneous. The block exponent is essentially "automatic
gain control", as the analog people used to say. Mostly, this was a
halfway house between the speed/price of integer boxes and the
convenience/generality of floating point boxes.
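
The "gain control" happens between FFT stages: butterfly outputs can
roughly double each pass, so when the block threatens to overflow, the
whole vector is shifted down a bit and the single exponent is bumped.
Something like this sketch (again, invented names, not a real design):

    #include <stdlib.h>

    /* Crude automatic gain control on a block-floating-point vector:
     * if any element is near overflow, halve everything and bump the
     * shared exponent by one.
     */
    void agc_rescale(short *re, short *im, int n, int *block_exp)
    {
        int i, big = 0;
        for (i = 0; i < n; i++)
            if (abs(re[i]) > 0x3FFF || abs(im[i]) > 0x3FFF)
                big = 1;
        if (big) {
            for (i = 0; i < n; i++) {
                re[i] /= 2;
                im[i] /= 2;
            }
            (*block_exp)++;
        }
    }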
-- 
Don		D.C.Lindsay 	Carnegie Mellon School of Computer Science