Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83 SMI; site ur-laser.uucp
Path: utzoo!linus!philabs!cmcl2!seismo!rochester!ur-laser!nitin
From: nitin@ur-laser.uucp (Nitin Sampat)
Newsgroups: net.graphics
Subject: FFT of image in sections ?
Message-ID: <360@ur-laser.uucp>
Date: Tue, 6-Aug-85 13:16:53 EDT
Article-I.D.: ur-laser.360
Posted: Tue Aug  6 13:16:53 1985
Date-Received: Thu, 8-Aug-85 00:23:02 EDT
Organization: Lab for Laser Energetics, Univ. of Rochester
Lines: 25

FFTs, as we know, exhibit a nonlinear increase in computation time as
the size of the image grows.  The hardware also limits the CPU time
you can get from the machine: if your record size is too large for
core, the computer starts paging, copying to and from disk and
thereby increasing the processing time.  One solution would be to
process the image in sub-sections, each chosen small enough that its
record fits in core memory at one time.  This would eliminate any
paging.  Can this be done, and if so, how does one go about choosing
the sub-sections?
After all, the theory treats the image as a function f(x,y) with
period N (= no. of points).  If we divide the image into sections,
we are in effect violating that basic assumption of linear system
theory.  I was told that the trick is to overlap one section onto
the other in some fashion after the FFT operation.  Does anybody
have any experience with this?
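(The overlap trick described above sounds like what is usually called
the overlap-add method for sectioned convolution.  A minimal 1-D sketch
of the idea, assuming direct convolution stands in for the per-block
FFT multiply so the example is self-contained -- the function names and
block size are illustrative, not from any particular library:)

```python
# Overlap-add in 1-D: convolve a long signal with a short kernel by
# processing the signal in blocks, then summing the overlapping tails
# of adjacent block outputs.  In real use each block would be zero-
# padded and convolved via FFT; direct convolution is used here only
# to keep the sketch self-contained.

def convolve(x, h):
    """Direct linear convolution; output length len(x)+len(h)-1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block):
    """Convolve x with h one block at a time.

    Each block's output is len(h)-1 samples longer than the block, and
    that tail overlaps the start of the next block's output; adding the
    overlapped pieces reproduces the full-length convolution exactly.
    """
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        yseg = convolve(seg, h)          # length: len(seg)+len(h)-1
        for k, v in enumerate(yseg):     # tail adds into the overlap
            y[start + k] += v
    return y
```

The block-by-block result matches the one-shot convolution sample for
sample, which is why sectioning does not break linear system theory:
each section is zero-padded, so its circular (FFT) convolution equals
a linear one, and superposition lets the pieces be summed.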

I guess the question I am asking is this:

We know that processing small images takes less time.  Well, how does
one go about breaking up a large image and processing it in sections,
and then, most importantly, how does one put these sections back
together to get the FFT of the original image?

				nitin
		{seismo,allegra}!rochester!ur-laser!nitin.uucp