Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!ucbvax!husc6!mit-eddie!mit-amt!halazar
From: halazar@mit-amt.MEDIA.MIT.EDU (Michael Halle)
Newsgroups: comp.graphics
Subject: Re: Accessing Volumetric Data
Message-ID: <503@mit-amt.MEDIA.MIT.EDU>
Date: 16 Aug 89 23:22:27 GMT
References: <9968@phoenix.Princeton.EDU>
Reply-To: halazar@media-lab.media.mit.edu
Organization: MIT Media Lab, Cambridge, MA
Lines: 61
In-reply-to: markv@phoenix.Princeton.EDU's message of 16 Aug 89 20:11:45 GMT


Well, I can give you an answer that applies to a particular subset
of volume rendering...

If you are computing multiple views of a data set non-interactively
(as opposed to rendering one view "interactively", or rather, as fast
as possible), it makes sense to orthogonalize your data so that the
transformation that you are performing from view to view never
forces more data to be read from disk.  For example, if you are
spinning around a central axis, compute the entire sequence a
scanline at a time, because no extra data will have to be read from
disk during the calculation of all views of that scanline.  Most
"flyby"s can be orthogonalized in such a way.  It seems to be the
general experience that an image is most meaningful when it can be
seen from several viewpoints (either a loop sequence or, in our
case, a holographic stereogram), so rendering many images is ideally
the rule, not the exception.
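
For concreteness, here is a rough C sketch of that loop reordering.
The routine names and sizes are made up, and I'm assuming each slab
of voxels feeds exactly one output scanline in every view:

    #include <stdlib.h>

    #define NVIEWS 64    /* frames in the spin sequence      */
    #define NROWS  256   /* scanlines, one voxel slab apiece */

    extern float *read_voxel_slab(int row);  /* mallocs; one disk read */
    extern void   render_scanline(float *slab, int view, int row);

    /*
     * Naive order ("for each view, for each scanline") reads the
     * whole data set from disk once per view.  Swapping the loops
     * reads each slab exactly once and renders it into all NVIEWS
     * frames while it is still in memory.
     */
    void render_spin_sequence(void)
    {
        int row, view;
        float *slab;

        for (row = 0; row < NROWS; row++) {
            slab = read_voxel_slab(row);
            for (view = 0; view < NVIEWS; view++)
                render_scanline(slab, view, row);
            free(slab);
        }
    }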

In a virtual memory environment, this operation can be done implicitly
by sorting rays (using a Levoy-like "ray tracing" volume rendering
approach) or by sorting or resampling data planes (for the Pixar-like
composite-rows-of-voxels approach).  The image sequence can then be
rendered in strips of some size such that the working set is smaller
than the machine's physical memory size.  If the sorting step is done
correctly, data will not be swapped in except when a new strip is
being rendered.  A special added bonus of this approach, known as
sequence-based rendering, is that large data sets that won't fit into
memory at once can be rendered.
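
A sketch of the ray-sorting version, assuming every ray can be
tagged with the single data strip it traverses (true for
orthogonalized views; a ray that crosses strips would have to be
split and re-entered per strip).  All the names here are made up:

    #include <stdlib.h>

    typedef struct {
        int view, x, y;   /* which frame and pixel this ray feeds */
        int strip;        /* data strip the ray traverses         */
    } Ray;

    static int by_strip(const void *a, const void *b)
    {
        return ((const Ray *)a)->strip - ((const Ray *)b)->strip;
    }

    extern void cast(const Ray *r);   /* trace and composite one ray */

    /*
     * Pool the rays of every view, sort them by data strip, and
     * cast them in that order: the VM working set is then one
     * strip's worth of voxels at a time, and a strip is paged in
     * only when the sequence crosses into it.
     */
    void render_sorted(Ray *rays, size_t nrays)
    {
        size_t i;

        qsort(rays, nrays, sizeof(Ray), by_strip);
        for (i = 0; i < nrays; i++)
            cast(&rays[i]);
    }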

Also, depending on the complexity of the light model that you are
using, many of the shading calculations are redundant from view to
view and need be done only once.  For example, if the viewer
is flying around an object, and the light sources are fixed in space
and located at infinity, the ambient and diffuse calculations performed
for the first view can be used for all subsequent views...only the
ray projection/compositing needs to be redone.  Specular highlights, 
however, need to be recomputed for each view because they depend on 
the viewer's location in space.
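
As a sketch, with a fixed directional light the split looks like
this; the Phong-style specular term is just one assumed light model,
not the only possibility:

    #include <math.h>

    typedef struct { float x, y, z; } Vec;

    extern Vec light_dir;   /* fixed light at infinity, normalized */

    static float dot(Vec a, Vec b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* View-independent: compute once per voxel, reuse every frame. */
    float ambient_diffuse(Vec normal, float ka, float kd)
    {
        float d = dot(normal, light_dir);
        return ka + kd * (d > 0.0f ? d : 0.0f);
    }

    /* View-dependent: recompute each frame, since the half-vector
       moves with the viewer. */
    float spec(Vec normal, Vec half_vec, float ks, float shine)
    {
        float d = dot(normal, half_vec);
        return d > 0.0f ? ks * (float)pow((double)d, (double)shine)
                        : 0.0f;
    }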

If a subsampled (in some space) rendition of the object exists, the
sequence can be previewed interactively at low resolution (maybe
without specular highlights), then batch rendered in "high quality
mode".  Similarly, if the transparency of any non-opaque parts of the
object can be reduced, more voxels can be "culled" from the
calculation (fully transparent voxels and any voxels always behind
completely opaque elements never have to be read into memory except
once to identify them).
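
A sketch of that one-time classification pass; it only marks fully
transparent voxels (culling voxels hidden behind opaque material
would additionally need a visibility test over the set of allowed
views), and the volume size is made up:

    #include <string.h>

    #define NVOX (256L * 256L * 64L)   /* made-up volume size */

    extern unsigned char opacity[NVOX];        /* 0 = fully clear */
    static unsigned char skip[NVOX / 8 + 1];   /* cull bitmap     */

    /* One pass over the raw data, run once; afterwards no view
       ever needs to page a culled voxel back in. */
    void classify(void)
    {
        long i;

        memset(skip, 0, sizeof skip);
        for (i = 0; i < NVOX; i++)
            if (opacity[i] == 0)       /* can never contribute */
                skip[i >> 3] |= (unsigned char)(1 << (i & 7));
    }

    int cullable(long i)
    {
        return skip[i >> 3] & (1 << (i & 7));
    }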

In general, compression is only possible at the expense of something.
If you want to be able to examine a highly transparent, complex data
set from any angle interactively with a high quality light model, you
are going to have to pay the disk access piper.  Constrain any of the
above degrees of freedom, and you can save some costs.  Of course, new
and different algorithms and faster machines may rework the cost
equation somewhat.

Hope this helps a little.

						--Michael Halle
						  Spatial Imaging Group
						  MIT Media Laboratory