Path: utzoo!utgpu!watmath!clyde!att!rutgers!netnews.upenn.edu!eniac.seas.upenn.edu!ranjit
From: ranjit@eniac.seas.upenn.edu (Ranjit Bhatnagar)
Newsgroups: comp.graphics
Subject: Re: Village Idiot Asks About Raytracing
Summary: Another Village Idiot Pretends to Answer
Keywords: ray tracing stochastic distributed
Message-ID: <6520@netnews.upenn.edu>
Date: 3 Dec 88 22:28:10 GMT
References: <7818@well.UUCP>
Sender: news@netnews.upenn.edu
Reply-To: ranjit@eniac.seas.upenn.edu.UUCP (Ranjit Bhatnagar)
Organization: University of Pennsylvania
Lines: 100


Leo Schwab (ewhac@well.uucp) writes:
>	I was doing a few gedanken experiments with raytracing, and came up
>with a few questions.  Realize that I've never written a raytracer.

Well, I've never written one either, so ignore everything I say.
>
>[Will a mirror shine reflected light on other objects?]
>Does it depend on whose software I'm using?

It depends on the software - most ray tracers do NOT model reflected light,
because of the tremendous increase in complexity.  The standard shadow
model casts a ray from the object's surface to each light source, to
see if there's an object in the way.  To test for reflected light,
you have to cast rays in EVERY DIRECTION, just in case there's a mirror
in that direction that might be reflecting light on this part of
the object surface.  Even if you optimize it to cast rays only
at known mirrors, you still need to cast an infinite number towards
each mirror.  
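To make the standard shadow test concrete, here's a minimal sketch in Python. Everything here (a scene of spheres, the names `ray_sphere_hit` and `in_shadow`) is my own invention for illustration, not from any particular ray tracer:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.
    Assumes `direction` is unit-length (so the quadratic's a == 1)."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def in_shadow(point, light_pos, spheres):
    """Cast one ray from the surface point toward the light; any hit
    closer than the light means the point is in shadow."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    for center, radius in spheres:
        t = ray_sphere_hit(point, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False
```

Note that this casts exactly one ray per light source - which is precisely why it can't see light arriving indirectly via a mirror.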

A nice application of stochastic techniques is to cast a moderate
number of rays in RANDOM directions, hoping that they will hit a mirror
if there's one to hit.  If the jittering is done well, the sampling
noise averages out and the effect will not be bad.
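Picking those random directions uniformly over the hemisphere above the surface might look like this - a sketch assuming the surface normal points along +z (the function name is mine):

```python
import math
import random

def random_hemisphere_direction(rng=random):
    """Uniform random unit direction on the hemisphere around +z.
    By Archimedes' hat-box theorem, a z coordinate uniform in [0, 1)
    gives directions uniform over the hemisphere's surface area."""
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)
```

You'd cast a trace ray along each such direction and average the results; with enough samples the ones that happen to hit a mirror pick up its contribution.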

Radiosity models take a completely different approach to this problem.
See e.g. "A Radiosity Solution for Complex Environments" - Cohen and
Greenberg, SIGGRAPH 85, or "A Radiosity Method for Non-Diffuse Environments"
- Immel and Cohen, SIGGRAPH 86.  These guys don't discuss mirrors,
but you can sort of see how the radiosity method could handle them.
Radiosity worries about the general problem of light reflected off
of surfaces in the environment - in the real world, light doesn't
just come straight from the bulb!
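The core of the radiosity method from those papers is solving, for every diffuse patch i, the equation B_i = E_i + rho_i * sum_j F_ij * B_j (radiosity = emission plus reflected incoming light, weighted by form factors). A toy iterative solver, which is my own sketch and glosses over the hard part - computing the form factors F_ij - might look like:

```python
def solve_radiosity(emission, reflectance, form_factors, iters=50):
    """Jacobi-style iteration on the radiosity equation
    B_i = E_i + rho_i * sum_j F_ij * B_j for n diffuse patches."""
    n = len(emission)
    b = list(emission)  # initial guess: patches emit only
    for _ in range(iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

Each pass bounces light one more time around the environment, which is exactly the "light doesn't just come straight from the bulb" effect.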
>
>	Suppose I have a flat surface, a light source, and an object in the
>shape of a convex lens above the surface under the light.  Suppose further
>that the object is set up to be perfectly clear, and refracts light like
>glass.  Question:  Will the light beneath the lens object be intensely
>focused on the surface below, just like a real lens?

The answer is the same as above - it could be done by casting zillions of rays,
but it's computationally expensive.  Stochastic sampling can certainly help,
though.
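The per-ray bending itself is just Snell's law, which any refracting ray tracer already has. A sketch of the standard vector form (the function name and interface are mine):

```python
import math

def refract(incident, normal, n1, n2):
    """Refract a unit incident direction through a surface with unit
    normal `normal`, going from index n1 into index n2.
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)  # Snell: sin t = eta * sin i
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

The expensive part isn't the bending - it's finding which refracted rays converge on a given point of the surface below the lens, which is why you end up sampling.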
>
>	Finally, has anyone come up with a raytracer whose refraction model
>takes into account the varying indicies of refraction of different light
>frequencies?  In other words, can I find a raytracer that, when looking
>through a prism obliquely at a light source, will show me a rainbow?
>
This is even nastier than the first two problems, because very few
rendering systems really model light like it is in the real world.
We don't usually think about it, because the RGB approximation LOOKS
the same to us as real light, but if you wanted to get rainbows,
you would have to take into account that visible light is a continuum
of wavelengths that can be mixed arbitrarily.

In the graphics world, the sun is really a red sun, a green sun, and
a blue sun that happen to occupy the same position - if you were to
look at it through a simulated prism, you would get not a rainbow,
but a red spot, a green spot, and a blue spot.  I think work has been
done in rendering that models the entire visible spectrum instead of
the RGB approximation - you can imagine that that would be really
nasty, because every color vector, instead of being described by
three numbers, would be described by a continuum (perhaps approximated
as 500 numbers or something like that).  So, while wavelength-based
refraction could be modeled by extending current refraction techniques,
nobody does it because it would reveal the artificiality of the RGB
approximation.
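The "continuum approximated as N numbers" idea is easy to sketch: trace one ray per wavelength sample, with the index of refraction depending on wavelength. A common empirical model is Cauchy's dispersion formula, n = A + B / lambda^2. The coefficients below are roughly those of common crown glass (an assumption on my part, not exact optical data):

```python
def cauchy_index(wavelength_nm, a=1.5046, b=4200.0):
    """Cauchy's empirical dispersion formula n = A + B / lambda^2,
    with the wavelength in nanometers.  Shorter (bluer) wavelengths
    get a higher index, so they bend more - hence the rainbow."""
    return a + b / (wavelength_nm * wavelength_nm)

# One ray per wavelength sample: a coarse stand-in for the continuum.
samples = [400.0 + i * (300.0 / 15) for i in range(16)]   # 400-700 nm
indices = [cauchy_index(w) for w in samples]
```

Feed each sample's index into the usual refraction code and the spot a prism casts spreads into the spectrum the RGB sun can't produce.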

Come to think of it, it's even worse than that!  You don't know the
color of a ray until it has "cashed out" completely - all its descendants
have been resolved.  But you can't resolve the ray until you know its
color, because the color will affect its trajectory!  Based on this,
I would say that frequency-based refraction is VERY difficult using
eye-based ray-tracing.  I expect one would cast a sort of "average"
ray, based on a single color, and then use relaxation techniques to
bend and twist the ray based on its color, and change its color
based on the trajectory, until it reaches a stable state.  Unfortunately,
in any reasonably complex scene, there's probably no numerical
technique that could reliably find the resultant ray even from a
very well-chosen "guess", because of all the discontinuities and
nonlinearities involved.

The other two problems are soluble, though, and stochastic (also
called "distributed") ray tracing is a really neat way to get good
approximations to them and other fun problems (it turns out to
be applicable to simulation of motion blur, out-of-focus cameras,
lens depth of field, fuzzy reflections (like in a formica
tabletop), and all kinds of amazing stuff).  A nice introduction
is "Stochastic Sampling in Computer Graphics" - Rob Cook, in
ACM Transactions on Graphics, v5#1, Jan 86.
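The sampling pattern at the heart of that paper's approach is easy to show: instead of purely random samples (which clump) or a regular grid (which aliases), you jitter - one random sample inside each cell of a grid. A minimal sketch, with the function name my own:

```python
import random

def jittered_samples(n, rng=random):
    """n*n stratified ('jittered') sample points in the unit square:
    one uniformly random point inside each of the n x n cells.
    Reused across pixel area, lens aperture, and shutter time, this
    is what buys you antialiasing, depth of field, and motion blur."""
    pts = []
    for i in range(n):
        for j in range(n):
            pts.append(((i + rng.random()) / n,
                        (j + rng.random()) / n))
    return pts
```

Cast one ray per sample point and average the results; the stratification keeps the noise low while the randomness turns aliasing artifacts into unobjectionable grain.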


	- Ranjit


   
"Trespassers w"   ranjit@eniac.seas.upenn.edu	mailrus!eecae!netnews!eniac!...
       -- I'm not a drug enforcement agent, but I play one for TV --
 Giant SDI lasers burn 1,000 points of light in Willie Horton - Dave Barry