Path: utzoo!utgpu!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!cornell!uw-beaver!uoregon!markv
From: markv@uoregon.uoregon.edu (Mark VandeWettering)
Newsgroups: comp.graphics
Subject: Re: Village Idiot asks about Ray Tracing
Message-ID: <3324@uoregon.uoregon.edu>
Date: 6 Dec 88 08:43:35 GMT
References: <859@amethyst.ma.arizona.edu>
Reply-To: markv@drizzle.UUCP (Mark VandeWettering)
Organization: University of Oregon, Computer Science, Eugene OR
Lines: 103

In article <859@amethyst.ma.arizona.edu> chris@spock.ame.arizona.edu (Chris Ott) writes:
}ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) writes:
}>	I was doing a few gedanken experiments with raytracing, and came up
}> with a few questions.  Realize that I've never written a raytracer.
}
}     I'm writing a ray tracer right now and have some experience.
}
}>	Suppose I have an object, a light source, and a flat surface set up
}> as a perfect mirror.  Suppose further that I have a thing between the
}> object and light source preventing direct illumination of the object.
}> Suppose further still that the "mirror" is set up to reflect the light
}> from the light source to the object.  Question: Will the object be
}> illuminated?  Does it depend on whose software I'm using?

The question is actually trickier than you might believe.  Even
diffusely reflecting objects reflect light, so at every point on the
surface of an object the illumination is a function of its own
radiosity and the radiosity of every patch visible AFTER ANY NUMBER OF
BOUNCES.  Hence there is no good way of knowing when to quit tracing
rays.  For instance, suppose a ray hits a triangle: you need to find
the amount of light energy reaching that triangle from EVERY direction,
not just the direction of the reflected ray.  Monte Carlo integration,
distributed ray tracing, and radiosity are all techniques that attempt
to deal with this fundamental problem.

}     Software that does true ray tracing would definitely illuminate the
}object.
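[To make the "every direction" problem above concrete, here is a toy
sketch of my own (not from any tracer mentioned in this thread) of
Monte Carlo integration over the hemisphere: the irradiance at a point
under a uniform unit-radiance sky is estimated by averaging randomly
sampled directions, and should converge to the exact answer, pi.]

```python
import math
import random

def random_hemisphere_dir():
    # Uniform random direction on the hemisphere around +z: for a unit
    # sphere, z uniform in [0, 1) and phi uniform in [0, 2*pi) give a
    # uniformly distributed direction.
    u, v = random.random(), random.random()
    z = u                                  # cos(theta)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def sky_radiance(direction):
    # Toy "scene": a uniform sky of radiance 1 in every direction.  A
    # real tracer would recursively trace a ray here -- and face the
    # "when do I quit?" problem described above.
    return 1.0

def irradiance(n_samples=20000):
    # Monte Carlo estimate of E = integral of L * cos(theta) d(omega)
    # over the hemisphere.  Uniform sampling has pdf 1/(2*pi), so each
    # sample is weighted by 2*pi.  For a uniform unit sky, E = pi.
    total = 0.0
    for _ in range(n_samples):
        d = random_hemisphere_dir()
        total += sky_radiance(d) * d[2] * 2.0 * math.pi
    return total / n_samples
```

[Even this trivial scene needs thousands of samples per point; now
imagine every one of those sample rays spawning its own hemisphere of
samples at the next bounce, and you see why this is hard.]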
}For example, the ray could be sent from the eye through a specific
}pixel on the screen to the object, bounce off the object into the
}mirror, and finally, off the mirror into a light source.  Then, given
}the color of the object and the light source, the pixel's color can be
}computed.  At least mine works this way.  The way my code looks, it
}seems as if this would be intrinsically part of ray tracing, i.e. I
}didn't have to make a special case for mirrors.

Solving the general problem of "light bleeding", or diffuse
interreflection, is very difficult and a current research topic.
Consider the problem rephrased another way: given a description of the
objects, surfaces and lights in a scene, you are trying to determine
what your eye would see by observing the scene.  Unfortunately, we
can't model the path of every photon in the scene, so we exploit the
fact that we are only interested in the minuscule portion of light
rays in the scene that actually "hit" the eye.  Reconstructing the
interplay of light from the eye's position seems to be the goal of
current raytracing research.

}# It depends on the software - most ray tracers do NOT model reflected
}# light, because of the tremendous increase in complexity.  The standard
}# shadow model casts a ray from the object's surface to each light
}# source, to see if there's an object in the way.  To test for reflected
}# light, you have to cast rays in EVERY DIRECTION, just in case there's
}# a mirror in that direction that might be reflecting light on this
}# part of the object surface.  Even if you optimize it to cast rays
}# only at known mirrors, you still need to cast an infinite number
}# towards each mirror.
}#
}# A nice application of stochastic techniques is to cast a moderate
}# number of rays in RANDOM directions, hoping that they will hit a
}# mirror if there's one to hit.  If the jitter is done well, then the
}# effect will not be bad.

}     This does not sound correct to me.
}My understanding is that the only rays we are interested in are the
}ones that the eye can see, so we just need to cast a ray (more than
}one, if we want some reasonable anti-aliasing) from the eye-point
}through each pixel.  At least that's the way I did it and it gives
}very realistic results.  Any comments?

Unfortunately, it is correct.  Consider the problems associated with
diffuse interreflections that I mentioned above.  The light shone back
to the eye is based not merely on light from a small number of
directions, but on light from an infinite number of directions.

}>	The point of the above two questions is to find out if, in general,
}> raytracers handle illumination from light bounced off of or refracted
}> through other objects.
}
}     Yes.

No.  Kajiya's "Rendering Equation" produced some of the effects that
you want, however, and better computational methods should result in
the kind of images that you desire.

}>	Finally, has anyone come up with a raytracer whose refraction model
}> takes into account the varying indices of refraction of different
}> light frequencies?  In other words, can I find a raytracer that, when
}> looking through a prism obliquely at a light source, will show me a
}> rainbow?
}
}     This could be tough.  The red, green, and blue components of
}monitors only simulate the full color spectrum.  On a computer, yellow
}is a mixture of red and green.  In real life, yellow is yellow.  You'd
}have to cast a large number of rays and use a large amount of computer
}time to simulate a full color spectrum.  (Ranjit pointed this out in
}his article and went into much greater detail.)

Actually, this problem seems the easiest.  We merely have to trace
rays of differing frequency (perhaps randomly sampled) and apply
Snell's law with a wavelength-dependent index of refraction to
determine how each ray bends.  If you are trying to model phase
effects like diffraction, you will probably have a much more difficult
time.

Mark VandeWettering
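P.S.  Here is a minimal sketch of the wavelength-sampling idea.  The
Cauchy dispersion coefficients below are illustrative (roughly
BK7-like glass), and the function names are my own, so don't take this
as anyone's production tracer.  Each sampled wavelength gets its own
index of refraction, and Snell's law then bends violet more sharply
than red -- the beginning of a prism's rainbow.

```python
import math

def cauchy_index(wavelength_nm, a=1.5046, b=4200.0):
    # Cauchy's empirical dispersion formula: n(lambda) = a + b/lambda^2.
    # Coefficients are illustrative, roughly BK7-like glass
    # (wavelength in nm, b in nm^2).
    return a + b / (wavelength_nm ** 2)

def refract_angle_deg(incident_deg, n1, n2):
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# A white ray entering the glass at 45 degrees: violet (400nm) sees a
# higher index than red (700nm), so it is bent more -- dispersion.
for wl in (400.0, 550.0, 700.0):
    n = cauchy_index(wl)
    print("%5.0fnm  n=%.5f  refracted=%.2f deg"
          % (wl, n, refract_angle_deg(45.0, 1.0, n)))
```

Sample each primary (or better, random wavelengths across the visible
band) with its own index, trace as usual, and accumulate into RGB.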