Bringing ray tracing to games has been a hot topic of conversation lately. I know I've had several conversations over the last few years with co-workers about when the first ray traced games would appear. The idea of using ray tracing isn't original or even unique, since we already use rays for all kinds of things in modern games. What I'm talking about here, though, is specifically a renderer that uses a typical ray tracing algorithm to generate images instead of a rasterization based "traditional" renderer. I thought it might be interesting to think about some of the implications of this, the results of which you see in front of you.
There are a lot of interesting things that using a ray tracer allows. One of the biggest advantages, in my opinion, should come from the authoring pipeline. Current raster based engines are very tweaky when it comes to certain things. For instance, getting different types of shaders to work together can be tricky when transparency, refraction, reflections and shadows are involved. A ray tracer would allow reflections, refractions, order-independent transparency, volumetrics, better shadowing, and implicit and procedural surfaces, and it would let all of these things interact pretty much seamlessly. This would make building virtual environments easier and more creative, and it starts removing some of the limitations that make content creation difficult. This is probably the best improvement this kind of technique offers us, although there are other interesting benefits as well.
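To make the "seamless interaction" point concrete, here is a minimal sketch of a recursive ray tracer in Python. The scene, colors, and constants are all invented for illustration; the point is structural: reflection is just another call to trace(), so reflective surfaces, diffuse surfaces, and the background compose with no special passes or shader plumbing.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def hit_sphere(center, radius, origin, direction):
    """Smallest positive t where a unit-length ray hits the sphere, or None."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # epsilon avoids self-intersection

# Hypothetical scene: (center, radius, base_color, reflectivity).
SCENE = [
    ((0.0, 0.0, -3.0), 1.0, (1.0, 0.0, 0.0), 0.5),  # reflective red sphere
    ((2.0, 0.0, -4.0), 1.0, (0.0, 0.0, 1.0), 0.0),  # diffuse blue sphere
]
BACKGROUND = (0.2, 0.2, 0.2)

def trace(origin, direction, depth=0):
    """Shade one ray; reflection falls out of a single recursive call."""
    if depth > 3:
        return BACKGROUND
    best = None
    for center, radius, color, refl in SCENE:
        t = hit_sphere(center, radius, origin, direction)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, color, refl)
    if best is None:
        return BACKGROUND
    t, center, color, refl = best
    point = add(origin, scale(direction, t))
    normal = normalize(sub(point, center))
    # Single directional light, purely for illustration.
    shade = max(0.0, dot(normal, normalize((1.0, 1.0, 1.0))))
    local = scale(color, shade)
    if refl > 0.0:
        # Reflection is just another trace() -- no extra pass needed.
        refl_dir = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        reflected = trace(point, normalize(refl_dir), depth + 1)
        local = add(scale(local, 1.0 - refl), scale(reflected, refl))
    return local
```

Adding refraction or order-independent transparency in this structure is just more recursive calls at the hit point, which is exactly the seamlessness argued for above.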
One of the funnier objections I've heard to the idea is that ray tracing doesn't solve the global illumination problem. Well, of course it doesn't! But does rasterization actually do any better? Global illumination is an extremely hairy problem that will probably require a real breakthrough to solve effectively (unless we crush it with pure compute horsepower at some point). Ray tracing does, however, give you an excellent playground for dealing with global illumination issues. It requires exactly the kind of data structures that also lend themselves to accelerating energy transfer (more on this later).
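As a toy illustration of that point, the sketch below (scene representation and names are hypothetical) shows a single occlusion query over the scene data. The same traversal answers a shadow-ray query during rendering and the visibility term you would need in an energy-transfer (form factor) computation for global illumination; a real engine would run it over an acceleration structure like a BVH rather than a flat list.

```python
import math

def _hit_sphere(center, radius, origin, direction):
    # Smallest positive t along a unit-length ray, or None.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def mutual_visibility(p, q, spheres):
    """Is the segment p -> q unoccluded by any sphere in the scene?

    Shadow rays and the visibility factor of an energy-transfer
    computation are the same query over the same data structure.
    """
    d = tuple(b - a for a, b in zip(p, q))
    dist = math.sqrt(sum(x * x for x in d))
    direction = tuple(x / dist for x in d)
    for center, radius in spheres:
        t = _hit_sphere(center, radius, p, direction)
        if t is not None and t < dist - 1e-4:
            return False
    return True
```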
Of course the big elephant in the room that I haven't mentioned is performance. At the end of the day ray tracing has to be at least somewhat competitive with rasterization. I would argue that there are several things that make this less important than it seems in the longer term. First of all, we are starting to top out on resolution. Yes, this will slowly increase, but we are at the point now where more resolution per inch isn't going to be terribly helpful. TV is going to stick with 1920x1080 for a long time if the past is any indication. This implies that we might want to start scaling up in image quality and possibly even geometric complexity, where ray tracing has an edge. Isn't it time to start spending cycles on something more interesting than resolution? In addition, ray tracing opens up a whole new set of techniques, like implicit surfaces and certain types of procedural geometry, that just get hackier and hackier in a rasterization setup. One other thing to note is that most of the time spent in a rasterizer these days goes to shading individual pixels. A ray tracer won't have any advantage there, but it does mean that you could use either technique to figure out what to shade: if visibility determination is only a fraction of the total shading time in each case, the two might end up very similar in overall performance. This will only become more true as shaders get more complex and we want more complex interactions between them. Overall I think it's clear that ray tracing is within spitting distance of being fast enough to use.
Earlier I mentioned that ray tracing requires the same kind of data structures needed for global illumination computations. Sound energy can also be propagated using these data structures. Sound is one of the areas in which games could be vastly improved. Using the same hardware and data structures to calculate a much more realistic soundscape seems like a real, tangible benefit that could make a game stand out.
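As a rough sketch of that idea, assuming a hypothetical occluder list shared with the renderer and made-up attenuation constants, the same any-hit ray query used for shadow rays can also decide whether a sound source should be muffled behind geometry:

```python
import math

# Hypothetical occluder list shared with the renderer: (center, radius).
OCCLUDERS = [((0.0, 0.0, 5.0), 2.0)]

def _blocked(p, q):
    # The same any-hit query the renderer would use for shadow rays.
    d = tuple(b - a for a, b in zip(p, q))
    dist = math.sqrt(sum(x * x for x in d))
    direction = tuple(x / dist for x in d)
    for center, radius in OCCLUDERS:
        oc = tuple(a - c for a, c in zip(p, center))
        b = 2.0 * sum(o * dd for o, dd in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-4 < t < dist - 1e-4:
                return True
    return False

def audible_gain(listener, source, muffle=0.25):
    """Distance falloff, attenuated further when the sight line is blocked.

    The muffle factor is an invented stand-in for real acoustic modeling.
    """
    d = tuple(b - a for a, b in zip(listener, source))
    dist = math.sqrt(sum(x * x for x in d))
    gain = 1.0 / (1.0 + dist)
    return gain * muffle if _blocked(listener, source) else gain
```

A real acoustics solver would trace many rays and model reflection and diffraction, but it would be walking the exact same scene structure as the renderer.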
A set of hybrid techniques using elements of both ray tracing and rasterization has been tossed around by quite a few people. This has a lot of drawbacks in my opinion, although I could see it being used in certain cases. The general idea is to fill in your initial frame buffer using rasterization and then generate secondary rays during the shading of that first pass. I'm sure there are other ideas as well. The main drawback is that now your scaling is limited by what you can rasterize instead of what you can ray trace. You also need to maintain two sets of data structures: some sort of scene used for ray tracing, and some sort of scene that you can feed into the rasterizer. You could probably share things like vertex buffers and textures easily. However, you would still need to write and maintain two separate code bases to traverse and render these data structures. You would also be giving up certain classes of ray tracing techniques like implicit surfaces. Some people may use techniques like this to ease the transition to new techniques, or in limited cases (e.g. using ray tracing for volumetric fog in part of the scene).
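The hybrid scheme described above might look something like this sketch. Everything here is invented for illustration (the G-buffer contents, the scene, the constants): a first pass is assumed to have already rasterized positions, normals, and reflectivity into a buffer, and a second pass spawns secondary rays only for the pixels whose materials ask for them.

```python
import math

# Hypothetical G-buffer the raster pass would leave behind: one entry
# per pixel with position, surface normal, and material reflectivity.
GBUFFER = [
    {"pos": (0.0, 0.0, -2.0), "normal": (0.0, 0.0, 1.0), "refl": 0.5},
    {"pos": (1.0, 0.0, -2.0), "normal": (0.0, 1.0, 0.0), "refl": 0.0},
]

# The separate ray-tracing scene the secondary rays walk: (center, radius, color).
SPHERES = [((0.0, 0.0, 2.0), 1.0, (1.0, 0.0, 0.0))]
SKY = (0.1, 0.1, 0.1)

def _hit(center, radius, origin, direction):
    # Smallest positive t along a unit-length ray, or None.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def _secondary(origin, direction):
    # One reflection bounce against the ray-traced scene.
    best = None
    for center, radius, color in SPHERES:
        t = _hit(center, radius, origin, direction)
        if t is not None and (best is None or t < best[0]):
            best = (t, color)
    return best[1] if best else SKY

def shade_hybrid(view_dir):
    # Second pass: walk the rasterized G-buffer and spawn rays as needed.
    frame = []
    for px in GBUFFER:
        base = (0.3, 0.3, 0.3)  # stand-in for the rasterizer's own shading
        if px["refl"] > 0.0:
            n = px["normal"]
            vn = sum(v * c for v, c in zip(view_dir, n))
            refl_dir = tuple(v - 2.0 * vn * c for v, c in zip(view_dir, n))
            hit_color = _secondary(px["pos"], refl_dir)
            base = tuple((1.0 - px["refl"]) * b + px["refl"] * h
                         for b, h in zip(base, hit_color))
        frame.append(base)
    return frame
```

Note how the two-scene drawback shows up even at this scale: GBUFFER and SPHERES describe the world twice, and the two representations have to be kept in sync.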
I believe that building worlds procedurally is going to be commonplace in the coming years (I will write down some thoughts on this in the future). Modern ray tracing algorithms seem to map well to the upcoming generation of GPUs, which are extremely programmable. Taking advantage of the natural coherence in scenes vastly reduces the memory bandwidth requirements. Given the advances in tool sets, the advances in hardware speed and parallel processors, and the level of innovation we consistently see in games, I think it will be feasible and profitable to ship a ray traced game in the near future.