6.3 Environment Camera

One advantage of ray tracing compared to scan line or rasterization-based rendering methods is that it’s easy to employ unusual image projections. We have great freedom in how the image sample positions are mapped into ray directions, since the rendering algorithm doesn’t depend on properties such as straight lines in the scene always projecting to straight lines in the image.

In this section, we will describe a camera model that traces rays in all directions around a point in the scene, giving a 2D view of everything that is visible from that point. Consider a sphere around the camera position in the scene; choosing points on that sphere gives directions to trace rays in. If we parameterize the sphere with spherical coordinates, each point on the sphere is associated with a $(\theta, \phi)$ pair, where $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi]$. (See Section 5.5.2 for more details on spherical coordinates.) This type of image is particularly useful because it represents all of the incident light at a point in the scene. (One important use of this image representation is environment lighting, a rendering technique that uses image-based representations of light in a scene.) Figure 6.14 shows this camera in action with the San Miguel model. $\theta$ values range from 0 at the top of the image to $\pi$ at the bottom of the image, and $\phi$ values range from 0 to $2\pi$, moving from left to right across the image.
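To make the correspondence between directions and image locations concrete, the following sketch shows the inverse of the mapping this camera performs: given a normalized direction (using the same y-up camera-space convention as the GenerateRay() code later in this section), it computes continuous pixel coordinates in a latitude-longitude image. The helper name DirectionToLatLongPixel is hypothetical and not part of pbrt; it only assumes pbrt's Point2f, Vector3f, Pi, and Clamp().

// Illustrative sketch only, not pbrt code: maps a normalized direction
// (y up) to continuous pixel coordinates in a width x height
// latitude-longitude image.
Point2f DirectionToLatLongPixel(const Vector3f &d, int width, int height) {
    // theta in [0, pi]: angle measured from the +y ("up") axis.
    Float theta = std::acos(Clamp(d.y, -1, 1));
    // phi in [0, 2*pi): azimuth around the y axis, from +x toward +z.
    Float phi = std::atan2(d.z, d.x);
    if (phi < 0) phi += 2 * Pi;
    // Invert the pixel -> (theta, phi) scaling used by GenerateRay().
    return Point2f(phi / (2 * Pi) * width, theta / Pi * height);
}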

<<EnvironmentCamera Declarations>>= 
class EnvironmentCamera : public Camera {
  public:
    <<EnvironmentCamera Public Methods>>
    Float GenerateRay(const CameraSample &sample, Ray *) const;
};

Figure 6.14: The San Miguel model rendered with the EnvironmentCamera, which traces rays in all directions from the camera position. The resulting image gives a representation of all light arriving at that point in the scene and can be used for the image-based lighting techniques described in Chapters 12 and 14.

The EnvironmentCamera derives directly from the Camera class, not the ProjectiveCamera class. This is because the environmental projection is nonlinear and cannot be captured by a single $4 \times 4$ matrix. This camera is defined in the files cameras/environment.h and cameras/environment.cpp.

<<EnvironmentCamera Public Methods>>= 
EnvironmentCamera(const AnimatedTransform &CameraToWorld, Float shutterOpen,
                  Float shutterClose, Film *film, const Medium *medium)
    : Camera(CameraToWorld, shutterOpen, shutterClose, film, medium) { }

<<EnvironmentCamera Method Definitions>>= 
Float EnvironmentCamera::GenerateRay(const CameraSample &sample,
                                     Ray *ray) const {
    <<Compute environment camera ray direction>>
    *ray = Ray(Point3f(0, 0, 0), dir, Infinity,
               Lerp(sample.time, shutterOpen, shutterClose));
    ray->medium = medium;
    *ray = CameraToWorld(*ray);
    return 1;
}
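As a usage sketch (not pbrt code; the helper name PixelCenterRay is hypothetical), a caller fills in a CameraSample and receives a world-space ray back. Only the film sample position and time matter for this camera; the lens sample is ignored.

// Illustrative sketch: generates the world-space ray for the center of
// pixel (px, py) at the midpoint of the shutter interval.
Ray PixelCenterRay(const EnvironmentCamera &camera, int px, int py) {
    CameraSample cs;
    cs.pFilm = Point2f(px + 0.5f, py + 0.5f);  // raster-space film sample
    cs.pLens = Point2f(0.5f, 0.5f);            // unused by this camera
    cs.time = 0.5f;                            // in [0,1); mapped to the shutter interval
    Ray ray;
    camera.GenerateRay(cs, &ray);              // weight is 1 for this camera
    return ray;
}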

To compute the $(\theta, \phi)$ coordinates for this ray, NDC coordinates are computed from the raster image sample position and then scaled to cover the $(\theta, \phi)$ range. Next, the spherical coordinate formula is used to compute the ray direction, and finally the direction is converted to world space. (Note that because the y direction is “up” in camera space, here the y and z coordinates in the spherical coordinate formula are exchanged in comparison to usage elsewhere in the system.)

<<Compute environment camera ray direction>>= 
Float theta = Pi * sample.pFilm.y / film->fullResolution.y;
Float phi = 2 * Pi * sample.pFilm.x / film->fullResolution.x;
Vector3f dir(std::sin(theta) * std::cos(phi), std::cos(theta),
             std::sin(theta) * std::sin(phi));
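The y and z exchange can also be seen in terms of the SphericalDirection() utility from Section 5.5.2, which follows the system's usual z-up convention. A small sketch of the equivalence (not part of pbrt's implementation):

// Sketch only: SphericalDirection() returns
// (sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)) with z "up";
// swapping its y and z components gives the camera-space direction above.
Vector3f v = SphericalDirection(std::sin(theta), std::cos(theta), phi);
Vector3f dir(v.x, v.z, v.y);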