12.5 Infinite Area Lights
Another useful kind of light is an infinitely far-away area light source that surrounds the entire scene. One way to visualize this type of light is as an enormous sphere that casts light into the scene from every direction. One important use of infinite area lights is for environment lighting, where an image of the illumination in an environment is used to illuminate synthetic objects as if they were in that environment. Figure 12.16 compares illuminating a car model with standard area lights to illuminating it with two environment maps that simulate illumination from the sky at different times of day. The increase in realism is striking.
pbrt provides three implementations of infinite area lights of progressive complexity. The first describes an infinite light with uniform emitted radiance; the second instead takes an image that represents the directional distribution of emitted radiance, and the third adds capabilities for culling parts of such images that are occluded at the reference point, which can substantially improve sampling efficiency.
12.5.1 Uniform Infinite Lights
A uniform infinite light source is fairly easy to implement; some of the details will be helpful for understanding the infinite light variants to follow.
Emitted radiance is specified as usual by both a spectrum and a separate scale. (The straightforward constructor that initializes these is not included in the text.)
All the infinite light sources, including UniformInfiniteLight, store a bounding sphere of the scene that they use when computing their total power and for sampling rays leaving the light.
Infinite lights must implement the following Le() method to return their emitted radiance for a given ray. Since the UniformInfiniteLight emits the same radiance for all rays, the implementation is trivial.
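As a rough illustration (a sketch rather than pbrt's verbatim implementation), the method might look like the following, where Lemit and scale are assumed to be the stored spectrum and scale factor described above:

    // Emitted radiance is the same for every ray leaving the scene, so the
    // ray itself is ignored and the scaled spectrum is returned directly.
    SampledSpectrum UniformInfiniteLight::Le(const Ray &ray,
                                             const SampledWavelengths &lambda) const {
        return scale * Lemit.Sample(lambda);
    }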
We can see the use of the allowIncompletePDF parameter for the first time in the SampleLi() method. If it is true, then UniformInfiniteLight immediately returns an unset sample. (And its PDF_Li() method, described a bit later, will return a PDF of zero for all directions.) To understand why it is implemented in this way, consider the direct lighting integral

$$\int_{S^2} f(p, \omega_o, \omega_i) \, L_i(p, \omega_i) \, |\cos\theta_i| \, d\omega_i.$$

For a uniform infinite light, the incident radiance function is a constant $L_e$ times the visibility term; the constant can be pulled out of the integral, leaving

$$L_e \int_{S^2} f(p, \omega_o, \omega_i) \, V(p, \omega_i) \, |\cos\theta_i| \, d\omega_i.$$
There is no reason for the light to participate in sampling this integral, since BSDF sampling accounts for the remaining factors well. Furthermore, recall from Section 2.2.3 that multiple importance sampling (MIS) can increase variance when one of the sampling techniques is much more effective than the others. This is such a case, so as long as calling code is sampling the BSDF and using MIS, samples should not be generated here. (This is an application of MIS compensation, which was introduced in Section 2.2.3.)
If sampling is to be performed, the light generates a sample so that valid Monte Carlo estimates can still be computed. This task is easy—all directions are sampled with uniform probability. Note that the endpoint of the shadow ray is set in the same way as it was by the DistantLight: by computing a point that is certainly outside of the scene’s bounds.
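A sketch of how that sampling path might be structured, assuming pbrt's SampleUniformSphere() and UniformSpherePDF() helpers; the member names (sceneRadius, mediumInterface) and the LightLiSample construction are approximations:

    pstd::optional<LightLiSample> UniformInfiniteLight::SampleLi(
            LightSampleContext ctx, Point2f u, SampledWavelengths lambda,
            bool allowIncompletePDF) const {
        // Defer entirely to other sampling techniques (MIS compensation).
        if (allowIncompletePDF)
            return {};
        // Sample a direction uniformly over the sphere of directions.
        Vector3f wi = SampleUniformSphere(u);
        Float pdf = UniformSpherePDF();
        // Put the shadow-ray endpoint well outside the scene's bounding sphere.
        Point3f pOutside = ctx.p() + wi * (2 * sceneRadius);
        return LightLiSample(scale * Lemit.Sample(lambda), wi, pdf,
                             Interaction(pOutside, &mediumInterface));
    }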
The PDF_Li() method must account for the value of allowIncompletePDF so that the PDF values it returns are consistent with its sampling method.
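The corresponding PDF computation is then just a few lines; again, this is a sketch rather than the exact implementation:

    Float UniformInfiniteLight::PDF_Li(LightSampleContext ctx, Vector3f w,
                                       bool allowIncompletePDF) const {
        // Must agree with SampleLi(): no samples are generated in this case.
        if (allowIncompletePDF)
            return 0;
        return UniformSpherePDF();  // 1 / (4 pi)
    }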
The total power from an infinite light can be found by multiplying the integral of the incident radiance over all directions by the area of a disk whose radius is that of the scene's bounding sphere, along the lines of DistantLight::Phi().
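Concretely, since the emitted radiance is constant, both factors are simple: with r the radius of the scene's bounding sphere and L_e the scaled emitted radiance,

$$\Phi = \pi r^2 \int_{S^2} L_e \, d\omega = 4 \pi^2 r^2 L_e.$$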
12.5.2 Image Infinite Lights
ImageInfiniteLight is a useful infinite light variation that uses an Image to define the directional distribution of emitted radiance. Given an image that represents the distribution of incident radiance in a real-world environment (sometimes called an environment map), this light can be used to render objects under the same illumination, which is particularly useful for applications like visual effects for movies, where it is often necessary to composite rendered objects with film footage. (See the “Further Reading” section for information about techniques for capturing this lighting data from real-world environments.) Figure 12.17 shows the image radiance maps used in Figure 12.16.
The image that specifies the emission distribution should use the equal-area octahedral parameterization of directions that was defined in Section 3.8.3. The LightBase::renderFromLight transformation can be used to orient the environment map.
Like UniformInfiniteLights, ImageInfiniteLights also need the scene bounds; here again, the Preprocess() method (this one not included in the text) stores the scene’s bounding sphere after all the scene geometry has been created.
The ImageInfiniteLight constructor contains a fair amount of boilerplate code that we will skip past. (For example, it verifies that the provided image has channels named “R,” “G,” and “B” and issues an error if it does not.) The interesting parts of it are gathered in the following fragment.
The image maps used with ImageInfiniteLights often have substantial variation along different directions: consider, for example, an environment map of the sky during daytime, where the relatively small number of directions that the sun subtends are thousands of times brighter than the rest of the directions. Therefore, implementing a sampling method for ImageInfiniteLights that matches the illumination distribution can significantly reduce variance in rendered images compared to sampling directions uniformly. To this end, the constructor initializes a PiecewiseConstant2D distribution that is proportional to the image pixel values.
A second sampling distribution is computed based on a thresholded version of the image where the average pixel value is subtracted from each pixel’s sampling weight. The use of both of these sampling distributions will be discussed in more detail shortly, with the implementation of the SampleLi() method.
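The following sketch shows the general shape of that setup; the Image and PiecewiseConstant2D interfaces used here are approximations of pbrt's, and the member names (distribution, compensatedDistribution, alloc) and locals (d, width, height) are placeholders:

    // Use a scalar weight per pixel (here, the average of its channels) to
    // drive the primary sampling distribution.
    std::vector<Float> d(width * height);
    for (int y = 0, i = 0; y < height; ++y)
        for (int x = 0; x < width; ++x, ++i)
            d[i] = image.GetChannels(Point2i(x, y)).Average();
    Bounds2f domain(Point2f(0, 0), Point2f(1, 1));
    distribution = PiecewiseConstant2D(d, width, height, domain, alloc);

    // Compensated distribution: subtract the average weight from each pixel,
    // clamping at zero, so that only brighter-than-average regions receive
    // samples when allowIncompletePDF is true.
    Float sum = 0;
    for (Float w : d) sum += w;
    Float avg = sum / d.size();
    std::vector<Float> dCompensated = d;
    for (Float &w : dCompensated)
        w = std::max<Float>(w - avg, 0);
    compensatedDistribution =
        PiecewiseConstant2D(dCompensated, width, height, domain, alloc);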
Before we get to the sampling methods, we will provide an implementation of the Le() method that is required by the Light interface for infinite lights. After computing the 2D coordinates of the provided ray’s direction in image coordinates, it defers to the ImageLe() method.
ImageLe() returns the emitted radiance for a given point in the image.
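Together, the two methods might be sketched as follows, assuming pbrt's EqualAreaSphereToSquare() mapping function; the image lookup call and the member names (imageColorSpace, scale) are assumptions:

    SampledSpectrum ImageInfiniteLight::Le(const Ray &ray,
                                           const SampledWavelengths &lambda) const {
        // Transform the ray direction into the light's coordinate system and
        // find the corresponding equal-area image coordinates.
        Vector3f wLight = Normalize(renderFromLight.ApplyInverse(ray.d));
        Point2f uv = EqualAreaSphereToSquare(wLight);
        return ImageLe(uv, lambda);
    }

    SampledSpectrum ImageInfiniteLight::ImageLe(
            Point2f uv, const SampledWavelengths &lambda) const {
        // Fetch the RGB value stored at uv and convert it to a sampled
        // spectrum using the image's color space.
        RGB rgb;
        for (int c = 0; c < 3; ++c)
            rgb[c] = image.LookupNearestChannel(uv, c);
        return scale * RGBIlluminantSpectrum(*imageColorSpace, rgb).Sample(lambda);
    }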
There is a bit more work to do for sampling an incident direction at a reference point according to the light’s emitted radiance.
The first step is to generate an image sample with probability proportional to the image pixel values, which is a task that is handled by the PiecewiseConstant2D Sample() method. If SampleLi() is called with allowIncompletePDF being true, then the second sampling distribution that was based on the thresholded image is used. The motivation for doing so is the same as when UniformInfiniteLight::SampleLi() does not generate samples at all in that case: here, there is no reason to spend samples in parts of the image that have a relatively low contribution. It is better to let other sampling techniques (e.g., BSDF sampling) generate samples in those directions when they are actually important for the full function being integrated. Light samples are then allocated to the bright parts of the image, where they are more useful.
It is a simple matter to convert from image coordinates to a rendering space direction wi.
The PDF returned by PiecewiseConstant2D::Sample() is with respect to the image’s domain. To find the corresponding PDF with respect to direction, the change of variables factor for going from the unit square to the unit sphere must be applied.
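The core of that conversion might look like this (a sketch, where uv and mapPDF are the point and PDF returned by the PiecewiseConstant2D Sample() call described above):

    // Map the sampled [0,1]^2 point to a unit direction in the light's
    // coordinate system, then into rendering space.
    Vector3f wLight = EqualAreaSquareToSphere(uv);
    Vector3f wi = renderFromLight(wLight);
    // The equal-area mapping has a constant Jacobian: the unit square maps to
    // the whole sphere, so the image-space PDF is divided by 4*Pi.
    Float pdf = mapPDF / (4 * Pi);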
Finally, as with the DistantLight and UniformInfiniteLight, the second point for the shadow ray is found by offsetting along the wi direction far enough until that resulting point is certainly outside of the scene’s bounds.
Figure 12.18 illustrates how much error is reduced by sampling image infinite lights well. It compares three images of a dragon model illuminated by the morning skylight environment map from Figure 12.17. The first image was rendered using a simple uniform spherical sampling distribution for selecting incident illumination directions, the second used the full image-based sampling distribution, and the third used the compensated distribution—all rendered with 32 samples per pixel. For the same number of samples taken and with negligible additional computational cost, both importance sampling methods give a much better result with much lower variance.
Most of the work to compute the PDF for a provided direction is handled by the PiecewiseConstant2D distribution. Here as well, the PDF value it returns is divided by 4π to account for the area of the unit sphere.
The ImageInfiniteLight::Phi() method, not included here, integrates incident radiance over the sphere by looping over all the image pixels and summing them before multiplying by a factor of 4π to account for the area of the unit sphere as well as by the area of a disk of radius sceneRadius.
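In other words, with N image pixels of radiance L_j and r the scene's bounding-sphere radius, the computation amounts to

$$\Phi \approx 4\pi \cdot \pi r^2 \cdot \frac{1}{N} \sum_{j=1}^{N} L_j,$$

since the equal-area parameterization makes the average pixel value an estimate of the integral over the unit square.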
12.5.3 Portal Image Infinite Lights
ImageInfiniteLights provide a handy sort of light source, though one shortcoming of that class's implementation is that it does not account for visibility in its sampling routines. Samples it generates that turn out to be occluded are much less useful than those that carry illumination to the reference point. While the expense of ray tracing is necessary to fully account for visibility, accounting for even some visibility effects in light sampling can significantly reduce error.
Consider the scene shown in Figure 12.19, where all the illumination is coming from a skylight environment map that is visible only through the windows. Part of the scene is directly illuminated by the sun, but much of it is not. Those other parts are still illuminated, but by much less bright regions of blue sky. Yet because the sun is so bright, the ImageInfiniteLight ends up taking many samples in its direction, though all the ones where the sun is not visible through the window will be wasted. In those regions of the scene, light sampling will occasionally choose a part of the sky that is visible through the window and occasionally BSDF sampling will find a path to the light through the window, so that the result is still correct in expectation, but many samples may be necessary to achieve a high-quality result.
The PortalImageInfiniteLight is designed to handle this situation more effectively. Given a user-specified portal, a quadrilateral region through which the environment map is potentially visible, it generates a custom sampling distribution at each point being shaded so that it can draw samples according to the region of the environment map that is visible through the portal. For an equal number of samples, this can be much more effective than the ImageInfiniteLight’s approach, as shown in Figure 12.19(b).
Given a portal and a point in the scene, there is a set of directions from that point that pass through the portal. If we can find the corresponding region of the environment map, then our task is to sample from it according to the environment map’s emission. This idea is illustrated in Figure 12.20. With the equal-area mapping, the shape of the visible region of the environment map seen from a given point can be complex. The problem is illustrated in Figure 12.21(a), which visualizes the visible regions from two points in the scene from Figure 12.19.
The PortalImageInfiniteLight therefore uses a different parameterization of directions that causes the visible region seen through a portal to always be rectangular. Later in this section, we will see how this property makes efficient sampling possible.
The directional parameterization used by the PortalImageInfiniteLight is based on a coordinate system where the x and y axes are aligned with the edges of the portal. Note that the position of the portal is not used in defining this coordinate system—only the directions of its edges. As a first indication that this idea is getting us somewhere, consider the vectors from a point in the scene to the four corners of the portal, transformed into this coordinate system. It should be evident that in this coordinate system, vectors to adjacent vertices of the portal only differ in one of their x or y coordinate values and that the four directions thus span a rectangular region in xy. (If this is not clear, it is a detail worth pausing to work through.) We will refer to directional representations that have this property as rectified.
The x and y components of vectors in this coordinate system still span (−∞, ∞), so it is necessary to map them to a finite 2D domain if the environment map is to be represented using an image. It is important that this mapping does not interfere with the axis-alignment of the portal edges and that rectification is preserved. This requirement rules out a number of possibilities, including both the equirectangular and equal-area mappings. Even normalizing a vector and taking the x and y coordinates of the resulting unit vector is unsuitable given this requirement.
A mapping that does work is based on the angles α and β that the x and y coordinates of the vector respectively make with the z axis, as illustrated in Figure 12.22. These angles are given by

$$\alpha = \arctan \frac{x}{z}, \qquad \beta = \arctan \frac{y}{z}. \tag{12.1}$$
We can ignore vectors with negative z components in the rectified coordinate system: they face away from the portal and thus do not receive any illumination. Each of α and β then spans the range [−π/2, π/2] and the pair of them can be easily mapped to [0, 1]² image coordinates. The environment map resampled into this parameterization is shown in Figure 12.21(b), with the visible regions for the same two points in the scene indicated.
We will start the implementation of the PortalImageInfiniteLight with its ImageFromRender() method, which applies this mapping to a vector in the rendering coordinate system wRender. (We will come to the initialization of the portalFrame member variable in the PortalImageInfiniteLight constructor later in this section.) It uses pstd::optional for the return value in order to be able to return an invalid result in the case that the vector is coplanar with the portal or facing away from it.
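A sketch of this method, leaving out the optional duv_dw computation; the structure assumes pbrt's Frame and Clamp() utilities:

    pstd::optional<Point2f> PortalImageInfiniteLight::ImageFromRender(
            Vector3f wRender, Float *duv_dw) const {
        // Transform into the portal's coordinate system; directions with a
        // nonpositive z component face away from (or are coplanar with) the
        // portal and are rejected.
        Vector3f w = portalFrame.ToLocal(wRender);
        if (w.z <= 0)
            return {};
        // Equation (12.1): angles the x and y components make with the z axis.
        Float alpha = std::atan2(w.x, w.z), beta = std::atan2(w.y, w.z);
        // Rescale from [-pi/2, pi/2] to [0, 1] image coordinates.
        return Point2f(Clamp((alpha + Pi / 2) / Pi, 0, 1),
                       Clamp((beta + Pi / 2) / Pi, 0, 1));
    }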
We will find it useful to be able to convert sampling densities from the parameterization of the image to be with respect to solid angle on the unit sphere. The appropriate factor can be found following the usual approach of computing the determinant of the Jacobian of the mapping function, which is based on Equation (12.1), and then rescaling the αβ coordinates to image coordinates in [0, 1]². The result is a simple expression when expressed in terms of the direction ω:

$$\frac{d(u,v)}{d\omega} = \frac{\omega_z}{\pi^2 \, (1 - \omega_x^2)(1 - \omega_y^2)}.$$
If a non-nullptr duv_dw parameter is passed to this method, this factor is returned.
The inverse transformation can be found by working in reverse. It is implemented in RenderFromImage(), which also optionally returns the same change of variables factor.
Because the mapping is rectified, we can find the image-space bounding box of the visible region of the environment map from a given point using the coordinates of two opposite portal corners. This method also returns an optional value, for the same reasons as for ImageFromRender().
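A corresponding sketch, where portal[] is assumed to hold the four corner positions in rendering space:

    pstd::optional<Bounds2f> PortalImageInfiniteLight::ImageBounds(Point3f p) const {
        // Because the parameterization is rectified, the image coordinates of
        // two opposite corners bound the entire visible region.
        pstd::optional<Point2f> p0 = ImageFromRender(Normalize(portal[0] - p));
        pstd::optional<Point2f> p1 = ImageFromRender(Normalize(portal[2] - p));
        if (!p0 || !p1)
            return {};
        return Bounds2f(*p0, *p1);
    }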
Most of the PortalImageInfiniteLight constructor consists of straightforward initialization of member variables from provided parameter values, checking that the provided image has RGB channels, and so forth. All of that has not been included in this text. We will, however, discuss the following three fragments, which run at the end of the constructor.
The portal itself is specified by four vertices, given in the rendering coordinate system. Additional code, not shown here, checks to ensure that they describe a planar quadrilateral. A Frame for the portal’s coordinate system can be found from two normalized adjacent edge vectors of the portal using the Frame::FromXY() method.
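For example (a sketch; the portal[] member again holds the four corners, assumed to be ordered around the quadrilateral):

    // Two normalized adjacent edges give the x and y axes of the portal's
    // frame; Frame::FromXY() completes it, with the portal's normal as z.
    Vector3f p01 = Normalize(portal[1] - portal[0]);
    Vector3f p12 = Normalize(portal[2] - portal[1]);
    portalFrame = Frame::FromXY(p01, p12);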
The constructor also resamples a provided equal-area image into the rectified representation at the same resolution. Because the rectified image depends on the geometry of the portal, it is better to take an equal-area image and resample it in the constructor than to require the user to provide an already-rectified image. In this way, it is easy for the user to change the portal specification just by changing the portal’s coordinates in the scene description file.
At each rectified image pixel, the implementation first computes the corresponding light-space direction and looks up a bilinearly interpolated value from the equal-area image. No further filtering is performed. A better implementation would use a spatially varying filter here in order to ensure that there was no risk of introducing aliasing due to undersampling the source image.
The image coordinates in the equal-area image can be found by determining the direction vector corresponding to the current pixel in the rectified image and then finding the equal-area image coordinates that this direction maps to.
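Sketched out, the resampling loop could be along these lines; the Image calls and the use of renderFromLight to reach the equal-area image's frame are assumptions:

    for (int y = 0; y < image.Resolution().y; ++y)
        for (int x = 0; x < image.Resolution().x; ++x) {
            // Direction corresponding to the center of this rectified pixel.
            Point2f uv((x + 0.5f) / image.Resolution().x,
                       (y + 0.5f) / image.Resolution().y);
            Vector3f w = RenderFromImage(uv);
            // Corresponding equal-area coordinates in the source image.
            Point2f uvEqui =
                EqualAreaSphereToSquare(Normalize(renderFromLight.ApplyInverse(w)));
            // Bilinearly interpolated lookup; no further filtering.
            for (int c = 0; c < 3; ++c)
                image.SetChannel(Point2i(x, y), c,
                                 equalAreaImage.BilerpChannel(uvEqui, c));
        }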
Given the rectified image, the next step is to initialize an instance of the WindowedPiecewiseConstant2D data structure, which performs the sampling operation. (It is defined in Section A.5.6.) As its name suggests, it generalizes the functionality of the PiecewiseConstant2D class to allow a caller-specified window that limits the sampling region.
It is worthwhile to include the change of variables factor at each pixel in the image sampling distribution. Doing so causes the weights associated with image samples to be more uniform, as this factor will nearly cancel the same factor when a sample’s PDF is computed. (The cancellation is not exact, as the factor here is computed at the center of each pixel while in the PDF it is computed at the exact sample location.)
The light’s total power can be found by integrating radiance over the hemisphere of directions that can be seen through the portal and then multiplying by the portal’s area, since all light that reaches the scene passes through it. The corresponding PortalImageInfiniteLight::Phi() method is not included here, as it boils down to being a matter of looping over the pixels, applying the change of variables factor to account for integration over the unit sphere, and then multiplying the integrated radiance by the portal’s area.
In order to compute the radiance for a ray that has left the scene, the coordinates in the image corresponding to the ray’s direction are computed first. The radiance corresponding to those coordinates is returned if they are inside the portal bounds for the ray origin, and a zero-valued spectrum is returned otherwise. (In principle, the Le() method should only be called for rays that have left the scene, so that the portal check should always pass, but it is worth including for the occasional ray that escapes the scene due to a geometric error in the scene model. This way, those end up carrying no radiance rather than causing a light leak.)
The ImageLookup() method returns the radiance at the given image coordinates and wavelengths. We encapsulate this functionality in its own method, as it will be useful repeatedly in the remainder of the light's implementation.
As before, the image’s color space must be known in order to convert its RGB values to spectra.
SampleLi() is able to take advantage of the rectified image representation together with WindowedPiecewiseConstant2D's windowed sampling capability: it samples a direction from the specified point that passes through the portal, according to the directional distribution of radiance over the portal.
WindowedPiecewiseConstant2D’s Sample() method takes a Bounds2f to specify the sampling region. This is easily provided using the ImageBounds() method. It may not be able to generate a valid sample—for example, if the point is on the outside of the portal or lies on its plane. In this case, an unset sample is returned.
After image coordinates are converted to a direction, the method computes the sampling PDF with respect to solid angle at the reference point represented by ctx. Doing so just requires the application of the change of variables factor returned by RenderFromImage().
The remaining pieces are easy at this point: ImageLookup() provides the radiance for the sampled direction and the endpoint of the shadow ray is found in the same way that is done for the other infinite lights.
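Putting the pieces together, the method might be structured roughly as follows (a sketch; the exact signatures, member names, and LightLiSample construction are approximations):

    pstd::optional<LightLiSample> PortalImageInfiniteLight::SampleLi(
            LightSampleContext ctx, Point2f u, SampledWavelengths lambda,
            bool allowIncompletePDF) const {
        // Restrict sampling to the image-space window visible through the portal.
        pstd::optional<Bounds2f> b = ImageBounds(ctx.p());
        if (!b)
            return {};
        Float mapPDF;
        pstd::optional<Point2f> uv = distribution.Sample(u, *b, &mapPDF);
        if (!uv || mapPDF == 0)
            return {};
        // Convert to a rendering-space direction and a solid-angle PDF.
        Float duv_dw;
        Vector3f wi = RenderFromImage(*uv, &duv_dw);
        if (duv_dw == 0)
            return {};
        Float pdf = mapPDF * duv_dw;  // d(u,v)/dw factor from the equation above
        SampledSpectrum L = ImageLookup(*uv, lambda);
        // Shadow-ray endpoint outside the scene's bounding sphere.
        Point3f pl = ctx.p() + 2 * sceneRadius * wi;
        return LightLiSample(L, wi, pdf, Interaction(pl, &mediumInterface));
    }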
Also as with the other infinite lights, the radius of the scene’s bounding sphere is stored when the Preprocess() method, not included here, is called.
Computing the PDF for a specified direction parallels the way the PDF was calculated in the sampling method.
First, ImageFromRender() gives the coordinates in the portal image for the specified direction.
Following its Sample() method, the WindowedPiecewiseConstant2D::PDF() method also takes a 2D bounding box to window the function. The PDF value it returns is normalized with respect to those bounds and a value of zero is returned if the given point is outside of them. Application of the change of variables factor gives the final PDF with respect to solid angle.
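A matching sketch of the PDF computation, under the same assumptions as the sampling sketch above:

    Float PortalImageInfiniteLight::PDF_Li(LightSampleContext ctx, Vector3f w,
                                           bool allowIncompletePDF) const {
        // Image coordinates (and change of variables factor) for this direction.
        Float duv_dw;
        pstd::optional<Point2f> uv = ImageFromRender(w, &duv_dw);
        if (!uv || duv_dw == 0)
            return 0;
        // Windowed PDF over the portal-visible image region.
        pstd::optional<Bounds2f> b = ImageBounds(ctx.p());
        if (!b)
            return 0;
        Float pdf = distribution.PDF(*uv, *b);
        return pdf * duv_dw;  // convert image-space density to solid angle
    }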