7.9 Film and the Imaging Pipeline
The type of film or sensor in a camera has a dramatic effect on the way that incident light is transformed into colors in an image. In pbrt, the Film class models the sensing device in the simulated camera. After the radiance is found for each camera ray, the Film implementation determines the sample’s contribution to the pixels around the point on the film plane where the camera ray began and updates its representation of the image. When the main rendering loop exits, the Film writes the final image to a file.
For realistic camera models, Section 6.4.7 introduced the measurement equation, which describes how a sensor in a camera measures the amount of energy arriving over the sensor area over a period of time. For simpler camera models, we can consider the sensor to be measuring the average radiance over a small area over some period of time. The effect of the choice of which measurement to take is encapsulated in the weight for the ray returned by Camera::GenerateRayDifferential(). Therefore, the Film implementation can proceed without having to account for these variations, as long as it scales the provided radiance values by these weights.
This section introduces a single Film implementation that applies the pixel reconstruction equation to compute final pixel values. For a physically based renderer, it’s generally best for the resulting images to be stored in a floating-point image format. Doing so provides more flexibility in how the output can be used than if a traditional image format with 8-bit unsigned integer values is used; floating-point formats avoid the substantial loss of information that comes from quantizing images to 8 bits.
In order to display such images on modern display devices, it is necessary to map these floating-point pixel values to discrete values for display. For example, computer monitors generally expect the color of each pixel to be described by an RGB color triple, not an arbitrary spectral power distribution. Spectra described by general basis function coefficients must therefore be converted to an RGB representation before they can be displayed. A related problem is that displays have a substantially smaller range of displayable radiance values than the range present in many real-world scenes. Therefore, pixel values must be mapped to the displayable range in a way that causes the final displayed image to appear as close as possible to the way it would appear on an ideal display device without this limitation. These issues are addressed by research into tone mapping; the “Further Reading” section has more information about this topic.
7.9.1 The Film Class
Film is defined in the files core/film.h and core/film.cpp.
A number of values are passed to the constructor: the overall resolution of the image in pixels; a crop window that may specify a subset of the image to render; the length of the diagonal of the film's physical area, which is specified to the constructor in millimeters but is converted to meters here; a filter function; the filename for the output image; and parameters that control how the image pixel values are stored in files.
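A sketch of the constructor implied by that list follows, approximating the pbrt-3 source (some parameters added in later printings are omitted here):

```cpp
Film::Film(const Point2i &resolution, const Bounds2f &cropWindow,
           std::unique_ptr<Filter> filt, Float diagonal,
           const std::string &filename, Float scale)
    : fullResolution(resolution),
      diagonal(diagonal * .001),  // supplied in millimeters; stored in meters
      filter(std::move(filt)),
      filename(filename),
      scale(scale) {
    // Body: compute the cropped pixel bounds, allocate the Pixel array,
    // and precompute the filter weight table (all discussed below).
}
```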
In conjunction with the overall image resolution, the crop window gives the bounds of pixels that need to be actually stored and written out. Crop windows are useful for debugging or for breaking a large image into small pieces that can be rendered on different computers and reassembled later. The crop window is specified in NDC space, with each coordinate ranging from 0 to 1 (Figure 7.47).
Film::croppedPixelBounds stores the pixel bounds from the upper-left to the lower-right corners of the crop window. Fractional pixel coordinates are rounded up; this ensures that if an image is rendered in pieces with abutting crop windows, each final pixel will be present in only one of the subimages.
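In code, the rounding-up behavior comes from taking the ceiling of the product of the image resolution and the NDC crop coordinates, as in the pbrt-3 implementation:

```cpp
// Compute film image bounds: fractional pixel coordinates round up, so
// abutting crop windows cover each final pixel exactly once.
croppedPixelBounds =
    Bounds2i(Point2i(std::ceil(fullResolution.x * cropWindow.pMin.x),
                     std::ceil(fullResolution.y * cropWindow.pMin.y)),
             Point2i(std::ceil(fullResolution.x * cropWindow.pMax.x),
                     std::ceil(fullResolution.y * cropWindow.pMax.y)));
```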
Given the pixel resolution of the (possibly cropped) image, the constructor allocates an array of Pixel structures, one for each pixel. The running weighted sums of spectral pixel contributions are represented using XYZ colors (Section 5.2.1) and are stored in the xyz member variable. filterWeightSum holds the sum of filter weight values for the sample contributions to the pixel. splatXYZ holds an (unweighted) sum of sample splats. The pad member is unused; its sole purpose is to ensure that the Pixel structure is 32 bytes in size, rather than 28 as it would be otherwise (assuming 4-byte Floats; otherwise, it ensures a 64-byte structure). This padding ensures that a Pixel won’t straddle a cache line, so that no more than one cache miss will be incurred when a Pixel is accessed (as long as the first Pixel in the array is allocated at the start of a cache line).
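The Pixel structure that this description implies looks like the following (as in the pbrt-3 source):

```cpp
struct Pixel {
    Float xyz[3] = {0, 0, 0};   // running weighted sums of XYZ contributions
    Float filterWeightSum = 0;  // sum of filter weights for sample contributions
    AtomicFloat splatXYZ[3];    // unweighted sum of sample splats
    Float pad;                  // pads the struct to 32 (or 64) bytes
};
```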
Two natural alternatives to using XYZ colors to store pixel values would be to use Spectrum values or to store RGB color. Here, it isn’t worthwhile to store complete Spectrum values, even when doing full spectral rendering. Because the final colors written to the output file don’t include the full set of Spectrum samples, converting to a tristimulus value here doesn’t represent a loss of information versus storing Spectrums and converting to a tristimulus value on image output. Not storing complete Spectrum values in this case can save a substantial amount of memory if the Spectrum has a large number of samples. (If pbrt supported saving SampledSpectrum values to files, then this design choice would need to be revisited.)
We have chosen to use XYZ color rather than RGB to emphasize that XYZ is a display-independent representation of color, while RGB requires assuming a particular set of display response curves (Section 5.2.2). (In the end, we will, however, have to convert to RGB, since few image file formats store XYZ color.)
With typical filter settings, every image sample may contribute to 16 or more pixels in the final image. Particularly for simple scenes, where relatively little time is spent on ray intersection testing and shading computations, the time spent updating the image for each sample can be significant. Therefore, the Film precomputes a table of filter values so that we can avoid the expense of virtual function calls to the Filter::Evaluate() method as well as the expense of evaluating the filter and can instead use values from the table for filtering. The error introduced by not evaluating the filter at each sample’s precise location isn’t noticeable in practice.
The implementation here makes the reasonable assumption that the filter is defined such that $f(x, y) = f(|x|, |y|)$, so the table needs to hold values for only the positive quadrant of filter offsets. This assumption is true for all of the Filters currently available in pbrt and is true for most filters used in practice. This makes the table one-fourth the size and improves the coherence of memory accesses, leading to better cache performance.
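The table is filled by evaluating the filter at the center of each cell over the positive quadrant of offsets, roughly as follows (in pbrt-3, filterTableWidth is 16):

```cpp
// <<Precompute filter weight table>>: sample the filter at the center of
// each table cell, covering offsets from (0, 0) to filter->radius.
int offset = 0;
for (int y = 0; y < filterTableWidth; ++y) {
    for (int x = 0; x < filterTableWidth; ++x, ++offset) {
        Point2f p;
        p.x = (x + 0.5f) * filter->radius.x / filterTableWidth;
        p.y = (y + 0.5f) * filter->radius.y / filterTableWidth;
        filterTable[offset] = filter->Evaluate(p);
    }
}
```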
The Film implementation is responsible for determining the range of integer pixel values that the Sampler must generate samples for. The area to be sampled is returned by the GetSampleBounds() method. Because the pixel reconstruction filter generally spans a number of pixels, the Sampler must generate image samples a bit outside of the range of pixels that will actually be output. This way, even pixels at the boundary of the image will have an equal density of samples around them in all directions and won't be biased toward values from the interior of the image. This detail is also important when rendering images in pieces with crop windows, since it eliminates artifacts at the edges of the subimages.
Computing the sample bounds involves accounting for the half-pixel offsets when converting from discrete to continuous pixel coordinates, expanding by the filter radius, and then rounding outward.
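Putting those three steps together gives the following method, as in pbrt-3 (Floor() and Ceil() operate componentwise on points):

```cpp
Bounds2i Film::GetSampleBounds() const {
    Bounds2f floatBounds(
        // Shift to continuous coordinates, expand by the filter radius,
        // and round outward in both directions.
        Floor(Point2f(croppedPixelBounds.pMin) + Vector2f(0.5f, 0.5f) -
              filter->radius),
        Ceil(Point2f(croppedPixelBounds.pMax) - Vector2f(0.5f, 0.5f) +
             filter->radius));
    return (Bounds2i)floatBounds;
}
```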
GetPhysicalExtent() returns the actual extent of the film in the scene. This information is specifically needed by the RealisticCamera. Given the length of the film diagonal and the aspect ratio of the image, we can compute the size of the sensor in the $x$ and $y$ directions. If we denote the diagonal length by $d$ and the width and height of the film sensor by $x$ and $y$, then we know that $x^2 + y^2 = d^2$. We can define the aspect ratio of the image by $a = y/x$, so $y = ax$, which gives $x^2 + (ax)^2 = d^2$. Solving for $x$ gives
$$x = \sqrt{\frac{d^2}{1 + a^2}}.$$
The implementation of GetPhysicalExtent() follows directly. The returned extent is centered around $(0, 0)$.
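The code mirrors the derivation above:

```cpp
Bounds2f Film::GetPhysicalExtent() const {
    Float aspect = (Float)fullResolution.y / (Float)fullResolution.x;
    Float x = std::sqrt(diagonal * diagonal / (1 + aspect * aspect));
    Float y = aspect * x;
    // Return the physical extent, centered at (0, 0).
    return Bounds2f(Point2f(-x / 2, -y / 2), Point2f(x / 2, y / 2));
}
```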
7.9.2 Supplying Pixel Values to the Film
There are three ways that the sample contributions can be provided to the film. The first is driven by samples generated by the Sampler over tiles of the image. While the most straightforward interface would be to allow renderers to provide a film pixel location and a Spectrum with the contribution of the corresponding ray directly to the Film, it’s not easy to provide a high-performance implementation of such a method in the presence of multi-threading, where multiple threads may end up trying to update the same portion of the image concurrently.
Therefore, Film defines an interface where threads can specify that they’re generating samples in some extent of pixels with respect to the overall image. Given the sample bounds, GetFilmTile() in turn returns a pointer to a FilmTile object that stores contributions for the pixels in the corresponding region of the image. Ownership of the FilmTile and the data it stores is exclusive to the caller, so that thread can provide sample values to the FilmTile without worrying about contention with other threads. When it has finished work on the tile, the thread passes the completed tile back to the Film, which safely merges it into the final image.
Given a bounding box of the pixel area that samples will be generated in, there are two steps to compute the bounding box of image pixels that the sample values will contribute to. First, the effects of the discrete-to-continuous pixel coordinate transformation and the radius of the filter must be accounted for. Second, the resulting bound must be clipped to the overall image pixel bounds; pixels outside the image by definition don’t need to be accounted for.
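Those two steps correspond to the following sketch of GetFilmTile(), following pbrt-3 (the half-pixel shift handles the discrete-to-continuous conversion; a later-added constructor parameter is omitted):

```cpp
std::unique_ptr<FilmTile> Film::GetFilmTile(const Bounds2i &sampleBounds) {
    // Bound the image pixels that samples in sampleBounds contribute to.
    Vector2f halfPixel = Vector2f(0.5f, 0.5f);
    Bounds2f floatBounds = (Bounds2f)sampleBounds;
    Point2i p0 = (Point2i)Ceil(floatBounds.pMin - halfPixel - filter->radius);
    Point2i p1 = (Point2i)Floor(floatBounds.pMax - halfPixel + filter->radius) +
                 Point2i(1, 1);
    // Clip to the overall image pixel bounds.
    Bounds2i tilePixelBounds = Intersect(Bounds2i(p0, p1), croppedPixelBounds);
    return std::unique_ptr<FilmTile>(new FilmTile(
        tilePixelBounds, filter->radius, filterTable, filterTableWidth));
}
```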
The FilmTile constructor takes a 2D bounding box that gives the bounds of the pixels in the final image that it must provide storage for as well as additional information about the reconstruction filter being used, including a pointer to the filter function values tabulated in <<Precompute filter weight table>>.
For each pixel, both a sum of the weighted contributions from the pixel samples (according to the reconstruction filter weights) and a sum of the filter weights is maintained.
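These two sums are all the per-pixel state a tile needs:

```cpp
struct FilmTilePixel {
    Spectrum contribSum = 0.f;    // sum of filter-weighted sample contributions
    Float filterWeightSum = 0.f;  // sum of the filter weights themselves
};
```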
Once the radiance carried by a ray for a sample has been computed, the Integrator calls FilmTile::AddSample(). It takes a sample and corresponding radiance value as well as the weight for the sample’s contribution originally returned by Camera::GenerateRayDifferential(). It updates the stored image using the reconstruction filter with the pixel filtering equation.
To understand the operation of FilmTile::AddSample(), first recall the pixel filtering equation:
$$I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\, w(x_i, y_i)\, L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)},$$
where $L(x_i, y_i)$ is the radiance of the $i$th sample, located at $(x_i, y_i)$, $w(x_i, y_i)$ is the sample's camera weight, and $f$ is the filter function.
It computes each pixel’s value as the weighted sum of nearby samples’ radiance values, using both a filter function and the sample weight returned by the Camera to compute the contribution of the radiance value to the final pixel value. Because all of the Filters in pbrt have finite extent, this method starts by computing which pixels will be affected by the current sample. Then, turning the pixel filtering equation inside out, it updates two running sums for each pixel that is affected by the sample. One sum accumulates the numerator of the pixel filtering equation, and the other accumulates the denominator. At the end of rendering, the final pixel values are computed by performing the division.
To find which pixels a sample potentially contributes to, FilmTile::AddSample() converts the continuous sample coordinates to discrete coordinates by subtracting $0.5$ from $x$ and $y$. It then offsets this value by the filter radius in each direction (Figure 7.48), transforms it to the tile coordinate space, and takes the ceiling of the minimum coordinates and the floor of the maximum, since pixels outside the bound of the extent are unaffected by the sample. Finally, the pixel bounds are clipped to the bounds of the pixels in the tile. While the sample may theoretically contribute to pixels outside the tile, any such pixels must be outside the image extent.
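In code, these steps look like the following (after pbrt-3's FilmTile::AddSample(); pFilm is the continuous sample position and L the sample's radiance):

```cpp
// Compute the sample's raster bounds.
Point2f pFilmDiscrete = pFilm - Vector2f(0.5f, 0.5f);
Point2i p0 = (Point2i)Ceil(pFilmDiscrete - filterRadius);
Point2i p1 = (Point2i)Floor(pFilmDiscrete + filterRadius) + Point2i(1, 1);
// Clip to the pixels this tile is responsible for.
p0 = Max(p0, pixelBounds.pMin);
p1 = Min(p1, pixelBounds.pMax);
```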
Given the bounds of pixels that are affected by this sample, it’s now possible to loop over all of those pixels and accumulate the filtered sample weights at each of them.
Each discrete integer pixel has an instance of the filter function centered around it. To compute the filter weight for a particular sample, it's necessary to find the offset from the pixel to the sample's position in discrete coordinates and evaluate the filter function. If we were evaluating the filter explicitly, the appropriate computation for the pixel at $(x, y)$ would be

    filterWeight = filter->Evaluate(Point2i(x - pFilmDiscrete.x,
                                            y - pFilmDiscrete.y));
Instead, the implementation retrieves the appropriate filter weight from the table.
To find the filter weight for a pixel $(x', y')$ given the sample position $(x, y)$, this routine computes the offset $(x' - x, y' - y)$ and converts it into coordinates for the filter weights lookup table. This can be done directly by dividing each component of the sample offset by the filter radius in that direction, giving a value between 0 and 1, and then multiplying by the table size. This process can be further optimized by noting that along each row of pixels in the $x$ direction, the difference in $x$, and thus the $x$ offset into the filter table, is constant. Analogously, the $y$ offset is constant for each column of pixels. Therefore, before looping over the pixels here it's possible to precompute these indices and store them in two 1D arrays, saving repeated work in the loop.
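The precomputation stack-allocates the two index arrays, as in pbrt-3 (ALLOCA is pbrt's wrapper around alloca(), and filterTableSize is the width of the tabulated filter):

```cpp
// Precompute x and y filter table offsets for the affected pixel range.
int *ifx = ALLOCA(int, p1.x - p0.x);
for (int x = p0.x; x < p1.x; ++x) {
    Float fx = std::abs((x - pFilmDiscrete.x) * invFilterRadius.x *
                        filterTableSize);
    ifx[x - p0.x] = std::min((int)std::floor(fx), filterTableSize - 1);
}
int *ify = ALLOCA(int, p1.y - p0.y);
for (int y = p0.y; y < p1.y; ++y) {
    Float fy = std::abs((y - pFilmDiscrete.y) * invFilterRadius.y *
                        filterTableSize);
    ify[y - p0.y] = std::min((int)std::floor(fy), filterTableSize - 1);
}
```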
Now at each pixel, the precomputed $x$ and $y$ offsets into the filter table can be looked up, leading to the offset into the array and thus the filter value.
For each affected pixel, we can now add its weighted spectral contribution and the filter weight to the appropriate value in the pixels array.
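The loop that performs both the table lookup and the updates of the two running sums, following pbrt-3:

```cpp
// Loop over the filter support and add the sample to each affected pixel.
for (int y = p0.y; y < p1.y; ++y) {
    for (int x = p0.x; x < p1.x; ++x) {
        // Look up the tabulated filter value for the (x, y) pixel.
        int offset = ify[y - p0.y] * filterTableSize + ifx[x - p0.x];
        Float filterWeight = filterTable[offset];

        // Update the pixel's running sums with the filtered contribution.
        FilmTilePixel &pixel = GetPixel(Point2i(x, y));
        pixel.contribSum += L * sampleWeight * filterWeight;
        pixel.filterWeightSum += filterWeight;
    }
}
```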
The GetPixel() method takes pixel coordinates with respect to the overall image and converts them to coordinates in the film tile before indexing into the pixels array. In addition to the version here, there is also a const variant of the method that returns a const FilmTilePixel &.
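The index computation is a standard row-major mapping from tile-relative coordinates:

```cpp
FilmTilePixel &GetPixel(const Point2i &p) {
    int width = pixelBounds.pMax.x - pixelBounds.pMin.x;
    int offset = (p.x - pixelBounds.pMin.x) +
                 (p.y - pixelBounds.pMin.y) * width;
    return pixels[offset];
}
```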
Rendering threads present FilmTiles to be merged into the image stored by Film using the MergeFilmTile() method. Its implementation starts by acquiring a lock to a mutex in order to ensure that multiple threads aren’t simultaneously modifying image pixel values. Note that because MergeFilmTile() takes a std::unique_ptr to the tile, ownership of the tile’s memory is transferred when this method is called. Calling code should therefore no longer attempt to add contributions to a tile after calling this method. Storage for the FilmTile is freed automatically at the end of the execution of MergeFilmTile() when the tile parameter goes out of scope.
When merging a tile’s contributions in the final image, it’s necessary for calling code to be able to find the bound of pixels that the tile has contributions for.
For each pixel in the tile, it’s just necessary to merge its contribution into the values stored in Film::pixels.
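A sketch of the complete method, following pbrt-3 (Spectrum::ToXYZ() converts a spectral value to XYZ coefficients):

```cpp
void Film::MergeFilmTile(std::unique_ptr<FilmTile> tile) {
    // Hold the mutex for the duration of the merge.
    std::lock_guard<std::mutex> lock(mutex);
    for (Point2i pixel : tile->GetPixelBounds()) {
        // Merge this tile pixel into Film::pixels.
        const FilmTilePixel &tilePixel = tile->GetPixel(pixel);
        Pixel &mergePixel = GetPixel(pixel);
        Float xyz[3];
        tilePixel.contribSum.ToXYZ(xyz);
        for (int i = 0; i < 3; ++i)
            mergePixel.xyz[i] += xyz[i];
        mergePixel.filterWeightSum += tilePixel.filterWeightSum;
    }
}
```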
It's also useful for some Integrator implementations to be able to provide values for all of the image's pixels at once. The SetImage() method allows this mode of operation. Note that the number of elements in the array pointed to by the image parameter should be equal to croppedPixelBounds.Area(). The implementation of SetImage() is a straightforward matter of copying the given values after converting them to XYZ color.
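A sketch of SetImage(); setting each filter weight sum to 1 means the supplied values pass through the final division in WriteImage() unchanged:

```cpp
void Film::SetImage(const Spectrum *img) {
    int nPixels = croppedPixelBounds.Area();
    for (int i = 0; i < nPixels; ++i) {
        Pixel &p = pixels[i];
        img[i].ToXYZ(p.xyz);    // convert the provided value to XYZ
        p.filterWeightSum = 1;  // so the final division is a no-op
        p.splatXYZ[0] = p.splatXYZ[1] = p.splatXYZ[2] = 0;
    }
}
```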
Some light transport algorithms (notably bidirectional path tracing, which is introduced in Section 16.3) require the ability to “splat” contributions to arbitrary pixels. Rather than computing the final pixel value as a weighted average of contributing splats, splats are simply summed. Generally, the more splats that are around a given pixel, the brighter the pixel will be. The Pixel::splatXYZ member variable is declared to be of AtomicFloat type, which allows multiple threads to concurrently update pixel values via the AddSplat() method without additional synchronization.
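Because splatXYZ is atomic, AddSplat() needs no lock; a sketch following pbrt-3:

```cpp
void Film::AddSplat(const Point2f &p, const Spectrum &v) {
    // Ignore splats that land outside the image.
    if (!InsideExclusive((Point2i)p, croppedPixelBounds)) return;
    Float xyz[3];
    v.ToXYZ(xyz);
    Pixel &pixel = GetPixel((Point2i)p);
    // Atomic adds allow concurrent splatting without a mutex.
    for (int i = 0; i < 3; ++i) pixel.splatXYZ[i].Add(xyz[i]);
}
```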
7.9.3 Image Output
After the main rendering loop exits, the Integrator’s Render() method generally calls the Film::WriteImage() method, which directs the film to do the processing necessary to generate the final image and store it in a file. This method takes a scale factor that is applied to the samples provided to the AddSplat() method. (See the end of Section 16.4.5 for further discussion of this scale factor’s use with the MLTIntegrator.)
This method starts by allocating an array to store the final RGB pixel values. It then loops over all of the pixels in the image to fill in this array.
Given information about the response characteristics of the display device being used, the pixel values can be converted to device-dependent RGB values from the device-independent XYZ tristimulus values. This conversion is another change of spectral basis, where the new basis is determined by the spectral response curves of the red, green, and blue elements of the display device. Here, weights to convert from XYZ to the device RGB based on the sRGB primaries are used; sRGB is a standardized color space that is supported by virtually all 2015-era displays and printers.
As the RGB output values are being initialized, their final values from the pixel filtering equation are computed by dividing each pixel sample value by Pixel::filterWeightSum. This conversion can lead to RGB values where some components are negative; these are out-of-gamut colors that can't be represented with the chosen display primaries. Various approaches have been suggested to deal with this issue, ranging from clamping to 0 or offsetting all components to lie within the gamut, to performing a global optimization based on all of the pixels in the image. Reconstructed pixels may also end up with negative values due to negative lobes in the reconstruction filter function. Color components are clamped to 0 here to handle both of these cases.
It’s also necessary to add in the contributions of splatted values for this pixel to the final value.
The final pixel value is scaled by a user-supplied factor (or by 1, if none was specified); this can be useful when writing images to 8-bit integer image formats to make the most of the limited dynamic range.
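The per-pixel work of Film::WriteImage() combines all of these steps. The following sketch follows pbrt-3, where p is the current pixel in a loop over croppedPixelBounds, offset is its linear index into the rgb output array, and XYZToRGB() applies the sRGB conversion weights:

```cpp
// Convert the pixel's XYZ color to (possibly out-of-gamut) sRGB values.
Pixel &pixel = GetPixel(p);
XYZToRGB(pixel.xyz, &rgb[3 * offset]);

// Normalize by the filter weight sum; clamp negative components to 0.
Float filterWeightSum = pixel.filterWeightSum;
if (filterWeightSum != 0) {
    Float invWt = (Float)1 / filterWeightSum;
    rgb[3 * offset    ] = std::max((Float)0, rgb[3 * offset    ] * invWt);
    rgb[3 * offset + 1] = std::max((Float)0, rgb[3 * offset + 1] * invWt);
    rgb[3 * offset + 2] = std::max((Float)0, rgb[3 * offset + 2] * invWt);
}

// Add the splatted contributions, scaled by splatScale.
Float splatRGB[3];
Float splatXYZ[3] = {pixel.splatXYZ[0], pixel.splatXYZ[1],
                     pixel.splatXYZ[2]};
XYZToRGB(splatXYZ, splatRGB);
rgb[3 * offset    ] += splatScale * splatRGB[0];
rgb[3 * offset + 1] += splatScale * splatRGB[1];
rgb[3 * offset + 2] += splatScale * splatRGB[2];

// Apply the user-supplied scale factor.
rgb[3 * offset    ] *= scale;
rgb[3 * offset + 1] *= scale;
rgb[3 * offset + 2] *= scale;
```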
The WriteImage() function, defined in Section A.2, handles the details of writing the image to a file. If writing to an 8-bit integer format, it applies gamma correction to the floating-point pixel values according to the sRGB standard before converting them to integers. (See the “Further Reading” section at the end of Chapter 10 for more information about gamma correction.)