## 7.8 Image Reconstruction

Given carefully chosen image samples, we need to convert the samples and their computed radiance values into pixel values for display or storage. According to signal processing theory, we need to do three things to compute final values for each of the pixels in the output image:

- Reconstruct a continuous image function from the set of image samples.
- Prefilter the function to remove any frequencies past the Nyquist limit for the pixel spacing.
- Sample at the pixel locations to compute the final pixel values.

Because we know that we will be resampling the function at only the pixel locations, it’s not necessary to construct an explicit representation of the function. Instead, we can combine the first two steps using a single filter function.

Recall that if the original function had been uniformly sampled at a frequency greater than the Nyquist frequency and reconstructed with the sinc filter, then the reconstructed function in the first step would match the original image function perfectly—quite a feat since we only have point samples. But because the image function almost always will have higher frequencies than could be accounted for by the sampling rate (due to edges, etc.), we chose to sample it nonuniformly, trading off noise for aliasing.

The theory behind ideal reconstruction depends on the samples being
uniformly spaced. While a number of attempts have been made to extend the
theory to nonuniform sampling, there is not yet an accepted approach to
this problem. Furthermore, because the sampling rate is known to be
insufficient to capture the function, perfect reconstruction isn’t
possible. Recent research in the field of sampling theory has revisited the
issue of reconstruction with the explicit acknowledgment that perfect
reconstruction is not generally attainable in practice.
This slight shift in perspective has led to
powerful new reconstruction techniques.
See, for example, Unser (2000) for a survey of these
developments. In particular, the goal of research
in reconstruction theory has shifted from perfect reconstruction to
developing reconstruction techniques that can be shown to minimize error
between the reconstructed function and the original function,
*regardless of whether the original was band limited*.

While the reconstruction techniques used in `pbrt` are not directly built on
these new approaches, they serve to explain the experience of practitioners
that applying perfect reconstruction techniques to samples taken for image
synthesis generally does not result in the highest quality images.

*Figure 7.38: To compute a filtered value for the pixel marked with a filled circle, all of the image samples inside the box given by the filter extents `radius.x` and `radius.y` need to be considered. Each of the image samples $(x_i, y_i)$, denoted by open circles, is weighted by a 2D filter function $f(x - x_i, y - y_i)$. The weighted average of all samples is the final pixel value.*

To reconstruct pixel values, we will consider the problem of interpolating the samples near a particular pixel. To compute a final value for a pixel $(x, y)$, interpolation results in computing a weighted average

$$I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\, w(x_i, y_i)\, L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)}, \tag{7.12}$$

where

- $L(x_i, y_i)$ is the radiance value of the $i$th sample located at $(x_i, y_i)$
- $w(x_i, y_i)$ is the sample contribution weight returned by the `Camera`. As described in Sections 6.4.7 and 13.6.6, the manner in which these weights are computed determines which radiometric quantity the film measures.
- $f$ is the filter function.

Figure 7.38 shows a pixel at location $(x, y)$ that has a
pixel filter with extent `radius.x` in the $x$ direction and
`radius.y` in the $y$ direction. All of the samples inside the box
given by the filter extent may contribute to the pixel’s value, depending
on the filter function’s value for $f(x - x_i, y - y_i)$.
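
As a concrete illustration, the weighted average of Equation (7.12) can be sketched in a few lines of standalone C++. The `Sample` struct and `FilteredPixelValue` function here are purely illustrative; they are not part of pbrt's actual `Film` interface:

```cpp
#include <cmath>
#include <vector>

// Illustrative standalone version of Equation (7.12). Each sample
// contributes its radiance L, scaled by the camera weight w and by the
// filter evaluated at the sample's offset from the pixel position.
struct Sample { float x, y, L, w; };

float FilteredPixelValue(float px, float py,
                         const std::vector<Sample> &samples,
                         float (*filter)(float, float)) {
    float weightedSum = 0, filterSum = 0;
    for (const Sample &s : samples) {
        float f = filter(px - s.x, py - s.y);
        weightedSum += f * s.w * s.L;
        filterSum += f;
    }
    // Dividing by the sum of filter weights means the filter need not
    // integrate to one over the pixel's footprint.
    return filterSum != 0 ? weightedSum / filterSum : 0;
}

// A box filter with radius 0.5 in each direction, for demonstration.
float BoxHalf(float dx, float dy) {
    return (std::abs(dx) <= 0.5f && std::abs(dy) <= 0.5f) ? 1.f : 0.f;
}
```

With this box filter, every sample inside the pixel's footprint receives equal weight, so the result reduces to the camera-weighted average of those samples.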

The sinc filter is not an appropriate choice here: recall that the ideal
sinc filter is prone to ringing when the underlying function has
frequencies beyond the Nyquist limit (Gibbs phenomenon), meaning edges in
the image have faint replicated copies of the edge in nearby pixels.
Furthermore, the sinc filter has *infinite support*: it doesn’t fall
off to zero at a finite distance from its center, so all of the image
samples would need to be filtered for each output pixel. In practice,
there is no single best filter function. Choosing the best one for a
particular scene takes a mixture of quantitative evaluation and qualitative
judgment.

### 7.8.1 Filter Functions

All filter implementations in `pbrt` are derived from an abstract
`Filter` class, which provides the interface for the functions
used in filtering; see Equation (7.12). The `Film`
class (described in Section 7.9) stores a pointer
to a `Filter` and uses it to filter image sample contributions when
accumulating them into the final image.
(Figure 7.39 shows comparisons of zoomed-in regions
of images rendered using a variety of the filters from this section to
reconstruct pixel values.) The `Filter` base class is defined in the
files `core/filter.h` and
`core/filter.cpp`.

All filters are centered at the origin and define a radius beyond
which they have a value of 0; this radius may be different in the $x$ and $y$
directions. The constructor takes the radius values and stores them
along with their reciprocals, for use by the filter implementations. The
filter’s overall extent in each direction (its *support*) is twice the
value of its corresponding radius (Figure 7.40).

The sole method that `Filter` implementations need to provide is
`Evaluate()`. It takes as a parameter a 2D point that gives the
position of the sample point relative to the center of the filter.
The filter’s value at that point is returned. Code
elsewhere in the system will never call the filter function with points
outside of the filter’s extent, so filter implementations don’t need to
check for this case.
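
A minimal self-contained sketch of this interface might look as follows; `Vec2` is a simplified stand-in for pbrt's vector and point types:

```cpp
// Minimal sketch of the Filter interface. Vec2 stands in for pbrt's
// Vector2f and Point2f to keep the example self-contained.
struct Vec2 { float x, y; };

class Filter {
public:
    // The radius may differ in x and y; the filter is zero beyond it.
    Filter(Vec2 radius)
        : radius(radius), invRadius{1 / radius.x, 1 / radius.y} {}
    virtual ~Filter() {}
    // p is the sample's position relative to the filter's center.
    // Callers guarantee that p lies inside the filter's extent, so
    // implementations need not check for that case.
    virtual float Evaluate(Vec2 p) const = 0;

    const Vec2 radius, invRadius;
};
```

Concrete filters derive from this class and supply only `Evaluate()`; the stored reciprocals let implementations rescale sample offsets without repeated divisions.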

#### Box Filter

One of the most commonly used filters in graphics is the *box filter* (and,
in fact, when filtering and reconstruction aren’t addressed explicitly, the box
filter is the *de facto* result). The box filter equally weights all
samples within a square region of the image. Although computationally efficient,
it’s just about the worst filter possible. Recall from the discussion in
Section 7.1.2 that the box filter allows high-frequency
sample data to leak into the reconstructed values. This causes
postaliasing—even if the original sample values were at a high enough
frequency to avoid aliasing, errors are introduced by poor filtering.

Figure 7.41(a) shows a graph of the box filter, and Figure 7.42 shows the result of using the box filter to reconstruct two 1D functions.

For the step function we used previously to illustrate the Gibbs phenomenon, the box does reasonably well. However, the results are much worse for a sinusoidal function that has increasing frequency along the $x$ axis. Not only does the box filter do a poor job of reconstructing the function when the frequency is low, giving a discontinuous result even though the original function was smooth, but it also does an extremely poor job of reconstruction as the function’s frequency approaches and passes the Nyquist limit.

Because the evaluation function won’t be called with values outside of the filter’s extent, it can always return 1 for the filter function’s value.
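
A sketch of that evaluation (function name illustrative):

```cpp
// Sketch of box-filter evaluation. Since callers never evaluate a
// filter outside its extent, the box filter can unconditionally
// return a constant weight.
float BoxFilterEvaluate(float x, float y) {
    return 1.f;  // every sample within the extent is weighted equally
}
```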

#### Triangle Filter

The triangle filter gives slightly better results than the box: the weight falls off linearly from the filter center over the square extent of the filter. See Figure 7.41(b) for a graph of the triangle filter.

Evaluating the triangle filter is simple: the implementation just computes a linear function based on the width of the filter in both the $x$ and $y$ directions.
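
A standalone sketch of triangle-filter evaluation, with the radius passed explicitly rather than stored on a filter object (names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// The weight falls off linearly from the center to zero at the
// filter's radius, independently in each direction.
float TriangleFilterEvaluate(float x, float y,
                             float radiusX, float radiusY) {
    return std::max(0.f, radiusX - std::abs(x)) *
           std::max(0.f, radiusY - std::abs(y));
}
```

Note that the peak value at the center is `radiusX * radiusY` rather than 1; the absolute scale is irrelevant because Equation (7.12) divides by the sum of filter weights.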

#### Gaussian Filter

Unlike the box and triangle filters, the Gaussian filter gives a reasonably good result in practice. This filter applies a Gaussian bump that is centered at the pixel and radially symmetric around it. The Gaussian’s value at the end of its extent is subtracted from the filter value, in order to make the filter go to 0 at its limit (Figure 7.43). The Gaussian does tend to cause slight blurring of the final image compared to some of the other filters, but this blurring can actually help mask any remaining aliasing in the image.

The 1D Gaussian filter function of radius $r$ is

$$f(x) = e^{-\alpha x^2} - e^{-\alpha r^2},$$

where $\alpha$ controls the rate of falloff of the filter. Smaller values cause a slower falloff, giving a blurrier image. The second term here ensures that the Gaussian goes to 0 at the end of its extent rather than having an abrupt cliff. For efficiency, the constructor precomputes the constant term $e^{-\alpha r^2}$ in each direction.

Since a 2D Gaussian function is separable into the product of two 1D
Gaussians, the implementation calls the `Gaussian()` function twice
and multiplies the results.
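
A standalone sketch of this evaluation follows; for simplicity it recomputes the $e^{-\alpha r^2}$ terms on each call rather than precomputing them in a constructor as pbrt does:

```cpp
#include <algorithm>
#include <cmath>

// 1D Gaussian, offset so that it reaches exactly zero at the filter's
// radius. alpha controls the falloff rate; expAtRadius is the
// Gaussian's value e^{-alpha r^2} at the radius.
float Gaussian1D(float d, float alpha, float expAtRadius) {
    return std::max(0.f, std::exp(-alpha * d * d) - expAtRadius);
}

// The 2D Gaussian is separable: the product of two 1D Gaussians.
float GaussianFilterEvaluate(float x, float y, float alpha,
                             float radiusX, float radiusY) {
    return Gaussian1D(x, alpha, std::exp(-alpha * radiusX * radiusX)) *
           Gaussian1D(y, alpha, std::exp(-alpha * radiusY * radiusY));
}
```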

#### Mitchell Filter

Filter design is notoriously difficult, mixing mathematical analysis and
perceptual experiments. Mitchell and Netravali (1988) have developed a family of
parameterized filter functions in order to be able to explore this space in
a systematic manner. After analyzing test subjects’
subjective responses to images filtered with a variety of parameter values,
they developed a filter that tends to do a good job of trading off between
*ringing* (phantom edges next to actual edges in the image) and
*blurring* (excessively blurred results)—two common artifacts from poor
reconstruction filters.

Note from the graph in
Figure 7.43(b) that this filter function takes
on negative values out by its edges; it has *negative lobes*. In
practice these negative regions improve the sharpness of edges, giving
crisper images (reduced blurring). If they become too large, however,
ringing tends to start to enter the image.
Also, because the final pixel values can therefore become negative, they will eventually
need to be clamped to a legal output range.

Figure 7.44 shows this filter reconstructing the two test functions. It does extremely well with both of them: there is minimal ringing with the step function, and it does a very good job with the sinusoidal function, up until the point where the sampling rate isn’t sufficient to capture the function’s detail.

The Mitchell filter has two parameters called $B$ and $C$. Although any values can be used for these parameters, Mitchell and Netravali recommend that they lie along the line $B + 2C = 1$.

The Mitchell-Netravali filter is the product of 1D filter
functions in the $x$ and $y$ directions and is therefore separable, like
the Gaussian filter. (In fact, all of the provided filters in `pbrt` are
separable.) Nevertheless, the `Filter::Evaluate()` interface does not
enforce this requirement, giving more flexibility in implementing new
filters in the future.

The 1D function used in the Mitchell filter is an even function defined over the range $[-2, 2]$. This function is made by joining a cubic polynomial defined over $[0, 1]$ with another cubic polynomial defined over $[1, 2]$. This combined polynomial is also reflected around the $x = 0$ plane to give the complete function. These polynomials are controlled by the $B$ and $C$ parameters and are chosen carefully to guarantee $C^0$ and $C^1$ continuity at $x = 0$, $x = 1$, and $x = 2$. The polynomials are

$$f(x) = \frac{1}{6}\begin{cases} (12 - 9B - 6C)\,|x|^3 + (-18 + 12B + 6C)\,|x|^2 + (6 - 2B) & |x| < 1 \\ (-B - 6C)\,|x|^3 + (6B + 30C)\,|x|^2 + (-12B - 48C)\,|x| + (8B + 24C) & 1 \le |x| < 2 \\ 0 & \text{otherwise.} \end{cases}$$
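
A direct transcription of these polynomials as code (a standalone sketch; pbrt's actual implementation first scales the sample offset by the inverse filter radius so that the filter's support maps onto $[-2, 2]$):

```cpp
#include <cmath>

// 1D Mitchell-Netravali filter for x in [-2, 2]. B = C = 1/3 is the
// parameter setting the authors recommend as a good default.
float Mitchell1D(float x, float B, float C) {
    x = std::abs(x);
    if (x < 1)
        return ((12 - 9 * B - 6 * C) * x * x * x +
                (-18 + 12 * B + 6 * C) * x * x +
                (6 - 2 * B)) * (1.f / 6.f);
    else if (x < 2)
        return ((-B - 6 * C) * x * x * x +
                (6 * B + 30 * C) * x * x +
                (-12 * B - 48 * C) * x +
                (8 * B + 24 * C)) * (1.f / 6.f);
    return 0;  // zero outside the filter's support
}
```

In a 2D `Evaluate()`, this function would be called once per axis and the results multiplied, exploiting separability.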

#### Windowed Sinc Filter

Finally, the `LanczosSincFilter` class implements a filter
based on the sinc function. In practice, the sinc filter is often
multiplied by another function that goes to 0 after some distance. This
gives a filter function with finite extent, which is necessary for an
implementation with reasonable performance. An additional parameter $\tau$
controls how many cycles the sinc function passes through before it
is clamped to a value of 0.
Figure 7.45 shows a graph of three cycles of
the sinc function, along with a graph of the windowing function we
use, which was developed by Lanczos. The Lanczos window is
just the central lobe of the sinc function, scaled to cover the $\tau$
cycles:

$$w(x) = \frac{\sin(\pi x / \tau)}{\pi x / \tau}.$$

Figure 7.45 also shows the filter that we will implement here, which is the product of the sinc function and the windowing function.


Figure 7.46 shows the windowed sinc’s reconstruction results for uniform 1D samples. Thanks to the windowing, the reconstructed step function exhibits far less ringing than the reconstruction using the infinite-extent sinc function (compare to Figure 7.11). The windowed sinc filter also does extremely well at reconstructing the sinusoidal function until prealiasing begins.

The implementation computes the value of the sinc function and then multiplies it by the value of the Lanczos windowing function.
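
A standalone sketch of that computation (function names illustrative):

```cpp
#include <cmath>

const float Pi = 3.14159265358979f;

// sinc(x) = sin(pi x) / (pi x), with the removable singularity at
// x = 0 handled explicitly.
float Sinc(float x) {
    x = std::abs(x);
    if (x < 1e-5f) return 1.f;
    return std::sin(Pi * x) / (Pi * x);
}

// Windowed sinc: the sinc function multiplied by the Lanczos window,
// which is itself the sinc's central lobe stretched over tau cycles.
// Beyond tau, the filter is zero.
float WindowedSinc(float x, float tau) {
    x = std::abs(x);
    if (x > tau) return 0.f;
    return Sinc(x) * Sinc(x / tau);
}
```

Because the sinc is zero at every nonzero integer, the windowed filter still interpolates uniformly spaced samples exactly at the sample positions while falling to zero at distance $\tau$.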