8.5 Stratified Sampler

The IndependentSampler’s weakness is that it makes no effort to ensure that its sample points have good coverage of the sampling domain. All the subsequent Samplers in this chapter are based on various ways of ensuring that. As we saw in Section 2.2.1, stratification is one such approach. The StratifiedSampler applies this technique, subdividing the $[0, 1)^d$ sampling domain into regions and generating a single sample inside each one. Because a sample is taken in each region, it is less likely that important features in the integrand will be missed, since the samples are guaranteed not to all be close together.

The StratifiedSampler places each sample at a random point inside each stratum by jittering the center point of the stratum by a uniform random amount so that all points inside the stratum are sampled with equal probability. The nonuniformity that results from this jittering helps turn aliasing into noise, as discussed in Section 8.1.6. The sampler also offers an unjittered mode, which gives uniform sampling in the strata; this mode is mostly useful for comparisons between different sampling techniques rather than for rendering high-quality images.
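
The core of both modes can be illustrated with a short standalone sketch (not pbrt code; the function name and the use of the C++ standard library RNG are illustrative assumptions) that places one 1D sample in each of nStrata equal-width strata:

#include <random>
#include <vector>

// Generate one sample per stratum over [0, 1). If jitter is false, each
// sample sits at the center of its stratum; otherwise it is offset by a
// uniform random amount within the stratum.
std::vector<double> Stratified1D(int nStrata, bool jitter, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0., 1.);
    std::vector<double> samples(nStrata);
    for (int i = 0; i < nStrata; ++i) {
        double delta = jitter ? u(rng) : 0.5;
        samples[i] = (i + delta) / nStrata;
    }
    return samples;
}

Passing false for jitter gives the uniform mode; passing true gives the jittered samples used for rendering.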

Direct application of stratification to high-dimensional sampling quickly leads to an intractable number of samples. For example, if we divided the 5D image, lens, and time sample space into four strata in each dimension, the total number of samples per pixel would be $4^5 = 1024$. We could reduce this impact by taking fewer samples in some dimensions (or not stratifying some dimensions, effectively using a single stratum), but we would then lose the benefit of having well-stratified samples in those dimensions. This problem with stratification is known as the curse of dimensionality.

We can reap most of the benefits of stratification without paying the price in excessive total sampling by computing lower-dimensional stratified patterns for subsets of the domain’s dimensions and then randomly associating samples from each set of dimensions. (This process is sometimes called padding.) Figure 8.22 shows the basic idea: we might want to take just four samples per pixel but still require the samples to be stratified over all dimensions. We independently generate four 2D stratified image samples, four 1D stratified time samples, and four 2D stratified lens samples. Then we randomly associate a time and lens sample value with each image sample. The result is that each pixel has samples that together have good coverage of the sample space.
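
To make the padding idea concrete, here is a standalone sketch (not pbrt’s implementation; the struct and function names are invented for illustration) that generates the independent stratified sample sets of Figure 8.22 and randomly associates them by index:

#include <algorithm>
#include <random>
#include <utility>
#include <vector>

struct PaddedSample {
    double imageX, imageY;  // 2D stratified image sample
    double time;            // 1D stratified time sample
    double lensU, lensV;    // 2D stratified lens sample
};

// Build n*n padded samples: each group of dimensions is stratified on its
// own, and the groups are then associated with one another in random order.
std::vector<PaddedSample> PadSamples(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0., 1.);
    int count = n * n;
    std::vector<PaddedSample> samples(count);
    // 2D stratified image samples, one per n x n stratum.
    for (int y = 0, i = 0; y < n; ++y)
        for (int x = 0; x < n; ++x, ++i) {
            samples[i].imageX = (x + u(rng)) / n;
            samples[i].imageY = (y + u(rng)) / n;
        }
    // 1D stratified time samples and 2D stratified lens samples.
    std::vector<double> times(count);
    std::vector<std::pair<double, double>> lens(count);
    for (int i = 0; i < count; ++i)
        times[i] = (i + u(rng)) / count;
    for (int y = 0, i = 0; y < n; ++y)
        for (int x = 0; x < n; ++x, ++i)
            lens[i] = {(x + u(rng)) / n, (y + u(rng)) / n};
    // Random association: shuffle the time and lens samples independently so
    // that their strata are not correlated with the image sample's stratum.
    std::shuffle(times.begin(), times.end(), rng);
    std::shuffle(lens.begin(), lens.end(), rng);
    for (int i = 0; i < count; ++i) {
        samples[i].time = times[i];
        samples[i].lensU = lens[i].first;
        samples[i].lensV = lens[i].second;
    }
    return samples;
}

The key step is the pair of std::shuffle calls: the image samples keep their order while the time and lens samples are randomly permuted, so each pixel sample receives a well-stratified value in every dimension without correlation between the groups.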

Figure 8.22: We can generate a good sample pattern that reaps the benefits of stratification without requiring all the sampling dimensions to be stratified simultaneously. Here, we have split $(x, y)$ image position, time $t$, and $(u, v)$ lens position into independent strata with four regions each. Each is sampled independently, and then a time sample and a lens sample are randomly associated with each image sample. We retain the benefits of stratification in each stratification domain without having to exponentially increase the total number of samples.

Rendering a scene without complex lighting but including defocus blur due to a finite aperture is useful for understanding the behavior of sampling patterns. This is a case where the integral is over four dimensions—more than just the two of the image plane, but not the full high-dimensional integral when complex light transport is sampled. Figure 8.23 shows the improvement in image quality from using stratified lens and image samples versus using unstratified independent samples when rendering such a scene.

Figure 8.23: Effect of Sampling Patterns in Rendering a Purple Sphere with Defocus Blur. (a) A high-quality reference image of a blurry sphere. (b) An image generated with independent random sampling without stratification. (c) An image generated with the same number of samples, but with the StratifiedSampler, which stratifies both the image and, more importantly for this image, the lens samples. Stratification gives a substantial improvement and a 3× reduction in mean squared error.

Figure 8.24 shows a comparison of a few sampling patterns. The first is an independent uniform random pattern generated by the IndependentSampler. The result is terrible; some regions have few samples and other areas have clumps of many samples. The second is an unjittered stratified pattern. In the last, the stratified pattern has been jittered, with a random offset added to each sample’s location that keeps it inside its cell. This gives a better overall distribution than the purely random pattern while preserving the benefits of stratification, though there are still some clumps of samples and some regions that are undersampled.

Figure 8.24: Three 2D Sampling Patterns. (a) The independent uniform pattern is an ineffective pattern, with many clumps of samples that leave large sections of the image poorly sampled. (b) An unjittered pattern is better distributed but can exacerbate aliasing artifacts. (c) A stratified jittered pattern turns aliasing from the unjittered pattern into high-frequency noise while generally maintaining the benefits of stratification. (See Figure 8.26 for a danger of jittering, however.)

Figure 8.25: Comparison of Image Sampling Methods with a Checkerboard Texture. This is a difficult image to render well, since the checkerboard’s frequency with respect to the pixel spacing tends toward infinity as we approach the horizon. (a) A reference image, rendered with 256 samples per pixel, showing something close to an ideal result. (b) An image rendered with one sample per pixel, with no jittering. Note the jaggy artifacts at the edges of checks in the foreground. Notice also the artifacts in the distance where the checker function goes through many cycles between samples; as expected from the signal processing theory presented earlier, that detail reappears incorrectly as lower-frequency aliasing. (c) The result of jittering the image samples, still with just one sample per pixel. The regular aliasing of the second image has been replaced by less objectionable noise artifacts. (d) The result of four jittered samples per pixel is still inferior to the reference image but is substantially better than the previous result.

Figure 8.26: A Worst-Case Situation for Stratified Sampling. In an $n \times n$ 2D pattern, up to $2n$ of the points may project to essentially the same point on one of the axes. When “unlucky” patterns like this are generated, the quality of the results computed with them usually suffers. (Here, 8 of the samples have nearly the same $x$ value.)

Figure 8.25 shows images rendered using the StratifiedSampler and shows how jittered sample positions turn aliasing artifacts into less objectionable noise.

<<StratifiedSampler Definition>>= 
class StratifiedSampler {
  public:
    <<StratifiedSampler Public Methods>>
  private:
    <<StratifiedSampler Private Members>>
};

The StratifiedSampler constructor takes the number of 2D strata to use in the form of x and y sample counts. It also takes a parameter that specifies whether jittering is enabled and a seed for the random number generator.

<<StratifiedSampler Public Methods>>= 
StratifiedSampler(int xPixelSamples, int yPixelSamples, bool jitter,
                  int seed = 0)
    : xPixelSamples(xPixelSamples), yPixelSamples(yPixelSamples),
      seed(seed), jitter(jitter) {}
<<StratifiedSampler Private Members>>= 
int xPixelSamples, yPixelSamples, seed;
bool jitter;
RNG rng;

The total number of samples in each pixel is the product of the two dimensions’ sample counts.

<<StratifiedSampler Public Methods>>+=  
int SamplesPerPixel() const { return xPixelSamples * yPixelSamples; }
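
As a usage sketch (a hypothetical direct construction; in pbrt, samplers are normally created from the scene description), a sampler configured with 4 × 4 strata takes 16 samples per pixel:

// Hypothetical direct construction: 4 x 4 jittered strata in each pixel.
StratifiedSampler sampler(4, 4, /*jitter=*/true);
int spp = sampler.SamplesPerPixel();  // 4 * 4 = 16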

This sampler needs to keep track of the current pixel, sample index, and dimension for use in the sample generation methods. After these are recorded in member variables, the RNG is seeded so that deterministic values are returned for the sample point, following the same approach as was used in IndependentSampler::StartPixelSample().

<<StratifiedSampler Public Methods>>+=  
void StartPixelSample(Point2i p, int index, int dim) {
    pixel = p;
    sampleIndex = index;
    dimension = dim;
    // Seed the RNG deterministically based on the pixel alone so that
    // results do not depend on the order in which pixels are rendered.
    rng.SetSequence(Hash(p, seed));
    // Advance so that each sample index (and starting dimension) consumes a
    // distinct, reproducible portion of the random sequence.
    rng.Advance(sampleIndex * 65536ull + dimension);
}

<<StratifiedSampler Private Members>>+= 
Point2i pixel;
int sampleIndex = 0, dimension = 0;
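
The following sketch shows how code that uses the sampler might drive these methods for one pixel; the SamplePixel() function and the explicit starting dimension of 0 are illustrative assumptions, not part of pbrt:

// Take every sample for a single pixel, restarting the sampler's state for
// each sample index so that the returned values are deterministic.
void SamplePixel(StratifiedSampler &sampler, Point2i pixel) {
    for (int i = 0; i < sampler.SamplesPerPixel(); ++i) {
        sampler.StartPixelSample(pixel, i, 0);
        Point2f pFilm = sampler.GetPixel2D();  // stratified film position
        Float time = sampler.Get1D();          // stratified time
        Point2f pLens = sampler.Get2D();       // stratified lens position
        // ... generate and trace a camera ray using pFilm, time, and pLens ...
    }
}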

The StratifiedSampler’s implementation is made more complex by the fact that its task is not to generate a full set of sample points for all of the pixel samples at once. If that were its task, then the following code suggests how 1D stratified samples for some dimension might be generated: each array element is first initialized with a random point in its corresponding stratum and then the array is randomly shuffled.

constexpr int n = ...;
std::array<Float, n> samples;
for (int i = 0; i < n; ++i)
    samples[i] = (i + rng.Uniform<Float>()) / n;
std::shuffle(samples.begin(), samples.end(), rng);

This shuffling operation is necessary for padding, so that there is no correlation between the pixel sample index and which stratum its sample comes from. If this shuffling were not done, then the sample dimensions’ values would be correlated in a way that would lead to errors in images—for example, the first 2D sample used to choose the film location, as well as the first 2D lens sample, would always each be in the lower left stratum adjacent to the origin.

In the context of pbrt’s sampling interface, we would like to perform this random sample shuffling without explicitly representing all the dimension’s sample values. The StratifiedSampler therefore uses a random permutation of the sample index to determine which stratum to sample. Given the stratum index, generating a 1D sample is easy.

<<StratifiedSampler Public Methods>>+=  
Float Get1D() {
    <<Compute stratum index for current pixel and dimension>>
    ++dimension;
    Float delta = jitter ? rng.Uniform<Float>() : 0.5f;
    return (stratum + delta) / SamplesPerPixel();
}

It is possible to perform the sample index permutation without representing the permutation explicitly thanks to the PermutationElement() routine, which is defined in Section B.2.8. It takes an index, a total permutation size, and a random seed, and returns the element that the given index is mapped to, doing so in such a way that a valid permutation is returned across all indices up to the permutation size. Thus, we just need to compute a consistent seed value that is the same whenever a particular dimension is sampled at a particular pixel. Hash() takes care of this, though note that sampleIndex must not be included in the hashed values, as doing so would lead to different permutations for different samples in a pixel.

<<Compute stratum index for current pixel and dimension>>= 
uint64_t hash = Hash(pixel, dimension, seed);
int stratum = PermutationElement(sampleIndex, SamplesPerPixel(), hash);
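
To see that this index permutation behaves like the explicit shuffle shown earlier, the following test-style sketch (not part of pbrt; it only assumes that PermutationElement() is available as described above) verifies that, for a fixed hash, the sample indices of a pixel map onto each stratum exactly once:

#include <cassert>
#include <vector>

void CheckPermutationCoversAllStrata(int spp, uint64_t hash) {
    std::vector<bool> seen(spp, false);
    for (int sampleIndex = 0; sampleIndex < spp; ++sampleIndex) {
        // Same call as in the fragment above: a consistent hash yields a
        // consistent permutation over all sample indices.
        int stratum = PermutationElement(sampleIndex, spp, hash);
        assert(!seen[stratum] && "each stratum must be chosen exactly once");
        seen[stratum] = true;
    }
}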

Generating a 2D sample follows a similar approach, though the stratum index has to be mapped into separate x and y stratum coordinates. Given these, the remainder of the sampling operation is straightforward.

<<StratifiedSampler Public Methods>>+=  
Point2f Get2D() {
    <<Compute stratum index for current pixel and dimension>>
    dimension += 2;
    int x = stratum % xPixelSamples, y = stratum / xPixelSamples;
    Float dx = jitter ? rng.Uniform<Float>() : 0.5f;
    Float dy = jitter ? rng.Uniform<Float>() : 0.5f;
    return {(x + dx) / xPixelSamples, (y + dy) / yPixelSamples};
}
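
As a concrete check of the index arithmetic (using a hypothetical 4 × 4 stratification, so SamplesPerPixel() is 16): stratum 13 gives x = 13 % 4 = 1 and y = 13 / 4 = 3, so the sample is jittered within the stratum covering $[1/4, 2/4) \times [3/4, 1)$ of the unit square.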

The pixel sample is not handled differently than other 2D samples with this sampler, so the GetPixel2D() method just calls Get2D().

<<StratifiedSampler Public Methods>>+= 
Point2f GetPixel2D() { return Get2D(); }

With a $d$-dimensional stratification, the star discrepancy of jittered points has been shown to be

$$ O\!\left( \frac{\sqrt{d \log n}}{n^{1/2 + 1/(2d)}} \right), $$

which means that stratified samples do not qualify as having low discrepancy.
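
For intuition, it can help to compare this rate with the $O((\log n)^d / n)$ star discrepancy attained by low-discrepancy constructions such as the Halton and Sobol′ points discussed later in this chapter; the comparison below for $d = 2$ is an added illustration, not part of the original discussion:

$$
\underbrace{O\!\left(\frac{\sqrt{2 \log n}}{n^{3/4}}\right)}_{\text{jittered, } d = 2}
\quad\text{versus}\quad
\underbrace{O\!\left(\frac{(\log n)^{2}}{n}\right)}_{\text{low discrepancy, } d = 2}.
$$

As $n$ grows, the jittered pattern’s discrepancy therefore shrinks roughly like $n^{-3/4}$, while a low-discrepancy point set’s shrinks nearly like $n^{-1}$.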

The PSD of 2D stratified samples was plotted earlier, in Figure 8.17(a). Other than the central spike at the origin (at the center of the image), power is low at low frequencies and settles in to be fairly constant at higher frequencies, which means that this sampling approach is effective at transforming aliasing into high-frequency noise.