14.5 Path Tracing

Now that we have derived the path integral form of the light transport equation, we’ll show how it can be used to derive the path-tracing light transport algorithm and will present a path-tracing integrator. Figure 14.17 compares images of a scene rendered with different numbers of pixel samples using the path-tracing integrator. In general, hundreds or thousands of samples per pixel may be necessary for high-quality results—potentially a substantial computational expense.

Figure 14.17: San Miguel Scene Rendered with Path Tracing. (1) Rendered with path tracing with 1024 samples per pixel. (2) Rendered with just 8 samples per pixel, giving the characteristic grainy noise that is the hallmark of variance. Although the second image appears darker, the average pixel values of both are actually the same; very large values in some of its pixels are being clipped to the display’s maximum brightness. (Model courtesy of Guillermo M. Leal Llaguno.)

Path tracing was the first general-purpose unbiased Monte Carlo light transport algorithm used in graphics. Kajiya (1986) introduced it in the same paper that first described the light transport equation. Path tracing incrementally generates paths of scattering events starting at the camera and ending at light sources in the scene. One way to think of it is as an extension of Whitted’s method to include both delta distribution and nondelta BSDFs and light sources, rather than just accounting for the delta terms.

Although it is slightly easier to derive path tracing directly from the basic light transport equation, we will instead approach it from the path integral form, which helps build understanding of the path integral equation and will make the generalization to bidirectional path tracing (Section 16.3) easier to understand.

14.5.1 Overview

Given the path integral form of the LTE, we would like to estimate the value of the exitant radiance from the camera ray’s intersection point $\mathrm{p}_1$,

$$ L(\mathrm{p}_1 \to \mathrm{p}_0) = \sum_{i=1}^{\infty} P(\bar{\mathrm{p}}_i), $$

for a given camera ray from $\mathrm{p}_0$ that first intersects the scene at $\mathrm{p}_1$. We have two problems that must be solved in order to compute this estimate:

  1. How do we estimate the value of the sum of the infinite number of $P(\bar{\mathrm{p}}_i)$ terms with a finite amount of computation?
  2. Given a particular $P(\bar{\mathrm{p}}_i)$ term, how do we generate one or more paths $\bar{\mathrm{p}}$ in order to compute a Monte Carlo estimate of its multidimensional integral?

For path tracing, we can take advantage of the fact that for physically valid scenes, paths with more vertices scatter less light than paths with fewer vertices overall (this isn’t necessarily true for any particular pair of paths, just in the aggregate). This is a natural consequence of conservation of energy in BSDFs. Therefore, we will always estimate the first few terms $P(\bar{\mathrm{p}}_i)$ and will then start to apply Russian roulette to stop sampling after a finite number of terms without introducing bias. Recall from Section 13.7 that Russian roulette allows us to probabilistically stop computing terms in a sum as long as we reweight the terms that are not terminated. For example, if we always computed estimates of $P(\bar{\mathrm{p}}_1)$, $P(\bar{\mathrm{p}}_2)$, and $P(\bar{\mathrm{p}}_3)$ but stopped without computing more terms with probability $q$, then an unbiased estimate of the sum would be

$$ P(\bar{\mathrm{p}}_1) + P(\bar{\mathrm{p}}_2) + P(\bar{\mathrm{p}}_3) + \frac{1}{1-q} \sum_{i=4}^{\infty} P(\bar{\mathrm{p}}_i). $$
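A quick check of the expected value confirms that this estimator is unbiased: the trailing terms are evaluated with probability $1-q$ (and then weighted by $1/(1-q)$) and skipped with probability $q$, so taking the expectation over the termination decision gives

$$ P(\bar{\mathrm{p}}_1) + P(\bar{\mathrm{p}}_2) + P(\bar{\mathrm{p}}_3) + (1-q) \, \frac{1}{1-q} \sum_{i=4}^{\infty} P(\bar{\mathrm{p}}_i) + q \cdot 0 = \sum_{i=1}^{\infty} P(\bar{\mathrm{p}}_i). $$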

Using Russian roulette in this way doesn’t solve the problem of needing to evaluate an infinite sum but has pushed it a bit farther out.

If we take this idea a step further and instead randomly consider terminating evaluation of the sum at each term with probability $q_i$,

$$ \frac{1}{1-q_1} \left( P(\bar{\mathrm{p}}_1) + \frac{1}{1-q_2} \left( P(\bar{\mathrm{p}}_2) + \frac{1}{1-q_3} \left( P(\bar{\mathrm{p}}_3) + \cdots \right) \right) \right), $$

we will eventually stop continued evaluation of the sum. Yet, because for any particular value of $i$ there is a greater than zero probability of evaluating the term $P(\bar{\mathrm{p}}_i)$ and because it will be weighted appropriately if we do evaluate it, the final result is an unbiased estimate of the sum.
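To make this termination scheme concrete, here is a minimal sketch (not pbrt code) of estimating an infinite sum with per-term Russian roulette. Term(i) is a hypothetical callback that returns an estimate of the $i$th term, u() returns uniform random samples in $[0, 1)$, and the termination probabilities $q_i$ are chosen arbitrarily so that the first three terms are always evaluated, mirroring the discussion above.

#include <functional>

// Estimate an infinite sum with per-term Russian roulette: before the i-th
// term is evaluated, evaluation is terminated with probability q_i, and the
// surviving terms are reweighted by 1 / (1 - q_i) to keep the estimator
// unbiased.
double EstimateSum(const std::function<double(int)> &Term,
                   const std::function<double()> &u) {
    double sum = 0, weight = 1;
    for (int i = 1; ; ++i) {
        double q = (i <= 3) ? 0 : 0.5;  // termination probability q_i
        if (u() < q) break;             // stop with probability q_i
        weight /= 1 - q;                // reweight all surviving terms
        sum += weight * Term(i);
    }
    return sum;
}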

14.5.2 Path Sampling

Given this method for evaluating only a finite number of terms of the infinite sum, we also need a way to estimate the contribution of a particular term $P(\bar{\mathrm{p}}_i)$. We need $i+1$ vertices to specify the path, where the last vertex $\mathrm{p}_i$ is on a light source and the first vertex $\mathrm{p}_0$ is a point on the camera film or lens (Figure 14.18). Looking at the form of $P(\bar{\mathrm{p}}_i)$, a multiple integral over surface area of objects in the scene, the most natural thing to do is to sample vertices $\mathrm{p}_i$ according to the surface area of objects in the scene, such that it’s equally probable to sample any particular point on an object in the scene for $\mathrm{p}_i$ as any other point. (We don’t actually use this approach in the PathIntegrator implementation for reasons that will be described later, but this sampling technique could possibly be used to improve the efficiency of our basic implementation and helps to clarify the meaning of the path integral LTE.)

Figure 14.18: A path $\bar{\mathrm{p}}_i$ of $i+1$ vertices from the camera at $\mathrm{p}_0$, intersecting a series of positions on surfaces in the scene, to a point on the light $\mathrm{p}_i$. Scattering according to the BSDF occurs at each path vertex from $\mathrm{p}_1$ to $\mathrm{p}_{i-1}$ such that the radiance estimate at the camera due to this path is given by the product of the path throughput $T(\bar{\mathrm{p}}_i)$ and the emitted radiance from the light divided by the path sampling weights.

We could define a discrete probability distribution over the $n$ objects in the scene. If each object has surface area $A_i$, then the probability of sampling a path vertex on the surface of the $i$th object should be

$$ p_i = \frac{A_i}{\sum_j A_j}. $$

Then, given a method to sample a point on the $i$th object with uniform probability, the PDF for sampling any particular point on object $i$ is $1/A_i$. Thus, the overall probability density for sampling the point is

$$ \frac{A_i}{\sum_j A_j} \, \frac{1}{A_i}. $$

And all samples $\mathrm{p}_i$ have the same PDF value:

$$ p_A(\mathrm{p}_i) = \frac{1}{\sum_j A_j}. $$

It’s reassuring that they all have the same weight, since our intent was to choose among all points on surfaces in the scene with equal probability.
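As an illustration of this strategy (which, again, the PathIntegrator does not use), the following sketch selects one of the scene’s objects with probability proportional to its surface area. Combined with a uniform-by-area sample on the chosen object (PDF $1/A_i$), every surface point in the scene ends up with the same area density $1/\sum_j A_j$. The function and its inputs are hypothetical; it assumes the per-object surface areas have already been gathered into a vector.

#include <cstddef>
#include <vector>

// Choose an object index with probability A_i / sum_j A_j, given a uniform
// sample u0 in [0, 1). pdfPoint returns the area density of a point that is
// subsequently sampled uniformly over the chosen object's surface. Assumes a
// non-empty scene with positive total area.
std::size_t SampleObjectByArea(const std::vector<double> &areas, double u0,
                               double *pdfPoint) {
    double totalArea = 0;
    for (double a : areas) totalArea += a;
    double target = u0 * totalArea, running = 0;
    std::size_t chosen = areas.size() - 1;  // guard against round-off error
    for (std::size_t i = 0; i < areas.size(); ++i) {
        running += areas[i];
        if (target < running) { chosen = i; break; }
    }
    *pdfPoint = 1 / totalArea;
    return chosen;
}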

Given the set of vertices $\mathrm{p}_0, \mathrm{p}_1, \ldots, \mathrm{p}_{i-1}$ sampled in this manner, we can then sample the last vertex $\mathrm{p}_i$ on a light source in the scene, defining its PDF in the same way. Although we could use the same technique used for sampling path vertices to sample points on lights, this would lead to high variance, since for all of the paths where $\mathrm{p}_i$ wasn’t on the surface of an emitter, the path would have zero value. The expected value would still be the correct value of the integral, but convergence would be extremely slow. A better approach is to sample over the areas of only the emitting objects, with probabilities updated accordingly. Given a complete path, we have all of the information we need to compute the estimate of $P(\bar{\mathrm{p}}_i)$; it’s just a matter of evaluating each of the terms.

It’s easy to be more creative about how we set the sampling probabilities with this general approach. For example, if we knew that indirect illumination from a few objects contributed to most of the lighting in the scene, we could assign a higher probability to generating path vertices $\mathrm{p}_i$ on those objects, updating the sample weights appropriately.

However, there are two interrelated problems with sampling paths in this manner. The first can lead to high variance, while the second can lead to incorrect results. The first problem is that many of the paths will have no contribution if they have pairs of adjacent vertices that are not mutually visible. Consider applying this area sampling method in a complex building model: adjacent vertices in the path will almost always have a wall or two between them, giving no contribution for the path and high variance in the estimate.

The second problem is that if the integrand has delta functions in it (e.g., a point light source or a perfectly specular BSDF), this sampling technique will never be able to choose path vertices such that the delta distributions are nonzero. Even if there aren’t delta distributions, as the BSDFs become increasingly glossy almost all of the paths will have low contributions, since the points in $f(\mathrm{p}_{i+1} \to \mathrm{p}_i \to \mathrm{p}_{i-1})$ will cause the BSDF to have a small or zero value, and again we will suffer from high variance. In a similar manner, small area light sources can also be sources of variance if not sampled explicitly.

14.5.3 Incremental Path Construction

A solution that solves both of these problems is to construct the path incrementally, starting from the vertex at the camera, $\mathrm{p}_0$. At each vertex, the BSDF is sampled to generate a new direction; the next vertex $\mathrm{p}_{i+1}$ is found by tracing a ray from $\mathrm{p}_i$ in the sampled direction and finding the closest intersection. We are effectively trying to find a path with a large overall contribution by making a series of choices that find directions with important local contributions. While one can imagine situations where this approach could be ineffective, it is generally a good strategy.

Because this approach constructs the path by sampling BSDFs according to solid angle, and because the path integral LTE is an integral over surface area in the scene, we need to apply the correction to convert from the probability density according to solid angle $p_\omega$ to a density according to area $p_A$ (recall Section 5.5):

$$ p_A(\mathrm{p}_i) = p_\omega \frac{|\cos \theta_i|}{\|\mathrm{p}_{i+1} - \mathrm{p}_i\|^2}. $$
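Expressed in code, this correction is a single multiplication. The helper below is only a sketch, not part of the PathIntegrator (which never performs the conversion explicitly because the corresponding factors cancel, as described next); it uses pbrt’s geometry types and takes the surface normal at the vertex whose area density is being computed.

// Convert a solid-angle PDF for a sampled direction into an area PDF at the
// intersected vertex 'to': multiply by the absolute cosine of the angle
// between the segment and the surface normal at 'to', divided by the squared
// distance between the two vertices.
Float ConvertSolidAngleToAreaPdf(Float pdfDir, const Point3f &from,
                                 const Point3f &to, const Normal3f &nTo) {
    Vector3f d = to - from;
    Float dist2 = Dot(d, d);
    if (dist2 == 0) return 0;
    return pdfDir * AbsDot(Normalize(d), nTo) / dist2;
}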

This correction causes all of the factors of the geometric term $G(\mathrm{p}_{i+1} \leftrightarrow \mathrm{p}_i)$ to cancel out of $P(\bar{\mathrm{p}}_i)$ except for the $\cos \theta_{i+1}$ term. Furthermore, we already know that $\mathrm{p}_i$ and $\mathrm{p}_{i+1}$ must be mutually visible since we traced a ray to find $\mathrm{p}_{i+1}$, so the visibility term is trivially equal to 1. An alternative way to think about this is that ray tracing provides an operation to importance sample the visibility component of $G$. Therefore, if we use this sampling technique but still sample the last vertex $\mathrm{p}_i$ from some distribution over the surfaces of light sources $p_A(\mathrm{p}_i)$, the value of the Monte Carlo estimate for a path is

$$ \frac{L_{\mathrm{e}}(\mathrm{p}_i \to \mathrm{p}_{i-1}) \, f(\mathrm{p}_i \to \mathrm{p}_{i-1} \to \mathrm{p}_{i-2}) \, G(\mathrm{p}_i \leftrightarrow \mathrm{p}_{i-1})}{p_A(\mathrm{p}_i)} \left( \prod_{j=1}^{i-2} \frac{f(\mathrm{p}_{j+1} \to \mathrm{p}_j \to \mathrm{p}_{j-1}) \, |\cos \theta_j|}{p_\omega(\mathrm{p}_{j+1} - \mathrm{p}_j)} \right). $$

14.5.4 Implementation

Our path-tracing implementation computes an estimate of the sum of path contributions $P(\bar{\mathrm{p}}_i)$ using the approach described in the previous subsection. Starting at the first intersection of the camera ray with the scene geometry, $\mathrm{p}_1$, it incrementally samples path vertices by sampling from the BSDF’s sampling distribution at the current vertex and tracing a ray to the next vertex. To find the last vertex of a particular path, $\mathrm{p}_i$, which must be on a light source in the scene, it uses the multiple importance sampling–based direct lighting code that was developed for the direct lighting integrator. By using the multiple importance sampling weights instead of $p_A(\mathrm{p}_i)$ to compute the estimate as described earlier, we have lower variance in the result for cases where sampling the BSDF would have been a better way to find a point on the light.

Beyond how lights are sampled, another small difference is that as the estimates of the path contribution terms $P(\bar{\mathrm{p}}_i)$ are being evaluated, the vertices of the previous path of length $i-1$ (everything except the vertex on the emitter) are reused as a starting point when constructing the path of length $i$. This means that it is only necessary to trace one more ray to construct the new path, rather than $i$ rays as we would if we started from scratch. Reusing paths in this manner does introduce correlation among all of the $P(\bar{\mathrm{p}}_i)$ terms in the sum, which slightly reduces the quality of the result, although in practice this is more than made up for by the improved overall efficiency due to tracing fewer rays.

<<PathIntegrator Declarations>>= 
class PathIntegrator : public SamplerIntegrator {
  public:
    <<PathIntegrator Public Methods>>
    Spectrum Li(const RayDifferential &ray, const Scene &scene,
                Sampler &sampler, MemoryArena &arena, int depth) const;
  private:
    <<PathIntegrator Private Data>>
};

Although Russian roulette is used here to terminate path sampling in the manner described earlier, the integrator also supports a maximum depth. It can be set to a large value if only Russian roulette should be used to terminate paths.

<<PathIntegrator Public Methods>>= 
PathIntegrator(int maxDepth, std::shared_ptr<const Camera> camera,
               std::shared_ptr<Sampler> sampler)
    : SamplerIntegrator(camera, sampler), maxDepth(maxDepth) { }

<<PathIntegrator Private Data>>= 
const int maxDepth;

A number of variables record the current state of the path. beta holds the path throughput weight, which is defined as the factors of the throughput function $T(\bar{\mathrm{p}}_{i-1})$; i.e., the product of the BSDF values and cosine terms for the vertices generated so far, divided by their respective sampling PDFs:

$$ \beta = \prod_{j=1}^{i-2} \frac{f(\mathrm{p}_{j+1} \to \mathrm{p}_j \to \mathrm{p}_{j-1}) \, |\cos \theta_j|}{p_\omega(\mathrm{p}_{j+1} - \mathrm{p}_j)}. $$

Thus, the product of beta with scattered light from direct lighting at the final vertex of the path gives the contribution for a path. (This quantity will reoccur many times in the following two chapters, and we will consistently refer to it as beta.) Because the effect of earlier path vertices is aggregated in this way, there is no need to store the positions and BSDFs of all of the vertices of the path, only the last one.
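In the notation of Section 14.5.3 (and setting aside for the moment that the implementation uses multiple importance sampling weights rather than $p_A(\mathrm{p}_i)$ alone), the estimate of $P(\bar{\mathrm{p}}_i)$ therefore factors as

$$ P(\bar{\mathrm{p}}_i) \approx \beta \, \frac{L_{\mathrm{e}}(\mathrm{p}_i \to \mathrm{p}_{i-1}) \, f(\mathrm{p}_i \to \mathrm{p}_{i-1} \to \mathrm{p}_{i-2}) \, G(\mathrm{p}_i \leftrightarrow \mathrm{p}_{i-1})}{p_A(\mathrm{p}_i)}, $$

where the second factor is exactly what the direct lighting computation at the path’s last scattering vertex estimates.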

In the following implementation, L holds the radiance value from the running total of $\sum_i P(\bar{\mathrm{p}}_i)$, ray holds the next ray to be traced to extend the path one more vertex, and specularBounce records if the last outgoing path direction sampled was due to specular reflection; the need to track this will be explained shortly.

<<PathIntegrator Method Definitions>>= 
Spectrum PathIntegrator::Li(const RayDifferential &r, const Scene &scene,
        Sampler &sampler, MemoryArena &arena, int depth) const {
    Spectrum L(0.f), beta(1.f);
    RayDifferential ray(r);
    bool specularBounce = false;
    for (int bounces = 0; ; ++bounces) {
        <<Find next path vertex and accumulate contribution>>
    }
    return L;
}

Each time through the for loop of the integrator, the next vertex of the path is found by intersecting the current ray with the scene geometry and computing the contribution of the path to the overall radiance value with the direct lighting code. A new direction is then chosen by sampling from the BSDF’s distribution at the last vertex of the path. After a few vertices have been sampled, Russian roulette is used to randomly terminate the path.

<<Find next path vertex and accumulate contribution>>= 
<<Intersect ray with scene and store intersection in isect>>
<<Possibly add emitted light at intersection>>
<<Terminate path if ray escaped or maxDepth was reached>>
<<Compute scattering functions and skip over medium boundaries>>
<<Sample illumination from lights to find path contribution>>
<<Sample BSDF to get new path direction>>
<<Account for subsurface scattering, if applicable>>
<<Possibly terminate the path with Russian roulette>>

The first step in the loop is to find the next path vertex by intersecting ray against the scene geometry.

<<Intersect ray with scene and store intersection in isect>>= 
SurfaceInteraction isect;
bool foundIntersection = scene.Intersect(ray, &isect);

If the ray hits an object that is emissive, the emission is usually ignored, since the loop iteration at the previous path vertex performed a direct illumination estimate that already accounted for its effect. The same is true when a ray escapes into an emissive environment. However, there are two exceptions: the first is at the initial intersection point of camera rays, since this is the only opportunity to include emission from directly visible objects. The second is when the sampled direction from the last path vertex was from a specular BSDF component: in this case, the previous iteration’s direct illumination estimate could not evaluate the associated integrand containing a Dirac delta function, and we must account for it here.

<<Possibly add emitted light at intersection>>= 
if (bounces == 0 || specularBounce) {
    <<Add emitted light at path vertex or from the environment>>
}

When no intersection is found, the ray has escaped the scene and thus the path sampling iteration terminates. Similarly, the iteration terminates when bounces exceeds the prescribed maximum value.

<<Terminate path if ray escaped or maxDepth was reached>>= 
if (!foundIntersection || bounces >= maxDepth) break;

When emitted light should be included, the path throughput weight must be multiplied with the radiance emitted by the current path vertex (if an intersection was found) or radiance emitted by infinite area light sources, if present.

<<Add emitted light at path vertex or from the environment>>= 
if (foundIntersection)
    L += beta * isect.Le(-ray.d);
else
    for (const auto &light : scene.lights)
        L += beta * light->Le(ray);

Before estimating the direct illumination at the current vertex, it is necessary to compute the scattering functions at the vertex. A special case arises when SurfaceInteraction::bsdf is equal to nullptr, which indicates that the current surface has no effect on light. pbrt uses such surfaces to represent transitions between participating media, whose boundaries are themselves optically inactive (i.e., they have the same index of refraction on both sides). Since the basic PathIntegrator ignores media, it simply skips over such surfaces without counting them as scattering events in the bounces counter.

<<Compute scattering functions and skip over medium boundaries>>= 
isect.ComputeScatteringFunctions(ray, arena, true);
if (!isect.bsdf) {
    ray = isect.SpawnRay(ray.d);
    bounces--;
    continue;
}

The direct lighting computation uses the UniformSampleOneLight() function, which gives an estimate of the exitant radiance from direct lighting at the vertex at the end of the current path. Scaling this value by the path throughput weight gives its overall contribution to the total radiance estimate.

<<Sample illumination from lights to find path contribution>>= 
L += beta * UniformSampleOneLight(isect, scene, arena, sampler);

Now it is necessary to sample the BSDF at the vertex at the end of the current path to get an outgoing direction for the next ray to trace. The integrator updates the path throughput weight as described earlier and initializes ray with the ray to be traced to find the next vertex in the next iteration of the for loop.

<<Sample BSDF to get new path direction>>= 
Vector3f wo = -ray.d, wi;
Float pdf;
BxDFType flags;
Spectrum f = isect.bsdf->Sample_f(wo, &wi, sampler.Get2D(), &pdf,
                                  BSDF_ALL, &flags);
if (f.IsBlack() || pdf == 0.f) break;
beta *= f * AbsDot(wi, isect.shading.n) / pdf;
specularBounce = (flags & BSDF_SPECULAR) != 0;
ray = isect.SpawnRay(wi);

The case where a ray refracts into a material with a BSSRDF is handled specially in the fragment <<Account for subsurface scattering, if applicable>>, which is implemented in Section 15.4.3 after subsurface scattering has been discussed in more detail.

Path termination kicks in after a few bounces, with termination probability q set based on the path throughput weight. In general, it’s worth having a higher probability of terminating low-contributing paths, since they have relatively less impact on the final image. (A minimum termination probability ensures termination is possible if beta is large; for example, due to a large BSDF value divided by a low sampling probability.) If the path isn’t terminated, beta is updated with the Russian roulette weight and all subsequent $P(\bar{\mathrm{p}}_i)$ terms will be appropriately affected by it.

<<Possibly terminate the path with Russian roulette>>= 
if (bounces > 3) {
    Float q = std::max((Float).05, 1 - beta.y());
    if (sampler.Get1D() < q) break;
    beta /= 1 - q;
}