11.3 Media
Implementations of the Medium base class provide various representations of volumetric scattering properties in a region of space. In a complex scene, there may be multiple Medium instances, each representing a different scattering effect. For example, an outdoor lake scene might have one Medium to model atmospheric scattering, another to model mist rising from the lake, and a third to model particles suspended in the water of the lake.
A key operation that Medium implementations must perform is to compute the beam transmittance, Equation (11.1), along a given ray passed to its Tr() method. Specifically, the method should return an estimate of the transmittance on the interval between the ray origin and the point at a distance of Ray::tMax from the origin.
Medium-aware Integrators using this interface are responsible for accounting for interactions with surfaces in the scene as well as the spatial extent of the Medium; hence we will assume that the ray passed to the Tr() method is both unoccluded and fully contained within the current Medium. Some implementations of this method use Monte Carlo integration to compute the transmittance; a Sampler is provided for this case. (See Section 15.2.)
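To make the shape of this interface concrete, here is a minimal sketch (not pbrt's actual declarations): Spectrum is reduced to a single float channel, the Sampler parameter is omitted, and ConstantMedium is a hypothetical subclass that evaluates Beer's law with a fixed σt.

```cpp
#include <cmath>

// Simplified sketch of the Medium interface. Spectrum is reduced to a
// single float channel and the Sampler parameter is omitted for brevity.
struct Ray {
    float ox, oy, oz;   // origin
    float dx, dy, dz;   // direction
    float tMax;         // parametric extent of the ray
};

class Medium {
public:
    virtual ~Medium() {}
    // Returns an estimate of the beam transmittance over [0, tMax] along
    // the ray, which is assumed unoccluded and fully inside this medium.
    virtual float Tr(const Ray &ray) const = 0;
};

// Hypothetical deterministic medium obeying Beer's law with constant
// sigma_t, to illustrate the contract of Tr().
class ConstantMedium : public Medium {
public:
    explicit ConstantMedium(float sigma_t) : sigma_t(sigma_t) {}
    float Tr(const Ray &ray) const override {
        float len = std::sqrt(ray.dx * ray.dx + ray.dy * ray.dy +
                              ray.dz * ray.dz);
        return std::exp(-sigma_t * ray.tMax * len);
    }

private:
    float sigma_t;
};
```

Note that the distance is the parametric range times the length of the ray's direction vector, since the direction is not assumed to be normalized.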
The spatial distribution and extent of media in a scene are defined by associating Medium instances with the camera, lights, and primitives in the scene. For example, Cameras store a Medium pointer that gives the medium for rays leaving the camera, and Lights do the same for rays leaving them.
In pbrt, the boundary between two different types of scattering media is always represented by the surface of a GeometricPrimitive. Rather than storing a single Medium pointer like lights and cameras, GeometricPrimitives hold a MediumInterface, which in turn holds pointers to one Medium for the interior of the primitive and one for the exterior. For all of these cases, a nullptr can be used to indicate a vacuum (where no volumetric scattering occurs).
This approach to specifying the extent of participating media does allow the user to specify impossible or inconsistent configurations. For example, a primitive could be specified as having one medium outside of it, and the camera could be specified as being in a different medium without a MediumInterface between the camera and the surface of the primitive. In this case, a ray leaving the primitive toward the camera would be treated as being in a different medium from a ray leaving the camera toward the primitive. In turn, light transport algorithms would be unable to compute consistent results. For pbrt’s purposes, we think it’s reasonable to expect that the user will be able to specify a consistent configuration of media in the scene and that the added complexity of code to check this isn’t worthwhile.
A MediumInterface can be initialized with either one or two Medium pointers. If only one is provided, then it represents an interface with the same medium on both sides.
The function MediumInterface::IsMediumTransition() checks whether a particular MediumInterface instance marks a transition between two distinct media.
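A sketch of what MediumInterface might look like, following the description above; the member names match pbrt's, but Medium is reduced to an empty tag type here.

```cpp
// Minimal sketch of MediumInterface; nullptr denotes a vacuum.
class Medium {};

struct MediumInterface {
    // One-pointer constructor: the same medium on both sides.
    MediumInterface(const Medium *medium)
        : inside(medium), outside(medium) {}
    // Two-pointer constructor: possibly different interior and
    // exterior media.
    MediumInterface(const Medium *inside, const Medium *outside)
        : inside(inside), outside(outside) {}
    // True only if the interface marks a transition between two
    // distinct media.
    bool IsMediumTransition() const { return inside != outside; }

    const Medium *inside, *outside;
};
```

With this definition, an interface constructed from a single pointer (including nullptr) never reports a transition, which is exactly the property the intersection code below relies on.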
We can now provide a missing piece in the implementation of the GeometricPrimitive::Intersect() method. The code in this fragment is executed whenever an intersection with a geometric primitive has been found; its job is to set the medium interface at the intersection point.
Instead of simply copying the value of the GeometricPrimitive::mediumInterface field, we will follow a slightly different approach and only use this information when this MediumInterface specifies a proper transition between participating media. Otherwise, the Ray::medium field takes precedence.
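The branch described above can be sketched as follows, with minimal stand-in types; SetIntersectionMedium is a hypothetical free function standing in for the fragment inside GeometricPrimitive::Intersect().

```cpp
// Stand-in types for the medium-setting logic in
// GeometricPrimitive::Intersect().
class Medium {};

struct MediumInterface {
    MediumInterface(const Medium *m = nullptr) : inside(m), outside(m) {}
    MediumInterface(const Medium *in, const Medium *out)
        : inside(in), outside(out) {}
    bool IsMediumTransition() const { return inside != outside; }
    const Medium *inside, *outside;
};

struct Ray { const Medium *medium = nullptr; };
struct SurfaceInteraction { MediumInterface mediumInterface; };

// Use the primitive's interface only when it marks a proper medium
// transition; otherwise the ray's current medium takes precedence.
void SetIntersectionMedium(const MediumInterface &primitiveInterface,
                           const Ray &ray, SurfaceInteraction *isect) {
    if (primitiveInterface.IsMediumTransition())
        isect->mediumInterface = primitiveInterface;
    else
        isect->mediumInterface = MediumInterface(ray.medium);
}
```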
Setting the SurfaceInteraction’s mediumInterface field in this way greatly simplifies the specification of scenes containing media: in particular, it is not necessary to tag every scene surface that is in contact with a medium. Instead, only non-opaque surfaces that have different media on each side require an explicit medium reference in their GeometricPrimitive::mediumInterface field. In the simplest case where a scene containing opaque objects is filled with a participating medium (e.g., haze), it is enough to tag the camera and light sources.
Primitives associated with shapes that represent medium boundaries generally have a Material associated with them. For example, the surface of a lake might use an instance of GlassMaterial to describe scattering at the lake surface, which also acts as the boundary between the rising mist’s Medium and the lake water’s Medium. However, sometimes we only need a shape’s surface to delimit the extent of a participating medium and we don’t want to see the surface itself. For example, the medium representing a cloud might be bounded by a box made of triangles, where the triangles are only there to delimit the cloud’s extent and shouldn’t otherwise affect light passing through them.
While such a surface that disappears and doesn’t affect ray paths could be described with perfect accuracy by a BTDF representing perfect specular transmission with the same index of refraction on both sides, dealing with such surfaces places an extra burden on the Integrators (not all of which handle this type of specular light transport well). Therefore, pbrt allows such surfaces to have a Material * that is nullptr, indicating that they do not affect light passing through them; in turn, SurfaceInteraction::bsdf will also be nullptr, and the light transport routines don’t worry about light scattering from such surfaces, only accounting for changes in the current medium at them. Figure 11.14 has two instances of the dragon model filled with scattering media; one has a scattering surface at the boundary and the other does not.
Given these conventions for how Medium implementations are associated with rays passing through regions of space, we will implement a Scene::IntersectTr() method, which is a generalization of Scene::Intersect() that returns the first intersection with a light-scattering surface along the given ray as well as the beam transmittance up to that point. (If no intersection is found, this method returns false and doesn’t initialize the provided SurfaceInteraction.)
Each time through the loop, the transmittance along the ray is accumulated into the overall beam transmittance *Tr. Recall that Scene::Intersect() will have updated the ray’s tMax member variable to the intersection point if it did intersect a surface. The Tr() implementation will use this value to find the segment over which to compute transmittance.
The loop ends when no intersection is found or when a scattering surface is intersected. If an optically inactive surface with its bsdf equal to nullptr is intersected, a new ray is spawned in the same direction from the intersection point, though potentially in a different medium, based on the intersection’s MediumInterface field.
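The loop structure just described can be sketched schematically as follows, under heavy simplification: Spectrum is a single float, Scene::Intersect() is replaced by a std::function hook, and the SurfaceInteraction stand-in records only the fields the loop needs. None of these are pbrt's actual signatures.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <limits>

struct Medium { float sigma_t; };
struct SurfaceInteraction {
    bool hasScatteringBsdf = false;      // stands in for bsdf != nullptr
    const Medium *nextMedium = nullptr;  // medium on the far side of the hit
    float tHit = 0;                      // parametric distance to the hit
};
struct Ray {
    float tMax;
    const Medium *medium;
};

// Schematic IntersectTr(): accumulate transmittance segment by segment,
// passing through optically inactive surfaces and stopping at the first
// surface with a scattering BSDF.
bool IntersectTr(
    Ray ray,
    const std::function<bool(const Ray &, SurfaceInteraction *)> &intersect,
    SurfaceInteraction *isect, float *Tr) {
    *Tr = 1.0f;
    for (;;) {
        bool hitSurface = intersect(ray, isect);
        // Attenuate by Beer's law over the segment just traversed,
        // clamping so an infinite tMax cannot produce 0 * Infinity.
        float segment = std::min(hitSurface ? isect->tHit : ray.tMax,
                                 std::numeric_limits<float>::max());
        if (ray.medium)
            *Tr *= std::exp(-ray.medium->sigma_t * segment);
        if (!hitSurface) return false;  // no scattering surface was found
        if (isect->hasScatteringBsdf) return true;
        // Optically inactive surface: continue onward, possibly in a
        // different medium per the intersection's MediumInterface.
        ray.medium = isect->nextMedium;
    }
}
```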
11.3.1 Medium Interactions
Section 2.10 introduced the general Interaction class as well as the SurfaceInteraction specialization to represent interactions at surfaces. Now that we have some machinery for describing scattering in volumes, it’s worth generalizing these representations. First, we’ll add two more Interaction constructors for interactions at points in scattering media.
For surface interactions where Interaction::n has been set, the Medium * for a ray leaving the surface in the direction w is returned by the GetMedium() method.
For interactions that are known to be inside participating media, another variant of GetMedium() that doesn’t take the unnecessary outgoing direction vector returns the Medium *.
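A sketch of how these two GetMedium() variants might look, with the geometry simplified so that only the z components of the normal and outgoing direction are kept; this is enough to show the sidedness test (pbrt's actual code uses a full dot product of w and n).

```cpp
// Simplified Interaction with just enough state for GetMedium().
class Medium {};

struct MediumInterface {
    const Medium *inside, *outside;
    bool IsMediumTransition() const { return inside != outside; }
};

struct Interaction {
    float nz;                        // z component of the surface normal
    MediumInterface mediumInterface; // set for surface interactions
    const Medium *medium = nullptr;  // set for interactions inside media

    // Medium for a ray leaving the surface with direction z component wz:
    // the outside medium if w points along the normal, else the inside one.
    const Medium *GetMedium(float wz) const {
        return wz * nz > 0 ? mediumInterface.outside
                           : mediumInterface.inside;
    }
    // For interactions known to be inside a medium, no direction is
    // needed; both "sides" are the same medium.
    const Medium *GetMedium() const { return medium; }
};
```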
Just as the SurfaceInteraction class represents an interaction obtained by intersecting a ray against the scene geometry, MediumInteraction represents an interaction at a point in a scattering medium that is obtained using a similar kind of operation.
MediumInteraction adds a new PhaseFunction member variable to store the phase function associated with its position.
11.3.2 Homogeneous Medium
The HomogeneousMedium is the simplest possible medium; it represents a region of space with constant σa and σs values throughout its extent. It uses the Henyey–Greenstein phase function to represent scattering in the medium, also with a constant g value. This medium was used for the images in Figures 11.13 and 11.14.
Because σt is constant throughout the medium, Beer’s law, Equation (11.3), can be used to compute transmittance along the ray. However, implementation of the Tr() method is complicated by some subtleties of floating-point arithmetic. As discussed in Section 3.9.1, IEEE floating point provides a representation for infinity; in pbrt, this value, Infinity, is used to initialize Ray::tMax for rays leaving cameras and lights, which is useful for ray–intersection tests in that it ensures that any actual intersection, even if far along the ray, is detected as an intersection for a ray that hasn’t intersected anything yet.
However, the use of Infinity for Ray::tMax creates a small problem when applying Beer’s law. In principle, we just need to compute the parametric range that the ray spans, multiply by the ray direction’s length, and then multiply by σt:
The problem is that multiplying Infinity by zero results in the floating-point “not a number” (NaN) value, which propagates throughout all computations that use it. For a ray that passes infinitely far through a medium with zero absorption for a given spectral channel, the above code would attempt to perform the multiplication 0 * Infinity and would produce a NaN value rather than the expected transmittance of one. The implementation here resolves this issue by clamping the ray segment length to the largest representable non-infinite floating-point value.
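A sketch of this Tr() implementation with Spectrum reduced to a single float channel; std::numeric_limits&lt;float&gt;::max() plays the role of pbrt's MaxFloat clamp value.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

struct Ray {
    float dx, dy, dz;  // direction (not necessarily normalized)
    float tMax;
};

// Single-channel sketch of HomogeneousMedium::Tr().
class HomogeneousMedium {
public:
    explicit HomogeneousMedium(float sigma_t) : sigma_t(sigma_t) {}
    float Tr(const Ray &ray) const {
        float dLen = std::sqrt(ray.dx * ray.dx + ray.dy * ray.dy +
                               ray.dz * ray.dz);
        // Clamp the segment length to the largest finite float so that
        // sigma_t == 0 and tMax == Infinity cannot combine into the
        // NaN-producing product 0 * Infinity.
        float d = std::min(ray.tMax * dLen,
                           std::numeric_limits<float>::max());
        return std::exp(-sigma_t * d);
    }

private:
    float sigma_t;
};
```

With the clamp in place, a zero-valued σt and an infinite tMax yield exp(-0) = 1, as expected, instead of NaN.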
11.3.3 3D Grids
The GridDensityMedium class stores medium densities at a regular 3D grid of positions, similar to the way that the ImageTexture represents images with a 2D grid of samples. These samples are interpolated to compute the density at positions between the sample points. The implementation of the GridDensityMedium is in media/grid.h and media/grid.cpp.
The constructor takes a 3D array of user-supplied density values, thus allowing a variety of sources of data (physical simulation, CT scan, etc.). The smoke data set rendered in Figures 11.2, 11.5, and 11.10 is represented with a GridDensityMedium. The caller also supplies baseline values of σa, σs, and g to the constructor, which does the usual initialization of the basic scattering properties and makes a local copy of the density values.
The Density() method of GridDensityMedium is called by GridDensityMedium::Tr(); it uses the provided samples to reconstruct the volume density function at the given point, which will already have been transformed into local coordinates using WorldToMedium. In turn, σa and σs will be scaled by the interpolated density at the point.
The grid samples are assumed to be over the canonical [0, 1]³ domain. (The WorldToMedium transformation should be used to place the GridDensityMedium in the scene.) To interpolate the samples around a point, the Density() method first computes the coordinates of the point with respect to the sample coordinates and the distances from the point to the samples below it (along the lines of what was done in the Film and MIPMap—see also Section 7.1.7).
The distances d can be used directly in a series of invocations of Lerp() to trilinearly interpolate the density at the sample point:
The D() utility method returns the density at the given integer sample position. Its only tasks are to handle out-of-bounds sample positions and to compute the appropriate array offset for the given sample. Unlike MIPMaps, where a variety of behavior is useful in the case of out-of-bounds coordinates, here it’s reasonable to always return a zero density for them: the density is defined over a particular domain, and it’s reasonable that points outside of it should have zero density.
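The lookup and interpolation just described can be sketched as follows; GridDensity is a hypothetical stand-in for GridDensityMedium that keeps only the density grid, and Lerp matches pbrt's (t, a, b) argument order.

```cpp
#include <cmath>
#include <vector>

// Sketch of the density lookup: D() returns zero for out-of-bounds
// samples, and Density() trilinearly interpolates the eight samples
// around a point given in [0, 1]^3 medium coordinates.
struct GridDensity {
    int nx, ny, nz;
    std::vector<float> density;  // nx * ny * nz samples

    float D(int x, int y, int z) const {
        if (x < 0 || x >= nx || y < 0 || y >= ny || z < 0 || z >= nz)
            return 0;  // zero density outside the sampled domain
        return density[(z * ny + y) * nx + x];
    }

    static float Lerp(float t, float a, float b) {
        return (1 - t) * a + t * b;
    }

    float Density(float px, float py, float pz) const {
        // Continuous sample coordinates and offsets to the samples below.
        float gx = px * nx - 0.5f, gy = py * ny - 0.5f, gz = pz * nz - 0.5f;
        int x0 = (int)std::floor(gx), y0 = (int)std::floor(gy),
            z0 = (int)std::floor(gz);
        float dx = gx - x0, dy = gy - y0, dz = gz - z0;
        // Trilinearly interpolate the eight neighboring samples.
        float d00 = Lerp(dx, D(x0, y0, z0), D(x0 + 1, y0, z0));
        float d10 = Lerp(dx, D(x0, y0 + 1, z0), D(x0 + 1, y0 + 1, z0));
        float d01 = Lerp(dx, D(x0, y0, z0 + 1), D(x0 + 1, y0, z0 + 1));
        float d11 = Lerp(dx, D(x0, y0 + 1, z0 + 1), D(x0 + 1, y0 + 1, z0 + 1));
        float d0 = Lerp(dy, d00, d10);
        float d1 = Lerp(dy, d01, d11);
        return Lerp(dz, d0, d1);
    }
};
```

Note how a constant-density grid interpolates to that same constant in the interior, while lookups near the boundary blend toward the zero density returned for out-of-bounds samples.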