8 Reflection Models

This chapter defines a set of classes for describing the way that light scatters at surfaces. Recall that in Section 5.6.1 we introduced the bidirectional reflectance distribution function (BRDF) abstraction to describe light reflection at a surface, the BTDF to describe transmission at a surface, and the BSDF to encompass both of these effects. In this chapter, we will start by defining a generic interface to these surface reflection and transmission functions.

Scattering from many surfaces is often best described as a spatially varying mixture of multiple BRDFs and BTDFs; in Chapter 9, we will introduce a BSDF object that combines multiple BRDFs and BTDFs to represent overall scattering from the surface. The current chapter sidesteps the issue of reflection and transmission properties that vary over the surface; the texture classes of Chapter 10 will address that problem. BRDFs and BTDFs explicitly only model scattering from light that enters and exits a surface at a single point. For surfaces that exhibit meaningful subsurface light transport, we will introduce the BSSRDF class, which models subsurface scattering, in Section 11.4 after some of the related theory is introduced in Chapter 11.

Surface reflection models come from a number of sources:

  • Measured data: Reflection distribution properties of many real-world surfaces have been measured in laboratories. Such data may be used directly in tabular form or to compute coefficients for a set of basis functions.
  • Phenomenological models: Equations that attempt to describe the qualitative properties of real-world surfaces can be remarkably effective at mimicking them. These types of BSDFs can be particularly easy to use, since they tend to have intuitive parameters that modify their behavior (e.g., “roughness”).
  • Simulation: Sometimes, low-level information is known about the composition of a surface. For example, we might know that a paint consists of colored particles of some average size suspended in a medium or that a particular fabric consists of two types of threads, each with known reflectance properties. In these cases, light scattering from the microgeometry can be simulated to generate reflection data. This simulation can be done either during rendering or as a preprocess, after which it may be fit to a set of basis functions for use during rendering.
  • Physical (wave) optics: Some reflection models have been derived using a detailed model of light, treating it as a wave and computing the solution to Maxwell’s equations to find how it scatters from a surface with known properties. These models tend to be computationally expensive, however, and for rendering applications they usually aren’t appreciably more accurate than models based on geometric optics.
  • Geometric optics: As with simulation approaches, if the surface’s low-level scattering and geometric properties are known, then closed-form reflection models can sometimes be derived directly from these descriptions. Geometric optics makes modeling light’s interaction with the surface more tractable, since complex wave effects like polarization can be ignored.

The “Further Reading” section at the end of this chapter gives pointers to a variety of such reflection models.

Before we define the relevant interfaces, a brief review of how they fit into the overall system is in order. If a SamplerIntegrator is used, the SamplerIntegrator::Li() method implementation is called for each ray. After finding the closest intersection with a geometric primitive, it calls the surface shader that is associated with the primitive. The surface shader is implemented as a method of Material subclasses and is responsible for deciding what the BSDF is at a particular point on the surface; it returns a BSDF object that holds BRDFs and BTDFs that it has allocated and initialized to represent scattering at that point. The integrator then uses the BSDF to compute the scattered light at the point, based on the incoming illumination at the point. (The process where a BDPTIntegrator, MLTIntegrator, or SPPMIntegrator is used rather than a SamplerIntegrator is broadly similar.)

Basic Terminology

To compare the visual appearance of different reflection models, we will introduce some basic terminology for describing reflection from surfaces.

Reflection from surfaces can be split into four broad categories: diffuse, glossy specular, perfect specular, and retro-reflective (Figure 8.1). Most real surfaces exhibit reflection that is a mixture of these four types. Diffuse surfaces scatter light equally in all directions. Although a perfectly diffuse surface isn’t physically realizable, examples of near-diffuse surfaces include dull chalkboards and matte paint. Glossy specular surfaces such as plastic or high-gloss paint scatter light preferentially in a set of reflected directions—they show blurry reflections of other objects. Perfect specular surfaces scatter incident light in a single outgoing direction. Mirrors and glass are examples of perfect specular surfaces. Finally, retro-reflective surfaces like velvet or the Earth’s moon scatter light primarily back along the incident direction. Images throughout this chapter will show the differences between these various types of reflection when used in rendered scenes.

Figure 8.1: Reflection from a surface can be generally categorized by the distribution of reflected light from an incident direction (heavy lines): (1) diffuse, (2) glossy specular, (3) nearly-perfect specular, and (4) retro-reflective distributions.

Given a particular category of reflection, the reflectance distribution function may be isotropic or anisotropic. Most objects are isotropic: if you choose a point on the surface and rotate it around its normal axis at that point, the distribution of light reflected doesn’t change. In contrast, anisotropic materials reflect different amounts of light as you rotate them in this way. Examples of anisotropic surfaces include brushed metal, many types of cloth, and compact disks.

Geometric Setting

Reflection computations in pbrt are evaluated in a reflection coordinate system where the two tangent vectors and the normal vector at the point being shaded are aligned with the x, y, and z axes, respectively (Figure 8.2). All direction vectors passed to and returned from the BRDF and BTDF routines will be defined with respect to this coordinate system. It is important to understand this coordinate system in order to understand the BRDF and BTDF implementations in this chapter.

Figure 8.2: The Basic BSDF Interface Setting. The shading coordinate system is defined by the orthonormal basis vectors (s, t, n). We will orient these vectors such that they lie along the x, y, and z axes in this coordinate system. Direction vectors ω in world space are transformed into the shading coordinate system before any of the BRDF or BTDF methods are called.

The shading coordinate system also gives a frame for expressing directions in spherical coordinates (θ, φ); the angle θ is measured from the given direction to the z axis, and φ is the angle formed with the x axis after projection of the direction onto the xy plane. Given a direction vector ω in this coordinate system, it is easy to compute quantities like the cosine of the angle that it forms with the normal direction:

cos θ = (n · ω) = ((0, 0, 1) · ω) = ω_z.

We will provide utility functions to compute this value and some useful variations; their use helps clarify BRDF and BTDF implementations.

<<BSDF Inline Functions>>= 
inline Float CosTheta(const Vector3f &w) { return w.z; }
inline Float Cos2Theta(const Vector3f &w) { return w.z * w.z; }
inline Float AbsCosTheta(const Vector3f &w) { return std::abs(w.z); }
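
As a quick sanity check on these helpers, the following standalone sketch builds a unit direction from known spherical angles and confirms that CosTheta() recovers cos θ. The Float and Vector3f stand-ins and the SphericalDirection() helper are assumptions made here so the snippet compiles outside of pbrt:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins so the snippet is self-contained; in pbrt these
// come from the core headers.
using Float = float;
struct Vector3f { Float x, y, z; };

inline Float CosTheta(const Vector3f &w) { return w.z; }
inline Float AbsCosTheta(const Vector3f &w) { return std::abs(w.z); }

// Hypothetical helper: build a unit direction from spherical angles
// (theta, phi) in the shading frame, with theta measured from +z.
inline Vector3f SphericalDirection(Float theta, Float phi) {
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}
```

Since the z component of the constructed direction is cos θ by definition, CosTheta() returns it directly, with no inverse trigonometry required.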

The value of sin²θ can be computed using the trigonometric identity sin²θ + cos²θ = 1, though we need to be careful to avoid taking the square root of a negative number in the rare case that 1 - Cos2Theta(w) is less than zero due to floating-point round-off error.

<<BSDF Inline Functions>>+=  
inline Float Sin2Theta(const Vector3f &w) {
    return std::max((Float)0, (Float)1 - Cos2Theta(w));
}
inline Float SinTheta(const Vector3f &w) { return std::sqrt(Sin2Theta(w)); }

The tangent of the angle θ can be computed via the identity tan θ = sin θ / cos θ.

<<BSDF Inline Functions>>+=  
inline Float TanTheta(const Vector3f &w) { return SinTheta(w) / CosTheta(w); }
inline Float Tan2Theta(const Vector3f &w) { return Sin2Theta(w) / Cos2Theta(w); }

We can similarly use the shading coordinate system to simplify the calculations for the sine and cosine of the angle φ (Figure 8.3). In the plane of the point being shaded, the vector ω has coordinates (x, y), which are given by r cos φ and r sin φ, respectively. The radius r is sin θ, so

cos φ = x / r = x / sin θ
sin φ = y / r = y / sin θ.

Figure 8.3: The values of sin φ and cos φ can be computed using the circular coordinate equations x = r cos φ and y = r sin φ, where r, the length of the dashed line, is equal to sin θ.

<<BSDF Inline Functions>>+=  
inline Float CosPhi(const Vector3f &w) {
    Float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 1 : Clamp(w.x / sinTheta, -1, 1);
}
inline Float SinPhi(const Vector3f &w) {
    Float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 0 : Clamp(w.y / sinTheta, -1, 1);
}

<<BSDF Inline Functions>>+=  
inline Float Cos2Phi(const Vector3f &w) { return CosPhi(w) * CosPhi(w); }
inline Float Sin2Phi(const Vector3f &w) { return SinPhi(w) * SinPhi(w); }
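
To see the φ helpers in action, this self-contained sketch (the Float, Vector3f, and Clamp stand-ins are assumptions made so it compiles outside of pbrt) reproduces the function chain above and checks that CosPhi() recovers cos φ from a direction with known spherical angles; it also exercises the degenerate case sin θ = 0, where the helpers return the conventional values (1, 0):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Stand-ins so the snippet compiles on its own.
using Float = float;
struct Vector3f { Float x, y, z; };
inline Float Clamp(Float v, Float lo, Float hi) {
    return std::min(std::max(v, lo), hi);
}

inline Float Cos2Theta(const Vector3f &w) { return w.z * w.z; }
inline Float Sin2Theta(const Vector3f &w) {
    return std::max((Float)0, (Float)1 - Cos2Theta(w));
}
inline Float SinTheta(const Vector3f &w) { return std::sqrt(Sin2Theta(w)); }

inline Float CosPhi(const Vector3f &w) {
    Float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 1 : Clamp(w.x / sinTheta, -1, 1);
}
inline Float SinPhi(const Vector3f &w) {
    Float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 0 : Clamp(w.y / sinTheta, -1, 1);
}
```

For the direction (0, 0, 1), sin θ is exactly zero, so the divisions are never evaluated and the functions fall back to the (1, 0) convention.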

The cosine of the angle Δφ between two vectors’ φ values in the shading coordinate system can be found by zeroing the z coordinate of the two vectors to get 2D vectors and then normalizing them. The dot product of these two vectors gives the cosine of the angle between them. The implementation below rearranges the terms a bit for efficiency so that only a single square root operation needs to be performed.

<<BSDF Inline Functions>>+=  
inline Float CosDPhi(const Vector3f &wa, const Vector3f &wb) {
    return Clamp((wa.x * wb.x + wa.y * wb.y) /
                 std::sqrt((wa.x * wa.x + wa.y * wa.y) *
                           (wb.x * wb.x + wb.y * wb.y)),
                 -1, 1);
}
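
As a usage sketch (again with minimal stand-ins for Float, Vector3f, and Clamp, which are assumptions made for self-containment), the following builds two directions whose φ values differ by a known Δφ and checks that CosDPhi() returns cos Δφ independent of their θ values, since the z components are ignored:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Stand-ins so the snippet compiles on its own.
using Float = float;
struct Vector3f { Float x, y, z; };
inline Float Clamp(Float v, Float lo, Float hi) {
    return std::min(std::max(v, lo), hi);
}

inline Float CosDPhi(const Vector3f &wa, const Vector3f &wb) {
    // Dot product of the xy projections, divided by the product of
    // their lengths; a single sqrt covers both normalizations.
    return Clamp((wa.x * wb.x + wa.y * wb.y) /
                 std::sqrt((wa.x * wa.x + wa.y * wa.y) *
                           (wb.x * wb.x + wb.y * wb.y)),
                 -1, 1);
}
```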

There are important conventions and implementation details to keep in mind when reading the code in this chapter and when adding BRDFs and BTDFs to pbrt:

  • The incident light direction ω_i and the outgoing viewing direction ω_o will both be normalized and outward facing after being transformed into the local coordinate system at the surface.
  • By convention in pbrt, the surface normal n always points to the “outside” of the object, which makes it easy to determine if light is entering or exiting transmissive objects: if the incident light direction ω_i is in the same hemisphere as n, then light is entering; otherwise, it is exiting. Therefore, one detail to keep in mind is that the normal may be on the opposite side of the surface than one or both of the ω_i and ω_o direction vectors. Unlike many other renderers, pbrt does not flip the normal to lie on the same side as ω_o.
  • The local coordinate system used for shading may not be exactly the same as the coordinate system returned by the Shape::Intersect() routines from Chapter 3; they can be modified between intersection and shading to achieve effects like bump mapping. See Chapter 9 for examples of this kind of modification.
  • Finally, BRDF and BTDF implementations should not concern themselves with whether ω_i and ω_o lie in the same hemisphere. For example, although a reflective BRDF should in principle detect if the incident direction is above the surface and the outgoing direction is below and always return no reflection in this case, here we will expect the reflection function to instead compute and return the amount of light reflected using the appropriate formulas for its reflection model, ignoring the detail that the directions are not in the same hemisphere. Higher-level code in pbrt will ensure that only reflective or transmissive scattering routines are evaluated as appropriate. The value of this convention will be explained in Section 9.1.
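
The hemisphere test that such higher-level code relies on reduces to a sign comparison of z components in the shading frame, since the normal there is (0, 0, 1). pbrt provides a helper of this flavor; the following minimal sketch (with stand-in types assumed so it compiles on its own) shows the idea:

```cpp
#include <cassert>

// Stand-ins so the snippet compiles on its own.
using Float = float;
struct Vector3f { Float x, y, z; };

// Two directions lie in the same hemisphere with respect to the
// surface exactly when their z components share a sign, because the
// shading-frame normal is (0, 0, 1).
inline bool SameHemisphere(const Vector3f &w, const Vector3f &wp) {
    return w.z * wp.z > 0;
}
```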