10.5 Material Interface and Implementations

With a variety of textures available, we will turn to materials, first introducing the material interface and then a few material implementations. pbrt’s materials all follow a similar form, evaluating textures to get parameter values that are used to initialize their particular BSDF model. Therefore, we will only include a few of their implementations in the text here.

The Material interface is defined by the Material class, which can be found in the file base/material.h. pbrt includes the implementations of 11 materials; there are enough of them that we have collected all of their type names in a fragment that is not included in the text.

<<Material Definition>>= 
class Material : public TaggedPointer<<<Material Types>>> {
  public:
    <<Material Interface>>
};

One of the most important methods that Material implementations must provide is GetBxDF(). It has the following signature:

template <typename TextureEvaluator>
ConcreteBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                     SampledWavelengths &lambda) const;

There are a few things to notice in its declaration. First, it is templated based on a type TextureEvaluator. This class is used by materials to, unsurprisingly, evaluate their textures. We will discuss it further in a page or two, as well as MaterialEvalContext, which serves a similar role to TextureEvalContext.

Most importantly, note the return type, ConcreteBxDF. This type is specific to each Material and should be replaced with the actual BxDF type that the material uses. (For example, the DiffuseMaterial returns a DiffuseBxDF.) Different materials thus have different signatures for their GetBxDF() methods. This is unusual for an interface method in C++; such freely varying return types are not possible with regular virtual functions, though we will see shortly how pbrt handles the variety of them.

Each Material is also responsible for defining the type of BxDF that it returns from its GetBxDF() method with a local type definition for the type BxDF. For example, DiffuseMaterial has

using BxDF = DiffuseBxDF;

in the body of its definition.

The value of defining the interface in this way is that doing so makes it possible to write generic BSDF evaluation code that is templated on the type of material. Such code can then allocate storage for the BxDF on the stack, for whatever type of BxDF the material uses. pbrt’s wavefront renderer, which is described in Chapter 15, takes advantage of this opportunity. (Further details and discussion of its use there are in Section 15.3.9.) A disadvantage of this design is that materials cannot return different BxDF types depending on their parameter values; they are limited to the one that they declare.
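As a concrete illustration, here is a hedged sketch of such templated code; EstimateReflectance is a hypothetical helper rather than part of pbrt, though it only uses the interfaces described in this section.

template <typename ConcreteMaterial, typename TextureEvaluator>
SampledSpectrum EstimateReflectance(const ConcreteMaterial &material,
        TextureEvaluator texEval, MaterialEvalContext ctx,
        SampledWavelengths &lambda, Vector3f wo, Vector3f wi) {
    // ConcreteMaterial::BxDF is the local type definition described above;
    // the BxDF lives in stack storage, with no dynamic allocation.
    typename ConcreteMaterial::BxDF bxdf = material.GetBxDF(texEval, ctx, lambda);
    BSDF bsdf(ctx.ns, ctx.dpdus, &bxdf);  // BSDF only stores a pointer to it
    return bsdf.f(wo, wi);
}

(Such code cannot be instantiated with a material whose BxDF type is void; we will return to that subtlety shortly.)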

The Material class provides a GetBSDF() method that handles the variety of material BxDF return types. It requires some C++ arcana, though it centralizes the complexity of handling the diversity of types returned from the GetBxDF() methods.

Material::GetBSDF() has the same general form as most of the dynamic dispatch method implementations in pbrt. (We have elided almost all of them from the text since most of them are boilerplate code.) Here we define a lambda function, getBSDF, and call the Dispatch() method that Material inherits from TaggedPointer. Recall that Dispatch() uses type information encoded in a 64-bit pointer to determine which concrete material type the Material points to before casting the pointer to that type and passing it to the lambda.
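The following self-contained sketch illustrates the underlying idea; it is a simplification for two types only, not pbrt's actual TaggedPointer implementation.

#include <cstdint>

struct A { int Value() const { return 1; } };
struct B { int Value() const { return 2; } };

class TaggedPtr {
  public:
    TaggedPtr(A *a) : bits(reinterpret_cast<uint64_t>(a) | (1ull << tagShift)) {}
    TaggedPtr(B *b) : bits(reinterpret_cast<uint64_t>(b) | (2ull << tagShift)) {}

    // Decode the tag, cast to the concrete type, and pass the typed pointer
    // to func, which is typically a generic lambda as in Material::GetBSDF().
    template <typename F>
    auto Dispatch(F func) const {
        void *ptr = reinterpret_cast<void *>(bits & ((1ull << tagShift) - 1));
        return (bits >> tagShift) == 1 ? func((A *)ptr) : func((B *)ptr);
    }

  private:
    static constexpr int tagShift = 57;  // high bits unused by 64-bit ABIs
    uint64_t bits;
};

int Value(TaggedPtr tp) {
    return tp.Dispatch([](auto ptr) { return ptr->Value(); });
}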

<<Material Inline Method Definitions>>= 
template <typename TextureEvaluator>
BSDF Material::GetBSDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                       SampledWavelengths &lambda,
                       ScratchBuffer &scratchBuffer) const {
    <<Define getBSDF lambda function for Material::GetBSDF()>>
    return Dispatch(getBSDF);
}

getBSDF is a C++ generic lambda: when it is called, the auto mtl parameter will have a concrete type, that of a pointer to one of the materials enumerated in the <<Material Types>> fragment. Given mtl, then, we can find the concrete type of its material and thence the type of its BxDF. If a material does not return a BxDF, it should use void for its BxDF type definition. In that case, an unset BSDF is returned. (The MixMaterial is the only such Material in pbrt.)

<<Define getBSDF lambda function for Material::GetBSDF()>>= 
auto getBSDF = [&](auto mtl) -> BSDF {
    using ConcreteMtl = typename std::remove_reference_t<decltype(*mtl)>;
    using ConcreteBxDF = typename ConcreteMtl::BxDF;
    if constexpr (std::is_same_v<ConcreteBxDF, void>)
        return BSDF();
    else {
        <<Allocate memory for ConcreteBxDF and return BSDF for material>>
    }
};

The provided ScratchBuffer is used to allocate enough memory to store the material’s BxDF; using it is much more efficient than using C++’s new and delete operators here. That memory is then initialized with the value returned by the material’s GetBxDF() method before the complete BSDF is returned to the caller.

<<Allocate memory for ConcreteBxDF and return BSDF for material>>= 
ConcreteBxDF *bxdf = scratchBuffer.Alloc<ConcreteBxDF>();
*bxdf = mtl->GetBxDF(texEval, ctx, lambda);
return BSDF(ctx.ns, ctx.dpdus, bxdf);

Materials that incorporate subsurface scattering must define a GetBSSRDF() method that follows a similar form. They must also include a using declaration in their class definition that defines a concrete BSSRDF type. (The code for rendering BSSRDFs is included only in the online edition.)

template <typename TextureEvaluator>
ConcreteBSSRDF GetBSSRDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                         SampledWavelengths &lambda) const;

The Material class provides a corresponding GetBSSRDF() method that uses the provided ScratchBuffer to allocate storage for the material-specific BSSRDF.

<<Material Interface>>= 
template <typename TextureEvaluator>
BSSRDF GetBSSRDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                 SampledWavelengths &lambda, ScratchBuffer &buf) const;

The MaterialEvalContext that GetBxDF() and GetBSSRDF() take plays a similar role to other *EvalContext classes: it encapsulates only the values that are necessary for material evaluation. Those values are a superset of the ones used for texture evaluation, so MaterialEvalContext inherits from TextureEvalContext. Doing so has the added advantage that a MaterialEvalContext can be passed directly to the texture evaluation methods.

<<MaterialEvalContext Definition>>= 
struct MaterialEvalContext : public TextureEvalContext {
    <<MaterialEvalContext Public Methods>>
    Vector3f wo;
    Normal3f ns;
    Vector3f dpdus;
};

As before, there is not only a constructor that initializes a MaterialEvalContext from a SurfaceInteraction but also a constructor that takes the values for the members individually (not included here).

<<MaterialEvalContext Public Methods>>= 
MaterialEvalContext() = default;
MaterialEvalContext(const SurfaceInteraction &si)
    : TextureEvalContext(si), wo(si.wo), ns(si.shading.n),
      dpdus(si.shading.dpdu) {}

A TextureEvaluator is a class that is able to evaluate some or all of pbrt’s texture types. One of its methods takes a set of textures and reports whether it is capable of evaluating them, while others actually evaluate textures. On the face of it, there is no obvious need for such a class: why not allow Materials to call the Texture Evaluate() methods directly? This additional layer of abstraction aids performance with the wavefront integrator; it makes it possible to separate materials into those that have lightweight textures and those with heavyweight textures and to process them separately. Doing so is beneficial to performance on the GPU; see Section 15.3.9 for further discussion.

For now we will only define the UniversalTextureEvaluator, which can evaluate all textures. In practice, the indirection it adds is optimized away by the compiler such that it introduces no runtime overhead. It is used with all of pbrt’s integrators other than the one defined in Chapter 15.
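To give a flavor of what a more restricted evaluator might look like, here is a hedged sketch in the spirit of the wavefront renderer's evaluator (pbrt's actual BasicTextureEvaluator, described in Section 15.3.9, differs in its details); ConstantOnlyTextureEvaluator is a hypothetical name.

class ConstantOnlyTextureEvaluator {
  public:
    bool CanEvaluate(std::initializer_list<FloatTexture> ftex,
                     std::initializer_list<SpectrumTexture> stex) const {
        // Claim only constant textures; all others are left for the
        // UniversalTextureEvaluator to handle.
        for (FloatTexture t : ftex)
            if (t && !t.Is<FloatConstantTexture>()) return false;
        for (SpectrumTexture t : stex)
            if (t && !t.Is<SpectrumConstantTexture>()) return false;
        return true;
    }
    Float operator()(FloatTexture tex, TextureEvalContext ctx) {
        // Safe: callers must have checked CanEvaluate() first.
        return tex.Cast<FloatConstantTexture>()->Evaluate(ctx);
    }
    // The SpectrumTexture operator() would follow the same pattern.
};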

<<UniversalTextureEvaluator Definition>>= 
class UniversalTextureEvaluator {
  public:
    <<UniversalTextureEvaluator Public Methods>>
};

TextureEvaluators must provide a CanEvaluate() method that takes lists of FloatTextures and SpectrumTextures. They can then examine the types of the provided textures to determine if they are able to evaluate them. For the universal texture evaluator, the answer is always the same.

<<UniversalTextureEvaluator Public Methods>>= 
bool CanEvaluate(std::initializer_list<FloatTexture>,
                 std::initializer_list<SpectrumTexture>) const {
    return true;
}

TextureEvaluators must also provide operator() method implementations that evaluate a given texture. Thus, given a texture evaluator texEval, material code should use the expression texEval(tex, ctx) rather than tex.Evaluate(ctx). The implementation of this method is again trivial for the universal evaluator. (A corresponding method for spectrum textures is effectively the same and not included here.)

<<UniversalTextureEvaluator Method Definitions>>= 
Float UniversalTextureEvaluator::operator()(FloatTexture tex,
                                            TextureEvalContext ctx) {
    return tex.Evaluate(ctx);
}

Returning to the Material interface, all materials must provide a CanEvaluateTextures() method that takes a texture evaluator. They should return the result of calling its CanEvaluate() method with all of their textures provided. Code that uses Materials is then responsible for ensuring that a Material’s GetBxDF() or GetBSSRDF() method is only called with a texture evaluator that is able to evaluate its textures.

<<Material Interface>>+=  
template <typename TextureEvaluator>
bool CanEvaluateTextures(TextureEvaluator texEval) const;

Materials also may modify the shading normals of objects they are bound to, usually in order to introduce the appearance of greater geometric detail than is actually present. The Material interface has two ways that they may do so, normal mapping and bump mapping.

pbrt’s normal mapping code, which will be described in Section 10.5.3, takes an image that specifies the shading normals. A nullptr value should be returned by this interface method if no normal map is included with a material.

<<Material Interface>>+=  
const Image *GetNormalMap() const;

Alternatively, shading normals may be specified via bump mapping, which takes a displacement function that specifies surface detail with a FloatTexture. A nullptr value should be returned if no such displacement function has been specified.

<<Material Interface>>+=  
FloatTexture GetDisplacement() const;

What HasSubsurfaceScattering() implementations should return is hopefully obvious; this method is used to determine which materials in a scene require the additional processing needed to model that effect.

<<Material Interface>>+= 
bool HasSubsurfaceScattering() const;

10.5.1 Material Implementations

With the preliminaries covered, we will now present a few material implementations. All the Materials in pbrt are fairly basic bridges between Textures and BxDFs, so we will focus here on their basic form and some of the unique details of one of them.

Diffuse Material

DiffuseMaterial is the simplest material implementation and is a good starting point for understanding the material requirements.

<<DiffuseMaterial Definition>>= 
class DiffuseMaterial {
  public:
    <<DiffuseMaterial Type Definitions>>
    <<DiffuseMaterial Public Methods>>
  private:
    <<DiffuseMaterial Private Members>>
};

These are the BxDF and BSSRDF type definitions for DiffuseMaterial. Because this material does not include subsurface scattering, its BSSRDF type is set to void.

<<DiffuseMaterial Type Definitions>>= 
using BxDF = DiffuseBxDF;
using BSSRDF = void;

The constructor initializes the following member variables with provided values, so it is not included here.

<<DiffuseMaterial Private Members>>= 
Image *normalMap;
FloatTexture displacement;
SpectrumTexture reflectance;

The CanEvaluateTextures() method is easy to implement; the various textures used for BSDF evaluation are passed to the given TextureEvaluator. Note that the displacement texture is not included here; if present, it is handled separately by the bump mapping code.

<<DiffuseMaterial Public Methods>>= 
template <typename TextureEvaluator>
bool CanEvaluateTextures(TextureEvaluator texEval) const {
    return texEval.CanEvaluate({}, {reflectance});
}

There is also not very much to GetBxDF(); it evaluates the reflectance texture, clamping the result to the range of valid reflectances before passing it along to the DiffuseBxDF constructor and returning a DiffuseBxDF.

<<DiffuseMaterial Public Methods>>+= 
template <typename TextureEvaluator>
DiffuseBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                    SampledWavelengths &lambda) const {
    SampledSpectrum r = Clamp(texEval(reflectance, ctx, lambda), 0, 1);
    return DiffuseBxDF(r);
}

GetNormalMap() and GetDisplacement() return the corresponding member variables and the remaining methods are trivial; see the source code for details.

Dielectric Material

DielectricMaterial represents a dielectric interface.

<<DielectricMaterial Definition>>= 
class DielectricMaterial {
  public:
    <<DielectricMaterial Type Definitions>>
    <<DielectricMaterial Public Methods>>
  private:
    <<DielectricMaterial Private Members>>
};

It returns a DielectricBxDF and does not include subsurface scattering.

<<DielectricMaterial Type Definitions>>= 
using BxDF = DielectricBxDF;
using BSSRDF = void;

DielectricMaterial has a few more parameters than DiffuseMaterial. The index of refraction is specified with a Spectrum so that it may vary with wavelength. Note also that two roughness values are stored, which allows the specification of an anisotropic microfacet distribution. If the distribution is isotropic, this leads to a minor inefficiency in storage and, shortly, in texture evaluation, since both roughnesses are always evaluated.

<<DielectricMaterial Private Members>>= 
Image *normalMap;
FloatTexture displacement;
FloatTexture uRoughness, vRoughness;
bool remapRoughness;
Spectrum eta;

GetBxDF() follows a similar form to DiffuseMaterial, evaluating various textures and using their results to initialize the returned DielectricBxDF.

<<DielectricMaterial Public Methods>>= 
template <typename TextureEvaluator>
DielectricBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                       SampledWavelengths &lambda) const {
    <<Compute index of refraction for dielectric material>>
    <<Create microfacet distribution for dielectric material>>
    <<Return BSDF for dielectric material>>
}

If the index of refraction is the same for all wavelengths, then all wavelengths will follow the same path if a ray is refracted. Otherwise, they will go in different directions—this is dispersion. In that case, pbrt only follows a single ray path according to the first wavelength in SampledWavelengths rather than tracing multiple rays to track each of them, and a call to SampledWavelengths::TerminateSecondary() is necessary. (See Section 4.5.4 for more information.)

DielectricMaterial therefore calls TerminateSecondary() unless the index of refraction is known to be constant, as determined by checking if eta’s Spectrum type is a ConstantSpectrum. This check does not detect all cases where the sampled spectrum values are all the same, but it catches most of them in practice, and unnecessarily terminating the secondary wavelengths affects performance but not correctness. A bigger shortcoming of the implementation here is that there is no dispersion if light is reflected at a surface and not refracted. In that case, all wavelengths could still be followed. However, how light paths will be sampled at the surface is not known at this point in program execution.

<<Compute index of refraction for dielectric material>>= 
Float sampledEta = eta(lambda[0]);
if (!eta.template Is<ConstantSpectrum>())
    lambda.TerminateSecondary();

It can be convenient to specify a microfacet distribution's roughness with a scalar parameter in $[0,1]$, where values close to zero correspond to near-perfect specular reflection, rather than by specifying $\alpha$ values directly. The RoughnessToAlpha() method performs a mapping that gives reasonably intuitive control over surface appearance. (For example, a roughness of 0.25 maps to $\alpha = \sqrt{0.25} = 0.5$.)

<<TrowbridgeReitzDistribution Public Methods>>+=  
static Float RoughnessToAlpha(Float roughness) { return std::sqrt(roughness); }

The GetBxDF() method then evaluates the roughness textures and remaps the returned values if required.

<<Create microfacet distribution for dielectric material>>= 
Float urough = texEval(uRoughness, ctx), vrough = texEval(vRoughness, ctx);
if (remapRoughness) {
    urough = TrowbridgeReitzDistribution::RoughnessToAlpha(urough);
    vrough = TrowbridgeReitzDistribution::RoughnessToAlpha(vrough);
}
TrowbridgeReitzDistribution distrib(urough, vrough);

Given the index of refraction and microfacet distribution, it is easy to pull the pieces together to return the final BxDF.

<<Return BSDF for dielectric material>>= 
return DielectricBxDF(sampledEta, distrib);

Mix Material

The final material implementation that we will describe in the text is MixMaterial, which stores two other materials and uses a Float-valued texture to blend between them.

<<MixMaterial Definition>>= 
class MixMaterial {
  public:
    <<MixMaterial Type Definitions>>
    <<MixMaterial Public Methods>>
  private:
    <<MixMaterial Private Members>>
};

<<MixMaterial Type Definitions>>= 
using BxDF = void;
using BSSRDF = void;

<<MixMaterial Private Members>>= 
FloatTexture amount;
Material materials[2];

MixMaterial does not cleanly fit into pbrt’s Material abstraction. For example, it is unable to define a single BxDF type that it will return, since its two constituent materials may have different BxDFs, and may themselves be MixMaterials, for that matter. Thus, MixMaterial requires special handling by the code that uses materials. (For example, there is a special case for MixMaterials in the SurfaceInteraction::GetBSDF() method described in Section 10.5.2.)

This is not ideal: as a general point of software design, it would be better to have abstractions that make it possible to provide this functionality without requiring special-case handling in calling code. However, we were unable to find a clean way to do this while still being able to statically reason about the type of BxDF a material will return; that aspect of the Material interface offers enough of a performance benefit that we did not want to change it.

Therefore, when a MixMaterial is encountered, one of its constituent materials is randomly chosen, with probability given by the floating-point amount texture. Thus, a 50/50 mix of two materials is not represented by the average of their respective BSDFs and so forth, but instead by each of them being evaluated half the time. This is effectively the material analog of the stochastic alpha test that was described in Section 7.1.1. The ChooseMaterial() method implements the logic.

<<MixMaterial Public Methods>>= 
template <typename TextureEvaluator>
Material ChooseMaterial(TextureEvaluator texEval, MaterialEvalContext ctx) const {
    Float amt = texEval(amount, ctx);
    if (amt <= 0) return materials[0];
    if (amt >= 1) return materials[1];
    Float u = HashFloat(ctx.p, ctx.wo, materials[0], materials[1]);
    return (amt < u) ? materials[0] : materials[1];
}

Figure 10.20: Effect of Sampling Rate with the MixMaterial. In this scene, the MixMaterial is used to blend between blue and red diffuse materials for the dragon, using an equal weighting for each. (a) With one sample per pixel, there is visible noise in the corresponding pixels since each pixel only includes one of the two constituent materials. (b) With a sufficient number of samples (here, 128), stochastic selection of materials causes no visual harm. In practice, the pixel sampling rates necessary to reduce other forms of error from simulating light transport are almost always enough to resolve stochastic material sampling.

Stochastic selection of materials can introduce noise in images at low sampling rates; see Figure 10.20. However, a few tens of samples are generally plenty to resolve any visual error. Furthermore, this approach brings benefits: sampling and evaluating the resulting BSDF is more efficient than if it were a weighted sum of the BSDFs of the constituent materials.
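The fact that this stochastic estimator is unbiased is easy to check numerically. The following stand-alone program (an illustration only, not pbrt code) averages many stochastic selections between two scalar stand-ins for BSDF values and compares the result with the deterministic blend:

#include <cstdio>
#include <random>

// Selecting between two values with probability amt reproduces the weighted
// blend in expectation, which is why MixMaterial's stochastic selection does
// not bias the rendered image.
int main() {
    double f0 = 0.2, f1 = 0.8, amt = 0.5;  // constituent values and mix amount
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0., 1.);
    double sum = 0;
    const int n = 1000000;
    for (int i = 0; i < n; ++i)
        sum += (amt < u(rng)) ? f0 : f1;  // same comparison as ChooseMaterial()
    std::printf("stochastic average: %.4f  deterministic blend: %.4f\n",
                sum / n, (1 - amt) * f0 + amt * f1);
}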

MixMaterial provides an accessor that makes it possible to traverse all the materials in the scene, including those nested inside a MixMaterial, so that it is possible to perform operations such as determining which types of materials are and are not present in a scene.

<<MixMaterial Public Methods>>+=  
Material GetMaterial(int i) const { return materials[i]; }

A fatal error is issued if the GetBxDF() method is called. A call to GetBSSRDF() is handled similarly, in code not included here.

<<MixMaterial Public Methods>>+= 
template <typename TextureEvaluator>
void GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
             SampledWavelengths &lambda) const {
    LOG_FATAL("MixMaterial::GetBxDF() shouldn't be called");
}

10.5.2 Finding the BSDF at a Surface

Because pbrt’s Integrators use the SurfaceInteraction class to collect the necessary information associated with each intersection point, we will add a GetBSDF() method to this class that handles all the details related to computing the BSDF at its point.

<<SurfaceInteraction Method Definitions>>+=  
BSDF SurfaceInteraction::GetBSDF(const RayDifferential &ray,
        SampledWavelengths &lambda, Camera camera,
        ScratchBuffer &scratchBuffer, Sampler sampler) {
    <<Estimate $(u,v)$ and position differentials at intersection point>>
    <<Resolve MixMaterial if necessary>>
    <<Return unset BSDF if surface has a null material>>
    <<Evaluate normal or bump map, if present>>
    <<Return BSDF for surface interaction>>
}

This method first calls the SurfaceInteraction’s ComputeDifferentials() method to compute information about the projected size of the surface area around the intersection on the image plane for use in texture antialiasing.

<<Estimate $(u,v)$ and position differentials at intersection point>>= 
ComputeDifferentials(ray, camera, sampler.SamplesPerPixel());

As described in Section 10.5.1, if there is a MixMaterial at the intersection point, it is necessary to resolve it to be a regular material. A while loop here ensures that nested MixMaterials are handled correctly.

<<Resolve MixMaterial if necessary>>= 
while (material.Is<MixMaterial>()) {
    MixMaterial *mix = material.Cast<MixMaterial>();
    material = mix->ChooseMaterial(UniversalTextureEvaluator(), *this);
}

If the final material is nullptr, it represents a non-scattering interface between two types of participating media. In this case, a default uninitialized BSDF is returned.

<<Return unset BSDF if surface has a null material>>= 
if (!material) return {};

Otherwise, normal or bump mapping is performed before the BSDF is created.

<<Evaluate normal or bump map, if present>>= 
FloatTexture displacement = material.GetDisplacement();
const Image *normalMap = material.GetNormalMap();
if (displacement || normalMap) {
    <<Get shading $\partial p/\partial u$ and $\partial p/\partial v$ using normal or bump map>>
    Normal3f ns(Normalize(Cross(dpdu, dpdv)));
    SetShadingGeometry(ns, dpdu, dpdv, shading.dndu, shading.dndv, false);
}

The appropriate utility function for normal or bump mapping is called, depending on which technique is to be used.

<<Get shading $\partial p/\partial u$ and $\partial p/\partial v$ using normal or bump map>>= 
Vector3f dpdu, dpdv;
if (normalMap) NormalMap(*normalMap, *this, &dpdu, &dpdv);
else BumpMap(UniversalTextureEvaluator(), displacement, *this, &dpdu, &dpdv);

With differentials both for texture filtering and for shading geometry now settled, the Material::GetBSDF() method can be called. Note that the universal texture evaluator is used both here and previously in the method, as there is no need to distinguish between different texture complexities in this part of the system.

<<Return BSDF for surface interaction>>= 
BSDF bsdf = material.GetBSDF(UniversalTextureEvaluator(), *this, lambda,
                             scratchBuffer);
if (bsdf && GetOptions().forceDiffuse) {
    <<Override bsdf with diffuse equivalent>>
}
return bsdf;

pbrt provides an option to override all the materials in a scene with equivalent diffuse BSDFs; doing so can be useful for some debugging problems. In this case, the hemispherical–directional reflectance is used to initialize a DiffuseBxDF.

<<Override bsdf with diffuse equivalent>>= 
SampledSpectrum r = bsdf.rho(wo, {sampler.Get1D()}, {sampler.Get2D()});
bsdf = BSDF(shading.n, shading.dpdu, scratchBuffer.Alloc<DiffuseBxDF>(r));
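For reference, the hemispherical-directional reflectance that rho() estimates here was introduced in Section 4.3; for a BSDF $f$ and outgoing direction $\omega_{\mathrm{o}}$ it is

$$\rho_{\mathrm{hd}}(\omega_{\mathrm{o}}) = \int_{\mathcal{H}^2(\mathbf{n})} f(\omega_{\mathrm{o}}, \omega_{\mathrm{i}})\,|\cos\theta_{\mathrm{i}}|\,\mathrm{d}\omega_{\mathrm{i}}.$$

BSDF::rho() computes a Monte Carlo estimate of this integral using the provided samples, so the substituted diffuse BSDF reflects roughly the same fraction of incident light as the original BSDF does for the given $\omega_{\mathrm{o}}$.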

The SurfaceInteraction::GetBSSRDF() method, not included here, follows a similar path before calling Material::GetBSSRDF().

10.5.3 Normal Mapping

Normal mapping is a technique that maps tabularized surface normals stored in images to surfaces and uses them to specify shading normals in order to give the appearance of fine geometric detail.

With normal maps, one must choose a coordinate system for the stored normals. While any coordinate system may be chosen, one of the most useful is the local shading coordinate system at each point on a surface, where the $z$ axis is aligned with the surface normal and the tangent vectors are aligned with $x$ and $y$. (This is the same as the reflection coordinate system described in Section 9.1.1.) When that coordinate system is used, the approach is called tangent-space normal mapping. With tangent-space normal mapping, a given normal map can be applied to a variety of shapes, while choosing a coordinate system like object space would closely couple a normal map's encoding to a specific geometric object.

Figure 10.21: (a) A normal map modeling wrinkles for a pillow model. (b) Pillow geometry without normal map. (c) When applied to the pillow, the normal map gives a convincing approximation to more detailed geometry than is actually present. (Scene courtesy of Angelo Ferretti.)

Normal maps are traditionally encoded in RGB images, where red, green, and blue respectively store the $x$, $y$, and $z$ components of the surface normal. When tangent-space normal mapping is used, normal map images are typically predominantly blue, reflecting the fact that the $z$ component of the surface normal has the largest magnitude unless the normal has been substantially perturbed. (See Figure 10.21.)
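A small self-contained sketch of this conventional encoding (the helper types here are for illustration and are not pbrt's): each channel stores $(n_i+1)/2$, so the unperturbed normal $(0,0,1)$ encodes as RGB $(0.5, 0.5, 1)$, the characteristic normal-map blue.

#include <cmath>

struct RGB { float r, g, b; };
struct Normal { float x, y, z; };

// Encode a unit tangent-space normal into [0,1] channel values.
RGB EncodeNormal(Normal n) {
    return {(n.x + 1) / 2, (n.y + 1) / 2, (n.z + 1) / 2};
}

// Decode back to [-1,1] and renormalize, since quantization and bilinear
// filtering leave the reconstructed vector slightly non-unit-length.
Normal DecodeNormal(RGB c) {
    float x = 2 * c.r - 1, y = 2 * c.g - 1, z = 2 * c.b - 1;
    float len = std::sqrt(x * x + y * y + z * z);
    return {x / len, y / len, z / len};
}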

This RGB encoding brings us to an unfortunate casualty of the adoption of spectral rendering in this version of pbrt: while pbrt's SpectrumTextures previously returned RGB colors, they now return point-sampled spectral values. If an RGB image map is used for a spectrum texture, it is not possible to exactly reconstruct the original RGB colors; there will unavoidably be error in the Monte Carlo estimator that must be evaluated to find RGB. Introducing noise in the orientations of surface normals is unacceptable, since it would lead to systematic bias in rendered images. Consider a bumpy shiny object: error in the surface normal would lead to scattered rays intersecting objects that they would never intersect given the correct normals, which could cause arbitrarily large error.

We might avoid that problem by augmenting the SpectrumTexture interface with a method that returns RGB color, by introducing a separate RGBTexture interface and corresponding texture implementations, or by introducing a NormalTexture that returns normals directly. Any of these could cleanly support normal mapping, though all would require a significant amount of additional code.

Because the capability of directly looking up RGB values is only needed for normal mapping, the NormalMap() function therefore takes an Image to specify the normal map. It assumes that the first three channels of the image represent red, green, and blue. With this approach we have lost the benefits of being able to scale and mix textures as well as the ability to apply a variety of mapping functions to compute texture coordinates. While that is unfortunate, those capabilities are less often used with normal maps than with other types of textures, and so we prefer not to make the Texture interfaces more complex purely for normal mapping.

<<Normal Mapping Function Definitions>>= 
void NormalMap(const Image &normalMap, const NormalBumpEvalContext &ctx,
               Vector3f *dpdu, Vector3f *dpdv) {
    <<Get normalized normal vector from normal map>>
    <<Transform tangent-space normal to rendering space>>
    <<Find $\partial p/\partial u$ and $\partial p/\partial v$ that give shading normal>>
}

Both NormalMap() and BumpMap() take a NormalBumpEvalContext to specify the local geometric information for the point where the shading geometry is being computed.

<<NormalBumpEvalContext Definition>>= 
struct NormalBumpEvalContext {
    <<NormalBumpEvalContext Public Methods>>
    <<NormalBumpEvalContext Public Members>>
};

As usual, it has a constructor, not included here, that performs initialization given a SurfaceInteraction.

<<NormalBumpEvalContext Public Members>>= 
Point3f p;
Point2f uv;
Normal3f n;
struct {
    Normal3f n;
    Vector3f dpdu, dpdv;
    Normal3f dndu, dndv;
} shading;
Float dudx = 0, dudy = 0, dvdx = 0, dvdy = 0;
Vector3f dpdx, dpdy;

It also provides a conversion operator to TextureEvalContext, which only needs a subset of the values stored in NormalBumpEvalContext.

<<NormalBumpEvalContext Public Methods>>= 
operator TextureEvalContext() const {
    return TextureEvalContext(p, dpdx, dpdy, n, uv, dudx, dudy,
                              dvdx, dvdy);
}

The first step in the normal mapping computation is to read the tangent-space normal vector from the image map. The image wrap mode is hard-coded here since Repeat is almost always the desired mode, though it would be easy to allow the wrap mode to be set via a parameter. Note also that the v coordinate is inverted, again following the image texture coordinate convention discussed in Section 10.4.2.

Normal maps are traditionally encoded in fixed-point image formats with pixel values that range from 0 to 1. This encoding allows the use of compact 8-bit pixel representations as well as compressed image formats that are supported by GPUs. Values read from the image must therefore be remapped to the range $[-1,1]$ to reconstruct an associated normal vector. The normal vector must be renormalized, as both the quantization in the image pixel format and the bilinear interpolation may have caused it to be non-unit-length.

<<Get normalized normal vector from normal map>>= 
WrapMode2D wrap(WrapMode::Repeat);
Point2f uv(ctx.uv[0], 1 - ctx.uv[1]);
Vector3f ns(2 * normalMap.BilerpChannel(uv, 0, wrap) - 1,
            2 * normalMap.BilerpChannel(uv, 1, wrap) - 1,
            2 * normalMap.BilerpChannel(uv, 2, wrap) - 1);
ns = Normalize(ns);

In order to transform the normal to rendering space, a Frame can be used to specify a coordinate system where the original shading normal is aligned with the $+z$ axis. Transforming the tangent-space normal into this coordinate system gives the rendering-space normal.

<<Transform tangent-space normal to rendering space>>= 
Frame frame = Frame::FromZ(ctx.shading.n);
ns = frame.FromLocal(ns);

This function returns partial derivatives of the surface that account for the shading normal rather than returning the shading normal itself. Suitable partial derivatives can be found in two steps. First, a call to GramSchmidt() with the original $\partial p/\partial u$ and the new shading normal $\mathbf{n}_s$ gives the closest vector to $\partial p/\partial u$ that is perpendicular to $\mathbf{n}_s$. (Recall that for a unit vector $w$, GramSchmidt(v, w) returns the orthogonalized vector $v - (v \cdot w)\,w$.) $\partial p/\partial v$ is then found by taking the cross product of $\mathbf{n}_s$ and the new $\partial p/\partial u$, giving an orthogonal coordinate system. Both of these vectors are respectively scaled to have the same length as the original $\partial p/\partial u$ and $\partial p/\partial v$ vectors.

<<Find $\partial p/\partial u$ and $\partial p/\partial v$ that give shading normal>>= 
Float ulen = Length(ctx.shading.dpdu), vlen = Length(ctx.shading.dpdv);
*dpdu = Normalize(GramSchmidt(ctx.shading.dpdu, ns)) * ulen;
*dpdv = Normalize(Cross(ns, *dpdu)) * vlen;

10.5.4 Bump Mapping

Another way to define shading normals is via a FloatTexture that defines a displacement at each point on the surface: each point $p$ has a displaced point $p'$ associated with it, defined by $p' = p + d(p)\,\mathbf{n}(p)$, where $d(p)$ is the offset returned by the displacement texture at $p$ and $\mathbf{n}(p)$ is the surface normal at $p$ (Figure 10.22). We can use this texture to compute shading normals so that the surface appears as if it actually had been offset by the displacement function, without modifying its geometry. This process is called bump mapping. For relatively small displacement functions, the visual effect of bump mapping can be quite convincing.

Figure 10.22: A displacement function associated with a material defines a new surface based on the old one, offset by the displacement amount along the normal at each point. pbrt does not compute a geometric representation of this displaced surface in the BumpMap() function, but instead uses it to compute shading normals for bump mapping.

An example of bump mapping is shown in Figure 10.23, which shows part of the San Miguel scene rendered with and without bump mapping. There, the bump map gives the appearance of a substantial amount of detail in the walls and floors that is not actually present in the geometric model. Figure 10.24 shows one of the image maps used to define the bump function in Figure 10.23.

Figure 10.23: Detail of the San Miguel scene, rendered (a) without bump mapping and (b) with bump mapping. Bump mapping substantially increases the apparent geometric complexity of the model, without the increased rendering time and memory use that would result from a geometric representation with the equivalent amount of small-scale detail. (Scene courtesy of Guillermo M. Leal Llaguno.)

Figure 10.24: The image used as a bump map for the tiles in the San Miguel rendering in Figure 10.23.

The BumpMap() function is responsible for computing the effect of bump mapping at the point being shaded, given a particular displacement texture. Its implementation is based on finding an approximation to the partial derivatives $\partial p/\partial u$ and $\partial p/\partial v$ of the displaced surface and using them in place of the surface's actual partial derivatives to compute the shading normal. Assume that the original surface is defined by a parametric function $p(u,v)$ and the bump offset function is a scalar function $d(u,v)$. Then the displaced surface is given by

$$p'(u,v) = p(u,v) + d(u,v)\,\mathbf{n}(u,v),$$

where $\mathbf{n}(u,v)$ is the surface normal at $(u,v)$.

<<Bump Mapping Function Definitions>>= 
template <typename TextureEvaluator>
void BumpMap(TextureEvaluator texEval, FloatTexture displacement,
             const NormalBumpEvalContext &ctx, Vector3f *dpdu,
             Vector3f *dpdv) {
    <<Compute offset positions and evaluate displacement texture>>
    <<Compute bump-mapped differential geometry>>
}

The partial derivatives of $p'$ can be found using the chain rule. For example, the partial derivative in $u$ is

$$\frac{\partial p'}{\partial u} = \frac{\partial p(u,v)}{\partial u} + \frac{\partial d(u,v)}{\partial u}\,\mathbf{n}(u,v) + d(u,v)\,\frac{\partial \mathbf{n}(u,v)}{\partial u}.$$

We have already computed the value of $\partial p(u,v)/\partial u$; it is $\partial p/\partial u$ and is available in the NormalBumpEvalContext structure, which also stores the surface normal $\mathbf{n}(u,v)$ and the partial derivative $\partial \mathbf{n}(u,v)/\partial u = \partial \mathbf{n}/\partial u$. The displacement function $d(u,v)$ can be readily evaluated, which leaves $\partial d(u,v)/\partial u$ as the only remaining term.

There are two possible approaches to finding the values of $\partial d(u,v)/\partial u$ and $\partial d(u,v)/\partial v$. One option would be to augment the FloatTexture interface with a method to compute partial derivatives of the underlying texture function. For example, for image map textures mapped to the surface directly using its $(u,v)$ parameterization, these partial derivatives can be computed by subtracting adjacent texels in the $u$ and $v$ directions. However, this approach is difficult to extend to complex procedural textures like some of the ones defined earlier in this chapter. Therefore, pbrt directly computes these values with forward differencing, without modifying the FloatTexture interface.

Recall the definition of the partial derivative:

$$\frac{\partial d(u,v)}{\partial u} = \lim_{\Delta_u \to 0} \frac{d(u + \Delta_u, v) - d(u,v)}{\Delta_u}.$$

Forward differencing approximates the value using a finite value of $\Delta_u$ and evaluating $d(u,v)$ at two positions. Thus, the final expression for $\partial p'/\partial u$ is the following (for simplicity, we have dropped the explicit dependence on $(u,v)$ for some of the terms):

$$\frac{\partial p'}{\partial u} \approx \frac{\partial p}{\partial u} + \frac{d(u + \Delta_u, v) - d(u,v)}{\Delta_u}\,\mathbf{n} + d(u,v)\,\frac{\partial \mathbf{n}}{\partial u}. \qquad (10.12)$$

Interestingly enough, most bump-mapping implementations ignore the final term under the assumption that $d(u,v)$ is expected to be relatively small. (Since bump mapping is mostly useful for approximating small perturbations, this is a reasonable assumption.) The fact that many renderers do not compute the values $\partial \mathbf{n}/\partial u$ and $\partial \mathbf{n}/\partial v$ may also have something to do with this simplification. An implication of ignoring the last term is that the magnitude of the displacement function then does not affect the bump-mapped partial derivatives; adding a constant value to it globally does not affect the final result, since only differences of the bump function affect it. pbrt computes all three terms since it has $\partial \mathbf{n}/\partial u$ and $\partial \mathbf{n}/\partial v$ readily available, although in practice this final term rarely makes a visually noticeable difference.
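With that final term dropped, the simplified approximation used by those implementations reduces to

$$\frac{\partial p'}{\partial u} \approx \frac{\partial p}{\partial u} + \frac{d(u + \Delta_u, v) - d(u,v)}{\Delta_u}\,\mathbf{n}.$$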

<<Compute offset positions and evaluate displacement texture>>= 
TextureEvalContext shiftedCtx = ctx;
<<Shift shiftedCtx du in the u direction>>
Float uDisplace = texEval(displacement, shiftedCtx);
<<Shift shiftedCtx dv in the v direction>>
Float vDisplace = texEval(displacement, shiftedCtx);
Float displace = texEval(displacement, ctx);

One remaining issue is how to choose the offsets $\Delta_u$ and $\Delta_v$ for the finite differencing computations. They should be small enough that fine changes in $d(u,v)$ are captured but large enough that the available floating-point precision is sufficient to give a good result. Here, we will choose $\Delta_u$ and $\Delta_v$ values that lead to an offset that is about half the image-space pixel sample spacing and use them to update the appropriate member variables in the TextureEvalContext to reflect a shift to the offset position.

<<Shift shiftedCtx du in the u direction>>= 
Float du = .5f * (std::abs(ctx.dudx) + std::abs(ctx.dudy));
if (du == 0) du = .0005f;
shiftedCtx.p = ctx.p + du * ctx.shading.dpdu;
shiftedCtx.uv = ctx.uv + Vector2f(du, 0.f);

The <<Shift shiftedCtx dv in the v direction>> fragment is nearly the same as the fragment that shifts du, so it is not included here.

Given the new positions and the displacement texture’s values at them, the partial derivatives can be computed directly using Equation (10.12):

<<Compute bump-mapped differential geometry>>= 
*dpdu = ctx.shading.dpdu +
        (uDisplace - displace) / du * Vector3f(ctx.shading.n) +
        displace * Vector3f(ctx.shading.dndu);
*dpdv = ctx.shading.dpdv +
        (vDisplace - displace) / dv * Vector3f(ctx.shading.n) +
        displace * Vector3f(ctx.shading.dndv);