6.5 Triangle Meshes

The triangle is one of the most commonly used shapes in computer graphics; complex scenes may be modeled using millions of triangles to achieve great detail. (Figure 6.11 shows an image of a complex triangle mesh of over four million triangles.)

Figure 6.11: Ganesha Model. This triangle mesh contains over four million individual triangles. It was created from a real statue using a 3D scanner that uses structured light to determine shapes of objects.

While a natural representation would be to have a Triangle shape implementation where each triangle stored the positions of its three vertices, a more memory-efficient representation is to separately store entire triangle meshes with an array of vertex positions where each individual triangle just stores three offsets into this array for its three vertices. To see why this is the case, consider the celebrated Euler–Poincaré formula, which relates the number of vertices $V$, edges $E$, and faces $F$ on closed discrete meshes as

$$ V - E + F = 2(1 - g), $$

where $g \in \mathbb{N}$ is the genus of the mesh. The genus is usually a small number and can be interpreted as the number of “handles” in the mesh (analogous to a handle of a teacup). On a triangle mesh, the number of edges and faces is furthermore related by the identity

$$ E = \tfrac{3}{2} F. $$

This can be seen by dividing each edge into two parts associated with the two adjacent triangles. There are $3F$ such half-edges, and all colocated pairs constitute the $E$ mesh edges. For large closed triangle meshes, the overall effect of the genus usually becomes negligible and we can combine the previous two equations (with $g = 0$) to obtain

$$ F \approx 2V. $$
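
For reference, the combination proceeds by setting $g = 0$ and substituting $E = \tfrac{3}{2}F$ into the Euler–Poincaré formula:

$$ V - \tfrac{3}{2}F + F = 2 \quad\Longrightarrow\quad F = 2V - 4 \approx 2V \ \text{for large } V. $$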

In other words, there are approximately twice as many faces as vertices. Since each face references three vertices, every vertex is (on average) referenced a total of six times. Thus, when vertices are shared, the total amortized storage required per triangle will be 12 bytes of memory for the offsets (4 bytes each for three 32-bit integer offsets) plus half of the storage for one vertex—6 bytes, assuming three 4-byte floats are used to store the vertex position—for a total of 18 bytes per triangle. This is much better than the 36 bytes per triangle that storing the three positions directly would require. The relative storage savings are even better when there are per-vertex surface normals or texture coordinates in a mesh.
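
Summarized as a quick calculation using the 4-byte quantities above:

$$ \underbrace{3 \times 4}_{\text{indices}} + \underbrace{\tfrac{1}{2} \times 3 \times 4}_{\text{shared position}} = 18 \ \text{bytes per triangle}, \qquad \text{versus } 3 \times 3 \times 4 = 36 \ \text{bytes unshared}. $$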

6.5.1 Mesh Representation and Storage

pbrt uses the TriangleMesh class to store the shared information about a triangle mesh. It is defined in the files util/mesh.h and util/mesh.cpp.

<<TriangleMesh Definition>>= 
class TriangleMesh {
  public:
    <<TriangleMesh Public Methods>> 
    TriangleMesh(const Transform &renderFromObject, bool reverseOrientation,
                 std::vector<int> vertexIndices, std::vector<Point3f> p,
                 std::vector<Vector3f> S, std::vector<Normal3f> N,
                 std::vector<Point2f> uv, std::vector<int> faceIndices,
                 Allocator alloc);
    std::string ToString() const;
    bool WritePLY(std::string filename) const;
    static void Init(Allocator alloc);
    <<TriangleMesh Public Members>> 
    int nTriangles, nVertices;
    const int *vertexIndices = nullptr;
    const Point3f *p = nullptr;
    const Normal3f *n = nullptr;
    const Vector3f *s = nullptr;
    const Point2f *uv = nullptr;
    bool reverseOrientation, transformSwapsHandedness;
};

In addition to the mesh vertex positions and vertex indices, per-vertex normals n, tangent vectors s, and texture coordinates uv may be provided. The corresponding vectors should be empty if there are no such values or should be the same size as p otherwise.

<<TriangleMesh Method Definitions>>= 
TriangleMesh::TriangleMesh(
        const Transform &renderFromObject, bool reverseOrientation,
        std::vector<int> indices, std::vector<Point3f> p,
        std::vector<Vector3f> s, std::vector<Normal3f> n,
        std::vector<Point2f> uv, std::vector<int> faceIndices, Allocator alloc)
    : nTriangles(indices.size() / 3), nVertices(p.size()) {
    <<Initialize mesh vertexIndices>> 
    <<Transform mesh vertices to rendering space and initialize mesh p>> 
    for (Point3f &pt : p)
        pt = renderFromObject(pt);
    this->p = point3BufferCache->LookupOrAdd(p, alloc);
    <<Remainder of TriangleMesh constructor>> 
    this->reverseOrientation = reverseOrientation;
    this->transformSwapsHandedness = renderFromObject.SwapsHandedness();
    if (!uv.empty()) {
        CHECK_EQ(nVertices, uv.size());
        this->uv = point2BufferCache->LookupOrAdd(uv, alloc);
    }
    if (!n.empty()) {
        CHECK_EQ(nVertices, n.size());
        for (Normal3f &nn : n) {
            nn = renderFromObject(nn);
            if (reverseOrientation)
                nn = -nn;
        }
        this->n = normal3BufferCache->LookupOrAdd(n, alloc);
    }
    if (!s.empty()) {
        CHECK_EQ(nVertices, s.size());
        for (Vector3f &ss : s)
            ss = renderFromObject(ss);
        this->s = vector3BufferCache->LookupOrAdd(s, alloc);
    }
    if (!faceIndices.empty()) {
        CHECK_EQ(nTriangles, faceIndices.size());
        this->faceIndices = intBufferCache->LookupOrAdd(faceIndices, alloc);
    }
    // Make sure that we don't have too much stuff to be using integers to
    // index into things.
    CHECK_LE(p.size(), std::numeric_limits<int>::max());
    // We could be clever and check indices.size() / 3 if we were careful
    // to promote to a 64-bit int before multiplying by 3 when we look up
    // in the indices array...
    CHECK_LE(indices.size(), std::numeric_limits<int>::max());
}

The mesh data is made available via public member variables; as with things like coordinates of points or rays’ directions, there would be little benefit and some bother from information hiding in this case.

<<TriangleMesh Public Members>>= 
int nTriangles, nVertices;
const int *vertexIndices = nullptr;
const Point3f *p = nullptr;

Although its constructor takes std::vector parameters, TriangleMesh stores plain pointers to its data arrays. The vertexIndices pointer points to 3 * nTriangles values, and the per-vertex pointers, if not nullptr, point to nVertices values.

We chose this design so that different TriangleMeshes could potentially point to the same arrays in memory in the case that they were both given the same values for some or all of their parameters. Although pbrt offers capabilities for object instancing, where multiple copies of the same geometry can be placed in the scene with different transformation matrices (e.g., via the TransformedPrimitive that is described in Section 7.1.2), the scenes provided to it do not always make full use of this capability. For example, with the landscape scene in Figures 5.11 and 7.2, over 400 MB of memory is saved by detecting such redundant arrays.

The BufferCache class handles the details of storing a single unique copy of each buffer provided to it. Its LookupOrAdd() method, to be defined shortly, takes a std::vector of the type it manages and returns a pointer to memory that stores the same values.

<<Initialize mesh vertexIndices>>= 
vertexIndices = intBufferCache->LookupOrAdd(indices, alloc);

The BufferCaches are made available through global variables in the pbrt namespace. Additional ones, not included here, handle normals, tangent vectors, and texture coordinates.

<<BufferCache Global Declarations>>= 
extern BufferCache<int> *intBufferCache;
extern BufferCache<Point3f> *point3BufferCache;

The BufferCache class is templated based on the array element type that it stores.

<<BufferCache Definition>>= 
template <typename T> class BufferCache { public: <<BufferCache Public Methods>> 
const T *LookupOrAdd(pstd::span<const T> buf, Allocator alloc) { <<Return pointer to data if buf contents are already in the cache>> 
Buffer lookupBuffer(buf.data(), buf.size()); int shardIndex = uint32_t(lookupBuffer.hash) >> (32 - logShards); mutex[shardIndex].lock_shared(); if (auto iter = cache[shardIndex].find(lookupBuffer); iter != cache[shardIndex].end()) { const T *ptr = iter->ptr; mutex[shardIndex].unlock_shared(); return ptr; }
<<Add buf contents to cache and return pointer to cached copy>> 
mutex[shardIndex].unlock_shared(); T *ptr = alloc.allocate_object<T>(buf.size()); std::copy(buf.begin(), buf.end(), ptr); mutex[shardIndex].lock(); <<Handle the case of another thread adding the buffer first>> 
if (auto iter = cache[shardIndex].find(lookupBuffer); iter != cache[shardIndex].end()) { const T *cachePtr = iter->ptr; mutex[shardIndex].unlock(); alloc.deallocate_object(ptr, buf.size()); return cachePtr; }
cache[shardIndex].insert(Buffer(ptr, buf.size())); mutex[shardIndex].unlock(); return ptr;
} size_t BytesUsed() const { return bytesUsed; }
private: <<BufferCache::Buffer Definition>> 
struct Buffer { <<BufferCache::Buffer Public Methods>> 
Buffer(const T *ptr, size_t size) : ptr(ptr), size(size) { hash = HashBuffer(ptr, size); } bool operator==(const Buffer &b) const { return size == b.size && hash == b.hash && std::memcmp(ptr, b.ptr, size * sizeof(T)) == 0; }
const T *ptr = nullptr; size_t size = 0, hash; };
<<BufferCache::BufferHasher Definition>> 
struct BufferHasher { size_t operator()(const Buffer &b) const { return b.hash; } };
<<BufferCache Private Members>> 
static constexpr int logShards = 6; static constexpr int nShards = 1 << logShards; std::shared_mutex mutex[nShards]; std::unordered_set<Buffer, BufferHasher> cache[nShards];
};

BufferCache allows concurrent use by multiple threads so that multiple meshes can be added to the scene in parallel; the scene construction code in Appendix C takes advantage of this capability. While a single mutex could be used to manage access to it, contention over that mutex by multiple threads can inhibit concurrency, reducing the benefits of multi-threading. Therefore, the cache is broken into 64 independent shards, each holding a subset of the entries. Each shard has its own mutex, allowing different threads to concurrently access different shards.
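
To illustrate the sharding idea on its own, here is a minimal standalone sketch (independent of pbrt; the class and method names are hypothetical) of a set whose entries are distributed across 64 shards, each protected by its own reader–writer mutex:

#include <array>
#include <cstddef>
#include <functional>
#include <mutex>
#include <shared_mutex>
#include <unordered_set>

// Hypothetical sketch: values are hashed to one of 64 shards so that threads
// working on different shards never contend for the same lock.
class ShardedIntSet {
  public:
    bool Contains(int value) const {
        size_t shard = ShardFor(value);
        std::shared_lock lock(mutex[shard]);  // shared: many concurrent readers
        return items[shard].count(value) > 0;
    }
    void Add(int value) {
        size_t shard = ShardFor(value);
        std::unique_lock lock(mutex[shard]);  // exclusive: one writer per shard
        items[shard].insert(value);
    }

  private:
    static constexpr int logShards = 6;
    static constexpr int nShards = 1 << logShards;
    size_t ShardFor(int value) const {
        // A simple modulus is used here; BufferCache instead takes the top
        // logShards bits of its 32-bit buffer hash.
        return std::hash<int>{}(value) % nShards;
    }
    mutable std::array<std::shared_mutex, nShards> mutex;
    std::array<std::unordered_set<int>, nShards> items;
};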

<<BufferCache Private Members>>= 
static constexpr int logShards = 6;
static constexpr int nShards = 1 << logShards;
std::shared_mutex mutex[nShards];
std::unordered_set<Buffer, BufferHasher> cache[nShards];

Buffer is a small helper class that wraps an allocation managed by the BufferCache.

<<BufferCache::Buffer Definition>>= 
struct Buffer {
    <<BufferCache::Buffer Public Methods>> 
    Buffer(const T *ptr, size_t size) : ptr(ptr), size(size) {
        hash = HashBuffer(ptr, size);
    }
    bool operator==(const Buffer &b) const {
        return size == b.size && hash == b.hash &&
               std::memcmp(ptr, b.ptr, size * sizeof(T)) == 0;
    }
    const T *ptr = nullptr;
    size_t size = 0, hash;
};

The Buffer constructor computes the buffer’s hash, which is stored in a member variable.

<<BufferCache::Buffer Public Methods>>= 
Buffer(const T *ptr, size_t size) : ptr(ptr), size(size) {
    hash = HashBuffer(ptr, size);
}

An equality operator, which is required by the std::unordered_set, only returns true if both buffers are the same size and store the same values.

<<BufferCache::Buffer Public Methods>>+= 
bool operator==(const Buffer &b) const {
    return size == b.size && hash == b.hash &&
           std::memcmp(ptr, b.ptr, size * sizeof(T)) == 0;
}

BufferHasher is another helper class, used by std::unordered_set. It returns the buffer’s already-computed hash.

<<BufferCache::BufferHasher Definition>>= 
struct BufferHasher {
    size_t operator()(const Buffer &b) const { return b.hash; }
};

The BufferCache LookupOrAdd() method checks to see if the values stored in the provided buffer are already in the cache and returns a pointer to them if so. Otherwise, it allocates memory to store them and returns a pointer to it.

<<BufferCache Public Methods>>= 
const T *LookupOrAdd(pstd::span<const T> buf, Allocator alloc) {
    <<Return pointer to data if buf contents are already in the cache>> 
    Buffer lookupBuffer(buf.data(), buf.size());
    int shardIndex = uint32_t(lookupBuffer.hash) >> (32 - logShards);
    mutex[shardIndex].lock_shared();
    if (auto iter = cache[shardIndex].find(lookupBuffer);
        iter != cache[shardIndex].end()) {
        const T *ptr = iter->ptr;
        mutex[shardIndex].unlock_shared();
        return ptr;
    }
    <<Add buf contents to cache and return pointer to cached copy>> 
    mutex[shardIndex].unlock_shared();
    T *ptr = alloc.allocate_object<T>(buf.size());
    std::copy(buf.begin(), buf.end(), ptr);
    mutex[shardIndex].lock();
    <<Handle the case of another thread adding the buffer first>> 
    if (auto iter = cache[shardIndex].find(lookupBuffer);
        iter != cache[shardIndex].end()) {
        const T *cachePtr = iter->ptr;
        mutex[shardIndex].unlock();
        alloc.deallocate_object(ptr, buf.size());
        return cachePtr;
    }
    cache[shardIndex].insert(Buffer(ptr, buf.size()));
    mutex[shardIndex].unlock();
    return ptr;
}

The pstd::span’s contents need to be wrapped in a Buffer instance in order to search for a matching buffer in the cache. The buffer’s pointer is returned if it is already present. Because the cache is only being read and not modified at this point, the lock_shared() capability of std::shared_mutex is used, allowing multiple threads to read the hash table concurrently.

<<Return pointer to data if buf contents are already in the cache>>= 
Buffer lookupBuffer(buf.data(), buf.size());
int shardIndex = uint32_t(lookupBuffer.hash) >> (32 - logShards);
mutex[shardIndex].lock_shared();
if (auto iter = cache[shardIndex].find(lookupBuffer);
    iter != cache[shardIndex].end()) {
    const T *ptr = iter->ptr;
    mutex[shardIndex].unlock_shared();
    return ptr;
}

Otherwise, memory is allocated using the allocator to store the buffer, and the values are copied from the provided span before the Buffer is added to the cache. An exclusive lock to the mutex must be held in order to modify the cache; one is acquired by giving up the shared lock and then calling the regular lock() method.

<<Add buf contents to cache and return pointer to cached copy>>= 
mutex[shardIndex].unlock_shared();
T *ptr = alloc.allocate_object<T>(buf.size());
std::copy(buf.begin(), buf.end(), ptr);
mutex[shardIndex].lock();
<<Handle the case of another thread adding the buffer first>> 
if (auto iter = cache[shardIndex].find(lookupBuffer);
    iter != cache[shardIndex].end()) {
    const T *cachePtr = iter->ptr;
    mutex[shardIndex].unlock();
    alloc.deallocate_object(ptr, buf.size());
    return cachePtr;
}
cache[shardIndex].insert(Buffer(ptr, buf.size()));
mutex[shardIndex].unlock();
return ptr;

It is possible that another thread may have added the buffer to the cache before the current thread is able to; if the same buffer is being added by multiple threads concurrently, then one will end up acquiring the exclusive lock before the other. In that rare case, a pointer to the already-added buffer is returned and the memory allocated by this thread is released.

<<Handle the case of another thread adding the buffer first>>= 
if (auto iter = cache[shardIndex].find(lookupBuffer);
    iter != cache[shardIndex].end()) {
    const T *cachePtr = iter->ptr;
    mutex[shardIndex].unlock();
    alloc.deallocate_object(ptr, buf.size());
    return cachePtr;
}

Returning now to the TriangleMesh constructor, the vertex positions are processed next. Unlike the other shapes, which leave the shape description in object space and then transform incoming rays from rendering space to object space, triangle meshes transform the shape into rendering space. This saves the work of transforming incoming rays into object space as well as the work of transforming the intersection’s geometric representation out to rendering space, and it is worthwhile because the mesh can be transformed once at startup rather than having rays transformed many times during rendering. Using this approach with quadrics is more complicated, although possible—see Exercise 6.8.8 at the end of the chapter.

The vertex positions are also provided to the buffer cache, though only after the rendering-from-object transformation has been applied. Because the positions have been transformed to rendering space, this cache lookup is rarely successful. The hit rate would likely be higher if positions were left in object space, though doing so would require additional computation to transform vertex positions when they were accessed. Vertex indices and uv texture coordinates fare better with the buffer cache, however.

<<Transform mesh vertices to rendering space and initialize mesh p>>= 
for (Point3f &pt : p)
    pt = renderFromObject(pt);
this->p = point3BufferCache->LookupOrAdd(p, alloc);

We will omit the remainder of the TriangleMesh constructor, as handling the other per-vertex buffer types is similar to how the positions are processed. Its remaining member variables are below. In addition to the remainder of the mesh vertex and face data, the TriangleMesh records whether the normals should be flipped by way of the values of reverseOrientation and transformSwapsHandedness. Because these two have the same value for all triangles in a mesh, memory can be saved by storing them once with the mesh itself rather than redundantly with each of the triangles.

<<TriangleMesh Public Members>>+= 
const Normal3f *n = nullptr;
const Vector3f *s = nullptr;
const Point2f *uv = nullptr;
bool reverseOrientation, transformSwapsHandedness;

6.5.2 Triangle Class

The Triangle class actually implements the Shape interface. It represents a single triangle.

<<Triangle Definition>>= 
class Triangle { public: <<Triangle Public Methods>> 
static pstd::vector<Shape> CreateTriangles(const TriangleMesh *mesh, Allocator alloc); Triangle(int meshIndex, int triIndex) : meshIndex(meshIndex), triIndex(triIndex) {} static void Init(Allocator alloc); PBRT_CPU_GPU Bounds3f Bounds() const; PBRT_CPU_GPU pstd::optional<ShapeIntersection> Intersect(const Ray &ray, Float tMax = Infinity) const; PBRT_CPU_GPU bool IntersectP(const Ray &ray, Float tMax = Infinity) const; Float Area() const { <<Get triangle vertices in p0, p1, and p2>> 
const TriangleMesh *mesh = GetMesh(); const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
return 0.5f * Length(Cross(p1 - p0, p2 - p0)); } PBRT_CPU_GPU DirectionCone NormalBounds() const; std::string ToString() const; static TriangleMesh *CreateMesh(const Transform *renderFromObject, bool reverseOrientation, const ParameterDictionary &parameters, const FileLoc *loc, Allocator alloc); Float SolidAngle(Point3f p) const { <<Get triangle vertices in p0, p1, and p2>> 
const TriangleMesh *mesh = GetMesh(); const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
return SphericalTriangleArea(Normalize(p0 - p), Normalize(p1 - p), Normalize(p2 - p)); } static SurfaceInteraction InteractionFromIntersection( const TriangleMesh *mesh, int triIndex, TriangleIntersection ti, Float time, Vector3f wo) { const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]]; <<Compute triangle partial derivatives>> 
<<Compute deltas and matrix determinant for triangle partial derivatives>> 
<<Get triangle texture coordinates in uv array>> 
pstd::array<Point2f, 3> uv = mesh->uv ? pstd::array<Point2f, 3>( {mesh->uv[v[0]], mesh->uv[v[1]], mesh->uv[v[2]]}) : pstd::array<Point2f, 3>({Point2f(0, 0), Point2f(1, 0), Point2f(1, 1)});
Vector2f duv02 = uv[0] - uv[2], duv12 = uv[1] - uv[2]; Vector3f dp02 = p0 - p2, dp12 = p1 - p2; Float determinant = DifferenceOfProducts(duv02[0], duv12[1], duv02[1], duv12[0]);
Vector3f dpdu, dpdv; bool degenerateUV = std::abs(determinant) < 1e-9f; if (!degenerateUV) { <<Compute triangle ∂p/∂u and ∂p/∂v via matrix inversion>> 
Float invdet = 1 / determinant; dpdu = DifferenceOfProducts(duv12[1], dp02, duv02[1], dp12) * invdet; dpdv = DifferenceOfProducts(duv02[0], dp12, duv12[0], dp02) * invdet;
} <<Handle degenerate triangle (u,v) parameterization or partial derivatives>> 
if (degenerateUV || LengthSquared(Cross(dpdu, dpdv)) == 0) { Vector3f ng = Cross(p2 - p0, p1 - p0); if (LengthSquared(ng) == 0) ng = Vector3f(Cross(Vector3<double>(p2 - p0), Vector3<double>(p1 - p0))); CoordinateSystem(Normalize(ng), &dpdu, &dpdv); }
<<Interpolate (u,v) parametric coordinates and hit point>> 
Point3f pHit = ti.b0 * p0 + ti.b1 * p1 + ti.b2 * p2; Point2f uvHit = ti.b0 * uv[0] + ti.b1 * uv[1] + ti.b2 * uv[2];
<<Return SurfaceInteraction for triangle hit>> 
bool flipNormal = mesh->reverseOrientation ^ mesh->transformSwapsHandedness; <<Compute error bounds pError for triangle intersection>> 
Point3f pAbsSum = Abs(ti.b0 * p0) + Abs(ti.b1 * p1) + Abs(ti.b2 * p2); Vector3f pError = gamma(7) * Vector3f(pAbsSum);
SurfaceInteraction isect(Point3fi(pHit, pError), uvHit, wo, dpdu, dpdv, Normal3f(), Normal3f(), time, flipNormal); <<Set final surface normal and shading geometry for triangle>> 
<<Override surface normal in isect for triangle>> 
isect.n = isect.shading.n = Normal3f(Normalize(Cross(dp02, dp12))); if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness) isect.n = isect.shading.n = -isect.n;
if (mesh->n || mesh->s) { <<Initialize Triangle shading geometry>> 
<<Compute shading normal ns for triangle>> 
Normal3f ns; if (mesh->n) { ns = ti.b0 * mesh->n[v[0]] + ti.b1 * mesh->n[v[1]] + ti.b2 * mesh->n[v[2]]; ns = LengthSquared(ns) > 0 ? Normalize(ns) : isect.n; } else ns = isect.n;
<<Compute shading tangent ss for triangle>> 
Vector3f ss; if (mesh->s) { ss = ti.b0 * mesh->s[v[0]] + ti.b1 * mesh->s[v[1]] + ti.b2 * mesh->s[v[2]]; if (LengthSquared(ss) == 0) ss = isect.dpdu; } else ss = isect.dpdu;
<<Compute shading bitangent ts for triangle and adjust ss>> 
Vector3f ts = Cross(ns, ss); if (LengthSquared(ts) > 0) ss = Cross(ts, ns); else CoordinateSystem(ns, &ss, &ts);
<<Compute ∂n/∂u and ∂n/∂v for triangle shading geometry>> 
Normal3f dndu, dndv; if (mesh->n) { // Compute deltas for triangle partial derivatives of normal Vector2f duv02 = uv[0] - uv[2]; Vector2f duv12 = uv[1] - uv[2]; Normal3f dn1 = mesh->n[v[0]] - mesh->n[v[2]]; Normal3f dn2 = mesh->n[v[1]] - mesh->n[v[2]]; Float determinant = DifferenceOfProducts(duv02[0], duv12[1], duv02[1], duv12[0]); bool degenerateUV = std::abs(determinant) < 1e-9; if (degenerateUV) { // We can still compute dndu and dndv, with respect to the // same arbitrary coordinate system we use to compute dpdu // and dpdv when this happens. It's important to do this // (rather than giving up) so that ray differentials for // rays reflected from triangles with degenerate // parameterizations are still reasonable. Vector3f dn = Cross(Vector3f(mesh->n[v[2]] - mesh->n[v[0]]), Vector3f(mesh->n[v[1]] - mesh->n[v[0]])); if (LengthSquared(dn) == 0) dndu = dndv = Normal3f(0, 0, 0); else { Vector3f dnu, dnv; CoordinateSystem(dn, &dnu, &dnv); dndu = Normal3f(dnu); dndv = Normal3f(dnv); } } else { Float invDet = 1 / determinant; dndu = DifferenceOfProducts(duv12[1], dn1, duv02[1], dn2) * invDet; dndv = DifferenceOfProducts(duv02[0], dn2, duv12[0], dn1) * invDet; } } else dndu = dndv = Normal3f(0, 0, 0);
isect.SetShadingGeometry(ns, ss, ts, dndu, dndv, true);
}
return isect;
} pstd::optional<ShapeSample> Sample(Point2f u) const { <<Get triangle vertices in p0, p1, and p2>> 
const TriangleMesh *mesh = GetMesh(); const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
<<Sample point on triangle uniformly by area>> 
pstd::array<Float, 3> b = SampleUniformTriangle(u); Point3f p = b[0] * p0 + b[1] * p1 + b[2] * p2;
<<Compute surface normal for sampled point on triangle>> 
Normal3f n = Normalize(Normal3f(Cross(p1 - p0, p2 - p0))); if (mesh->n) { Normal3f ns(b[0] * mesh->n[v[0]] + b[1] * mesh->n[v[1]] + (1 - b[0] - b[1]) * mesh->n[v[2]]); n = FaceForward(n, ns); } else if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness) n *= -1;
<<Compute (u,v) for sampled point on triangle>> 
<<Get triangle texture coordinates in uv array>> 
pstd::array<Point2f, 3> uv = mesh->uv ? pstd::array<Point2f, 3>( {mesh->uv[v[0]], mesh->uv[v[1]], mesh->uv[v[2]]}) : pstd::array<Point2f, 3>({Point2f(0, 0), Point2f(1, 0), Point2f(1, 1)});
Point2f uvSample = b[0] * uv[0] + b[1] * uv[1] + b[2] * uv[2];
<<Compute error bounds pError for sampled point on triangle>> 
Point3f pAbsSum = Abs(b[0] * p0) + Abs(b[1] * p1) + Abs((1 - b[0] - b[1]) * p2); Vector3f pError = Vector3f(gamma(6) * pAbsSum);
return ShapeSample{Interaction(Point3fi(p, pError), n, uvSample), 1 / Area()}; } Float PDF(const Interaction &) const { return 1 / Area(); } pstd::optional<ShapeSample> Sample(const ShapeSampleContext &ctx, Point2f u) const { <<Get triangle vertices in p0, p1, and p2>> 
const TriangleMesh *mesh = GetMesh(); const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
<<Use uniform area sampling for numerically unstable cases>> 
Float solidAngle = SolidAngle(ctx.p()); if (solidAngle < MinSphericalSampleArea || solidAngle > MaxSphericalSampleArea) { <<Sample shape by area and compute incident direction wi>> 
pstd::optional<ShapeSample> ss = Sample(u); ss->intr.time = ctx.time; Vector3f wi = ss->intr.p() - ctx.p(); if (LengthSquared(wi) == 0) return {}; wi = Normalize(wi);
<<Convert area sampling PDF in ss to solid angle measure>> 
ss->pdf /= AbsDot(ss->intr.n, -wi) / DistanceSquared(ctx.p(), ss->intr.p()); if (IsInf(ss->pdf)) return {};
return ss; }
<<Sample spherical triangle from reference point>> 
<<Apply warp product sampling for cosine factor at reference point>> 
Float pdf = 1; if (ctx.ns != Normal3f(0, 0, 0)) { <<Compute cos θ-based weights w at sample domain corners>> 
Point3f rp = ctx.p(); Vector3f wi[3] = {Normalize(p0 - rp), Normalize(p1 - rp), Normalize(p2 - rp)}; pstd::array<Float, 4> w = pstd::array<Float, 4>{std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[0])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[2]))};
u = SampleBilinear(u, w); pdf = BilinearPDF(u, w); }
Float triPDF; pstd::array<Float, 3> b = SampleSphericalTriangle({p0, p1, p2}, ctx.p(), u, &triPDF); if (triPDF == 0) return {}; pdf *= triPDF;
<<Compute error bounds pError for sampled point on triangle>> 
Point3f pAbsSum = Abs(b[0] * p0) + Abs(b[1] * p1) + Abs((1 - b[0] - b[1]) * p2); Vector3f pError = Vector3f(gamma(6) * pAbsSum);
<<Return ShapeSample for solid angle sampled point on triangle>> 
Point3f p = b[0] * p0 + b[1] * p1 + b[2] * p2; <<Compute surface normal for sampled point on triangle>> 
Normal3f n = Normalize(Normal3f(Cross(p1 - p0, p2 - p0))); if (mesh->n) { Normal3f ns(b[0] * mesh->n[v[0]] + b[1] * mesh->n[v[1]] + (1 - b[0] - b[1]) * mesh->n[v[2]]); n = FaceForward(n, ns); } else if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness) n *= -1;
<<Compute (u,v) for sampled point on triangle>> 
<<Get triangle texture coordinates in uv array>> 
pstd::array<Point2f, 3> uv = mesh->uv ? pstd::array<Point2f, 3>( {mesh->uv[v[0]], mesh->uv[v[1]], mesh->uv[v[2]]}) : pstd::array<Point2f, 3>({Point2f(0, 0), Point2f(1, 0), Point2f(1, 1)});
Point2f uvSample = b[0] * uv[0] + b[1] * uv[1] + b[2] * uv[2];
return ShapeSample{Interaction(Point3fi(p, pError), n, ctx.time, uvSample), pdf};
} Float PDF(const ShapeSampleContext &ctx, Vector3f wi) const { Float solidAngle = SolidAngle(ctx.p()); <<Return PDF based on uniform area sampling for challenging triangles>> 
if (solidAngle < MinSphericalSampleArea || solidAngle > MaxSphericalSampleArea) { <<Intersect sample ray with shape geometry>> 
Ray ray = ctx.SpawnRay(wi); pstd::optional<ShapeIntersection> isect = Intersect(ray); if (!isect) return 0;
<<Compute PDF in solid angle measure from shape intersection point>> 
Float pdf = (1 / Area()) / (AbsDot(isect->intr.n, -wi) / DistanceSquared(ctx.p(), isect->intr.p())); if (IsInf(pdf)) pdf = 0;
return pdf; }
Float pdf = 1 / solidAngle; <<Adjust PDF for warp product sampling of triangle cos θ factor>> 
if (ctx.ns != Normal3f(0, 0, 0)) { <<Get triangle vertices in p0, p1, and p2>> 
const TriangleMesh *mesh = GetMesh(); const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
Point2f u = InvertSphericalTriangleSample({p0, p1, p2}, ctx.p(), wi); <<Compute cos θ-based weights w at sample domain corners>> 
Point3f rp = ctx.p(); Vector3f wi[3] = {Normalize(p0 - rp), Normalize(p1 - rp), Normalize(p2 - rp)}; pstd::array<Float, 4> w = pstd::array<Float, 4>{std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[0])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[2]))};
pdf *= BilinearPDF(u, w); }
return pdf; }
private: <<Triangle Private Methods>> 
const TriangleMesh *GetMesh() const { return (*allMeshes)[meshIndex]; }
<<Triangle Private Members>> 
int meshIndex = -1, triIndex = -1; static pstd::vector<const TriangleMesh *> *allMeshes; static constexpr Float MinSphericalSampleArea = 3e-4; static constexpr Float MaxSphericalSampleArea = 6.22;
};

Because complex scenes may have billions of triangles, it is important to minimize the amount of memory that each triangle uses. pbrt stores pointers to all the TriangleMeshes for the scene in a vector, which allows each triangle to be represented using just two integers: one to record which mesh it is a part of and another to record which triangle in the mesh it represents. With 4-byte ints, each Triangle uses just 8 bytes of memory.

Given this compact representation of triangles, recall the discussion in Section 1.5.7 about the memory cost of classes with virtual functions: if Triangle inherited from an abstract Shape base class that defined pure virtual functions, the virtual function pointer with each Triangle alone would double its size, assuming a 64-bit architecture with 8-byte pointers.
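
A quick way to see the cost of the virtual function pointer is to compare the size of a plain two-int struct with one that declares a virtual method; the struct names here are purely illustrative, and the sizes assume a typical 64-bit platform with 4-byte ints and 8-byte pointers:

#include <cstdio>

struct PlainTri { int meshIndex, triIndex; };   // 8 bytes: two 4-byte ints

struct VirtualTri {
    virtual ~VirtualTri() = default;            // forces a hidden vtable pointer
    int meshIndex, triIndex;
};                                              // typically 16 bytes with padding

int main() {
    // Prints something like "8 16" on common 64-bit platforms.
    std::printf("%zu %zu\n", sizeof(PlainTri), sizeof(VirtualTri));
}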

<<Triangle Public Methods>>= 
Triangle(int meshIndex, int triIndex) : meshIndex(meshIndex), triIndex(triIndex) {}

<<Triangle Private Members>>= 
int meshIndex = -1, triIndex = -1;
static pstd::vector<const TriangleMesh *> *allMeshes;

The bounding box of a triangle is easily found by computing a bounding box that encompasses its three vertices. Because the vertices have already been transformed to rendering space, no transformation of the bounds is necessary.

<<Triangle Method Definitions>>= 
Bounds3f Triangle::Bounds() const {
    <<Get triangle vertices in p0, p1, and p2>> 
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    return Union(Bounds3f(p0, p1), p2);
}

Finding the positions of the three triangle vertices requires some indirection: first the mesh pointer must be found; then the indices of the three triangle vertices can be found given the triangle’s index in the mesh; finally, the positions can be read from the mesh’s p array. We will reuse this fragment repeatedly in the following, as the vertex positions are needed in many of the Triangle methods.

<<Get triangle vertices in p0, p1, and p2>>= 
const TriangleMesh *mesh = GetMesh();
const int *v = &mesh->vertexIndices[3 * triIndex];
Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];

The GetMesh() method encapsulates the indexing operation to get the mesh’s pointer.

<<Triangle Private Methods>>= 
const TriangleMesh *GetMesh() const { return (*allMeshes)[meshIndex]; }

Using the fact that the area of a parallelogram is given by the length of the cross product of the two vectors along its sides, the Area() method computes the triangle area as half the area of the parallelogram formed by two of its edge vectors (see Figure 6.13).

<<Triangle Public Methods>>+=  
Float Area() const {
    <<Get triangle vertices in p0, p1, and p2>> 
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    return 0.5f * Length(Cross(p1 - p0, p2 - p0));
}

Bounding the triangle’s normal should be trivial: a cross product of appropriate edges gives its single normal vector direction. However, two subtleties that affect the orientation of the normal must be handled before the bounds are returned.

<<Triangle Method Definitions>>+=  
DirectionCone Triangle::NormalBounds() const {
    <<Get triangle vertices in p0, p1, and p2>> 
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    Normal3f n = Normalize(Normal3f(Cross(p1 - p0, p2 - p0)));
    <<Ensure correct orientation of geometric normal for normal bounds>> 
    if (mesh->n) {
        Normal3f ns(mesh->n[v[0]] + mesh->n[v[1]] + mesh->n[v[2]]);
        n = FaceForward(n, ns);
    } else if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness)
        n *= -1;
    return DirectionCone(Vector3f(n));
}

The first issue with the returned normal comes from the presence of per-vertex normals, even though it is a bound on geometric normals that NormalBounds() is supposed to return. pbrt requires that both the geometric normal and the interpolated per-vertex normal lie on the same side of the surface. If the two of them are on different sides, then pbrt follows the convention that the geometric normal is the one that should be flipped.

Furthermore, if there are no per-vertex normals, then—as with earlier shapes—the normal is flipped if either ReverseOrientation was specified in the scene description or the object-to-rendering transformation swaps the coordinate system handedness, but not both. Both of these considerations must be accounted for in the normal returned for the normal bounds.

<<Ensure correct orientation of geometric normal for normal bounds>>= 
if (mesh->n) {
    Normal3f ns(mesh->n[v[0]] + mesh->n[v[1]] + mesh->n[v[2]]);
    n = FaceForward(n, ns);
} else if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness)
    n *= -1;

Although it is not required by the Shape interface, we will find it useful to be able to compute the solid angle that a triangle subtends from a reference point. The previously defined SphericalTriangleArea() function takes care of this directly.

<<Triangle Public Methods>>+=  
Float SolidAngle(Point3f p) const {
    <<Get triangle vertices in p0, p1, and p2>> 
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    return SphericalTriangleArea(Normalize(p0 - p), Normalize(p1 - p),
                                 Normalize(p2 - p));
}

6.5.3 Ray–Triangle Intersection

Unlike the other shapes so far, pbrt provides a stand-alone triangle intersection function that takes a ray and the three triangle vertices directly. Having this functionality available without needing to instantiate both a Triangle and a TriangleMesh in order to do a ray–triangle intersection test is helpful in a few other parts of the system. The Triangle class intersection methods, described next, use this function in their implementations.
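
For example, code that already has three rendering-space vertex positions in hand might call it along these lines (a usage sketch, not code from pbrt itself):

// Test one ray against a triangle given directly by its three vertices.
pstd::optional<TriangleIntersection> hit =
    IntersectTriangle(ray, Infinity, p0, p1, p2);
if (hit) {
    // hit->b0, hit->b1, and hit->b2 are the barycentric coordinates of the
    // intersection point and hit->t is the parametric distance along the ray.
}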

<<Triangle Functions>>= 
pstd::optional<TriangleIntersection> IntersectTriangle(const Ray &ray, Float tMax, Point3f p0, Point3f p1, Point3f p2) { <<Return no intersection if triangle is degenerate>> 
if (LengthSquared(Cross(p2 - p0, p1 - p0)) == 0) return {};
<<Transform triangle vertices to ray coordinate space>> 
<<Translate vertices based on ray origin>> 
Point3f p0t = p0 - Vector3f(ray.o); Point3f p1t = p1 - Vector3f(ray.o); Point3f p2t = p2 - Vector3f(ray.o);
<<Permute components of triangle vertices and ray direction>> 
int kz = MaxComponentIndex(Abs(ray.d)); int kx = kz + 1; if (kx == 3) kx = 0; int ky = kx + 1; if (ky == 3) ky = 0; Vector3f d = Permute(ray.d, {kx, ky, kz}); p0t = Permute(p0t, {kx, ky, kz}); p1t = Permute(p1t, {kx, ky, kz}); p2t = Permute(p2t, {kx, ky, kz});
<<Apply shear transformation to translated vertex positions>> 
Float Sx = -d.x / d.z; Float Sy = -d.y / d.z; Float Sz = 1 / d.z; p0t.x += Sx * p0t.z; p0t.y += Sy * p0t.z; p1t.x += Sx * p1t.z; p1t.y += Sy * p1t.z; p2t.x += Sx * p2t.z; p2t.y += Sy * p2t.z;
<<Compute edge function coefficients e0, e1, and e2>> 
Float e0 = DifferenceOfProducts(p1t.x, p2t.y, p1t.y, p2t.x); Float e1 = DifferenceOfProducts(p2t.x, p0t.y, p2t.y, p0t.x); Float e2 = DifferenceOfProducts(p0t.x, p1t.y, p0t.y, p1t.x);
<<Fall back to double-precision test at triangle edges>> 
if (sizeof(Float) == sizeof(float) && (e0 == 0.0f || e1 == 0.0f || e2 == 0.0f)) { double p2txp1ty = (double)p2t.x * (double)p1t.y; double p2typ1tx = (double)p2t.y * (double)p1t.x; e0 = (float)(p2typ1tx - p2txp1ty); double p0txp2ty = (double)p0t.x * (double)p2t.y; double p0typ2tx = (double)p0t.y * (double)p2t.x; e1 = (float)(p0typ2tx - p0txp2ty); double p1txp0ty = (double)p1t.x * (double)p0t.y; double p1typ0tx = (double)p1t.y * (double)p0t.x; e2 = (float)(p1typ0tx - p1txp0ty); }
<<Perform triangle edge and determinant tests>> 
if ((e0 < 0 || e1 < 0 || e2 < 0) && (e0 > 0 || e1 > 0 || e2 > 0)) return {}; Float det = e0 + e1 + e2; if (det == 0) return {};
<<Compute scaled hit distance to triangle and test against ray t range>> 
p0t.z *= Sz; p1t.z *= Sz; p2t.z *= Sz; Float tScaled = e0 * p0t.z + e1 * p1t.z + e2 * p2t.z; if (det < 0 && (tScaled >= 0 || tScaled < tMax * det)) return {}; else if (det > 0 && (tScaled <= 0 || tScaled > tMax * det)) return {};
<<Compute barycentric coordinates and t value for triangle intersection>> 
Float invDet = 1 / det; Float b0 = e0 * invDet, b1 = e1 * invDet, b2 = e2 * invDet; Float t = tScaled * invDet;
<<Ensure that computed triangle t is conservatively greater than zero>> 
<<Compute δ_z term for triangle t error bounds>> 
Float maxZt = MaxComponentValue(Abs(Vector3f(p0t.z, p1t.z, p2t.z))); Float deltaZ = gamma(3) * maxZt;
<<Compute δ_x and δ_y terms for triangle t error bounds>> 
Float maxXt = MaxComponentValue(Abs(Vector3f(p0t.x, p1t.x, p2t.x))); Float maxYt = MaxComponentValue(Abs(Vector3f(p0t.y, p1t.y, p2t.y))); Float deltaX = gamma(5) * (maxXt + maxZt); Float deltaY = gamma(5) * (maxYt + maxZt);
<<Compute δ_e term for triangle t error bounds>> 
Float deltaE = 2 * (gamma(2) * maxXt * maxYt + deltaY * maxXt + deltaX * maxYt);
<<Compute δ_t term for triangle t error bounds and check t>> 
Float maxE = MaxComponentValue(Abs(Vector3f(e0, e1, e2))); Float deltaT = 3 * (gamma(3) * maxE * maxZt + deltaE * maxZt + deltaZ * maxE) * std::abs(invDet); if (t <= deltaT) return {};
<<Return TriangleIntersection for intersection>> 
return TriangleIntersection{b0, b1, b2, t};
}

pbrt’s ray–triangle intersection test is based on first computing an affine transformation that transforms the ray such that its origin is at (0, 0, 0) in the transformed coordinate system and such that its direction is along the +z axis. Triangle vertices are also transformed into this coordinate system before the intersection test is performed. In the following, we will see that applying this coordinate system transformation simplifies the intersection test logic since, for example, the x and y coordinates of any intersection point must be zero. Later, in Section 6.8.4, we will see that this transformation makes it possible to have a watertight ray–triangle intersection algorithm, such that intersections with tricky rays like those that hit the triangle right on the edge are never incorrectly reported as misses.

One side effect of the transformation that we will apply to the vertices is that, due to floating-point round-off error, a degenerate triangle may be transformed into a non-degenerate triangle. If an intersection is reported with a degenerate triangle, then later code that tries to compute the geometric properties of the intersection will be unable to compute valid results. Therefore, this function starts with testing for a degenerate triangle and returning immediately if one was provided.

<<Return no intersection if triangle is degenerate>>= 
if (LengthSquared(Cross(p2 - p0, p1 - p0)) == 0) return {};

There are three steps to computing the transformation from rendering space to the ray–triangle intersection coordinate space: a translation $\mathbf{T}$, a coordinate permutation $\mathbf{P}$, and a shear $\mathbf{S}$. Rather than computing explicit transformation matrices for each of these and then computing an aggregate transformation matrix $\mathbf{M} = \mathbf{S}\,\mathbf{P}\,\mathbf{T}$ to transform vertices to the coordinate space, the following implementation applies each step of the transformation directly, which ends up being a more efficient approach.

<<Transform triangle vertices to ray coordinate space>>= 
<<Translate vertices based on ray origin>> 
Point3f p0t = p0 - Vector3f(ray.o); Point3f p1t = p1 - Vector3f(ray.o); Point3f p2t = p2 - Vector3f(ray.o);
<<Permute components of triangle vertices and ray direction>> 
int kz = MaxComponentIndex(Abs(ray.d)); int kx = kz + 1; if (kx == 3) kx = 0; int ky = kx + 1; if (ky == 3) ky = 0; Vector3f d = Permute(ray.d, {kx, ky, kz}); p0t = Permute(p0t, {kx, ky, kz}); p1t = Permute(p1t, {kx, ky, kz}); p2t = Permute(p2t, {kx, ky, kz});
<<Apply shear transformation to translated vertex positions>> 
Float Sx = -d.x / d.z; Float Sy = -d.y / d.z; Float Sz = 1 / d.z; p0t.x += Sx * p0t.z; p0t.y += Sy * p0t.z; p1t.x += Sx * p1t.z; p1t.y += Sy * p1t.z; p2t.x += Sx * p2t.z; p2t.y += Sy * p2t.z;

The translation that places the ray origin at the origin of the coordinate system is:

$$ \mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & -o_x \\ 0 & 1 & 0 & -o_y \\ 0 & 0 & 1 & -o_z \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$

This transformation does not need to be explicitly applied to the ray origin, but we will apply it to the three triangle vertices.

<<Translate vertices based on ray origin>>= 
Point3f p0t = p0 - Vector3f(ray.o);
Point3f p1t = p1 - Vector3f(ray.o);
Point3f p2t = p2 - Vector3f(ray.o);

Next, the three dimensions of the space are permuted so that the z dimension is the one where the absolute value of the ray’s direction is largest. The x and y dimensions are arbitrarily assigned to the other two dimensions. This step ensures that if, for example, the original ray’s z direction is zero, then a dimension with nonzero magnitude is mapped to +z.

For example, if the ray’s direction had the largest magnitude in x , the permutation would be:

$$ \mathbf{P} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$

As before, it is easiest to permute the dimensions of the ray direction and the translated triangle vertices directly.

<<Permute components of triangle vertices and ray direction>>= 
int kz = MaxComponentIndex(Abs(ray.d));
int kx = kz + 1;
if (kx == 3) kx = 0;
int ky = kx + 1;
if (ky == 3) ky = 0;
Vector3f d = Permute(ray.d, {kx, ky, kz});
p0t = Permute(p0t, {kx, ky, kz});
p1t = Permute(p1t, {kx, ky, kz});
p2t = Permute(p2t, {kx, ky, kz});

Finally, a shear transformation aligns the ray direction with the +z axis:

$$ \mathbf{S} = \begin{pmatrix} 1 & 0 & -d_x/d_z & 0 \\ 0 & 1 & -d_y/d_z & 0 \\ 0 & 0 & 1/d_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$

To see how this transformation works, consider its operation on the ray direction vector $[\,d_x \; d_y \; d_z \; 0\,]^T$.
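
Carrying out that multiplication with the matrix $\mathbf{S}$ above shows that the sheared direction indeed lies along the +z axis:

$$ \mathbf{S} \begin{pmatrix} d_x \\ d_y \\ d_z \\ 0 \end{pmatrix} = \begin{pmatrix} d_x - (d_x/d_z)\,d_z \\ d_y - (d_y/d_z)\,d_z \\ d_z/d_z \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}. $$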

For now, only the x and y dimensions are sheared; we can wait and shear the z dimension only if the ray intersects the triangle.

<<Apply shear transformation to translated vertex positions>>= 
Float Sx = -d.x / d.z;
Float Sy = -d.y / d.z;
Float Sz = 1 / d.z;
p0t.x += Sx * p0t.z;
p0t.y += Sy * p0t.z;
p1t.x += Sx * p1t.z;
p1t.y += Sy * p1t.z;
p2t.x += Sx * p2t.z;
p2t.y += Sy * p2t.z;

Note that the calculations for the coordinate permutation and the shear coefficients only depend on the given ray; they are independent of the triangle. In a high-performance ray tracer, it may be worthwhile to compute these values once and store them in the Ray class, rather than recomputing them for each triangle the ray is intersected with.
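
As a sketch of that idea (hypothetical; pbrt's Ray class does not store these values), the per-ray quantities could be computed once and passed to every ray–triangle test:

// Hypothetical per-ray setup: these values depend only on the ray direction,
// so they could be computed once rather than once per triangle.
struct RayTriSetup {
    int kx, ky, kz;    // coordinate permutation; kz is the axis of largest |d|
    Float Sx, Sy, Sz;  // shear coefficients that align the direction with +z
};

RayTriSetup PrecomputeRayTriSetup(const Ray &ray) {
    RayTriSetup s;
    s.kz = MaxComponentIndex(Abs(ray.d));
    s.kx = (s.kz + 1) % 3;
    s.ky = (s.kx + 1) % 3;
    Vector3f d = Permute(ray.d, {s.kx, s.ky, s.kz});
    s.Sx = -d.x / d.z;
    s.Sy = -d.y / d.z;
    s.Sz = 1 / d.z;
    return s;
}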

With the triangle vertices transformed to this coordinate system, our task now is to find whether the ray starting from the origin and traveling along the +z axis intersects the transformed triangle. Because of the way the coordinate system was constructed, this problem is equivalent to the 2D problem of determining if the x, y coordinates (0, 0) are inside the xy projection of the triangle (Figure 6.12).

Figure 6.12: In the ray–triangle intersection coordinate system, the ray starts at the origin and goes along the +z axis. The intersection test can be performed by considering only the xy projection of the ray and the triangle vertices, which in turn reduces to determining if the 2D point (0, 0) is within the triangle.

To understand how the intersection algorithm works, first recall from Figure 3.6 that the length of the cross product of two vectors gives the area of the parallelogram that they define. In 2D, with vectors $\mathbf{a}$ and $\mathbf{b}$, the area is

$$ a_x b_y - b_x a_y. $$

Half of this area is the area of the triangle that they define. Thus, we can see that in 2D, the area of a triangle with vertices $p_0$, $p_1$, and $p_2$ is

$$ \tfrac{1}{2}\big( (p_{1x} - p_{0x})(p_{2y} - p_{0y}) - (p_{2x} - p_{0x})(p_{1y} - p_{0y}) \big). $$

Figure 6.13 visualizes this idea geometrically.

Figure 6.13: The area of a triangle with two edges given by vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ is one-half of the area of the parallelogram shown here. The parallelogram area is given by the length of the cross product of $\mathbf{v}_1$ and $\mathbf{v}_2$.

We will use this expression of triangle area to define a signed edge function: given two triangle vertices $p_0$ and $p_1$, we can define the directed edge function $e$ as the function that gives twice the area of the triangle formed by $p_0$, $p_1$, and a given third point $p$:

$$ e(p) = (p_{1x} - p_{0x})(p_y - p_{0y}) - (p_x - p_{0x})(p_{1y} - p_{0y}). \tag{6.5} $$

(See Figure 6.14.)

Figure 6.14: The edge function $e(p)$ characterizes points with respect to an oriented line between two points $p_0$ and $p_1$. The value of the edge function is positive for points $p$ to the left of the line, zero for points on the line, and negative for points to the right of the line. The ray–triangle intersection algorithm uses an edge function that is twice the signed area of the triangle formed by the three points.

The edge function gives a positive value for points to the left of the line and a negative value for points to the right. Thus, if a point has edge function values of the same sign for all three edges of a triangle, it must be on the same side of all three edges and thus must be inside the triangle.

Thanks to the coordinate system transformation, the point $p$ that we are testing has coordinates (0, 0). This simplifies the edge function expressions. For example, for the edge $e_0$ from $p_1$ to $p_2$, we have:

$$ \begin{aligned} e_0(p) &= (p_{2x} - p_{1x})(p_y - p_{1y}) - (p_x - p_{1x})(p_{2y} - p_{1y}) \\ &= (p_{2x} - p_{1x})(-p_{1y}) - (-p_{1x})(p_{2y} - p_{1y}) \\ &= p_{1x} p_{2y} - p_{2x} p_{1y}. \end{aligned} \tag{6.6} $$

In the following, we will use the indexing scheme that the edge function $e_i$ corresponds to the directed edge from vertex $p_{(i+1)\bmod 3}$ to $p_{(i+2)\bmod 3}$.
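
Putting Equation (6.5) and this indexing convention together, a standalone 2D point-in-triangle test looks like the following (illustrative only, not pbrt code; pbrt's version below is specialized to p = (0, 0) and adds the exactly-zero handling discussed afterward):

// Twice the signed area of the triangle (p0, p1, p); Equation (6.5).
Float EdgeFunction(Point2f p0, Point2f p1, Point2f p) {
    return (p1.x - p0.x) * (p.y - p0.y) - (p.x - p0.x) * (p1.y - p0.y);
}

// A point is inside the triangle if all three edge functions have the same
// sign; which sign that is depends on the triangle's winding order.
bool Inside2D(Point2f p0, Point2f p1, Point2f p2, Point2f p) {
    Float e0 = EdgeFunction(p1, p2, p);
    Float e1 = EdgeFunction(p2, p0, p);
    Float e2 = EdgeFunction(p0, p1, p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}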

<<Compute edge function coefficients e0, e1, and e2>>= 
Float e0 = DifferenceOfProducts(p1t.x, p2t.y, p1t.y, p2t.x);
Float e1 = DifferenceOfProducts(p2t.x, p0t.y, p2t.y, p0t.x);
Float e2 = DifferenceOfProducts(p0t.x, p1t.y, p0t.y, p1t.x);

In the rare case that any of the edge function values is exactly zero, it is not possible to be sure if the ray hits the triangle or not, and the edge equations are reevaluated using double-precision floating-point arithmetic. (Section 6.8.4 discusses the need for this step in more detail.) The fragment that implements this computation, <<Fall back to double-precision test at triangle edges>>, is just a reimplementation of <<Compute edge function coefficients e0, e1, and e2>> using doubles and so is not included here.

Given the values of the three edge functions, we have our first two opportunities to determine that there is no intersection. First, if the signs of the edge function values differ, then the point (0, 0) is not on the same side of all three edges and therefore is outside the triangle. Second, if the sum of the three edge function values is zero, then the ray is approaching the triangle edge-on, and we report no intersection. (For a closed triangle mesh, the ray will hit a neighboring triangle instead.)

<<Perform triangle edge and determinant tests>>= 
if ((e0 < 0 || e1 < 0 || e2 < 0) && (e0 > 0 || e1 > 0 || e2 > 0))
    return {};
Float det = e0 + e1 + e2;
if (det == 0)
    return {};

Because the ray starts at the origin, has unit length, and is along the +z axis, the z coordinate value of the intersection point is equal to the intersection’s parametric t value. To compute this z value, we first need to apply the shear transformation to the z coordinates of the triangle vertices. Given these z values, the barycentric coordinates of the intersection point in the triangle can be used to interpolate them across the triangle. They are given by dividing each edge function value by the sum of edge function values:

$$ b_i = \frac{e_i}{e_0 + e_1 + e_2}. $$

Thus, the $b_i$ sum to one.

The interpolated z value is given by

$$ z = b_0 z_0 + b_1 z_1 + b_2 z_2, $$

where $z_i$ are the z coordinates of the three vertices in the ray–triangle intersection coordinate system.

To save the cost of the floating-point division to compute $b_i$ in cases where the final t value is out of the range of valid t values, the implementation here first computes t by interpolating $z_i$ with $e_i$ (in other words, not yet performing the division by $d = e_0 + e_1 + e_2$). If the sign of $d$ and the sign of the interpolated t value are different, then the final t value will certainly be negative and thus not a valid intersection.
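
Written out, the quantity the code below calls tScaled is $\sum_i e_i z_i = t\,d$, so (when both quantities are nonzero)

$$ \operatorname{sign}\Big(\sum_i e_i z_i\Big) \ne \operatorname{sign}(d) \quad\Longrightarrow\quad t = \frac{\sum_i e_i z_i}{d} < 0. $$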

Along similar lines, the check $t < t_{\max}$ can be equivalently performed in two ways:

$$ \begin{aligned} \sum_i e_i z_i &< t_{\max}\,(e_0 + e_1 + e_2) && \text{if } e_0 + e_1 + e_2 > 0, \\ \sum_i e_i z_i &> t_{\max}\,(e_0 + e_1 + e_2) && \text{otherwise.} \end{aligned} $$

<<Compute scaled hit distance to triangle and test against ray t range>>= 
p0t.z *= Sz;
p1t.z *= Sz;
p2t.z *= Sz;
Float tScaled = e0 * p0t.z + e1 * p1t.z + e2 * p2t.z;
if (det < 0 && (tScaled >= 0 || tScaled < tMax * det))
    return {};
else if (det > 0 && (tScaled <= 0 || tScaled > tMax * det))
    return {};

Given a valid intersection, the actual barycentric coordinates and t value for the intersection are found.

<<Compute barycentric coordinates and t value for triangle intersection>>= 
Float invDet = 1 / det;
Float b0 = e0 * invDet, b1 = e1 * invDet, b2 = e2 * invDet;
Float t = tScaled * invDet;

After a final test on the t value that will be discussed in Section 6.8.7, a TriangleIntersection object that represents the intersection can be returned.

<<Return TriangleIntersection for intersection>>= 
return TriangleIntersection{b0, b1, b2, t};

TriangleIntersection just records the barycentric coordinates and the t value along the ray where the intersection occurred.

<<TriangleIntersection Definition>>= 
struct TriangleIntersection { Float b0, b1, b2; Float t; };

The structure of the Triangle::Intersect() method follows the form of earlier intersection test methods.

<<Triangle Method Definitions>>+= 
pstd::optional<ShapeIntersection> Triangle::Intersect(const Ray &ray,
                                                      Float tMax) const {
    <<Get triangle vertices in p0, p1, and p2>> 
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    pstd::optional<TriangleIntersection> triIsect =
        IntersectTriangle(ray, tMax, p0, p1, p2);
    if (!triIsect)
        return {};
    SurfaceInteraction intr = InteractionFromIntersection(
        mesh, triIndex, *triIsect, ray.time, -ray.d);
    return ShapeIntersection{intr, triIsect->t};
}

We will not include the Triangle::IntersectP() method here, as it is just based on calling IntersectTriangle().

The InteractionFromIntersection() method is different from the corresponding methods in the quadrics in that it is a stand-alone function rather than a regular member function. Because a call to it is thus not associated with a specific Triangle instance, it takes a TriangleMesh and the index of a triangle in the mesh as parameters. In the context of its usage in the Intersect() method, this may seem gratuitous—why pass that information as parameters rather than access it directly in a non-static method?

We have designed the interface in this way so that we are able to use this method in pbrt’s GPU rendering path, where the Triangle class is not used. There, the representation of triangles in the scene is abstracted by a ray intersection API and the geometric ray–triangle intersection test is performed using specialized hardware. Given an intersection, it provides the triangle index, a pointer to the mesh that the triangle is a part of, and the barycentric coordinates of the intersection. That information is sufficient to call this method, which then allows us to find the SurfaceInteraction for such intersections using the same code as executes on the CPU.

<<Triangle Public Methods>>+=  
static SurfaceInteraction InteractionFromIntersection( const TriangleMesh *mesh, int triIndex, TriangleIntersection ti, Float time, Vector3f wo) { const int *v = &mesh->vertexIndices[3 * triIndex]; Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]]; <<Compute triangle partial derivatives>> 
<<Interpolate (u, v) parametric coordinates and hit point>> 
<<Return SurfaceInteraction for triangle hit>> 
}

To generate consistent tangent vectors over triangle meshes, it is necessary to compute the partial derivatives $\partial\mathrm{p}/\partial u$ and $\partial\mathrm{p}/\partial v$ using the parametric $(u, v)$ values at the triangle vertices, if provided. Although the partial derivatives are the same at all points on the triangle, the implementation here recomputes them each time an intersection is found. Although this results in redundant computation, the storage savings for large triangle meshes can be significant.

A triangle can be described by the set of points

\[
\mathrm{p}_{\mathrm{o}} + u \frac{\partial \mathrm{p}}{\partial u} + v \frac{\partial \mathrm{p}}{\partial v},
\]

for some $\mathrm{p}_{\mathrm{o}}$, where $u$ and $v$ range over the parametric coordinates of the triangle. We also know the three vertex positions $\mathrm{p}_i$, $i = 0, 1, 2$, and the texture coordinates $(u_i, v_i)$ at each vertex. From this it follows that the partial derivatives of $\mathrm{p}$ must satisfy

\[
\mathrm{p}_i = \mathrm{p}_{\mathrm{o}} + u_i \frac{\partial \mathrm{p}}{\partial u} + v_i \frac{\partial \mathrm{p}}{\partial v}.
\]

In other words, there is a unique affine mapping from the 2D $(u, v)$ space to points on the triangle. (Such a mapping exists even though the triangle is specified in 3D because the triangle is planar.) To compute expressions for $\partial\mathrm{p}/\partial u$ and $\partial\mathrm{p}/\partial v$, we start by computing the differences $\mathrm{p}_0 - \mathrm{p}_2$ and $\mathrm{p}_1 - \mathrm{p}_2$, giving the matrix equation

\[
\begin{pmatrix} u_0 - u_2 & v_0 - v_2 \\ u_1 - u_2 & v_1 - v_2 \end{pmatrix}
\begin{pmatrix} \partial \mathrm{p}/\partial u \\ \partial \mathrm{p}/\partial v \end{pmatrix}
=
\begin{pmatrix} \mathrm{p}_0 - \mathrm{p}_2 \\ \mathrm{p}_1 - \mathrm{p}_2 \end{pmatrix}.
\]

Thus,

\[
\begin{pmatrix} \partial \mathrm{p}/\partial u \\ \partial \mathrm{p}/\partial v \end{pmatrix}
=
\begin{pmatrix} u_0 - u_2 & v_0 - v_2 \\ u_1 - u_2 & v_1 - v_2 \end{pmatrix}^{-1}
\begin{pmatrix} \mathrm{p}_0 - \mathrm{p}_2 \\ \mathrm{p}_1 - \mathrm{p}_2 \end{pmatrix}.
\]

Inverting a $2 \times 2$ matrix is straightforward. The inverse of the $(u, v)$ differences matrix is

\[
\frac{1}{(u_0 - u_2)(v_1 - v_2) - (v_0 - v_2)(u_1 - u_2)}
\begin{pmatrix} v_1 - v_2 & -(v_0 - v_2) \\ -(u_1 - u_2) & u_0 - u_2 \end{pmatrix}.
\tag{6.7}
\]
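Written out, multiplying this inverse by the column of position differences gives the expressions that the fragment below evaluates:
\[
\frac{\partial \mathrm{p}}{\partial u} = \frac{(v_1 - v_2)(\mathrm{p}_0 - \mathrm{p}_2) - (v_0 - v_2)(\mathrm{p}_1 - \mathrm{p}_2)}{(u_0 - u_2)(v_1 - v_2) - (v_0 - v_2)(u_1 - u_2)},
\qquad
\frac{\partial \mathrm{p}}{\partial v} = \frac{(u_0 - u_2)(\mathrm{p}_1 - \mathrm{p}_2) - (u_1 - u_2)(\mathrm{p}_0 - \mathrm{p}_2)}{(u_0 - u_2)(v_1 - v_2) - (v_0 - v_2)(u_1 - u_2)}.
\]
These correspond directly to the DifferenceOfProducts() expressions used for dpdu and dpdv in the code, with duv02 = $(u_0 - u_2, v_0 - v_2)$, duv12 = $(u_1 - u_2, v_1 - v_2)$, dp02 = $\mathrm{p}_0 - \mathrm{p}_2$, and dp12 = $\mathrm{p}_1 - \mathrm{p}_2$.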

This computation is performed by the <<Compute triangle partial derivatives>> fragment, with handling for various additional corner cases.

<<Compute triangle partial derivatives>>= 
<<Compute deltas and matrix determinant for triangle partial derivatives>> 
Vector3f dpdu, dpdv; bool degenerateUV = std::abs(determinant) < 1e-9f; if (!degenerateUV) { <<Compute triangle ∂p/∂u and ∂p/∂v via matrix inversion>> 
} <<Handle degenerate triangle (u, v) parameterization or partial derivatives>> 

The triangle’s uv coordinates are found by indexing into the TriangleMesh::uv array, if present. Otherwise, a default parameterization is used. We will not include the fragment that initializes uv here.

<<Compute deltas and matrix determinant for triangle partial derivatives>>= 
<<Get triangle texture coordinates in uv array>> 
pstd::array<Point2f, 3> uv = mesh->uv ? pstd::array<Point2f, 3>( {mesh->uv[v[0]], mesh->uv[v[1]], mesh->uv[v[2]]}) : pstd::array<Point2f, 3>({Point2f(0, 0), Point2f(1, 0), Point2f(1, 1)});
Vector2f duv02 = uv[0] - uv[2], duv12 = uv[1] - uv[2]; Vector3f dp02 = p0 - p2, dp12 = p1 - p2; Float determinant = DifferenceOfProducts(duv02[0], duv12[1], duv02[1], duv12[0]);

In the usual case, the $2 \times 2$ matrix is non-degenerate, and the partial derivatives are computed using Equation (6.7).

<<Compute triangle ∂p/∂u and ∂p/∂v via matrix inversion>>= 
Float invdet = 1 / determinant; dpdu = DifferenceOfProducts(duv12[1], dp02, duv02[1], dp12) * invdet; dpdv = DifferenceOfProducts(duv02[0], dp12, duv12[0], dp02) * invdet;

However, there are a number of rare additional cases that must be handled. For example, the user may have provided $(u, v)$ coordinates that specify a degenerate parameterization, such as the same $(u, v)$ at all three vertices. Alternatively, the computed dpdu and dpdv values may have a degenerate cross product due to rounding error. In such cases we fall back to computing dpdu and dpdv that at least give the correct normal vector.

<<Handle degenerate triangle (u, v) parameterization or partial derivatives>>= 
if (degenerateUV || LengthSquared(Cross(dpdu, dpdv)) == 0) { Vector3f ng = Cross(p2 - p0, p1 - p0); if (LengthSquared(ng) == 0) ng = Vector3f(Cross(Vector3<double>(p2 - p0), Vector3<double>(p1 - p0))); CoordinateSystem(Normalize(ng), &dpdu, &dpdv); }

To compute the intersection point and the $(u, v)$ parametric coordinates at the hit point, the barycentric interpolation formula is applied to the vertex positions and the $(u, v)$ coordinates at the vertices. As we will see in Section 6.8.5, this gives a more accurate result for the intersection point than evaluating the parametric ray equation using t.

<<Interpolate (u, v) parametric coordinates and hit point>>= 
Point3f pHit = ti.b0 * p0 + ti.b1 * p1 + ti.b2 * p2; Point2f uvHit = ti.b0 * uv[0] + ti.b1 * uv[1] + ti.b2 * uv[2];

Unlike with the shapes we have seen so far, it is not necessary to transform the SurfaceInteraction here to rendering space, since the geometric per-vertex values are already in rendering space. Like the disk, the partial derivatives of the triangle's normal are also both $(0, 0, 0)$, since it is flat.

<<Return SurfaceInteraction for triangle hit>>= 
bool flipNormal = mesh->reverseOrientation ^ mesh->transformSwapsHandedness; <<Compute error bounds pError for triangle intersection>> 
Point3f pAbsSum = Abs(ti.b0 * p0) + Abs(ti.b1 * p1) + Abs(ti.b2 * p2); Vector3f pError = gamma(7) * Vector3f(pAbsSum);
SurfaceInteraction isect(Point3fi(pHit, pError), uvHit, wo, dpdu, dpdv, Normal3f(), Normal3f(), time, flipNormal); <<Set final surface normal and shading geometry for triangle>> 
return isect;

Before the SurfaceInteraction is returned, some final details related to its surface normal and shading geometry must be taken care of.

<<Set final surface normal and shading geometry for triangle>>= 
<<Override surface normal in isect for triangle>> 
if (mesh->n || mesh->s) { <<Initialize Triangle shading geometry>> 
}

The SurfaceInteraction constructor initializes the geometric normal n as the normalized cross product of dpdu and dpdv. This works well for most shapes, but in the case of triangle meshes it is preferable to rely on an initialization that does not depend on the underlying texture coordinates: it is fairly common to encounter meshes with bad parameterizations that do not preserve the orientation of the mesh, in which case the geometric normal would have an incorrect orientation.

We therefore initialize the geometric normal using the normalized cross product of the edge vectors dp02 and dp12, which results in the same normal up to a potential sign difference that depends on the exact order of triangle vertices (also known as the triangle’s winding order). 3D modeling packages generally try to ensure that triangles in a mesh have consistent winding orders, which makes this approach more robust.

<<Override surface normal in isect for triangle>>= 
isect.n = isect.shading.n = Normal3f(Normalize(Cross(dp02, dp12))); if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness) isect.n = isect.shading.n = -isect.n;

With Triangles, the user can provide normal vectors and tangent vectors at the vertices of the mesh that are interpolated to give normals and tangents at points on the faces of triangles. Shading geometry with interpolated normals can make otherwise faceted triangle meshes appear to be smoother than they geometrically are. If either shading normals or shading tangents have been provided, they are used to initialize the shading geometry in the SurfaceInteraction.

<<Initialize Triangle shading geometry>>= 
<<Compute shading normal ns for triangle>> 
<<Compute shading tangent ss for triangle>> 
<<Compute shading bitangent ts for triangle and adjust ss>> 
<<Compute ∂n/∂u and ∂n/∂v for triangle shading geometry>> 
isect.SetShadingGeometry(ns, ss, ts, dndu, dndv, true);

Given the barycentric coordinates of the intersection point, it is easy to compute the shading normal by interpolating among the appropriate vertex normals, if present.

<<Compute shading normal ns for triangle>>= 
Normal3f ns; if (mesh->n) { ns = ti.b0 * mesh->n[v[0]] + ti.b1 * mesh->n[v[1]] + ti.b2 * mesh->n[v[2]]; ns = LengthSquared(ns) > 0 ? Normalize(ns) : isect.n; } else ns = isect.n;

The shading tangent is computed similarly.

<<Compute shading tangent ss for triangle>>= 
Vector3f ss; if (mesh->s) { ss = ti.b0 * mesh->s[v[0]] + ti.b1 * mesh->s[v[1]] + ti.b2 * mesh->s[v[2]]; if (LengthSquared(ss) == 0) ss = isect.dpdu; } else ss = isect.dpdu;

The bitangent vector ts is found using the cross product of ns and ss, giving a vector orthogonal to the two of them. Next, ss is overwritten with the cross product of ts and ns; this ensures that the cross product of ss and ts gives ns. Thus, if per-vertex $\mathbf{n}$ and $\mathbf{s}$ values are provided and if the interpolated $\mathbf{n}$ and $\mathbf{s}$ values are not perfectly orthogonal, $\mathbf{n}$ will be preserved and $\mathbf{s}$ will be modified so that the coordinate system is orthogonal.
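Since ns has unit length, the vector triple product shows that this orthogonalization simply removes from the interpolated tangent its component along the shading normal:
\[
(\mathbf{n} \times \mathbf{s}) \times \mathbf{n} = \mathbf{s}\,(\mathbf{n} \cdot \mathbf{n}) - \mathbf{n}\,(\mathbf{n} \cdot \mathbf{s}) = \mathbf{s} - (\mathbf{n} \cdot \mathbf{s})\,\mathbf{n}.
\]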

<<Compute shading bitangent ts for triangle and adjust ss>>= 
Vector3f ts = Cross(ns, ss); if (LengthSquared(ts) > 0) ss = Cross(ts, ns); else CoordinateSystem(ns, &ss, &ts);

The code to compute the partial derivatives $\partial\mathbf{n}/\partial u$ and $\partial\mathbf{n}/\partial v$ of the shading normal is almost identical to the code to compute the partial derivatives $\partial\mathrm{p}/\partial u$ and $\partial\mathrm{p}/\partial v$. Therefore, it has been elided from the text here.

6.5.4 Sampling

The uniform area triangle sampling method is based on mapping the provided random sample u to barycentric coordinates that are uniformly distributed over the triangle.

<<Triangle Public Methods>>+=  
pstd::optional<ShapeSample> Sample(Point2f u) const { <<Get triangle vertices in p0, p1, and p2>> 
<<Sample point on triangle uniformly by area>> 
<<Compute surface normal for sampled point on triangle>> 
<<Compute (u, v) for sampled point on triangle>> 
<<Compute error bounds pError for sampled point on triangle>> 
Point3f pAbsSum = Abs(b[0] * p0) + Abs(b[1] * p1) + Abs((1 - b[0] - b[1]) * p2); Vector3f pError = Vector3f(gamma(6) * pAbsSum);
return ShapeSample{Interaction(Point3fi(p, pError), n, uvSample), 1 / Area()}; }

Uniform barycentric sampling is provided via a stand-alone utility function (to be described shortly), which makes it easier to reuse this functionality elsewhere.

<<Sample point on triangle uniformly by area>>= 
pstd::array<Float, 3> b = SampleUniformTriangle(u); Point3f p = b[0] * p0 + b[1] * p1 + b[2] * p2;

As with Triangle::NormalBounds(), the surface normal of the sampled point is affected by the orientation of the shading normal, if present.

<<Compute surface normal for sampled point on triangle>>= 
Normal3f n = Normalize(Normal3f(Cross(p1 - p0, p2 - p0))); if (mesh->n) { Normal3f ns(b[0] * mesh->n[v[0]] + b[1] * mesh->n[v[1]] + (1 - b[0] - b[1]) * mesh->n[v[2]]); n = FaceForward(n, ns); } else if (mesh->reverseOrientation ^ mesh->transformSwapsHandedness) n *= -1;

The $(u, v)$ coordinates for the sampled point are also found with barycentric interpolation.

<<Compute (u, v) for sampled point on triangle>>= 
<<Get triangle texture coordinates in uv array>> 
Point2f uvSample = b[0] * uv[0] + b[1] * uv[1] + b[2] * uv[2];

Because barycentric interpolation is linear, it can be shown that if we can find barycentric coordinates that uniformly sample a specific triangle, then those barycentrics can be used to uniformly sample any triangle. To derive the sampling algorithm, we will therefore consider the case of uniformly sampling a unit right triangle. Given a uniform sample in $[0, 1)^2$ that we would like to map to the triangle, the task can also be considered as finding an area-preserving mapping from the unit square to the unit triangle.

Figure 6.15: Samples from the unit square can be mapped to the unit right triangle by reflecting across the $x + y = 1$ diagonal, though doing so causes far away samples on the square to map to nearby points on the triangle.

A straightforward approach is suggested by Figure 6.15: the unit square could be folded over onto itself, such that samples that are on the side of the diagonal that places them outside the triangle are reflected across the diagonal to be inside it. While this would provide a valid sampling technique, it is undesirable since it causes samples that were originally far away in $[0, 1)^2$ to be close together on the triangle. (For example, $(0.01, 0.01)$ and $(0.99, 0.99)$ in the unit square would both map to the same point in the triangle.) The effect would be that sampling techniques that generate well-distributed uniform samples, such as those discussed in Chapter 8, would be less effective at reducing error.

A better mapping translates points along the diagonal by a varying amount that brings the two opposite sides of the unit square to the triangle’s diagonal.

\[
f(\xi_1, \xi_2) = (\xi_1 - \delta, \xi_2 - \delta), \quad \text{where }
\delta =
\begin{cases}
\xi_1/2 & \text{if } \xi_1 < \xi_2 \\
\xi_2/2 & \text{otherwise.}
\end{cases}
\]

The determinant of the Jacobian matrix for this mapping is constant; therefore the mapping is area preserving, and uniformly distributed samples in the unit square map to uniformly distributed points in the triangle. (Recall Section 2.4.1, which presented the mathematics of transforming samples from one domain to the other; there it was shown that if the Jacobian of the transformation is constant, the mapping is area-preserving.)
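For example, on the $\xi_1 < \xi_2$ branch the mapping is $f(\xi_1, \xi_2) = (\xi_1/2,\ \xi_2 - \xi_1/2)$, with Jacobian
\[
J_f = \begin{pmatrix} 1/2 & 0 \\ -1/2 & 1 \end{pmatrix},
\qquad |\det J_f| = \frac{1}{2},
\]
and the other branch gives the same determinant by symmetry. The constant value 1/2 is exactly the ratio of the unit right triangle's area to the unit square's area.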

<<Sampling Inline Functions>>+=  
pstd::array<Float, 3> SampleUniformTriangle(Point2f u) { Float b0, b1; if (u[0] < u[1]) { b0 = u[0] / 2; b1 = u[1] - b0; } else { b1 = u[1] / 2; b0 = u[0] - b1; } return {b0, b1, 1 - b0 - b1}; }

The usual normalization constraint gives the PDF in terms of the triangle’s surface area.

<<Triangle Public Methods>>+=  
Float PDF(const Interaction &) const { return 1 / Area(); }

In order to sample points on spheres with respect to solid angle from a reference point, we derived a specialized sampling method that only sampled from the potentially visible region of the sphere. For the cylinder and disk, we just sampled uniformly by area and rescaled the PDF to account for the change of measure from area to solid angle. It is tempting to do the same for triangles (and, indeed, all three previous editions of this book did so), but going through the work to apply a solid angle sampling approach can lead to much better results.

To see why, consider a simplified form of the reflection integral from the scattering equation, (4.14):

\[
\int_{\mathcal{S}^2} \rho\, L_{\mathrm{i}}(\mathrm{p}, \omega_{\mathrm{i}})\, |\cos\theta_{\mathrm{i}}|\, \mathrm{d}\omega_{\mathrm{i}},
\]

where the BRDF $f$ has been replaced with a constant $\rho$, which corresponds to a diffuse surface. If we consider the case of incident radiance only coming from a triangular light source that emits uniform diffuse radiance $L_{\mathrm{e}}$, then we can rewrite this integral as

\[
\rho L_{\mathrm{e}} \int_{\mathcal{S}^2} V(\mathrm{p}, \omega_{\mathrm{i}})\, |\cos\theta_{\mathrm{i}}|\, \mathrm{d}\omega_{\mathrm{i}},
\]

where $V$ is a visibility function that is 1 if the ray from $\mathrm{p}$ in direction $\omega_{\mathrm{i}}$ hits the light source and 0 if it misses or is occluded by another object. If we sample the triangle uniformly within the solid angle that it subtends from the reference point, we end up with the estimator

\[
\frac{\rho L_{\mathrm{e}}}{1/A_{\mathrm{solid}}} \left( V(\mathrm{p}, \omega_{\mathrm{i}})\, |\cos\theta'| \right),
\]

where $A_{\mathrm{solid}}$ is the subtended solid angle. The constant values have been pulled out, leaving just the two factors in parentheses that vary based on $\mathrm{p}$. They are the only source of variance in estimates of the integral.

As an alternative, consider a Monte Carlo estimate of this function where a point $\mathrm{p}'$ has been uniformly sampled on the surface of the triangle. If the triangle's area is $A$, then the PDF is $p(\mathrm{p}') = 1/A$. Applying the standard Monte Carlo estimator and defining a new visibility function $V$ that is between two points, we end up with

\[
\frac{\rho L_{\mathrm{e}}}{1/A} \left( V(\mathrm{p}, \mathrm{p}')\, |\cos\theta'|\, \frac{|\cos\theta_l|}{\|\mathrm{p}' - \mathrm{p}\|^2} \right),
\]

where the last factor accounts for the change of variables and where $\theta_l$ is the angle between the light source's surface normal and the vector between the two points. The values of the four factors inside the parentheses in this estimator all depend on the choice of $\mathrm{p}'$.
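The relationship between the two estimators is the standard conversion between the area and solid angle measures: for a point $\mathrm{p}'$ on the light,
\[
\mathrm{d}\omega_{\mathrm{i}} = \frac{|\cos\theta_l|}{\|\mathrm{p}' - \mathrm{p}\|^2}\, \mathrm{d}A,
\]
so an area density of $1/A$ corresponds to a solid angle density of $\|\mathrm{p}' - \mathrm{p}\|^2 / (A\,|\cos\theta_l|)$. This is the conversion that the <<Convert area sampling PDF in ss to solid angle measure>> fragment applies when the implementation falls back to area sampling.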

With area sampling, the $|\cos\theta_l|$ factor adds some additional variance, though not too much, since it is between 0 and 1. However, $1/\|\mathrm{p}' - \mathrm{p}\|^2$ can have unbounded variation over the surface of the triangle, which can lead to high variance in the estimator since the method used to sample $\mathrm{p}'$ does not account for it at all. This variance increases the larger the triangle is and the closer the reference point is to it. Figure 6.16 shows a scene where solid angle sampling significantly reduces error.

Figure 6.16: A Scene Where Solid Angle Triangle Sampling Is Beneficial. When points on triangles are sampled using uniform area sampling, error is high at points on the ground close to the emitter. If points are sampled on the triangle by uniformly sampling the solid angle the triangle subtends, then the remaining non-constant factors in the estimator are both between 0 and 1, which results in much lower error. For this scene, mean squared error (MSE) is reduced by a factor of 3.86. (Dragon model courtesy of the Stanford Computer Graphics Laboratory.)

The Triangle::Sample() method that takes a reference point therefore samples a point according to solid angle.

<<Triangle Public Methods>>+=  
pstd::optional<ShapeSample> Sample(const ShapeSampleContext &ctx, Point2f u) const { <<Get triangle vertices in p0, p1, and p2>> 
<<Use uniform area sampling for numerically unstable cases>> 
<<Sample spherical triangle from reference point>> 
<<Compute error bounds pError for sampled point on triangle>> 
<<Return ShapeSample for solid angle sampled point on triangle>> 
}

Triangles that subtend a very small solid angle as well as ones that cover nearly the whole hemisphere can encounter problems with floating-point accuracy in the following solid angle sampling approach. The sampling method falls back to uniform area sampling in those cases, which does not hurt results in practice: for very small triangles, the various additional factors tend not to vary as much over the triangle’s area. pbrt also samples the BSDF as part of the direct lighting calculation, which is an effective strategy for large triangles, so uniform area sampling is fine in that case as well.

<<Use uniform area sampling for numerically unstable cases>>= 
Float solidAngle = SolidAngle(ctx.p()); if (solidAngle < MinSphericalSampleArea || solidAngle > MaxSphericalSampleArea) { <<Sample shape by area and compute incident direction wi>> 
pstd::optional<ShapeSample> ss = Sample(u); ss->intr.time = ctx.time; Vector3f wi = ss->intr.p() - ctx.p(); if (LengthSquared(wi) == 0) return {}; wi = Normalize(wi);
<<Convert area sampling PDF in ss to solid angle measure>> 
ss->pdf /= AbsDot(ss->intr.n, -wi) / DistanceSquared(ctx.p(), ss->intr.p()); if (IsInf(ss->pdf)) return {};
return ss; }

<<Triangle Private Members>>+= 
static constexpr Float MinSphericalSampleArea = 3e-4; static constexpr Float MaxSphericalSampleArea = 6.22;

pbrt also includes an approximation to the effect of the $|\cos\theta'|$ factor in its triangle sampling algorithm, which leaves visibility and error in that approximation as the only sources of variance. We will defer discussion of the fragment that handles that, <<Apply warp product sampling for cosine factor at reference point>>, until after we have discussed the uniform solid angle sampling algorithm. For now we will note that it affects the final sampling PDF, which turns out to be the product of the PDF for uniform solid angle sampling of the triangle and a correction factor.

Uniform sampling of the solid angle that a triangle subtends is equivalent to uniformly sampling the spherical triangle that results from its projection on the unit sphere (recall Section 3.8.2). Spherical triangle sampling is implemented in a separate function described shortly, SampleSphericalTriangle(), that returns the barycentric coordinates for the sampled point.

<<Sample spherical triangle from reference point>>= 
<<Apply warp product sampling for cosine factor at reference point>> 
Float pdf = 1; if (ctx.ns != Normal3f(0, 0, 0)) { <<Compute cos θ-based weights w at sample domain corners>> 
Point3f rp = ctx.p(); Vector3f wi[3] = {Normalize(p0 - rp), Normalize(p1 - rp), Normalize(p2 - rp)}; pstd::array<Float, 4> w = pstd::array<Float, 4>{std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[0])), std::max<Float>(0.01, AbsDot(ctx.ns, wi[2]))};
u = SampleBilinear(u, w); pdf = BilinearPDF(u, w); }
Float triPDF; pstd::array<Float, 3> b = SampleSphericalTriangle({p0, p1, p2}, ctx.p(), u, &triPDF); if (triPDF == 0) return {}; pdf *= triPDF;

Given the barycentric coordinates, it is simple to compute the sampled point. With that as well as the surface normal, computed by reusing a fragment from the other triangle sampling method, we have everything necessary to return a ShapeSample.

<<Return ShapeSample for solid angle sampled point on triangle>>= 
Point3f p = b[0] * p0 + b[1] * p1 + b[2] * p2; <<Compute surface normal for sampled point on triangle>> 
<<Compute (u, v) for sampled point on triangle>> 
return ShapeSample{Interaction(Point3fi(p, pError), n, ctx.time, uvSample), pdf};

Figure 6.17: Geometric Setting for Spherical Triangles. Given vertices $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$, the respective opposite edges are labeled $\bar{a}$, $\bar{b}$, and $\bar{c}$ and the interior angles are labeled with Greek letters $\alpha$, $\beta$, and $\gamma$.

The spherical triangle sampling function takes three triangle vertices v, a reference point p, and a uniform sample u. The value of the PDF for the sampled point is optionally returned via pdf, if it is not nullptr. Figure 6.17 shows the geometric setting.

<<Sampling Function Definitions>>= 
pstd::array<Float, 3> SampleSphericalTriangle( const pstd::array<Point3f, 3> &v, Point3f p, Point2f u, Float *pdf) { <<Compute vectors a, b, and c to spherical triangle vertices>> 
<<Compute normalized cross products of all direction pairs>> 
<<Find angles α, β, and γ at spherical triangle vertices>> 
<<Uniformly sample triangle area A to compute A'>> 
<<Find cos β' for point along b for sampled area>> 
<<Sample c' along the arc between a and c>> 
<<Compute sampled spherical triangle direction and return barycentrics>> 
}

Given the reference point, it is easy to project the vertices on the unit sphere to find the spherical triangle vertices $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$.

<<Compute vectors a, b, and c to spherical triangle vertices>>= 
Vector3f a(v[0] - p), b(v[1] - p), c(v[2] - p); a = Normalize(a); b = Normalize(b); c = Normalize(c);

Because the plane containing an edge also passes through the origin, we can compute the plane normal for an edge from $\mathbf{a}$ to $\mathbf{b}$ as

\[
\mathbf{n}_{ab} = \frac{\mathbf{a} \times \mathbf{b}}{\|\mathbf{a} \times \mathbf{b}\|},
\]

and similarly for the other edges. If any of these normals are degenerate, then the triangle has zero area.

<<Compute normalized cross products of all direction pairs>>= 
Vector3f n_ab = Cross(a, b), n_bc = Cross(b, c), n_ca = Cross(c, a); if (LengthSquared(n_ab) == 0 || LengthSquared(n_bc) == 0 || LengthSquared(n_ca) == 0) return {}; n_ab = Normalize(n_ab); n_bc = Normalize(n_bc); n_ca = Normalize(n_ca);

Given the pairs of plane normals, AngleBetween() gives the angles between them. In computing these angles, we can take advantage of the fact that the plane normal for the edge between two vertices $\mathbf{b}$ and $\mathbf{a}$ is the negation of the plane normal for the edge from $\mathbf{a}$ to $\mathbf{b}$.

<<Find angles α, β, and γ at spherical triangle vertices>>= 
Float alpha = AngleBetween(n_ab, -n_ca); Float beta = AngleBetween(n_bc, -n_ab); Float gamma = AngleBetween(n_ca, -n_bc);

The spherical triangle sampling algorithm operates in two stages. The first step uniformly samples a new triangle with area $A'$ between 0 and the area $A$ of the original spherical triangle using the first sample: $A' = \xi_0 A$. This triangle is defined by finding a new vertex $\mathbf{c}'$ along the arc between $\mathbf{a}$ and $\mathbf{c}$ such that the resulting triangle $\mathbf{a}\mathbf{b}\mathbf{c}'$ has area $A'$ (see Figure 6.18(a)).

Figure 6.18: (a) After sampling a triangle area, the first step of the sampling algorithm finds the vertex $\mathbf{c}'$ that gives a spherical triangle $\mathbf{a}\mathbf{b}\mathbf{c}'$ with that area. The vertices $\mathbf{a}$ and $\mathbf{b}$ and the edge $\bar{c}$ are shared with the original triangle. (b) Given $\mathbf{c}'$, a direction $\omega$ is sampled along the arc between $\mathbf{b}$ and $\mathbf{c}'$.

In the second step, a vertex $\omega$ is sampled along the arc between $\mathbf{b}$ and $\mathbf{c}'$ with sampling density that is relatively lower near $\mathbf{b}$ and higher near $\mathbf{c}'$; the result is a uniform distribution of points inside the spherical triangle (Figure 6.18(b)). (The need for a nonuniform density along the arc can be understood by considering the arcs $\mathbf{b}\mathbf{c}'$ as $\mathbf{c}'$ sweeps from $\mathbf{a}$ to $\mathbf{c}$: the velocity of a point on the arc increases the farther away it is from $\mathbf{b}$, and so the density of sampled points along a given arc must be adjusted accordingly.)

Our implementation makes a small modification to the algorithm as described so far: rather than computing the triangle's spherical area $A$ and then sampling an area uniformly between 0 and $A$, it instead starts by computing its area plus $\pi$, which we will denote by $A_\pi$. Doing so lets us avoid the subtraction of $\pi$ from the sum of the interior angles that is present in the spherical triangle area equation, (3.5). For very small triangles, where $\alpha + \beta + \gamma \approx \pi$, the subtraction of $\pi$ can cause a significant loss of floating-point accuracy. The remainder of the algorithm's implementation is then adjusted to be based on $A'_\pi$ in order to avoid needing to perform the subtraction.

Given $A_\pi$, the sampled area-plus-$\pi$, $A'_\pi$, is easily computed by uniformly sampling between $\pi$ and $A_\pi$.
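Spelled out using the spherical triangle area equation, (3.5), $A = \alpha + \beta + \gamma - \pi$, so $A_\pi = \alpha + \beta + \gamma$ and sampling $A' = \xi_0 A$ corresponds to
\[
A'_\pi = \pi + \xi_0 (A_\pi - \pi) = (1 - \xi_0)\,\pi + \xi_0\, A_\pi,
\]
which is the linear interpolation Lerp(u[0], Pi, A_pi) computed in the fragment below.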

The returned PDF value can be initialized at this point; because the algorithm samples uniformly in the triangle's solid angle, the probability density function takes the constant value of one over the solid angle the triangle subtends, which is its spherical area. (For that, the value of $\pi$ must be subtracted.)

<<Uniformly sample triangle area A to compute A'>>= 
Float A_pi = alpha + beta + gamma; Float Ap_pi = Lerp(u[0], Pi, A_pi); if (pdf) { Float A = A_pi - Pi; *pdf = (A <= 0) ? 0 : 1 / A; }

At this point, we need to determine more values related to the sampled triangle. We have the vertices $\mathbf{a}$ and $\mathbf{b}$, the edge $\bar{c}$, and the angle $\alpha$ all unchanged from the given triangle. The area-plus-$\pi$ of the sampled triangle $A'_\pi$ is known, but we do not have the vertex $\mathbf{c}'$, the edges $\bar{b}'$ or $\bar{a}'$, or the angles $\beta'$ or $\gamma'$.

To find the vertex $\mathbf{c}'$, it is sufficient to find the length of the arc $\bar{b}'$. In this case, $\cos\bar{b}'$ will do, since $0 \le \bar{b}' < \pi$. The first step is to apply one of the spherical cosine laws, which gives the equality

\[
\cos\beta' = -\cos\gamma' \cos\alpha + \sin\gamma' \sin\alpha \cos\bar{b}'.
\tag{6.8}
\]

Although we do not know $\gamma'$, we can apply the definition of $A'_\pi$,

\[
A'_\pi = \alpha + \beta' + \gamma',
\]

to express $\gamma'$ in terms of quantities that are either known or are $\beta'$:

\[
\gamma' = A'_\pi - \alpha - \beta'.
\]

Substituting this equality in Equation (6.8) and solving for $\cos\bar{b}'$ gives

\[
\cos\bar{b}' = \frac{\cos\beta' + \cos(A'_\pi - \alpha - \beta') \cos\alpha}{\sin(A'_\pi - \alpha - \beta') \sin\alpha}.
\]

Defining $\phi = A'_\pi - \alpha$ to simplify notation, we have

\[
\cos\bar{b}' = \frac{\cos\beta' + \cos(\phi - \beta') \cos\alpha}{\sin(\phi - \beta') \sin\alpha}.
\]

The cosine and sine sum identities then give

\[
\cos\bar{b}' = \frac{\cos\beta' + (\cos\phi \cos\beta' + \sin\phi \sin\beta') \cos\alpha}{(\sin\phi \cos\beta' - \cos\phi \sin\beta') \sin\alpha}.
\tag{6.9}
\]

The only remaining unknowns on the right hand side are the sines and cosines of $\beta'$.

To find $\sin\beta'$ and $\cos\beta'$, we can use another spherical cosine law, which gives the equality

\[
\cos\gamma' = -\cos\beta' \cos\alpha + \sin\beta' \sin\alpha \cos\bar{c}.
\]

It can be simplified in a similar manner to find the equation

\[
0 = (\cos\phi + \cos\alpha) \cos\beta' + (\sin\phi - \sin\alpha \cos\bar{c}) \sin\beta'.
\]

The terms in parentheses are all known. We will denote them by $k_1 = \cos\phi + \cos\alpha$ and $k_2 = \sin\phi - \sin\alpha \cos\bar{c}$. It is then easy to see that solutions to the equation

\[
0 = k_1 \cos\beta' + k_2 \sin\beta'
\]

are given by

\[
\cos\beta' = \frac{\pm k_2}{\sqrt{k_1^2 + k_2^2}}
\quad\text{and}\quad
\sin\beta' = \frac{\mp k_1}{\sqrt{k_1^2 + k_2^2}}.
\]

Substituting these into Equation (6.9), taking the solution with a positive cosine, and simplifying gives

\[
\cos\bar{b}' = \frac{k_2 + (k_2 \cos\phi - k_1 \sin\phi) \cos\alpha}{(k_2 \sin\phi + k_1 \cos\phi) \sin\alpha},
\]

which finally has only known values on the right hand side.

The code to compute this cosine follows directly from this solution. In it, we have also applied trigonometric identities to compute $\sin\phi$ and $\cos\phi$ in terms of other sines and cosines.

<<Find cos β' for point along b for sampled area>>= 
Float cosAlpha = std::cos(alpha), sinAlpha = std::sin(alpha); Float sinPhi = std::sin(Ap_pi) * cosAlpha - std::cos(Ap_pi) * sinAlpha; Float cosPhi = std::cos(Ap_pi) * cosAlpha + std::sin(Ap_pi) * sinAlpha; Float k1 = cosPhi + cosAlpha; Float k2 = sinPhi - sinAlpha * Dot(a, b) /* cos c */; Float cosBp = (k2 + (DifferenceOfProducts(k2, cosPhi, k1, sinPhi)) * cosAlpha) / ((SumOfProducts(k2, sinPhi, k1, cosPhi)) * sinAlpha); cosBp = Clamp(cosBp, -1, 1);

The arc of the great circle between the two points $\mathbf{a}$ and $\mathbf{c}$ can be parameterized by $\cos\theta\,\mathbf{a} + \sin\theta\,\mathbf{c}_\perp$, where $\mathbf{c}_\perp$ is the normalized perpendicular component of $\mathbf{c}$ with respect to $\mathbf{a}$. This vector is given by the GramSchmidt() function introduced earlier, which makes the computation of $\mathbf{c}'$ straightforward. In this case, $\sin\bar{b}'$ can then be found using $\cos\bar{b}'$ with the Pythagorean identity, since we know that it must be nonnegative.
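Concretely, with the normalized perpendicular component written out, the new vertex is
\[
\mathbf{c}_\perp = \frac{\mathbf{c} - (\mathbf{a} \cdot \mathbf{c})\,\mathbf{a}}{\|\mathbf{c} - (\mathbf{a} \cdot \mathbf{c})\,\mathbf{a}\|},
\qquad
\mathbf{c}' = \cos\bar{b}'\,\mathbf{a} + \sin\bar{b}'\,\mathbf{c}_\perp,
\]
with $\sin\bar{b}' = \sqrt{1 - \cos^2\bar{b}'}$, matching the fragment below.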

<<Sample c' along the arc between a and c>>= 
Float sinBp = SafeSqrt(1 - Sqr(cosBp)); Vector3f cp = cosBp * a + sinBp * Normalize(GramSchmidt(c, a));

For the sample points to be uniformly distributed in the spherical triangle, it can be shown that if the edge from $\mathbf{b}$ to $\mathbf{c}'$ is parameterized using $\theta$ in the same way as was used for the edge from $\mathbf{a}$ to $\mathbf{c}$, then $\cos\theta$ should be sampled as

\[
\cos\theta = 1 - \xi_1 \left(1 - (\mathbf{c}' \cdot \mathbf{b})\right).
\]

(The “Further Reading” section has pointers to the details.)

With that, we can compute the final sampled direction $\omega$. The remaining step is to compute the barycentric coordinates for the sampled direction.

<<Compute sampled spherical triangle direction and return barycentrics>>= 
Float cosTheta = 1 - u[1] * (1 - Dot(cp, b)); Float sinTheta = SafeSqrt(1 - Sqr(cosTheta)); Vector3f w = cosTheta * b + sinTheta * Normalize(GramSchmidt(cp, b)); <<Find barycentric coordinates for sampled direction w>> 
<<Return clamped barycentrics for sampled direction>> 

The barycentric coordinates of the corresponding point in the planar triangle can be found using part of a ray–triangle intersection algorithm that finds the barycentrics along the way (Möller and Trumbore 1997). It starts with equating the parametric form of the ray with the barycentric interpolation of the triangle's vertices $\mathrm{v}_i$,

normal o plus t bold d equals left-parenthesis 1 minus b 0 minus b 1 right-parenthesis normal v 0 plus b 1 normal v 1 plus b 2 normal v 2 comma

expressing this as a matrix equation, and solving the resulting linear system for the barycentrics. The solution is implemented in the following fragment, which includes the result of factoring out various common subexpressions.

<<Find barycentric coordinates for sampled direction w>>= 
Vector3f e1 = v[1] - v[0], e2 = v[2] - v[0];
Vector3f s1 = Cross(w, e2);
Float divisor = Dot(s1, e1);
Float invDivisor = 1 / divisor;
Vector3f s = p - v[0];
Float b1 = Dot(s, s1) * invDivisor;
Float b2 = Dot(w, Cross(s, e1)) * invDivisor;
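Written out explicitly, the factored solution that this fragment evaluates can be summarized as follows; this is only a restatement of the Möller–Trumbore result in the notation of the code above, with the reference point $\mathrm{p}$ playing the role of the ray origin and the sampled direction $\omega$ that of the ray direction:

$$\mathbf{e}_1 = \mathrm{v}_1 - \mathrm{v}_0, \quad \mathbf{e}_2 = \mathrm{v}_2 - \mathrm{v}_0, \quad \mathbf{s} = \mathrm{p} - \mathrm{v}_0,$$
$$b_1 = \frac{(\omega \times \mathbf{e}_2) \cdot \mathbf{s}}{(\omega \times \mathbf{e}_2) \cdot \mathbf{e}_1}, \qquad b_2 = \frac{(\mathbf{s} \times \mathbf{e}_1) \cdot \omega}{(\omega \times \mathbf{e}_2) \cdot \mathbf{e}_1}.$$

The common denominator $(\omega \times \mathbf{e}_2) \cdot \mathbf{e}_1$ is the divisor that the code computes once and then inverts.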

The computed barycentrics may be invalid for very small and very large triangles. This happens rarely, but to protect against it, they are clamped to be within the triangle before they are returned.

<<Return clamped barycentrics for sampled direction>>= 
b1 = Clamp(b1, 0, 1);
b2 = Clamp(b2, 0, 1);
if (b1 + b2 > 1) {
    b1 /= b1 + b2;
    b2 /= b1 + b2;
}
return {Float(1 - b1 - b2), Float(b1), Float(b2)};

As noted earlier, uniform solid angle sampling does not account for the incident cosine factor at the reference point. Indeed, there is no known analytic method to do so. However, it is possible to apply a warping function to the uniform samples u that approximately accounts for this factor.

To understand the idea, first note that $\cos\theta$ varies smoothly over the spherical triangle. Because the spherical triangle sampling algorithm that we have just defined maintains a continuous relationship between sample values and points in the triangle, if we consider the image of the $\cos\theta$ function back in the $[0,1]^2$ sampling domain, as would be found by mapping it through the inverse of the spherical triangle sampling algorithm, the $\cos\theta$ function is smoothly varying there as well. (See Figure 6.19.)

Figure 6.19: (a) The $\cos\theta$ factor varies smoothly over the area of a spherical triangle. (b) If it is mapped back to the $[0,1]^2$ sampling domain, it also varies smoothly there, thanks to the sampling algorithm not introducing any discontinuities or excessive distortion.

It can be shown through simple application of the chain rule that a suitable transformation of uniform $[0,1]^2$ sample points can account for the $\cos\theta$ factor. Specifically, if transformed points are distributed according to the distribution of $\cos\theta$ in $[0,1]^2$ and then used with the spherical triangle sampling algorithm, then the distribution of directions on the sphere will include the $\cos\theta$ factor.

The true function has no convenient analytic form, but because it is smoothly varying, here we will approximate it with a bilinear function. Each corner of the $[0,1]^2$ sampling domain maps to one of the three vertices of the spherical triangle, and so we set the bilinear function's value at each corner according to the $\cos\theta$ factor computed at the associated triangle vertex.
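As a reminder of the form of this approximation (the corner labels $w_{00}, w_{10}, w_{01}, w_{11}$ are notation introduced here for the values at the four corners, not from the text), a bilinear function on $[0,1]^2$ and its normalized PDF are

$$f(u_0, u_1) = (1-u_0)(1-u_1)\,w_{00} + u_0(1-u_1)\,w_{10} + (1-u_0)\,u_1\,w_{01} + u_0 u_1\,w_{11},$$
$$p(u_0, u_1) = \frac{4\,f(u_0, u_1)}{w_{00} + w_{10} + w_{01} + w_{11}},$$

where the normalization follows from each corner term integrating to $1/4$ over $[0,1]^2$. Sampling this distribution and evaluating its PDF are handled by the SampleBilinear() and BilinearPDF() functions used in the fragment below.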

Sampling a point in the triangle then proceeds by using the initial uniform sample to sample the bilinear distribution and then using the resulting nonuniform point in $[0,1]^2$ with the triangle sampling algorithm. (See Figure 6.20.)

Figure 6.20: If (a) uniform sample points are warped to (b) approximate the distribution of the incident cosine factor in $[0,1]^2$ before being used with the spherical triangle sampling algorithm, then (c) the resulting points in the triangle are approximately cosine-distributed.

Applying the principles of transforming between distributions that were introduced in Section 2.4.1, we can find that the overall PDF of such a sample is given by the product of the PDF for the bilinear sample and the PDF of the spherical triangle sample.
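In equation form (with notation introduced here rather than taken from the text: $u'$ for the warped sample, $p_{\mathrm{bilin}}$ for the bilinear PDF, and $A$ for the solid angle subtended by the spherical triangle, so that uniform spherical triangle sampling has constant PDF $1/A$), this product is

$$p(\omega) = p_{\mathrm{bilin}}(u')\,\frac{1}{A}.$$

The $1/A$ factor appears because the spherical triangle sampling map carries the uniform distribution on $[0,1]^2$ to the uniform distribution over the triangle's solid angle, so the magnitude of its Jacobian determinant is the constant $A$.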

This technique is only applied for reference points on surfaces. For points in scattering media, the surface normal ShapeSampleContext::ns is degenerate and no sample warping is applied.

<<Apply warp product sampling for cosine factor at reference point>>= 
Float pdf = 1;
if (ctx.ns != Normal3f(0, 0, 0)) {
    <<Compute cos θ-based weights w at sample domain corners>>
    u = SampleBilinear(u, w);
    pdf = BilinearPDF(u, w);
}

For the spherical triangle sampling algorithm, the vertex v0 corresponds to the sample $(0,1)$, v1 to $(0,0)$ and $(1,0)$ (and the line in between), and v2 to $(1,1)$. Therefore, the sampling weights at the corners of the $[0,1]^2$ domain are computed using the cosine of the direction to the corresponding vertex.

<<Compute cos θ-based weights w at sample domain corners>>= 
Point3f rp = ctx.p();
Vector3f wi[3] = {Normalize(p0 - rp), Normalize(p1 - rp), Normalize(p2 - rp)};
pstd::array<Float, 4> w =
    pstd::array<Float, 4>{std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])),
                          std::max<Float>(0.01, AbsDot(ctx.ns, wi[1])),
                          std::max<Float>(0.01, AbsDot(ctx.ns, wi[0])),
                          std::max<Float>(0.01, AbsDot(ctx.ns, wi[2]))};

The associated PDF() method is thankfully much simpler than the sampling routine.

<<Triangle Public Methods>>+= 
Float PDF(const ShapeSampleContext &ctx, Vector3f wi) const {
    Float solidAngle = SolidAngle(ctx.p());
    <<Return PDF based on uniform area sampling for challenging triangles>>
    Float pdf = 1 / solidAngle;
    <<Adjust PDF for warp product sampling of triangle cos θ factor>>
    return pdf;
}

It is important that the PDF() method makes exactly the same decisions about which technique is used to sample the triangle as the Sample() method does. This method therefore starts with the same check for very small and very large triangles to determine whether it should fall back to returning the PDF based on uniform area sampling.

<<Return PDF based on uniform area sampling for challenging triangles>>= 
if (solidAngle < MinSphericalSampleArea || solidAngle > MaxSphericalSampleArea) {
    <<Intersect sample ray with shape geometry>>
    Ray ray = ctx.SpawnRay(wi);
    pstd::optional<ShapeIntersection> isect = Intersect(ray);
    if (!isect) return 0;
    <<Compute PDF in solid angle measure from shape intersection point>>
    Float pdf = (1 / Area()) / (AbsDot(isect->intr.n, -wi) /
                                DistanceSquared(ctx.p(), isect->intr.p()));
    if (IsInf(pdf)) pdf = 0;
    return pdf;
}

If Sample() would have warped the initial uniform random sample to account for the incident $\cos\theta$ factor, it is necessary to incorporate the corresponding change-of-variables factor in the returned PDF here. To do so, we need to be able to invert the spherical triangle sampling algorithm in order to determine the sample value u that samples a point on the triangle that gives the incident direction wi at the reference point. The InvertSphericalTriangleSample() function performs this computation.

<<Adjust PDF for warp product sampling of triangle cos θ factor>>= 
if (ctx.ns != Normal3f(0, 0, 0)) {
    <<Get triangle vertices in p0, p1, and p2>>
    const TriangleMesh *mesh = GetMesh();
    const int *v = &mesh->vertexIndices[3 * triIndex];
    Point3f p0 = mesh->p[v[0]], p1 = mesh->p[v[1]], p2 = mesh->p[v[2]];
    Point2f u = InvertSphericalTriangleSample({p0, p1, p2}, ctx.p(), wi);
    <<Compute cos θ-based weights w at sample domain corners>>
    pdf *= BilinearPDF(u, w);
}

The pair of sample values that give a sampled direction $\omega$ can be found by inverting each of the sampling operations individually. The function that performs this computation starts out with a few reused fragments to compute the angles at the three vertices of the spherical triangle.

<<Sampling Function Definitions>>+=  
Point2f InvertSphericalTriangleSample(const pstd::array<Point3f, 3> &v,
                                      Point3f p, Vector3f w) {
    <<Compute vectors a, b, and c to spherical triangle vertices>>
    Vector3f a(v[0] - p), b(v[1] - p), c(v[2] - p);
    a = Normalize(a); b = Normalize(b); c = Normalize(c);
    <<Compute normalized cross products of all direction pairs>>
    Vector3f n_ab = Cross(a, b), n_bc = Cross(b, c), n_ca = Cross(c, a);
    if (LengthSquared(n_ab) == 0 || LengthSquared(n_bc) == 0 ||
        LengthSquared(n_ca) == 0)
        return {};
    n_ab = Normalize(n_ab); n_bc = Normalize(n_bc); n_ca = Normalize(n_ca);
    <<Find angles α, β, and γ at spherical triangle vertices>>
    Float alpha = AngleBetween(n_ab, -n_ca);
    Float beta = AngleBetween(n_bc, -n_ab);
    Float gamma = AngleBetween(n_ca, -n_bc);
    <<Find vertex c' along ac arc for ω>>
    <<Invert uniform area sampling to find u0>>
    <<Invert arc sampling to find u1 and return result>>
}

Next, it finds the vertex $\mathbf{c}'$ along the arc between $\mathbf{a}$ and $\mathbf{c}$ that defines the subtriangle that would have been sampled when sampling $\omega$. This vertex can be found by computing the intersection of the great circle defined by $\mathbf{b}$ and $\omega$ and the great circle defined by $\mathbf{a}$ and $\mathbf{c}$; see Figure 6.21.

Figure 6.21: Given a spherical triangle $\mathbf{a}\mathbf{b}\mathbf{c}$ and a direction $\omega$ that is inside it, the vertex $\mathbf{c}'$ along the edge from $\mathbf{a}$ to $\mathbf{c}$ can be found from the intersection of the great circle that passes through $\mathbf{b}$ and $\omega$ and the great circle that passes through $\mathbf{a}$ and $\mathbf{c}$.

Recall from Section 3.8.2 that the great circle passing through two points on the sphere is given by the intersection of the sphere with a plane passing through the origin and those two points. Therefore, we can find the intersection of the two corresponding planes, which is a line. In 3D, the cross product of the plane normals gives this line's direction. The line intersects the sphere at two points, so it is necessary to choose the one that lies between $\mathbf{a}$ and $\mathbf{c}$.

<<Find vertex c' along ac arc for ω>>= 
Vector3f cp = Normalize(Cross(Cross(b, w), Cross(c, a)));
if (Dot(cp, a + c) < 0)
    cp = -cp;

Given $\mathbf{c}'$, it is easy to compute the area of the triangle $\mathbf{a}\mathbf{b}\mathbf{c}'$; the ratio of that area to the original area gives the first sample value u0. However, it is necessary to be aware of the case where $\mathbf{a}$ and $\mathbf{c}'$ are nearly coincident; in that case, computation of the angle $\gamma'$ may have high error, sometimes to the point that the subtriangle $\mathbf{a}\mathbf{b}\mathbf{c}'$ seems to have larger area than the original triangle $\mathbf{a}\mathbf{b}\mathbf{c}$. That case is caught with a dot product test.

<<Invert uniform area sampling to find u0>>= 
Float u0;
if (Dot(a, cp) > 0.99999847691f /* 0.1 degrees */)
    u0 = 0;
else {
    <<Compute area A' of subtriangle>>
    <<Compute sample u0 that gives the area A'>>
}

Otherwise, the area of the subtriangle $A'$ is computed using Girard's theorem.
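As a reminder (this only restates the theorem as it applies here), Girard's theorem gives the area of a spherical triangle on the unit sphere as its angle excess. Writing $\beta'$ and $\gamma'$ for the subtriangle's interior angles at $\mathbf{b}$ and $\mathbf{c}'$ (labels introduced here; the fragment below computes them using AngleBetween() on the corresponding plane normals), the subtriangle's area is

$$A' = \alpha + \beta' + \gamma' - \pi,$$

where $\alpha$ is unchanged from the original triangle, since $\mathbf{c}'$ lies on the same great circle through $\mathbf{a}$ and $\mathbf{c}$ and so the angle at $\mathbf{a}$ is unaffected.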

<<Compute area A' of subtriangle>>= 
Vector3f n_cpb = Cross(cp, b), n_acp = Cross(a, cp);
if (LengthSquared(n_cpb) == 0 || LengthSquared(n_acp) == 0)
    return Point2f(0.5, 0.5);
n_cpb = Normalize(n_cpb); n_acp = Normalize(n_acp);
Float Ap = alpha + AngleBetween(n_ab, n_cpb) + AngleBetween(n_acp, -n_cpb) - Pi;

The first sample value is then easily found given these two areas.

<<Compute sample u0 that gives the area A'>>= 
Float A = alpha + beta + gamma - Pi;
u0 = Ap / A;

The sampling method for choosing $\omega$ along the arc through $\mathbf{b}$ and $\mathbf{c}'$, Equation (6.10), is also easily inverted.
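Rearranging Equation (6.10) for the sample value (shown here only for reference) and using the fact that $\cos\theta = \omega \cdot \mathbf{b}$ for the given direction gives

$$\xi_1 = \frac{1 - \cos\theta}{1 - (\mathbf{c}' \cdot \mathbf{b})} = \frac{1 - (\omega \cdot \mathbf{b})}{1 - (\mathbf{c}' \cdot \mathbf{b})},$$

which is the value that the following fragment stores in u1.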

<<Invert arc sampling to find u1 and return result>>= 
Float u1 = (1 - Dot(w, b)) / (1 - Dot(cp, b));
return Point2f(Clamp(u0, 0, 1), Clamp(u1, 0, 1));