## 6.1 Basic Shape Interface

The interface for `Shape`s is defined in the file
`base/shape.h`, and the shape implementations can be found in
`shapes.h` and `shapes.cpp`. The `Shape`
class defines the general shape interface.

### 6.1.1 Bounding

The scenes that `pbrt` renders often contain objects
that are computationally expensive to process. For many operations, it is
useful to have a 3D *bounding volume* that
encloses an object. For example, if a ray does not pass through a
particular bounding volume, `pbrt` can avoid processing all the objects
inside of it for that ray.

Axis-aligned bounding boxes are a convenient bounding volume, as they
require only six floating-point values to store. They fit many shapes well and
it is fairly inexpensive to test for the intersection of a ray with an axis-aligned
bounding box. Each `Shape` implementation must therefore be capable
of bounding itself with an axis-aligned bounding box represented by a
`Bounds3f`. The returned bounding box should be in the rendering
coordinate system (recall the discussion of coordinate systems in
Section 5.1.1).

In addition to bounding their spatial extent, shapes must also be able to
bound their range of surface normals. The `NormalBounds()` method
should return such a bound using a `DirectionCone`, which was defined
in Section 3.8.4. Normal bounds are specifically
useful in lighting calculations: when a shape is emissive, they sometimes
make it possible to efficiently determine that the shape does not
illuminate a particular point in the scene.

### 6.1.2 Ray–Bounds Intersections

Given the use of `Bounds3f` instances to bound shapes, we will add a
`Bounds3` method, `Bounds3::IntersectP()`, that checks for a
ray–box intersection and returns the two parametric values of the
intersection, if any.

One way to think of bounding boxes is as the intersection of three slabs, where a slab is the region of space between two parallel planes. To intersect a ray with a box, we intersect the ray with each of the box’s three slabs in turn. Because the slabs are aligned with the three coordinate axes, a number of optimizations can be made in the ray–slab tests.

The basic ray–bounding box intersection algorithm works as follows: we start with a parametric interval that covers the range of positions along the ray where we are interested in finding intersections; typically, this is `(0, tMax)`. We will then successively compute the two parametric positions where the ray intersects each axis-aligned slab. We compute the set intersection of the per-slab intersection interval with the current intersection interval, returning failure if we find that the resulting interval is degenerate. If, after checking all three slabs, the interval is nondegenerate, we have the parametric range of the ray that is inside the box. Figure 6.1 illustrates this process, and Figure 6.2 shows the basic geometry of a ray intersecting a slab.
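The interval-narrowing loop can be sketched as a standalone function; this is an illustration with an assumed array-based interface, not pbrt's `Bounds3::IntersectP()` implementation:

```cpp
#include <cmath>
#include <utility>

// Sketch of the slab-based ray-box test described above (assumed
// interface, not pbrt's code). pMin/pMax are the box corners, o/d the
// ray origin and direction. On a hit, [*hitt0, *hitt1] receives the
// parametric range of the ray that is inside the box.
bool SlabIntersect(const float pMin[3], const float pMax[3],
                   const float o[3], const float d[3],
                   float tMax, float *hitt0, float *hitt1) {
    float t0 = 0, t1 = tMax;  // start with the full ray interval
    for (int i = 0; i < 3; ++i) {
        // Parametric distances to the two planes of slab i
        float invD = 1 / d[i];
        float tNear = (pMin[i] - o[i]) * invD;
        float tFar = (pMax[i] - o[i]) * invD;
        if (tNear > tFar) std::swap(tNear, tFar);
        // Intersect the slab's interval with the running interval
        t0 = tNear > t0 ? tNear : t0;
        t1 = tFar < t1 ? tFar : t1;
        if (t0 > t1) return false;  // degenerate interval: miss
    }
    if (hitt0) *hitt0 = t0;
    if (hitt1) *hitt1 = t1;
    return true;
}
```

Note that each slab can only shrink the running interval; once it becomes degenerate, no later slab can widen it again, so the loop can exit early.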

If the `Bounds3::IntersectP()` method returns `true`, the
intersection’s parametric range is returned in the optional arguments
`hitt0` and `hitt1`. Intersections outside of the `(0, tMax)` range of the ray are ignored. If the ray's origin is inside the
box, a value of 0 is returned for `hitt0`.

<<Update interval for `i`th bounding box slab>>

<<Update `tFar` to ensure robust ray–bounds intersection>>

For each pair of planes, this routine needs to compute two ray–plane intersections. For example, the slab described by two planes perpendicular to the $x$ axis can be described by planes through points $(x_0, 0, 0)$ and $(x_1, 0, 0)$, each with normal $(1, 0, 0)$. Consider the first $t$ value for a plane intersection, $t_0$. The parametric $t$ value for the intersection between a ray with origin $\mathbf{o}$ and direction $\mathbf{d}$ and a plane $a x + b y + c z + d = 0$ can be found by substituting the ray equation into the plane equation:

$$0 = a(o_x + t\,d_x) + b(o_y + t\,d_y) + c(o_z + t\,d_z) + d = (a, b, c) \cdot \mathbf{o} + t\,(a, b, c) \cdot \mathbf{d} + d.$$

Solving for $t$ gives

$$t = \frac{-d - ((a, b, c) \cdot \mathbf{o})}{(a, b, c) \cdot \mathbf{d}}.$$

Because the $y$ and $z$ components of the plane's normal are zero, $b$ and $c$ are zero, and $a$ is one. The plane's $d$ coefficient is $-x_0$. We can use this information and the definition of the dot product to simplify the calculation substantially:

$$t_0 = \frac{x_0 - o_x}{d_x}.$$

The code to compute the $t$ values of the slab intersections starts by
computing the reciprocal of the corresponding component of the ray
direction so that it can multiply by this factor instead of performing
multiple divisions. Note that, although it divides by this component, it
is not necessary to verify that it is nonzero. If it is zero, then
`invRayDir` will hold an infinite value, either $-\infty$ or $+\infty$,
and the rest of the algorithm still works correctly.
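To see why the division is safe, consider a toy one-dimensional version of the per-slab computation (an illustration, not pbrt's code):

```cpp
#include <cmath>
#include <utility>

// Under IEEE floating point, 1/0 yields an infinity, and the resulting
// slab interval is either (-inf, +inf) -- the ray stays between the two
// planes forever -- or an interval that correctly reports a miss.
struct SlabInterval { float tNear, tFar; };

SlabInterval SlabTs(float planeMin, float planeMax, float o, float d) {
    float invRayDir = 1 / d;  // may be +infinity or -infinity when d == 0
    float tNear = (planeMin - o) * invRayDir;
    float tFar = (planeMax - o) * invRayDir;
    if (tNear > tFar) std::swap(tNear, tFar);
    return {tNear, tFar};
}
```

A ray parallel to the slab with its origin between the two planes gets the interval $(-\infty, +\infty)$, which never narrows the running range; with its origin outside the planes, both values come out as the same infinity and the interval intersection correctly becomes empty.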

<<Update interval for `i`th bounding box slab>>=

<<Update `tFar` to ensure robust ray–bounds intersection>>

The two distances are reordered so that `tNear` holds the closer
intersection and `tFar` the farther one. This gives a parametric
range $[t_{\mathrm{near}}, t_{\mathrm{far}}]$, which is used to compute the set
intersection with the current range $[t_0, t_1]$ to compute a
new range. If this new range is empty (i.e., $t_0 > t_1$),
then the code can immediately return failure.

There is another floating-point-related subtlety here: in the case where
the ray origin is in the plane of one of the bounding box slabs and the ray
lies in the plane of the slab, it is possible that `tNear` or
`tFar` will be computed by an expression of the form $0/0$, which
results in a floating-point “not a number” (NaN) value. Like
infinity values, NaNs have well-specified semantics: for example, any
logical comparison involving a NaN always evaluates to false. Therefore,
the code that updates the values of `t0` and `t1` is carefully
written so that if `tNear` or `tFar` is NaN, then `t0` or
`t1` will not ever take on a NaN value but will always remain unchanged.
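The careful ordering can be distilled into a small helper (a sketch, not pbrt's exact code):

```cpp
#include <cmath>

// Sketch of the NaN-robust interval update described above. Any
// comparison involving a NaN evaluates to false, so writing each update
// as "replace only if the comparison succeeds" guarantees that t0 and t1
// keep their previous, non-NaN values when a slab distance was computed
// as 0/0.
void UpdateInterval(float tNear, float tFar, float *t0, float *t1) {
    // If tNear is NaN, "tNear > *t0" is false and *t0 is unchanged.
    *t0 = tNear > *t0 ? tNear : *t0;
    // Likewise, if tFar is NaN, *t1 is unchanged.
    *t1 = tFar < *t1 ? tFar : *t1;
}
```

Had the updates been written the other way around, e.g. as the equivalent of `max(tNear, t0)` with the possibly-NaN value compared first, a NaN could propagate into the running interval.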


`Bounds3` also provides a specialized `IntersectP()` method that
takes the reciprocal of the ray’s direction as an additional parameter, so
that the three reciprocals do not need to be computed each time
`IntersectP()` is called.

This version of the method also takes precomputed values that indicate
whether each direction component is negative, which makes it possible to
eliminate the comparisons of the computed `tNear` and `tFar`
values in the original routine and to directly compute the respective
near and far values. Because the comparisons that order these values from
low to high in the original code are dependent on computed values, they can
be inefficient for processors to execute, since the computation of their
values must be finished before the comparison can be made. Because many
ray–bounds intersection tests may be performed during rendering, this
small optimization is worth using.

This routine returns `true` if the ray segment is
entirely inside the bounding box, even if the intersections are not within
the ray’s range.

<<Update `tMax` and `tyMax` to ensure robust bounds intersection>>

    if (tMin > tyMax || tyMin > tMax) return false;
    if (tyMin > tMin) tMin = tyMin;
    if (tyMax < tMax) tMax = tyMax;

<<Update `tzMax` to ensure robust bounds intersection>>

If the ray direction vector is negative, the “near” parametric intersection will be found with the slab with the larger of the two bounding values, and the far intersection will be found with the slab with the smaller of them. The implementation can use this observation to compute the near and far parametric values in each direction directly.
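Under the assumed interface below (illustrative names, not pbrt's exact method, and omitting the robustness adjustments covered in Section 6.8), that observation looks like this:

```cpp
// Sketch of the optimized variant described above: the caller passes the
// precomputed reciprocal direction and per-component sign flags, so the
// near plane of each slab is selected directly instead of computing both
// distances and conditionally swapping them.
bool FastIntersectP(const float pMin[3], const float pMax[3],
                    const float o[3], const float invDir[3],
                    const int dirIsNeg[3], float tMax) {
    float t0 = 0, t1 = tMax;
    for (int i = 0; i < 3; ++i) {
        // With a negative direction component, the "near" plane is the
        // one with the larger coordinate value.
        float tNear = ((dirIsNeg[i] ? pMax[i] : pMin[i]) - o[i]) * invDir[i];
        float tFar = ((dirIsNeg[i] ? pMin[i] : pMax[i]) - o[i]) * invDir[i];
        t0 = tNear > t0 ? tNear : t0;
        t1 = tFar < t1 ? tFar : t1;
        if (t0 > t1) return false;
    }
    return true;
}
```

Because the plane selection depends only on the precomputed sign flags and not on the computed `t` values, there is no data-dependent comparison stalling the pipeline before the interval update.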


The fragment <<Check for ray intersection against `z` slab>> is analogous and is not included here.

This intersection test is at the heart of traversing the `BVHAggregate`
acceleration structure, which is introduced in Section 7.3.
Because so many ray–bounding box intersection tests are performed while
traversing the BVH tree, we found that this optimized method provided
approximately a 15% performance improvement in overall rendering time
compared to using the `Bounds3::IntersectP()` variant that did not take
the precomputed direction reciprocals and signs.

### 6.1.3 Intersection Tests

`Shape` implementations must provide an implementation of two
methods that test for ray intersections with their shape. The first,
`Intersect()`, returns geometric information about a single ray–shape
intersection corresponding to the first intersection, if any, in the
`(0, tMax)` parametric range along the given ray.

In the event that an intersection is found, a `SurfaceInteraction`
corresponding to the intersection point and the parametric distance
along the ray where the intersection occurred are returned via a
`ShapeIntersection` instance.

There are a few important things to keep in mind when reading (and writing) intersection routines:

- The provided `tMax` value defines the endpoint of the ray. Intersection routines must ignore any intersections that occur after this point.
- If there are multiple intersections with a shape along the ray, the closest one should be reported.
- The rays passed into intersection routines are in rendering space, so shapes are responsible for transforming them to object space if needed for intersection tests. The intersection information returned should be in rendering space.

The second intersection test method, `Shape::IntersectP()`, is a
predicate function that determines whether or not an intersection occurs
without returning any details about the intersection itself. That test is
often more efficient than a full intersection test. This method is used in
particular for shadow rays that are testing the visibility of a light
source from a point in the scene.
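As a toy illustration of this two-method pattern, using hypothetical stand-in types rather than pbrt's `ShapeIntersection` and `SurfaceInteraction`, consider a sphere at the origin:

```cpp
#include <cmath>
#include <initializer_list>
#include <optional>

// Illustrative types only: pbrt's real Intersect() returns a
// ShapeIntersection holding a full SurfaceInteraction.
struct SimpleHit { float tHit; };

struct SimpleSphere {  // sphere of the given radius at the origin
    float radius;

    // Return the closest intersection in (0, tMax], if any.
    std::optional<SimpleHit> Intersect(const float o[3], const float d[3],
                                       float tMax) const {
        // Solve the quadratic |o + t d|^2 = radius^2
        float a = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
        float b = 2 * (o[0] * d[0] + o[1] * d[1] + o[2] * d[2]);
        float c = o[0] * o[0] + o[1] * o[1] + o[2] * o[2] - radius * radius;
        float disc = b * b - 4 * a * c;
        if (disc < 0) return std::nullopt;
        float rootDisc = std::sqrt(disc);
        // Consider the nearer root first so the closest hit is reported.
        for (float t : {(-b - rootDisc) / (2 * a), (-b + rootDisc) / (2 * a)})
            if (t > 0 && t <= tMax) return SimpleHit{t};
        return std::nullopt;
    }

    // Predicate form: only answers whether an intersection exists.
    bool IntersectP(const float o[3], const float d[3], float tMax) const {
        // A real shape would typically use a cheaper test that skips
        // computing the intersection details.
        return Intersect(o, d, tMax).has_value();
    }
};
```

Note how both methods respect `tMax`: a hit beyond the ray's endpoint is not an intersection at all.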

### 6.1.4 Intersection Coordinate Spaces

For some shapes, intersection routines are most naturally expressed in
their object space. For example, the following `Sphere` shape
computes the intersection with a sphere of a given radius positioned at the
origin. The sphere being at the origin allows various
simplifications to the intersection algorithm. Other shapes, like the
`Triangle`, transform their representation to rendering space and
perform intersection tests there.

Shapes like `Sphere` that operate in object space must transform the
specified ray to object space and then transform any intersection results
back to rendering space. Most of this is handled easily using associated
methods in the `Transform` class that were introduced in
Section 3.10, though a natural question to ask is,
“What effect does the object-from-rendering-space transformation have on the correct
parametric distance to return?” The intersection method has found a
parametric distance to the intersection for the object-space ray, which
may have been translated, rotated, scaled, or worse when it was transformed
from rendering space.

Using the properties of transformations, it is possible to show that the $t$ distance to the intersection is unaffected by the transformation. Consider a rendering-space ray $r$ with associated origin $\mathbf{o}$ and direction $\mathbf{d}$. Given an object-from-rendering-space transformation matrix $\mathbf{M}$, we can then find the object-space ray with origin $\mathbf{M}\mathbf{o}$ and direction $\mathbf{M}\mathbf{d}$.

If the ray–shape intersection algorithm finds an object-space intersection at a distance $t$ along the ray, then the object-space intersection point is

$$\mathbf{p}_{\mathrm{obj}} = \mathbf{M}\mathbf{o} + t\,\mathbf{M}\mathbf{d}.$$

Now consider the rendering-space intersection point $\mathbf{p}$ that is found by applying $\mathbf{M}$'s inverse to both sides of that equation:

$$\mathbf{p} = \mathbf{M}^{-1}\mathbf{p}_{\mathrm{obj}} = \mathbf{M}^{-1}\left(\mathbf{M}\mathbf{o} + t\,\mathbf{M}\mathbf{d}\right) = \mathbf{o} + t\,\mathbf{d}.$$

Therefore, the $t$ value that was computed in object space is the correct value for the intersection
point in rendering space as well. Note that if the object-space ray’s
direction had been normalized after the transformation, then this would no
longer be the case and a correction factor related to the unnormalized
ray’s length would be needed. This is one reason that `pbrt` does not
normalize object-space rays’ directions after transformation.

### 6.1.5 Sidedness

Many rendering systems, particularly those based on scanline or
$z$-buffer algorithms, support the concept of shapes being
“one-sided”—the shape is visible if seen from the front but disappears
when viewed from behind. In particular, if a geometric object is closed
and always viewed from the outside, then the backfacing parts of it can be
discarded without changing the resulting image. This optimization can
substantially improve the speed of these types of hidden surface removal
algorithms. The potential for improved performance is reduced when using
this technique with ray tracing, however, since it is often necessary to
perform the ray–object intersection before determining the surface normal
to do the backfacing test. Furthermore, this feature can lead to a
physically inconsistent scene description if one-sided objects are not in
fact closed. For example, a surface might block light when a shadow ray is
traced from a light source to a point on another surface, but not if the
shadow ray is traced in the other direction. For all of these reasons,
`pbrt` does not support this feature.

### 6.1.6 Area

In `pbrt`, area lights are defined by attaching an emission profile to a
`Shape`. To use `Shape`s as area lights, it is
necessary that shapes be able to return their surface area in
rendering space.

### 6.1.7 Sampling

A few methods are necessary to sample points on the surface of shapes in
order to use them as emitters. Additional `Shape` methods make this
possible.

There are two shape sampling methods, both named `Sample()`. The
first chooses points on the surface of the shape using a sampling
distribution with respect to surface area and returns the local geometric
information about the sampled point in a `ShapeSample`. The provided
sample value `u`, a uniform sample in $[0, 1)^2$, should
be used to determine the point on the shape.

The `ShapeSample` structure that is returned stores an
`Interaction` corresponding to a sampled point on the surface as well
as the probability density with respect to surface area on the shape for
sampling that point.
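As a sketch of this pattern with hypothetical types (pbrt's `ShapeSample` stores a full `Interaction`, not a bare point), here is uniform area sampling of an origin-centered sphere:

```cpp
#include <cmath>

// Map a uniform u in [0,1)^2 to a point on a sphere of the given radius
// with a uniform density in surface area, reporting pdf = 1 / Area.
// Illustrative types only, not pbrt's interface.
struct SimpleShapeSample {
    float p[3];  // sampled point on the surface
    float pdf;   // density with respect to surface area
};

SimpleShapeSample SampleSphereByArea(float radius, float u0, float u1) {
    const float kPi = 3.14159265358979f;
    // Uniform direction on the unit sphere: z uniform in [-1, 1],
    // phi uniform in [0, 2 pi)
    float z = 1 - 2 * u0;
    float r = std::sqrt(std::fmax(0.f, 1 - z * z));
    float phi = 2 * kPi * u1;
    float area = 4 * kPi * radius * radius;
    return {{radius * r * std::cos(phi), radius * r * std::sin(phi),
             radius * z},
            1 / area};
}
```

Because the mapping is uniform in area, the returned density is constant over the whole surface.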

Shapes must also provide an associated `PDF()` method that returns the
probability density for sampling the specified point on the shape that
corresponds to the given `Interaction`. This method should only be
called with interactions that are on the shape’s surface. Although
`Sample()` already returns the probability density for the point it
samples, this method is useful when using multiple importance sampling, in
which case it is necessary to compute the probability density for samples
generated using other sampling techniques. An important detail is that
implementations are allowed to assume that the provided point is on their
surface; callers are responsible for ensuring that this is the case.

The second shape sampling method takes a reference point from which the shape is being viewed. This method is particularly useful for lighting, since the caller can pass in the point to be lit and allow shape implementations to ensure that they only sample the portion of the shape that is potentially visible from that point.

Unlike the first `Shape` sampling method, which generates points on the
shape according to a probability density with respect to surface area on
the shape, the second one uses a density with respect to solid angle from
the reference point. This difference stems from the fact that the area
light sampling routines evaluate the direct lighting integral as an
integral over directions from the reference point—expressing these
sampling densities with respect to solid angle at the point is more
convenient.
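The standard change of variables between the two densities multiplies by the squared distance and divides by the cosine term; a sketch with illustrative names (this is the textbook conversion, not pbrt's API):

```cpp
#include <cmath>

// Convert a pdf with respect to surface area at a sampled point into a
// pdf with respect to solid angle at the reference point: multiply by
// the squared distance between the points and divide by the absolute
// cosine of the angle between the surface normal at the sample and the
// direction back toward the reference point.
float AreaPdfToSolidAnglePdf(float pdfArea, float distSquared,
                             float cosTheta) {
    if (cosTheta == 0) return 0;  // grazing sample subtends no solid angle
    return pdfArea * distSquared / std::fabs(cosTheta);
}
```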

Information about the reference point and its geometric and shading normals
is provided by the `ShapeSampleContext` structure. The reference
point position is specified using the `Point3fi` class, which can
represent the numerical uncertainty in a ray intersection point computed
using floating-point arithmetic. Discussion of related
topics is in Section 6.8.
For points in participating media that are not associated with a surface,
the normal and shading normal are left with their default values of
$(0, 0, 0)$.

`ShapeSampleContext` provides a variety of convenience constructors
that allow specifying the member variable values directly or from various
types of `Interaction`.

For code that does not need to be aware of numeric error in the
intersection point, a method provides it as a regular `Point3`.

A second `PDF()` method comes along with this sampling approach. It
returns the shape’s probability of sampling a point on the light such
that the incident direction at the reference point is `wi`. As
with the corresponding sampling method, this density should be with respect
to solid angle at the reference point. As with the other `Shape`
`PDF()` method, this should only be called for a direction that is
known to intersect the shape from the reference point; as such,
implementations are not responsible for checking that case.

Some of the `PDF()` method implementations will need to trace a ray
from the reference point in the direction `wi` to see if it intersects the
shape. The following `ShapeSampleContext` methods should be used to compute
the ray's origin or the ray itself, rather than using the point returned by
`ShapeSampleContext::p()`. This, too, stems from a subtlety related to
handling numeric error in intersection points. The implementation of these
methods and discussion of the underlying issues can be found in
Section 6.8.6.