10.2 Texture Coordinate Generation
Almost all the textures in this chapter are functions that take a 2D or 3D coordinate and return a texture value. Sometimes there are obvious ways to choose these texture coordinates; for parametric surfaces, such as the quadrics in Chapter 6, there is a natural 2D $(u, v)$ parameterization of the surface, and for all types of surfaces the shading point $\mathrm{p}$ is a natural choice for a 3D coordinate.
In other cases, there is no natural parameterization, or the natural parameterization may be undesirable. For instance, the $(u, v)$ values near the poles of spheres are severely distorted. Therefore, this section introduces classes that provide an interface to different techniques for generating these parameterizations, as well as a number of implementations of them.
The Texture implementations later in this chapter store a tagged pointer to a 2D or 3D mapping function as appropriate and use it to compute the texture coordinates at each point at which they are evaluated. Thus, it is easy to add new mappings to the system without having to modify all the Texture implementations, and different mappings can be used for different textures associated with the same surface. In pbrt, we will use the convention that 2D texture coordinates are denoted by $(s, t)$; this helps make clear the distinction between the intrinsic $(u, v)$ parameterization of the underlying surface and the possibly different coordinate values used for texturing.
TextureMapping2D defines the interface for 2D texture coordinate generation. It is defined in the file base/texture.h. The implementations of the texture mapping classes are in textures.h and textures.cpp.
The TextureMapping2D interface consists of a single method, Map(). It is given a TextureEvalContext that stores relevant geometric information at the shading point and returns a small structure, TexCoord2D, that stores the texture coordinates and estimates for the change in $(s, t)$ with respect to pixel $x$ and $y$ coordinates so that textures that use the mapping can determine the sampling rate and filter accordingly.
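In outline, the interface and the returned structure look like the following sketch (the full definitions are in base/texture.h; TaggedPointer is pbrt's mechanism for dynamic dispatch among a fixed set of types):

```cpp
// Texture coordinates and their screen-space derivative estimates
struct TexCoord2D {
    Point2f st;
    Float dsdx, dsdy, dtdx, dtdy;
};

class TextureMapping2D
    : public TaggedPointer<UVMapping, SphericalMapping, CylindricalMapping,
                           PlanarMapping> {
  public:
    using TaggedPointer::TaggedPointer;
    TexCoord2D Map(TextureEvalContext ctx) const;
};
```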
In previous versions of pbrt, the Map() interface was defined to take a complete SurfaceInteraction; the TextureEvalContext structure did not exist. For this version, we have tightened up the interface to only include specific values that are useful for texture coordinate generation. This change was largely motivated by the GPU rendering path: with the CPU renderer, all the relevant information is already at hand in the functions that call the Map() methods; most likely the SurfaceInteraction is already in the CPU cache. On the GPU, the necessary values have to be read from off-chip memory. TextureEvalContext makes it possible for the GPU renderer to only read the necessary values from memory, which in turn has measurable performance benefits.
TextureEvalContext provides three constructors, not included here. Two initialize the various fields using corresponding values from either an Interaction or a SurfaceInteraction, and the third allows specifying them directly.
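A sketch of its geometric fields follows (the actual structure also includes the constructors mentioned above):

```cpp
struct TextureEvalContext {
    Point3f p;                 // shading point
    Vector3f dpdx, dpdy;       // screen-space derivatives of p
    Normal3f n;                // surface normal
    Point2f uv;                // surface (u, v) coordinates
    Float dudx = 0, dudy = 0;  // screen-space derivatives of u
    Float dvdx = 0, dvdy = 0;  // ... and of v
};
```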
10.2.1 (u, v) Mapping
UVMapping uses the $(u, v)$ coordinates in the TextureEvalContext to compute the texture coordinates, optionally scaling and offsetting their values in each dimension.
The scale-and-shift computation to compute $(s, t)$ coordinates is straightforward:
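In code, it amounts to something like the following sketch, assuming the scales and offsets are stored in member variables su, sv, du, and dv:

```cpp
// Scale and offset the surface (u, v) coordinates to get (s, t)
Point2f st(su * ctx.uv[0] + du, sv * ctx.uv[1] + dv);
```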
For a general 2D mapping function $f(u, v) \to (s, t)$, the screen-space derivatives of $s$ and $t$ are given by the chain rule:

$$\frac{\partial s}{\partial x} = \frac{\partial u}{\partial x}\frac{\partial s}{\partial u} + \frac{\partial v}{\partial x}\frac{\partial s}{\partial v} \qquad \frac{\partial t}{\partial x} = \frac{\partial u}{\partial x}\frac{\partial t}{\partial u} + \frac{\partial v}{\partial x}\frac{\partial t}{\partial v} \tag{10.6}$$

and likewise for the derivatives with respect to $y$. For UVMapping, where $s = s_u u + d_u$ and $t = s_v v + d_v$ with scales $(s_u, s_v)$ and offsets $(d_u, d_v)$, Equation (10.6) simplifies to

$$\frac{\partial s}{\partial x} = s_u \frac{\partial u}{\partial x} \qquad \frac{\partial s}{\partial y} = s_u \frac{\partial u}{\partial y} \qquad \frac{\partial t}{\partial x} = s_v \frac{\partial v}{\partial x} \qquad \frac{\partial t}{\partial y} = s_v \frac{\partial v}{\partial y} \tag{10.7}$$
We will skip past the straightforward fragment that implements Equation (10.7) to initialize dsdx, dsdy, dtdx, and dtdy.
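For reference, that fragment amounts to a sketch like this, again assuming the su and sv scale members:

```cpp
// Equation (10.7): the scales multiply the (u, v) screen-space derivatives
Float dsdx = su * ctx.dudx, dsdy = su * ctx.dudy;
Float dtdx = sv * ctx.dvdx, dtdy = sv * ctx.dvdy;
```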
10.2.2 Spherical Mapping
Another useful mapping effectively wraps a sphere around the object. Each point is projected along the vector from the sphere's center through the point onto the sphere's surface. Since this mapping is based on spherical coordinates, Equation (3.8) can be applied, with the angles it returns remapped to $[0, 1]$:

$$f(\mathrm{p}) = \left( \frac{\theta}{\pi},\ \frac{\phi}{2\pi} \right) \tag{10.8}$$
Figure 10.9 shows the use of this mapping with an object in the Kroken scene.
The SphericalMapping further stores a transformation that is applied to points before this mapping is performed; this effectively allows the mapping sphere to be arbitrarily positioned and oriented with respect to the object.
The Map() function starts by computing the texture-space point pt.
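Assuming the transformation is stored in a member named textureFromRender, that first step is simply:

```cpp
Point3f pt = textureFromRender(ctx.p);
```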
For a mapping function $f(\mathrm{p}) \to (s, t)$ based on a 3D point $\mathrm{p}$, the generalization of Equation (10.6) is

$$\frac{\partial s}{\partial x} = \frac{\partial \mathrm{p}}{\partial x} \cdot \frac{\partial s}{\partial \mathrm{p}} \quad \frac{\partial s}{\partial y} = \frac{\partial \mathrm{p}}{\partial y} \cdot \frac{\partial s}{\partial \mathrm{p}} \quad \frac{\partial t}{\partial x} = \frac{\partial \mathrm{p}}{\partial x} \cdot \frac{\partial t}{\partial \mathrm{p}} \quad \frac{\partial t}{\partial y} = \frac{\partial \mathrm{p}}{\partial y} \cdot \frac{\partial t}{\partial \mathrm{p}} \tag{10.9}$$

For the spherical mapping of Equation (10.8), differentiating $s = \theta/\pi$ and $t = \phi/(2\pi)$ with respect to $\mathrm{p} = (x, y, z)$ and simplifying gives

$$\frac{\partial s}{\partial \mathrm{p}} = \frac{1}{\pi (x^2 + y^2 + z^2)} \left( \frac{xz}{\sqrt{x^2 + y^2}},\ \frac{yz}{\sqrt{x^2 + y^2}},\ -\sqrt{x^2 + y^2} \right) \qquad \frac{\partial t}{\partial \mathrm{p}} = \frac{1}{2\pi (x^2 + y^2)} \left( -y,\ x,\ 0 \right) \tag{10.10}$$
The gradients $\partial s/\partial \mathrm{p}$ and $\partial t/\partial \mathrm{p}$ are computed using the texture-space position pt.
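A sketch of that computation, matching Equation (10.10) (the variable names are illustrative):

```cpp
Float x2y2 = Sqr(pt.x) + Sqr(pt.y);
Float sqrtx2y2 = std::sqrt(x2y2);
// Gradient of s = theta / pi with respect to p
Vector3f dsdp = Vector3f(pt.x * pt.z / sqrtx2y2, pt.y * pt.z / sqrtx2y2,
                         -sqrtx2y2) / (Pi * (x2y2 + Sqr(pt.z)));
// Gradient of t = phi / (2 pi) with respect to p
Vector3f dtdp = Vector3f(-pt.y, pt.x, 0) / (2 * Pi * x2y2);
```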
The final differentials are then found using the four dot products from Equation (10.9).
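In code, after transforming the position derivatives into texture space, this might look like:

```cpp
Vector3f dpdx = textureFromRender(ctx.dpdx), dpdy = textureFromRender(ctx.dpdy);
Float dsdx = Dot(dsdp, dpdx), dsdy = Dot(dsdp, dpdy);
Float dtdx = Dot(dtdp, dpdx), dtdy = Dot(dtdp, dpdy);
```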
Finally, previously defined spherical geometry utility functions compute the mapping of Equation (10.8).
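Using SphericalTheta() and SphericalPhi() from Chapter 3, a sketch of this final step:

```cpp
// Map the direction from the sphere's center to (s, t) via Equation (10.8)
Vector3f vec = Normalize(pt - Point3f(0, 0, 0));
Point2f st(SphericalTheta(vec) * InvPi, SphericalPhi(vec) * Inv2Pi);
return TexCoord2D{st, dsdx, dsdy, dtdx, dtdy};
```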
10.2.3 Cylindrical Mapping
The cylindrical mapping effectively wraps a cylinder around the object and then uses the cylinder's parameterization: the azimuthal angle around the cylinder's axis gives $s$, and the height along the axis gives $t$,

$$f(\mathrm{p}) = \left( \frac{\pi + \operatorname{atan2}(y, x)}{2\pi},\ z \right).$$
See Figure 10.10 for an example of its use.
Note that the $t$ texture coordinate it returns is not necessarily between 0 and 1; the mapping should either be scaled in $z$ so that the object being textured has $t \in [0, 1]$, or the texture being used should return results for $t$ values outside that range that match the desired result.
CylindricalMapping also supports a transformation to orient the mapping cylinder.
Because the $s$ texture coordinate is computed from the azimuthal angle, in the same way as the spherical mapping's $t$ coordinate, the cylindrical mapping's $\partial s/\partial \mathrm{p}$ matches the sphere's $\partial t/\partial \mathrm{p}$ in Equation (10.10). The partial derivative in $t$ can easily be seen to be $\partial t/\partial \mathrm{p} = (0, 0, 1)$. Since the cylindrical mapping function and derivative computation are only slight variations on the spherical mapping's, we will not include the implementation of its Map() function here.
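Nevertheless, a minimal sketch of it, under the same assumptions as the spherical version above, might read:

```cpp
TexCoord2D CylindricalMapping::Map(TextureEvalContext ctx) const {
    Point3f pt = textureFromRender(ctx.p);
    // ds/dp is the azimuthal gradient; dt/dp is the constant (0, 0, 1)
    Float x2y2 = Sqr(pt.x) + Sqr(pt.y);
    Vector3f dsdp = Vector3f(-pt.y, pt.x, 0) / (2 * Pi * x2y2), dtdp(0, 0, 1);
    Vector3f dpdx = textureFromRender(ctx.dpdx), dpdy = textureFromRender(ctx.dpdy);
    Point2f st((Pi + std::atan2(pt.y, pt.x)) * Inv2Pi, pt.z);
    return TexCoord2D{st, Dot(dsdp, dpdx), Dot(dsdp, dpdy),
                      Dot(dtdp, dpdx), Dot(dtdp, dpdy)};
}
```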
10.2.4 Planar Mapping
Another classic mapping method is planar mapping. The point is effectively projected onto a plane; a 2D parameterization of the plane then gives texture coordinates for the point. For example, a point $\mathrm{p}$ might be projected onto the $z = 0$ plane to yield texture coordinates given by $s = p_x$ and $t = p_y$.
One way to define such a parameterized plane is with two nonparallel vectors $\mathbf{v}_s$ and $\mathbf{v}_t$ and offsets $d_s$ and $d_t$. The texture coordinates are given by the coordinates of the point with respect to the plane's coordinate system, which are computed by taking the dot product of the point's position vector with each of $\mathbf{v}_s$ and $\mathbf{v}_t$ and then adding the corresponding offset:

$$f(\mathrm{p}) = (\mathrm{p} \cdot \mathbf{v}_s + d_s,\ \mathrm{p} \cdot \mathbf{v}_t + d_t).$$
A straightforward constructor, not included here, initializes the following member variables.
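A plausible set of such members, consistent with the parameterization above (the names are illustrative):

```cpp
Vector3f vs, vt;       // nonparallel vectors spanning the mapping plane
Float ds = 0, dt = 0;  // offsets added to the s and t coordinates
```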
The planar mapping differentials can be computed directly using the partial derivatives of the mapping function, which are easily found. For example, the partial derivative of the $s$ texture coordinate with respect to screen-space $x$ is just $\partial s/\partial x = \mathbf{v}_s \cdot \partial \mathrm{p}/\partial x$.
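Putting the pieces together, a sketch of the corresponding Map() function, under the assumptions above:

```cpp
TexCoord2D PlanarMapping::Map(TextureEvalContext ctx) const {
    Vector3f vec(ctx.p);  // position vector of the shading point
    // Differentials via the constant gradients vs and vt
    Float dsdx = Dot(vs, ctx.dpdx), dsdy = Dot(vs, ctx.dpdy);
    Float dtdx = Dot(vt, ctx.dpdx), dtdy = Dot(vt, ctx.dpdy);
    Point2f st(ds + Dot(vec, vs), dt + Dot(vec, vt));
    return TexCoord2D{st, dsdx, dsdy, dtdx, dtdy};
}
```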
10.2.5 3D Mapping
We will also define a TextureMapping3D class, which specifies the interface for generating 3D texture coordinates.
The Map() method it specifies returns a 3D point and partial derivative vectors in the form of a TexCoord3D structure.
TexCoord3D parallels TexCoord2D, storing both the point and its screen-space derivatives.
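In outline, TexCoord3D might be defined as:

```cpp
struct TexCoord3D {
    Point3f p;            // 3D texture coordinate
    Vector3f dpdx, dpdy;  // its screen-space derivatives
};
```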
The natural 3D mapping takes the rendering-space coordinate of the point and applies a linear transformation to it. This will often be a transformation that takes the point back to the primitive’s object space. Such a mapping is implemented by the PointTransformMapping class.
Because it applies a linear transformation, the differential change in texture coordinates can be found by applying the same transformation to the partial derivatives of position.
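A sketch of the whole Map() function then amounts to three applications of the transformation (assuming a textureFromRender member):

```cpp
TexCoord3D PointTransformMapping::Map(TextureEvalContext ctx) const {
    return TexCoord3D{textureFromRender(ctx.p), textureFromRender(ctx.dpdx),
                      textureFromRender(ctx.dpdy)};
}
```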