6.8 Managing Rounding Error
Thus far, we have been discussing ray–shape intersection algorithms with respect to idealized arithmetic operations based on the real numbers. This approach has gotten us far, although the fact that computers can only represent finite quantities and therefore cannot actually represent all the real numbers is important. In place of real numbers, computers use floating-point numbers, which have fixed storage requirements. However, error may be introduced each time a floating-point operation is performed, since the result may not be representable in the designated amount of memory.
The accumulation of this error has several implications for the accuracy of intersection tests. First, it is possible that it will cause valid intersections to be missed completely—for example, if a computed intersection’s value is negative even though the precise value is positive. Furthermore, computed ray–shape intersection points may be above or below the actual surface of the shape. This leads to a problem: when new rays are traced starting from computed intersection points for shadow rays and reflection rays, if the ray origin is below the actual surface, we may find an incorrect reintersection with the surface. Conversely, if the origin is too far above the surface, shadows and reflections may appear detached. (See Figure 6.38.)
Typical practice to address this issue in ray tracing is to offset spawned rays by a fixed "ray epsilon" value, ignoring any intersections along the ray $\mathbf{p} + t\mathbf{d}$ closer than some $t_{\min}$ value. Figure 6.39 shows why this approach requires fairly high $t_{\min}$ values to work effectively: if the spawned ray is oblique to the surface, incorrect ray intersections may occur quite some distance from the ray origin. Unfortunately, large $t_{\min}$ values cause ray origins to be relatively far from the original intersection points, which in turn can cause valid nearby intersections to be missed, leading to loss of fine detail in shadows and reflections.
In this section, we will introduce the ideas underlying floating-point arithmetic and describe techniques for analyzing the error in floating-point computations. We will then apply these methods to the ray–shape algorithms introduced earlier in this chapter and show how to compute ray intersection points with bounded error. This will allow us to conservatively position ray origins so that incorrect self-intersections are never found, while keeping ray origins extremely close to the actual intersection point so that incorrect misses are minimized. In turn, no additional “ray epsilon” values are needed.
6.8.1 Floating-Point Arithmetic
Computation must be performed on a finite representation of numbers that fits in a finite amount of memory; the infinite set of real numbers cannot be represented on a computer. One such finite representation is fixed point, where given a 16-bit integer, for example, one might map it to positive real numbers by dividing by 256. This would allow us to represent the range $[0, 255 + 255/256]$ with equal spacing of $1/256$ between values. Fixed-point numbers can be implemented efficiently using integer arithmetic operations (a property that made them popular on early PCs that did not support floating-point computation), but they suffer from a number of shortcomings; among them, the maximum number they can represent is limited, and they are not able to accurately represent very small numbers near zero.
An alternative representation for real numbers on computers is floating-point numbers. These are based on representing numbers with a sign, a significand, and an exponent: essentially, the same representation as scientific notation but with a fixed number of digits devoted to significand and exponent. (In the following, we will assume base-2 digits exclusively.) This representation makes it possible to represent and perform computations on numbers with a wide range of magnitudes while using a fixed amount of storage.
Programmers using floating-point arithmetic are generally aware that floating-point values may be inaccurate; this understanding sometimes leads to a belief that floating-point arithmetic is unpredictable. In this section we will see that floating-point arithmetic has a carefully designed foundation that in turn makes it possible to compute conservative bounds on the error introduced in a particular computation. For ray-tracing calculations, this error is often surprisingly small.
Modern CPUs and GPUs nearly ubiquitously implement a model of floating-point arithmetic based on a standard promulgated by the Institute of Electrical and Electronics Engineers (1985, 2008). (Henceforth when we refer to floats, we will specifically be referring to 32-bit floating-point numbers as specified by IEEE 754.) The IEEE 754 technical standard specifies the format of floating-point numbers in memory as well as specific rules for precision and rounding of floating-point computations; it is these rules that make it possible to reason rigorously about the error present in a computed floating-point value.
Floating-Point Representation
The IEEE standard specifies that 32-bit floats are represented with a sign bit, 8 bits for the exponent, and 23 bits for the significand. The exponent stored in a float ranges from 0 to 255. We will denote it by $e_b$, with the subscript indicating that it is biased; the actual exponent used in computation, $e$, is computed as
$$e = e_b - 127.$$
The significand actually has 24 bits of precision when a normalized floating-point value is stored. When a number expressed with significand and exponent is normalized, there are no leading 0s in the significand. In binary, this means that the leading digit of the significand must be one; in turn, there is no need to store this value explicitly. Thus, the implicit leading 1 digit together with the 23 digits encoding the fractional part of the significand gives a total of 24 bits of precision.
Given a sign $s$, significand $m$, and biased exponent $e_b$, the corresponding floating-point value is
$$(-1)^s \times 1.m \times 2^{e_b - 127}.$$
For example, with a normalized significand, the floating-point number 6.5 is written as $1.101_2 \times 2^2$, where the 2 subscript denotes a base-2 value. (If non-whole binary numbers are not immediately intuitive, note that the first digit to the right of the radix point contributes $2^{-1} = 1/2$, the second $2^{-2} = 1/4$, and so forth.) Thus, we have
$$(-1)^0 \times 1.101_2 \times 2^{129 - 127} = 6.5,$$
so $e_b = 129 = 10000001_2$ and $m = 10100000000000000000000_2$.
Floats are laid out in memory with the sign bit at the most significant bit of the 32-bit value (with negative signs encoded with a 1 bit), then the exponent, and the significand. Thus, for the value 6.5 the binary in-memory representation of the value is
$$0\ 10000001\ 10100000000000000000000_2 = \texttt{0x40d00000}.$$
Similarly, the floating-point value 1.0 has $m = 0\ldots0_2$ and $e = 0$, so $e_b = 127$ and its binary representation is:
$$0\ 01111111\ 00000000000000000000000_2 = \texttt{0x3f800000}.$$
This hexadecimal number is a value worth remembering, as it often comes up in memory dumps when debugging graphics programs.
An implication of this representation is that the spacing between representable floats between two adjacent powers of two is uniform throughout the range. (It corresponds to increments of the significand by one.) In a range $[2^e, 2^{e+1})$, the spacing is
$$2^{e-23}.\tag{6.18}$$
Thus, for floating-point numbers between 1 and 2, $e = 0$, and the spacing between floating-point values is $2^{-23} \approx 1.19209 \times 10^{-7}$. This spacing is also referred to as the magnitude of a unit in last place ("ulp"); note that the magnitude of an ulp is determined by the floating-point value that it is with respect to—ulps are relatively larger at numbers with larger magnitudes than they are at numbers with smaller magnitudes.
As we have described the representation so far, it is impossible to exactly represent zero as a floating-point number. This is obviously an unacceptable state of affairs, so the minimum exponent $e = -127$, or $e_b = 0$, is set aside for special treatment. With this exponent, the floating-point value is interpreted as not having the implicit leading 1 bit in the significand, which means that a significand of all 0 bits results in
$$0.0\ldots0_2 \times 2^{-126} = 0.$$
Eliminating the leading 1 significand bit also makes it possible to represent denormalized numbers: if the leading 1 were always present, then the smallest 32-bit float would be
$$1.0\ldots0_2 \times 2^{-126} \approx 1.18 \times 10^{-38}.$$
Without the leading 1 bit, the minimum value is
$$0.00\ldots1_2 \times 2^{-126} = 2^{-149} \approx 1.4 \times 10^{-45}.$$
(The $2^{-126}$ exponent is used because denormalized numbers are encoded with $e_b = 0$ but are interpreted as if $e_b = 1$ so that there is no excess gap between them and the adjacent smallest regular floating-point number.) Providing some capability to represent these small values can make it possible to avoid needing to round very small values to zero.
Note that there is both a “positive” and “negative” zero value with this representation. This detail is mostly transparent to the programmer. For example, the standard guarantees that the comparison -0.0 == 0.0 evaluates to true, even though the in-memory representations of these two values are different. Conveniently, a floating-point zero value with an unset sign bit is represented by the value 0 in memory.
The maximum exponent, $e_b = 255$, is also reserved for special treatment. Therefore, the largest regular floating-point value that can be represented has $e_b = 254$ (or $e = 127$) and is approximately
$$3.4028 \times 10^{38}.$$
With $e_b = 255$, if the significand bits are all 0, the value corresponds to positive or negative infinity, according to the sign bit. Infinite values result when performing computations like $1/0$ in floating point, for example. Arithmetic operations with infinity and a noninfinite value usually result in infinity, though dividing a finite value by infinity gives 0. For comparisons, positive infinity is larger than any noninfinite value and similarly for negative infinity.
The Infinity constant is initialized to be the “infinity” floating-point value. We make it available in a separate constant so that code that uses its value does not need to use the wordy C++ standard library call.
With $e_b = 255$, nonzero significand bits correspond to special "not a number" (NaN) values, which result from invalid operations like taking the square root of a negative number or trying to compute $0/0$. NaNs propagate through computations: any arithmetic operation where one of the operands is a NaN itself always returns NaN. Thus, if a NaN emerges from a long chain of computations, we know that something went awry somewhere along the way. In debug builds, pbrt has many assertion statements that check for NaN values, as we almost never expect them to come up in the regular course of events. Any comparison with a NaN value returns false; thus, checking for !(x == x) serves to check if a value is not a number.
By default, the majority of floating-point computation in pbrt uses 32-bit floats. However, as discussed in Section 1.3.3, it is possible to configure it to use 64-bit double-precision values instead. In addition to the sign bit, doubles allocate 11 bits to the exponent and 52 to the significand. pbrt also supports 16-bit floats (which are known as halfs) as an in-memory representation for floating-point values stored at pixels in images. Halfs use 5 bits for the exponent and 10 for the significand. (A convenience Half class, not discussed further in the text, provides capabilities for working with halfs and converting to and from 32-bit floats.)
Arithmetic Operations
IEEE 754 provides important guarantees about the properties of floating-point arithmetic: specifically, it guarantees that addition, subtraction, multiplication, division, and square root give the same results given the same inputs and that these results are the floating-point number that is closest to the result of the underlying computation if it had been performed in infinite-precision arithmetic. It is remarkable that this is possible on finite-precision digital computers at all; one of the achievements in IEEE 754 was the demonstration that this level of accuracy is possible and can be implemented fairly efficiently in hardware.
Using circled operators to denote floating-point arithmetic operations and $\mathrm{sqrt}$ for floating-point square root, these accuracy guarantees can be written as:
$$a \oplus b = \mathrm{round}(a + b)$$
$$a \ominus b = \mathrm{round}(a - b)$$
$$a \otimes b = \mathrm{round}(a \times b)$$
$$a \oslash b = \mathrm{round}(a / b)$$
$$\mathrm{sqrt}(a) = \mathrm{round}(\sqrt{a}),\tag{6.19}$$
where $\mathrm{round}(x)$ indicates the result of rounding a real number to the closest floating-point value and where $\mathrm{FMA}(a, b, c) = \mathrm{round}(a \times b + c)$ denotes the fused multiply add operation, which only rounds once. It thus gives better accuracy than computing $\mathrm{round}(\mathrm{round}(a \times b) + c)$.
This bound on the rounding error can also be represented with an interval of real numbers: for example, for addition, we can say that the rounded result is within an interval
$$a \oplus b = \mathrm{round}(a + b) \in (a + b)(1 \pm \epsilon) = [(a+b)(1-\epsilon), (a+b)(1+\epsilon)]$$
for some $\epsilon$. The amount of error introduced from this rounding can be no more than half the floating-point spacing at $a + b$—if it were more than half the floating-point spacing, then it would be possible to round to a different floating-point number with less error (Figure 6.40).
For 32-bit floats, we can bound the floating-point spacing at $a + b$ from above using Equation (6.18) (i.e., an ulp at that value) by $|a + b|\,2^{-23}$, so half the spacing is bounded from above by $|a + b|\,2^{-24}$ and so $|\epsilon| \le 2^{-24}$. This bound is the machine epsilon, $\epsilon_m$. For 32-bit floats, $\epsilon_m = 2^{-24} \approx 5.960464 \times 10^{-8}$.
Thus, we have
$$a \oplus b = \mathrm{round}(a + b) \in (a+b)(1 \pm \epsilon_m) = [(a+b)(1-\epsilon_m), (a+b)(1+\epsilon_m)].\tag{6.20}$$
Analogous relations hold for the other arithmetic operators and the square root operator.
A number of useful properties follow directly from Equation (6.19). For a floating-point number $x$,
- $1 \otimes x = x$.
- $x \oslash x = 1$.
- $x \oplus 0 = x$.
- $x \ominus x = 0$.
- $2 \otimes x$ and $x \oslash 2$ are exact; no rounding is performed to compute the final result. More generally, any multiplication by or division by a power of two gives an exact result (assuming there is no overflow or underflow).
- $x \oslash 2^i = x \otimes 2^{-i}$ for all integer $i$, assuming $2^i$ does not overflow.
All of these properties follow from the principle that the result must be the nearest floating-point value to the actual result; when the result can be represented exactly, the exact result must be computed.
Utility Routines
A few basic utility routines will be useful in the following. First, we define our own IsNaN() function to check for NaN values. It comes with the baggage of a use of C++’s enable_if construct to declare its return type in a way that requires that this function only be called with floating-point types.
We also define IsNaN() for integer-based types; it trivially returns false, since NaN is not representable in those types. One might wonder why we have bothered with enable_if and this second definition that tells us something that we already know. One motivation is the templated Tuple2 and Tuple3 classes from Section 3.2, which are used with both Float and int for their element types. Given these two functions, they can freely have assertions that their elements do not store NaN values without worrying about which particular type their elements are.
For similar motivations, we define a pair of IsInf() functions that test for infinity.
Once again, because infinity is not representable with integer types, the integer variant of this function returns false.
A pair of IsFinite() functions check whether a number is neither infinite nor NaN.
Although fused multiply add is available through the standard library, we also provide our own FMA() function.
A separate version for integer types allows calling FMA() from code regardless of the numeric type being used.
For certain low-level operations, it can be useful to be able to interpret a floating-point value in terms of its constituent bits and to convert the bits representing a floating-point value to an actual float or double. A natural approach to this would be to take a pointer to a value to be converted and cast it to a pointer to the other type:
However, modern versions of C++ specify that it is illegal to cast a pointer of one type, float, to a different type, uint32_t. (This restriction allows the compiler to optimize more aggressively in its analysis of whether two pointers may point to the same memory location, which can inhibit storing values in registers.) Another popular alternative, using a union with elements of both types, assigning to one type and reading from the other, is also illegal: the C++ standard says that reading an element of a union different from the last one assigned to is undefined behavior.
Fortunately, as of C++20, the standard library provides a std::bit_cast function that performs such conversions. Because this version of pbrt only requires C++17, we provide an implementation in the pstd library that is used by the following conversion functions.
(Versions of these functions that convert between double and uint64_t are also available but are similar and are therefore not included here.)
The corresponding integer type with a sufficient number of bits to store pbrt’s Float type is available through FloatBits.
Given the ability to extract the bits of a floating-point value and given the description of their layout in Section 6.8.1, it is easy to extract various useful quantities from a float.
These conversions can be used to implement functions that bump a floating-point value up or down to the next greater or next smaller representable floating-point value. They are useful for some conservative rounding operations that we will need in code to follow. Thanks to the specifics of the in-memory representation of floats, these operations are quite efficient.
There are two important special cases: first, if v is positive infinity, then this function just returns v unchanged. Second, negative zero is skipped forward to positive zero before continuing on to the code that advances the significand. This step must be handled explicitly, since the bit patterns for $-0.0$ and $0.0$ are not adjacent.
Conceptually, given a floating-point value, we would like to increase the significand by one, where if the result overflows, the significand is reset to zero and the exponent is increased by one. Fortuitously, adding one to the in-memory integer representation of a float achieves this: because the exponent lies at the high bits above the significand, adding one to the low bit of the significand will cause a one to be carried all the way up into the exponent if the significand is all ones and otherwise will advance to the next higher significand for the current exponent. (This is yet another example of the careful thought that was applied to the development of the IEEE floating-point specification.) Note also that when the highest representable finite floating-point value’s bit representation is incremented, the bit pattern for positive floating-point infinity is the result.
For negative values, subtracting one from the bit representation similarly advances to the next higher value.
The NextFloatDown() function, not included here, follows the same logic but effectively in reverse. pbrt also provides versions of these functions for doubles.
Error Propagation
Using the guarantees of IEEE floating-point arithmetic, it is possible to develop methods to analyze and bound the error in a given floating-point computation. For more details on this topic, see the excellent book by Higham (2002), as well as Wilkinson’s earlier classic (1994).
Two measurements of error are useful in this effort: absolute and relative. If we perform some floating-point computation and get a rounded result $\tilde{a}$, we say that the magnitude of the difference between $\tilde{a}$ and the result of doing that computation in the real numbers, $a$, is the absolute error, $\delta_a$:
$$\delta_a = |\tilde{a} - a|.$$
Relative error, $\delta_r$, is the ratio of the absolute error to the precise result:
$$\delta_r = \left|\frac{\tilde{a} - a}{a}\right| = \left|\frac{\delta_a}{a}\right|,$$
as long as $a \ne 0$. Using the definition of relative error, we can thus write the computed value $\tilde{a}$ as a perturbation of the exact result $a$:
$$\tilde{a} \in a(1 \pm \delta_r).$$
As a first application of these ideas, consider computing the sum of four numbers, $a$, $b$, $c$, and $d$, represented as floats. If we compute this sum as r = (((a + b) + c) + d), Equation (6.20) gives us
$$(((a \oplus b) \oplus c) \oplus d) \in \left(\left((a+b)(1\pm\epsilon_m) + c\right)(1\pm\epsilon_m) + d\right)(1\pm\epsilon_m) = (a+b)(1\pm\epsilon_m)^3 + c(1\pm\epsilon_m)^2 + d(1\pm\epsilon_m).$$
Because $\epsilon_m$ is small, higher-order powers of $\epsilon_m$ can be bounded by an additional $\epsilon_m$ term, and so we can bound the $(1\pm\epsilon_m)^n$ terms with
$$(1\pm\epsilon_m)^n \le \left(1 \pm (n+1)\epsilon_m\right).\tag{6.21}$$
(As a practical matter, $1 \pm n\epsilon_m$ almost bounds these terms, since higher powers of $\epsilon_m$ get very small very quickly, but the above is a fully conservative bound.)
This bound lets us simplify the result of the addition to:
$$(a+b)(1\pm4\epsilon_m) + c(1\pm3\epsilon_m) + d(1\pm2\epsilon_m) = a+b+c+d + \left[\pm4\epsilon_m(a+b) \pm 3\epsilon_m c \pm 2\epsilon_m d\right].$$
The term in square brackets gives the absolute error: its magnitude is bounded by
$$4\epsilon_m|a+b| + 3\epsilon_m|c| + 2\epsilon_m|d|.\tag{6.22}$$
Thus, if we add four floating-point numbers together with the above parenthesization, we can be certain that the difference between the final rounded result and the result we would get if we added them with infinite-precision real numbers is bounded by Equation (6.22); this error bound is easily computed given specific values of , , , and .
This is a fairly interesting result; we see that the magnitude of $a + b$ makes a relatively large contribution to the error bound, especially compared to $d$. (This result gives a sense for why, if adding a large number of floating-point numbers together, sorting them from small to large magnitudes generally gives a result with a lower final error than an arbitrary ordering.)
Our analysis here has implicitly assumed that the compiler would generate instructions according to the expression used to define the sum. Compilers are required to follow the form of the given floating-point expressions in order to not break carefully crafted computations that may have been designed to minimize round-off error. Here again is a case where certain transformations that would be valid on expressions with integers cannot be safely applied when floats are involved.
What happens if we change the expression to the algebraically equivalent float r = (a + b) + (c + d)? This corresponds to the floating-point computation
$$(a \oplus b) \oplus (c \oplus d) \in (a+b)(1\pm\epsilon_m)^2 + (c+d)(1\pm\epsilon_m)^2.$$
If we use the same process of applying Equation (6.20), expanding out terms, and converting higher-order terms to $1 \pm (n+1)\epsilon_m$ bounds, we get absolute error bounds of
$$3\epsilon_m|a+b| + 3\epsilon_m|c+d|,$$
which are lower than those of the first formulation if $|a+b|$ is relatively large, but possibly higher if $|c+d|$ is relatively large.
This approach to computing error is known as forward error analysis; given inputs to a computation, we can apply a fairly mechanical process that provides conservative bounds on the error in the result. The derived bounds in the result may overstate the actual error—in practice, the signs of the error terms are often mixed, so that there is cancellation when they are added. An alternative approach is backward error analysis, which treats the computed result as exact and finds bounds on perturbations on the inputs that give the same result. This approach can be more useful when analyzing the stability of a numerical algorithm but is less applicable to deriving conservative error bounds on the geometric computations we are interested in here.
The conservative bounding of $(1\pm\epsilon_m)^n$ by $1\pm(n+1)\epsilon_m$ is somewhat unsatisfying since it adds a whole $\epsilon_m$ term purely to conservatively bound the sum of various higher powers of $\epsilon_m$. Higham (2002, Section 3.1) gives an approach to more tightly bound products of error terms. If we have $(1\pm\epsilon_m)^n$, it can be shown that this value is bounded by $1 + \theta_n$, where
$$|\theta_n| \le \frac{n\epsilon_m}{1 - n\epsilon_m},$$
as long as $n\epsilon_m < 1$ (which will certainly be the case for the calculations we consider). Note that the denominator of this expression will be just less than one for reasonable $n$ values, so it just barely increases $n\epsilon_m$ to achieve a conservative bound.
We will denote this bound by $\gamma_n$:
$$\gamma_n = \frac{n\epsilon_m}{1 - n\epsilon_m}.\tag{6.23}$$
The function that computes its value is declared as constexpr so that any invocations with compile-time constants will be replaced with the corresponding floating-point return value.
Using the $\gamma_n$ notation, our bound on the error of the first sum of four values is
$$|a+b|\gamma_3 + |c|\gamma_2 + |d|\gamma_1.$$
An advantage of this approach is that quotients of $(1\pm\epsilon_m)$ terms can also be bounded with the $\gamma$ function. Given
$$\frac{(1\pm\epsilon_m)^m}{(1\pm\epsilon_m)^n},$$
the interval is bounded by $(1\pm\gamma_{m+n})$. Thus, $\gamma$ can be used to collect $\epsilon_m$ terms from both sides of an equality over to one side by dividing them through; this will be useful in some of the following derivations. (Note that because $(1\pm\epsilon_m)$ terms represent intervals, canceling them would be incorrect:
$$\frac{(1\pm\epsilon_m)^m}{(1\pm\epsilon_m)^n} \ne (1\pm\epsilon_m)^{m-n};$$
the $\gamma_{m+n}$ bounds must be used instead.)
Given inputs to some computation that themselves carry some amount of error, it is instructive to see how this error is carried through various elementary arithmetic operations. Given two values, $a(1\pm\gamma_i)$ and $b(1\pm\gamma_j)$, that each carry accumulated error from earlier operations, consider their product. Using the definition of $\otimes$, the result is in the interval:
$$a(1\pm\gamma_i) \otimes b(1\pm\gamma_j) \in ab(1\pm\gamma_{i+j+1}),$$
where we have used the relationship $(1\pm\gamma_i)(1\pm\gamma_j) \subset (1\pm\gamma_{i+j})$, which follows directly from Equation (6.23).
The relative error in this result is bounded by
$$\left|\frac{ab\,\gamma_{i+j+1}}{ab}\right| = \gamma_{i+j+1},$$
and so the final error is no more than roughly $(i+j+1)/2$ ulps at the value of the product—about as good as we might hope for, given the error going into the multiplication. (The situation for division is similarly good.)
Unfortunately, with addition and subtraction, it is possible for the relative error to increase substantially. Using the same definitions of the values being operated on, consider
$$a(1\pm\gamma_i) \oplus b(1\pm\gamma_j),$$
which is in the interval $a(1\pm\gamma_{i+1}) + b(1\pm\gamma_{j+1})$, and so the absolute error is bounded by $|a|\gamma_{i+1} + |b|\gamma_{j+1}$.
If the signs of $a$ and $b$ are the same, then the absolute error is bounded by $|a+b|\gamma_{i+j+1}$ and the relative error is approximately $(i+j+1)/2$ ulps around the computed value.
However, if the signs of $a$ and $b$ differ (or, equivalently, they are the same but subtraction is performed), then the relative error can be quite high. Consider the case where $a \approx -b$: the relative error is
$$\frac{|a|\gamma_{i+1} + |b|\gamma_{j+1}}{a + b}.$$
The numerator's magnitude is proportional to the original value $|a|$ yet is divided by a very small number $a + b$, and thus the relative error is quite high. This substantial increase in relative error is called catastrophic cancellation. Equivalently, we can have a sense of the issue from the fact that the absolute error is in terms of the magnitude of $a$, though it is in relation to a value $a + b$ much smaller than $a$.
Running Error Analysis
In addition to working out error bounds algebraically, we can also have the computer do this work for us as some computation is being performed. This approach is known as running error analysis. The idea behind it is simple: each time a floating-point operation is performed, we compute intervals based on Equation (6.20) that bound its true value.
The Interval class, which is defined in Section B.2.15, provides this functionality. The Interval class also tracks rounding errors in floating-point arithmetic and is useful even if none of the initial values are intervals. While computing error bounds in this way has higher runtime overhead than using derived expressions that give an error bound directly, it can be convenient when derivations become unwieldy.
6.8.2 Conservative Ray–Bounds Intersections
Floating-point round-off error can cause the ray–bounding box intersection test to miss cases where a ray actually does intersect the box. While it is acceptable to have occasional false positives from ray–box intersection tests, we would like to never miss an intersection—getting this right is important for the correctness of the BVHAggregate acceleration data structure in Section 7.3 so that valid ray–shape intersections are not missed. The ray–bounding box test introduced in Section 6.1.2 is based on computing a series of ray–slab intersections to find the parametric $t_{\min}$ along the ray where the ray enters the bounding box and the $t_{\max}$ where it exits. If $t_{\min} \le t_{\max}$, the ray passes through the box; otherwise, it misses it. With floating-point arithmetic, there may be error in the computed $t$ values—if the computed $t_{\min}$ value is greater than $t_{\max}$ purely due to round-off error, the intersection test will incorrectly return a false result.
Recall that the computation to find the $t$ value for a ray intersection with a plane perpendicular to the $x$ axis at a point $x$ is $t = (x - o_x)/d_x$. Expressed as a floating-point computation (using the precomputed reciprocal of the direction component, as in the implementation) and applying Equation (6.19), we have
$$t = (x \ominus o_x) \otimes (1 \oslash d_x) \in \frac{x - o_x}{d_x}(1\pm\epsilon)^3,$$
and so
$$t(1\pm\gamma_3) = \frac{x - o_x}{d_x}.$$
The difference between the computed result $t$ and the precise result is bounded by $\gamma_3|t|$.
If we consider the intervals around the computed $t$ values that bound the true value of $t$, then the case we are concerned with is when the intervals overlap; if they do not, then the comparison of computed values will give the correct result (Figure 6.41). If the intervals do overlap, it is impossible to know the true ordering of the $t$ values. In this case, increasing $t_{\max}$ by twice the error bound, $2\gamma_3 t_{\max}$, before performing the comparison ensures that we conservatively return true in this case.
We can now define the fragment for the ray–bounding box test in Section 6.1.2 that makes this adjustment.
The fragments for the Bounds3::IntersectP() method, <<Update tMax and tyMax to ensure robust bounds intersection>> and <<Update tzMax to ensure robust bounds intersection>>, are similar and therefore not included here.
6.8.3 Accurate Quadratic Discriminants
Recall from Sections 6.2.2 and 6.3.2 that intersecting a ray with a sphere or cylinder involves finding the zeros of a quadratic equation, which requires calculating its discriminant, $b^2 - 4ac$. If the discriminant is computed as written, then when the sphere is far from the ray origin, $b^2 \approx 4ac$ and catastrophic cancellation occurs. This issue is made worse since the magnitudes of the two terms of the discriminant are related to the squared distance between the sphere and the ray origin. Even for rays that are far from ever hitting the sphere, a discriminant may be computed that is exactly equal to zero, leading to the intersection code reporting an invalid intersection. See Figure 6.42, which shows that this error can be meaningful in practice.
Algebraically rewriting the discriminant computation makes it possible to compute it with more accuracy. First, if we rewrite the quadratic discriminant as
$$b^2 - 4ac = 4a\left[\left(\frac{b}{2\sqrt{a}}\right)^2 - c\right]\tag{6.24}$$
and then substitute in the values of $a$, $b$, and $c$ from Equation (6.3) to the terms inside the brackets, we have
$$\left(\frac{b}{2\sqrt{a}}\right)^2 - c = (\hat{\mathbf{d}} \cdot \mathbf{o}_c)^2 - \|\mathbf{o}_c\|^2 + r^2,$$
where we have denoted the vector from the sphere's center to the ray's origin as $\mathbf{o}_c$ and $\hat{\mathbf{d}}$ is the ray's normalized direction.
Now consider the decomposition of $\mathbf{o}_c$ into the sum of two vectors, $\mathbf{o}_\parallel$ and $\mathbf{o}_\perp$, where $\mathbf{o}_\parallel$ is parallel to $\hat{\mathbf{d}}$ and $\mathbf{o}_\perp$ is perpendicular to it. Those vectors are given by
$$\mathbf{o}_\parallel = (\hat{\mathbf{d}} \cdot \mathbf{o}_c)\,\hat{\mathbf{d}}$$
$$\mathbf{o}_\perp = \mathbf{o}_c - \mathbf{o}_\parallel.$$
Because these two vectors are orthogonal, $\|\mathbf{o}_c\|^2 = \|\mathbf{o}_\parallel\|^2 + \|\mathbf{o}_\perp\|^2$. Rearranging terms gives
$$\|\mathbf{o}_\perp\|^2 = \|\mathbf{o}_c\|^2 - (\hat{\mathbf{d}} \cdot \mathbf{o}_c)^2.$$
Expressed in terms of the sphere quadratic coefficients from Equation (6.3), this gives
$$\left(\frac{b}{2\sqrt{a}}\right)^2 - c = r^2 - \|\mathbf{o}_\perp\|^2.$$
Note that the left hand side is equal to the term in square brackets in Equation (6.24).
Computing that term in this way eliminates $c$ from the discriminant, which is of great benefit since its magnitude is proportional to the squared distance to the origin, with accordingly limited accuracy. In the implementation below, we take advantage of the fact that the discriminant is now the difference of squared values and make use of the identity $x^2 - y^2 = (x + y)(x - y)$ to reduce the magnitudes of the intermediate values, which further reduces error.
One might ask, why go through this trouble when we could use the DifferenceOfProducts() function to compute the discriminant, presumably with low error? The reason that it is not an equivalent alternative is that the values $a$, $b$, and $c$ already suffer from rounding error. In turn, a result computed by DifferenceOfProducts() will be inaccurate if its inputs already are inaccurate themselves. $c$ is particularly problematic, since it is the difference of the two positive quantities $\|\mathbf{o}_c\|^2$ and $r^2$, so it is susceptible to catastrophic cancellation.
A similar derivation gives a more accurate discriminant for the cylinder.
6.8.4 Robust Triangle Intersections
The details of the ray–triangle intersection algorithm described in Section 6.5.3 were carefully designed to avoid cases where rays could incorrectly pass through an edge or vertex shared by two adjacent triangles without generating an intersection. Fittingly, an intersection algorithm with this guarantee is referred to as being watertight.
Recall that the algorithm is based on transforming triangle vertices into a coordinate system with the ray's origin at its origin and the ray's direction aligned along the $+z$ axis. Although round-off error may be introduced by transforming the vertex positions to this coordinate system, this error does not affect the watertightness of the intersection test, since the same transformation is applied to all triangles. (Further, this error is quite small, so it does not significantly impact the accuracy of the computed intersection points.)
Given vertices in this coordinate system, the three edge functions defined in Equation (6.5) are evaluated at the point $(0, 0)$; the corresponding expressions, Equation (6.6), are quite straightforward. The key to the robustness of the algorithm is that with floating-point arithmetic, the edge function evaluations are guaranteed to have the correct sign. In general, we have
$$(a \otimes b) \ominus (c \otimes d).\tag{6.26}$$
First, note that if $ab = cd$, then Equation (6.26) evaluates to exactly zero, even in floating point. We therefore just need to show that if $ab > cd$, then $(a \otimes b) \ominus (c \otimes d)$ is never negative. If $ab > cd$, then $(a \otimes b)$ must be greater than or equal to $(c \otimes d)$. In turn, their difference must be greater than or equal to zero. (These properties both follow from the fact that floating-point arithmetic operations are all rounded to the nearest representable floating-point value.)
If the value of the edge function is zero, then it is impossible to tell whether it is exactly zero or whether a small positive or negative value has rounded to zero. In this case, the fragment <<Fall back to double-precision test at triangle edges>> reevaluates the edge function with double precision; it can be shown that doubling the precision suffices to accurately distinguish these cases, given 32-bit floats as input.
The overhead caused by this additional precaution is minimal: in a benchmark with 88 million ray intersection tests, the double-precision fallback had to be used in less than 0.0000023% of the cases.
6.8.5 Bounding Intersection Point Error
We can apply the machinery introduced in this section for analyzing rounding error to derive conservative bounds on the absolute error in computed ray–shape intersection points, which allows us to construct bounding boxes that are guaranteed to include an intersection point on the actual surface (Figure 6.43). These bounding boxes provide the basis of the algorithm for generating spawned ray origins that will be introduced in Section 6.8.6.
It is illuminating to start by looking at the sources of error in conventional approaches to computing intersection points. It is common practice in ray tracing to compute 3D intersection points by first solving the parametric ray equation o + t d for a value t_hit where a ray intersects a surface and then computing the hit point p = o + t_hit d. If t_hit carries some error δ_t, then we can bound the error in the computed intersection point. Considering the x coordinate, for example, we have

x = o_x ⊕ ((t_hit ± δ_t) ⊗ d_x)
  ⊂ (o_x + (t_hit ± δ_t) d_x (1 ± ε_m)) (1 ± ε_m)
  = o_x + t_hit d_x + [± o_x γ_1 ± δ_t d_x (1 ± ε_m)² ± t_hit d_x γ_2].
The error term (in square brackets) is bounded by

δ_t (1 + γ_2) |d_x| + γ_1 |o_x| + γ_2 |t_hit d_x|.   (6.27)
There are two things to see from Equation (6.27): first, the magnitudes of the terms that contribute to the error in the computed intersection point (δ_t d_x, o_x, and t_hit d_x) may be quite different from the magnitude of the intersection point. Thus, there is a danger of catastrophic cancellation in the intersection point's computed value. Second, ray intersection algorithms generally perform tens of floating-point operations to compute t values, which in turn means that we can expect δ_t to be at least of magnitude γ_n |t_hit|, with n in the tens (and possibly much more, due to catastrophic cancellation).
Each of these terms may introduce a significant amount of error in the computed point p. We introduce better approaches in the following.
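Throughout these derivations, γ_n denotes the conservative bound n ε_m / (1 − n ε_m) on the relative error accumulated over n rounded floating-point operations. It can be computed with a small helper in the spirit of pbrt's gamma() function (a sketch; pbrt templates this on its Float type):

```cpp
#include <limits>

// Half the spacing between 1 and the next float: the maximum relative
// error that a single rounded floating-point operation can introduce.
static constexpr float MachineEpsilon =
    std::numeric_limits<float>::epsilon() * 0.5f;

// Conservative relative error bound after n rounded operations:
// gamma(n) = n*eps / (1 - n*eps), just slightly larger than n*eps.
constexpr float gamma(int n) {
    return (n * MachineEpsilon) / (1 - n * MachineEpsilon);
}
```

Because gamma(n) is slightly larger than n ε_m, products of (1 ± ε_m) factors can be folded into a single (1 ± γ_n) term without underestimating the error.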
Reprojection: Quadrics
We would like to reliably compute surface intersection points with just a few ulps of error rather than the orders of magnitude greater error that intersection points computed with the parametric ray equation may have. Previously, Woo et al. (1996) suggested using the first intersection point computed as a starting point for a second ray–plane intersection, for ray–polygon intersections. From the bounds in Equation (6.27), we can see why the second intersection point will often be much closer to the surface than the first: the t value along the second ray will be quite close to zero, so that the magnitude of the absolute error in t will be quite small, and thus using this t value in the parametric ray equation will give a point quite close to the surface (Figure 6.44). Further, the ray origin will have similar magnitude to the intersection point, so the γ_1 |o_x| term will not introduce much additional error.
Although the second intersection point computed with this approach is much closer to the plane of the surface, it still suffers from error by being offset due to error in the first computed intersection. The farther away the ray origin is from the intersection point (and thus, the larger the absolute error is in t_hit), the larger this error will be. In spite of this error, the approach has merit: we are generally better off with a computed intersection point that is quite close to the actual surface, even if offset from the most accurate possible intersection point, than we are with a point that is some distance above or below the surface (and likely also far from the most accurate intersection point).
Rather than doing a full reintersection computation, which may not only be computationally costly but also will still have error in the computed t value, an effective alternative is to refine computed intersection points by reprojecting them to the surface. The error bounds for these reprojected points are often remarkably small. (It should be noted that these reprojection error bounds do not capture tangential errors that were present in the original intersection point; the main focus here is to detect errors that might cause the reprojected point to fall below the surface.)
Consider a ray–sphere intersection: given a computed intersection point (x, y, z) (e.g., from the ray equation) with a sphere at the origin with radius r, we can reproject the point onto the surface of the sphere by scaling it with the ratio of the sphere's radius to the computed point's distance to the origin, computing a new point x′ with

x′ = x r / sqrt(x² + y² + z²)
and so forth. The floating-point computation is

x′ = x ⊗ r ⊘ sqrt((x ⊗ x) ⊕ (y ⊗ y) ⊕ (z ⊗ z)).
Because x², y², and z² are all positive, the terms in the square root can share the same γ term, and we have

x′ ∈ (x r / sqrt(x² + y² + z²)) (1 ± γ_5).   (6.28)
Thus, the absolute error of the reprojected x coordinate is bounded by γ_5 |x′| (and similarly for y′ and z′) and is thus no more than 2.5 ulps in each dimension from a point on the surface of the sphere.
Here is the fragment that reprojects the intersection point for the Sphere shape.
The error bounds follow from Equation (6.28).
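In code, the reprojection is just a scale of the computed point by r over its distance to the sphere's center; a minimal sketch (the Point3f struct and function name here are illustrative, and the sphere is assumed to be at the origin in object space, as in the text):

```cpp
#include <cmath>

struct Point3f { float x, y, z; };

// Reproject a computed ray-sphere intersection point onto the sphere of
// the given radius centered at the origin by scaling it by
// radius / distance-to-center. The result is within a few ulps of the
// true surface, per the gamma_5 bound derived in the text.
Point3f ReprojectToSphere(Point3f p, float radius) {
    float dist = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    float scale = radius / dist;
    return {p.x * scale, p.y * scale, p.z * scale};
}
```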
Reprojection algorithms and error bounds for other quadrics can be defined similarly: for example, for a cylinder along the z axis, only the x and y coordinates need to be reprojected, and the error bounds in x′ and y′ turn out to be only γ_3 times their magnitudes.
The disk shape is particularly easy; we just need to set the z coordinate of the point to lie on the plane of the disk.
In turn, we have a point with zero error; it lies exactly on the surface of the disk.
The quadrics’ Sample() methods also use reprojection. For example, the Sphere’s area sampling method is based on SampleUniformSphere(), which uses std::sin() and std::cos(). Therefore, the error bounds on the computed pObj value depend on the accuracy of those functions. By reprojecting the sampled point to the sphere’s surface, the error bounds derived earlier in Equation (6.28) can be used without needing to worry about those functions’ accuracy.
The same issue and solution apply to sampling cylinders.
Parametric Evaluation: Triangles
Another effective approach to computing accurate intersection points near the surface of a shape uses the shape's parametric representation. For example, the triangle intersection algorithm in Section 6.5.3 computes three edge function values e_0, e_1, and e_2 and reports an intersection if all three have the same sign. Their values can be used to find the barycentric coordinates

b_i = e_i / (e_0 + e_1 + e_2).
Attributes v_i at the triangle vertices (including the vertex positions) can be interpolated across the face of the triangle by

v′ = b_0 v_0 + b_1 v_1 + b_2 v_2.
We can show that interpolating the positions of the vertices in this manner gives a point very close to the surface of the triangle. First consider precomputing the reciprocal of the sum of the e_i:

d = 1 ⊘ ((e_0 ⊕ e_1) ⊕ e_2).
Because all e_i have the same sign if there is an intersection, we can collect the e_i terms and conservatively bound d:

d ∈ (1 / (e_0 + e_1 + e_2)) (1 ± ε_m)³ ⊂ (1 / (e_0 + e_1 + e_2)) (1 ± γ_3).
If we now consider interpolation of the x coordinate of the position in the triangle corresponding to the edge function values, we have

x′ = ((e_0 ⊗ x_0) ⊕ (e_1 ⊗ x_1) ⊕ (e_2 ⊗ x_2)) ⊗ d.
Using the bounds on d,
Thus, we can finally see that the absolute error in the computed x′ value is in the interval

± b_0 x_0 γ_7 ± b_1 x_1 γ_7 ± b_2 x_2 γ_6,
which is bounded by

γ_7 (|b_0 x_0| + |b_1 x_1| + |b_2 x_2|).   (6.29)
(Note that the b_2 x_2 term could have a γ_6 factor instead of γ_7, but the difference between the two is very small, so we choose a slightly simpler final expression.) Equivalent bounds hold for y′ and z′.
Equation (6.29) lets us bound the error in the interpolated point computed in Triangle::Intersect().
The bounds for a sampled point on a triangle can be found in a similar manner.
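To make Equation (6.29) concrete, the following sketch interpolates the x coordinates of the vertices from edge function values and returns the corresponding error bound. The free function here is hypothetical (pbrt performs the equivalent computation inside Triangle::Intersect()); gamma() computes the γ_n bound used throughout the text.

```cpp
#include <cmath>
#include <limits>

static constexpr float MachineEpsilon =
    std::numeric_limits<float>::epsilon() * 0.5f;
constexpr float gamma(int n) {
    return (n * MachineEpsilon) / (1 - n * MachineEpsilon);
}

struct Interpolated { float x, xError; };

// Interpolate the x coordinates of three triangle vertices using the
// (unnormalized) edge-function values e0, e1, e2, and compute the
// conservative gamma(7) bound on the absolute error of the result.
Interpolated InterpolateX(float e0, float e1, float e2,
                          float x0, float x1, float x2) {
    float invSum = 1 / (e0 + e1 + e2);
    float b0 = e0 * invSum, b1 = e1 * invSum, b2 = e2 * invSum;
    float x = b0 * x0 + b1 * x1 + b2 * x2;
    // Sum of absolute values of the terms, scaled by gamma(7).
    float xAbsSum = std::fabs(b0 * x0) + std::fabs(b1 * x1) +
                    std::fabs(b2 * x2);
    return {x, gamma(7) * xAbsSum};
}
```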
Parametric Evaluation: Bilinear Patches
Bilinear patch intersection points are found by evaluating the bilinear function from Equation (6.11). The computation performed is

p = ((1 ⊖ u) ⊗ ((1 ⊖ v) ⊗ p_00 ⊕ v ⊗ p_01)) ⊕ (u ⊗ ((1 ⊖ v) ⊗ p_10 ⊕ v ⊗ p_11)).
Considering just the x coordinate, we can find that its error is bounded by

γ_6 (|(1 − u)(1 − v) x_00| + |(1 − u) v x_01| + |u (1 − v) x_10| + |u v x_11|).
Because u and v are between 0 and 1, here we will use the looser but more computationally efficient bounds of the form

γ_6 (|x_00| + |x_01| + |x_10| + |x_11|).
The same bounds apply for points sampled in the BilinearPatch::Sample() method.
Parametric Evaluation: Curves
Because the Curve shape orients itself to face incident rays, rays leaving it must be offset by the curve’s width in order to not incorrectly reintersect it when it is reoriented to face them. For wide curves, this bound is significant and may lead to visible errors in images. In that case, the Curve shape should probably be replaced with one or more bilinear patches.
Effect of Transformations
The last detail to attend to in order to bound the error in computed intersection points is the effect of transformations, which introduce additional rounding error when they are applied.
The quadric Shapes in pbrt transform rendering-space rays into object space before performing ray–shape intersections, and then transform computed intersection points back to rendering space. Both of these transformation steps introduce rounding error that needs to be accounted for in order to maintain robust rendering-space bounds around intersection points.
If possible, it is best to try to avoid coordinate-system transformations of rays and intersection points. For example, it is better to transform triangle vertices to rendering space and intersect rendering-space rays with them than to transform rays to object space and then transform intersection points to rendering space. Transformations are still useful—for example, for the quadrics and for object instancing—so we will show how to bound the error that they introduce.
We will discuss these topics in the context of the Transform operator() method that takes a Point3fi, which is the Point3 variant that uses an Interval for each of the coordinates.
This method starts by computing the transformed position of the point where each coordinate is at the midpoint of its respective interval in p. The fragment that implements that computation, <<Compute transformed coordinates from point (x, y, z)>>, is not included here; it implements the same matrix/point multiplication as in Section 3.10.
Next, error bounds are computed, accounting both for rounding error when applying the transformation as well as the effect of non-empty intervals, if p is not exact.
If p has no accumulated error, then given a non-projective transformation matrix with elements denoted by m_{i,j}, the transformed x coordinate x′ is

x′ = ((m_{0,0} ⊗ x) ⊕ (m_{0,1} ⊗ y)) ⊕ ((m_{0,2} ⊗ z) ⊕ m_{0,3}).
Thus, the absolute error in the result is bounded by

γ_3 (|m_{0,0} x| + |m_{0,1} y| + |m_{0,2} z| + |m_{0,3}|).   (6.30)
Similar bounds follow for the transformed y and z coordinates, and the implementation follows directly.
Now consider the case of the point p having error that is bounded by δ_x, δ_y, and δ_z in each dimension. The transformed x coordinate is given by:

x′ = ((m_{0,0} ⊗ (x ± δ_x)) ⊕ (m_{0,1} ⊗ (y ± δ_y))) ⊕ ((m_{0,2} ⊗ (z ± δ_z)) ⊕ m_{0,3}).
Applying the definitions of floating-point addition and multiplication and their error bounds, we have

x′ = m_{0,0} (x ± δ_x)(1 ± ε_m)³ + m_{0,1} (y ± δ_y)(1 ± ε_m)³ + m_{0,2} (z ± δ_z)(1 ± ε_m)³ + m_{0,3} (1 ± ε_m)².
Transforming to use γ terms, we can find the absolute error term to be bounded by

(γ_3 + 1)(|m_{0,0}| δ_x + |m_{0,1}| δ_y + |m_{0,2}| δ_z) + γ_3 (|m_{0,0} x| + |m_{0,1} y| + |m_{0,2} z| + |m_{0,3}|).   (6.31)
We have not included the fragment <<Compute error for transformed approximate p>> that implements this computation, as it is nearly 20 lines of code for the direct translation of Equation (6.31).
It would have been much easier to implement this method using the Interval class to automatically compute error bounds. We found that that approach gives bounds that are generally 3–6 times wider and causes the method to be six times slower than the implementation presented here. Given that transformations are frequently applied during rendering, deriving and then using tighter bounds is worthwhile.
Note that the code that computes error bounds is buggy if the matrix is projective and the homogeneous coordinate of the projected point is not one; this nit is not currently a problem for pbrt’s usage of this method.
The Transform class also provides methods to transform vectors and rays, returning the resulting error. The vector error bound derivations (and thence, implementations) are very similar to those for points, and so also are not included here.
6.8.6 Robust Spawned Ray Origins
Computed intersection points and their error bounds give us a small 3D box that bounds a region of space. We know that the precise intersection point must be somewhere inside this box and that thus the surface must pass through the box (at least in the region around the intersection). (Recall Figure 6.43.) Having these boxes makes it possible to position the origins of rays leaving the surface so that they are always on the right side of the surface and do not incorrectly reintersect it. When tracing spawned rays leaving the intersection point p, we offset their origins enough to ensure that they are past the boundary of the error box and thus will not incorrectly reintersect the surface.
In order to ensure that the spawned ray origin is definitely on the right side of the surface, we move far enough along the normal so that the plane perpendicular to the normal is outside the error bounding box. To see how to do this, consider a computed intersection point at the origin, where the equation for the plane going through the intersection point is

f(x, y, z) = n_x x + n_y y + n_z z.
The plane is implicitly defined by f(x, y, z) = 0, and the normal is (n_x, n_y, n_z).
For a point not on the plane, the value of the plane equation gives the offset along the normal that gives a plane that goes through the point. We would like to find the maximum value of f(x, y, z) for the eight corners of the error bounding box; if we offset the plane by plus and minus this value, we have two planes that do not intersect the error box that should be (locally) on opposite sides of the surface, at least at the computed intersection point offset along the normal (Figure 6.45).
If the eight corners of the error bounding box are given by (±δ_x, ±δ_y, ±δ_z), then the maximum value of f is easily computed:

d = |n_x| δ_x + |n_y| δ_y + |n_z| δ_z.
Computing spawned ray origins by offsetting along the surface normal in this way has a few advantages: assuming that the surface is locally planar (a reasonable assumption, especially at the very small scale of the intersection point error bounds), moving along the normal allows us to get from one side of the surface to the other while moving the shortest distance. In general, minimizing the distance that ray origins are offset is desirable for maintaining shadow and reflection detail.
OffsetRayOrigin() is a short function that implements this computation.
We also must handle round-off error when computing the offset point: when offset is added to p, the result will in general need to be rounded to the nearest floating-point value. In turn, it may be rounded down toward p such that the resulting point is in the interior of the error box rather than on its boundary (Figure 6.46). Therefore, the offset point is rounded away from p here to ensure that it is not inside the box.
Alternatively, the floating-point rounding mode could have been set to round toward plus or minus infinity (based on the sign of the value). Changing the rounding mode is fairly expensive on many processors, so we just shift the floating-point value by one ulp here. This will sometimes cause a value already outside of the error box to go slightly farther outside it, but because the floating-point spacing is so small, this is not a problem in practice.
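Putting the offset and rounding steps together, here is a sketch in the spirit of OffsetRayOrigin() (types are simplified; pbrt's version operates on Point3fi, Normal3f, and Vector3f, with the error bounds carried in the interval point):

```cpp
#include <cmath>
#include <limits>

struct Vec3 { float x, y, z; };

// Advance one representable float toward +/- infinity.
float NextFloatUp(float v) {
    return std::nextafter(v, std::numeric_limits<float>::infinity());
}
float NextFloatDown(float v) {
    return std::nextafter(v, -std::numeric_limits<float>::infinity());
}

// Offset a spawned ray origin along the surface normal n, past the error
// box pError around p, into the hemisphere of the outgoing direction w.
// Each moved coordinate is then rounded away from p so that rounding
// cannot pull the result back inside the error box.
Vec3 OffsetRayOrigin(Vec3 p, Vec3 pError, Vec3 n, Vec3 w) {
    float d = std::fabs(n.x) * pError.x + std::fabs(n.y) * pError.y +
              std::fabs(n.z) * pError.z;
    Vec3 offset = {d * n.x, d * n.y, d * n.z};
    if (w.x * n.x + w.y * n.y + w.z * n.z < 0)  // ray leaves the back side
        offset = {-offset.x, -offset.y, -offset.z};
    Vec3 po = {p.x + offset.x, p.y + offset.y, p.z + offset.z};
    po.x = offset.x > 0 ? NextFloatUp(po.x)
                        : (offset.x < 0 ? NextFloatDown(po.x) : po.x);
    po.y = offset.y > 0 ? NextFloatUp(po.y)
                        : (offset.y < 0 ? NextFloatDown(po.y) : po.y);
    po.z = offset.z > 0 ? NextFloatUp(po.z)
                        : (offset.z < 0 ? NextFloatDown(po.z) : po.z);
    return po;
}
```

The sign test against w handles transmitted rays, which leave on the opposite side of the surface from n.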
For convenience, Interaction provides two variants of this functionality via methods that perform the ray offset computation using its stored position and surface normal. The first takes a ray direction, like the stand-alone OffsetRayOrigin() function.
The second takes a position for the ray’s destination that is used to compute a direction w to pass to the first method.
There are also some helper functions for the Ray class that generate rays leaving intersection points that account for these offsets.
To generate a ray between two points requires offsets at both endpoints before the vector between them is computed.
We can also implement Interaction methods that generate rays leaving intersection points.
A variant of Interaction::SpawnRayTo() that takes an Interaction is similar and not included here.
The ShapeSampleContext class also provides OffsetRayOrigin() and SpawnRay() helper methods that correspond to the ones we have added to Interaction here. Their implementations are essentially the same, so they are not included here.
The approach we have developed so far addresses the effect of floating-point error at the origins of rays leaving surfaces; there is a related issue for shadow rays to area light sources: we would like to find any intersections with shapes that are close to the light source and actually occlude it, while avoiding reporting incorrect intersections with the surface of the light source. Unfortunately, our implementation does not address this issue, so we set the tMax value of shadow rays to be just under one so that they stop before the surface of light sources.
One last issue must be dealt with in order to maintain robust spawned ray origins: error introduced when performing transformations. Given a ray in one coordinate system where its origin was carefully computed to be on the appropriate side of some surface, transforming that ray to another coordinate system may introduce error in the transformed origin such that the origin is no longer on the correct side of the surface it was spawned from.
Therefore, whenever a ray is transformed by the Ray variant of Transform::operator() (which was implemented in Section 3.10.4), its origin is advanced to the edge of the bounds on the error that was introduced by the transformation. This ensures that the origin conservatively remains on the correct side of the surface it was spawned from, if any.
6.8.7 Avoiding Intersections behind Ray Origins
Bounding the error in computed intersection points allows us to compute ray origins that are guaranteed to be on the right side of the surface so that a ray with infinite precision would not incorrectly intersect the surface it is leaving. However, a second source of rounding error must also be addressed: the error in parametric t values computed for ray–shape intersections. Rounding error can lead to an intersection algorithm computing a positive t value for the intersection point even though the t value for the actual intersection is negative (and thus should be ignored).
It is possible to show that some intersection test algorithms always return a t value with the correct sign; this is the best case, as no further computation is needed to bound the actual error in the computed t value. For example, consider the ray–axis-aligned slab computation: t = (x ⊖ o_x) ⊘ d_x. The IEEE floating-point standard guarantees that if a > b, then a ⊖ b ≥ 0 (and if a < b, then a ⊖ b ≤ 0). To see why this is so, note that if x > o_x, then the real number x − o_x must be greater than zero. When rounded to a floating-point number, the result must be either zero or a positive float; there is no way a negative floating-point number could be the closest floating-point number. Second, floating-point division returns the correct sign; these two facts together guarantee that the sign of the computed t value is correct. (Or that t = 0, but this case is fine, since our test for an intersection is carefully chosen to be t > 0.)
For shape intersection routines that are based on the Interval class, the computed t value in the end has an error bound associated with it, and no further computation is necessary to perform this test. See the definition of the fragment <<Check quadric shape t0 and t1 for nearest intersection>> in Section 6.2.2.
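The idea behind such interval-based tests can be illustrated with a toy interval type whose operations conservatively widen their bounds by one ulp in each direction (pbrt's Interval class is more complete and maintains tighter bounds): a computed t interval whose lower bound is positive is definitely positive, with no further analysis required.

```cpp
#include <cmath>
#include <limits>

// A minimal interval-arithmetic sketch: each operation widens its bounds
// outward by one representable float, conservatively accounting for the
// rounding of the corresponding floating-point operation.
struct Interval {
    float low, high;
    bool ContainsZero() const { return low <= 0 && high >= 0; }
};

inline float Down(float v) {
    return std::nextafter(v, -std::numeric_limits<float>::infinity());
}
inline float Up(float v) {
    return std::nextafter(v, std::numeric_limits<float>::infinity());
}

Interval operator+(Interval a, Interval b) {
    return {Down(a.low + b.low), Up(a.high + b.high)};
}
Interval operator-(Interval a, Interval b) {
    return {Down(a.low - b.high), Up(a.high - b.low)};
}
```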
Triangles
Interval introduces computational overhead that we would prefer to avoid for more commonly used shapes where efficient intersection code is more important. For these shapes, we can derive efficient-to-evaluate conservative bounds on the error in computed t values. The ray–triangle intersection algorithm in Section 6.5.3 computes a final t value by computing three edge function values e_i and using them to compute a barycentric-weighted sum of transformed vertex z coordinates, z_i:

t = (e_0 z_0 + e_1 z_1 + e_2 z_2) / (e_0 + e_1 + e_2).   (6.32)
By successively bounding the error in these terms and then in the final t value, we can conservatively check that it is positive.
Given a ray with origin o, direction d, and a triangle vertex p, the projected z coordinate is

z = (1 ⊘ d_z) ⊗ (p_z ⊖ o_z).
Applying the usual approach, we can find that the maximum error in z_i for each of three vertices of the triangle is bounded by γ_3 |z_i|, and we can thus find a conservative upper bound for the error in any of the z positions by taking the maximum of these errors:

δ_z = γ_3 max_i |z_i|.
The edge function values are computed as the difference of two products of transformed x and y vertex positions:

e_0 = (x_1 ⊗ y_2) ⊖ (y_1 ⊗ x_2)
e_1 = (x_2 ⊗ y_0) ⊖ (y_2 ⊗ x_0)
e_2 = (x_0 ⊗ y_1) ⊖ (y_0 ⊗ x_1).
Bounds for the error in the transformed positions x_i and y_i are

δ_x = γ_5 (max_i |x_i| + max_i |z_i|)
δ_y = γ_5 (max_i |y_i| + max_i |z_i|).
Taking the maximum error over all three of the vertices, the products in the edge functions are bounded by

(max_i |x_i| + δ_x)(max_i |y_i| + δ_y)(1 ± ε_m),
which have an absolute error bound of

δ_xy = γ_2 max_i |x_i| max_i |y_i| + δ_y max_i |x_i| + δ_x max_i |y_i| + …
Dropping the (negligible) higher-order terms of products of γ and δ terms, the error bound on the difference of the two x and y products for the edge function is

δ_e = 2 (γ_2 max_i |x_i| max_i |y_i| + δ_y max_i |x_i| + δ_x max_i |y_i|).
Again bounding error by taking the maximum of error over all the terms, the error bound for the computed value of the numerator of t in Equation (6.32) is

δ_t = 3 (γ_3 max_i |e_i| max_i |z_i| + δ_e max_i |z_i| + δ_z max_i |e_i|).
A computed t value (before normalization by the sum of the e_i) must be greater than this value for it to be accepted as a valid intersection that definitely has a positive t value.
Although it may seem that we have made a number of choices to compute looser bounds than we might have, in practice the bounds on the error in t are extremely small: for typical scenes, they are many orders of magnitude smaller than the extent of the scene near ray origins.
Bilinear Patches
Recall from Section 6.6.1 that the t value for a bilinear patch intersection is found by taking the determinant of a 3 × 3 matrix. Each matrix element includes round-off error from the series of floating-point computations used to compute its value. While it is possible to derive bounds on the error in the computed t using a similar approach as was used for triangle intersections, the algebra becomes unwieldy because the computation involves many more operations.
Therefore, here we compute an epsilon value that is based on the magnitudes of all of the inputs of the computation of t.
6.8.8 Discussion
Minimizing and bounding numerical error in other geometric computations (e.g., partial derivatives of surface positions, interpolated texture coordinates, etc.) are much less important than they are for the positions of ray intersections. In a similar vein, the computations involving color and light in physically based rendering generally do not present trouble with respect to round-off error; they involve sums of products of positive numbers (usually with reasonably close magnitudes); hence catastrophic cancellation is not a commonly encountered issue. Furthermore, these sums are of few enough terms that accumulated error is small: the variance that is inherent in the Monte Carlo algorithms used for them dwarfs any floating-point error in computing them.
Interestingly enough, we saw an increase of roughly 20% in overall ray-tracing execution time after replacing the ad hoc method that a previous version of pbrt used to avoid incorrect self-intersections with the method described in this section. (In comparison, rendering with double-precision floating point causes an increase in rendering time of roughly 30%.) Profiling showed that very little of the additional time was due to the additional computation to find error bounds; this is not surprising, as the incremental computation our approach requires is limited; most of the error bounds are just scaled sums of absolute values of terms that have already been computed.
The majority of this slowdown is due to an increase in ray–object intersection tests. The reason for this increase was first identified by Wächter (2008, p. 30): when ray origins are very close to shape surfaces, more nodes of intersection acceleration hierarchies must be visited when tracing spawned rays than if overly loose offsets are used. Thus, more intersection tests are performed near the ray origin. While this reduction in performance is unfortunate, it is a direct result of the greater accuracy of the method; it is the price to be paid for more accurate resolution of valid nearby intersections.