② With optically dense inhomogeneous volume regions,
GridDensityMedium::Tr() may spend a lot of time finding the
attenuation between lights and intersection points. One approach to
reducing this expense is to take advantage of the facts that the amount of
attenuation for nearby rays is generally smoothly varying and that the rays
to a point or directional light source can be parameterized over a
straightforward 2D domain. Given these conditions, it’s possible to use
precomputed approximations to the attenuation.
For example, Kajiya and Von Herzen (1984) computed the attenuation to a
directional light source at a grid of points in 3D space and then found
attenuation at any particular point by interpolating among nearby grid
samples. A more memory-efficient approach was developed by Lokovic and Veach (2000) in the form of deep shadow maps, based on a clever compression
technique that takes advantage of the smoothness of the attenuation.
Implement one of these approaches in pbrt, and measure how much it speeds
up rendering with the VolPathIntegrator. Under what sorts of
situations do approaches like these result in noticeable image errors?
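As a starting point, the grid-based variant might look like the following sketch, which stores precomputed transmittance values at the vertices of a regular grid over [0,1]^3 and answers queries by trilinear interpolation. All of the names here (AttenuationGrid, Lookup, DemoLookup) are illustrative and are not part of pbrt.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the Kajiya and Von Herzen (1984) idea: attenuation to a
// directional light is precomputed at grid vertices; Tr queries are then
// answered by trilinear interpolation rather than ray marching.
struct AttenuationGrid {
    int nx, ny, nz;
    std::vector<float> vals;  // nx*ny*nz precomputed transmittance values

    float At(int x, int y, int z) const { return vals[(z * ny + y) * nx + x]; }

    // Trilinearly interpolate the stored attenuation at a point in [0,1]^3.
    float Lookup(float px, float py, float pz) const {
        auto coord = [](float p, int n, int *i, float *d) {
            float f = p * (n - 1);
            *i = std::min((int)f, n - 2);  // clamp to the last full cell
            *d = f - *i;
        };
        int x, y, z;
        float dx, dy, dz;
        coord(px, nx, &x, &dx);
        coord(py, ny, &y, &dy);
        coord(pz, nz, &z, &dz);
        auto lerp = [](float a, float b, float t) { return a + t * (b - a); };
        float c00 = lerp(At(x, y, z),         At(x + 1, y, z),         dx);
        float c10 = lerp(At(x, y + 1, z),     At(x + 1, y + 1, z),     dx);
        float c01 = lerp(At(x, y, z + 1),     At(x + 1, y, z + 1),     dx);
        float c11 = lerp(At(x, y + 1, z + 1), At(x + 1, y + 1, z + 1), dx);
        return lerp(lerp(c00, c10, dy), lerp(c01, c11, dy), dz);
    }
};

// Demo grid: unattenuated (Tr = 1) at z = 0, fully attenuated at z = 1.
float DemoLookup(float x, float y, float z) {
    AttenuationGrid g{2, 2, 2, {1, 1, 1, 1, 0, 0, 0, 0}};
    return g.Lookup(x, y, z);
}
```

Note that the grid resolution directly trades memory for accuracy; the image-error question above amounts to asking where this interpolation assumption breaks down.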
② Another effective method for speeding up
GridDensityMedium::Tr() is to use Russian roulette: if the
accumulated transmittance Tr goes below some threshold, randomly
terminate it, returning 0 transmittance; otherwise, scale it by one
over the survival probability. Modify pbrt to optionally use this
approach, and measure the change in Monte Carlo efficiency. How does
varying the termination threshold affect your results?
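A minimal sketch of this estimator, using a homogeneous ray march for brevity and the illustrative choice q = 1 - Tr for the termination probability (pbrt's actual Tr() code and any particular choice of q will differ):

```cpp
#include <cmath>
#include <random>

// Russian roulette inside a transmittance loop: once the accumulated
// transmittance drops below rrThreshold, terminate with probability q and
// return 0; survivors are scaled by 1/(1 - q), keeping the estimator
// unbiased. Names and the homogeneous density are illustrative.
float TrWithRR(float sigma_t, float stepSize, int nSteps, std::mt19937 &rng,
               float rrThreshold = 0.05f) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    float tr = 1.f;
    for (int i = 0; i < nSteps; ++i) {
        tr *= std::exp(-sigma_t * stepSize);
        if (tr < rrThreshold) {
            float q = 1.f - tr;              // termination probability
            if (u(rng) < q) return 0.f;      // terminated: contribute zero
            tr /= 1.f - q;                   // survived: reweight
        }
    }
    return tr;
}
```

Averaged over many evaluations, the result matches the unterminated transmittance; the variance introduced by the early exits is what the efficiency measurement above should quantify.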
③ Read the papers by Yue et al. (2010, 2011) on improving
delta-tracking’s efficiency by decomposing inhomogeneous media using a
spatial data structure and then applying delta tracking separately in each
region of space. Apply their approach to the GridDensityMedium, and
measure the change in efficiency compared to the current implementation.
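The core benefit of their decomposition is that each region gets a tighter majorant, so fewer null ("fictitious") collisions are generated. A toy sketch with two regions and made-up densities (not the GridDensityMedium interface) illustrates the mechanics:

```cpp
#include <cmath>
#include <random>

// Region-wise delta tracking over the ray [0, 2], split at t = 1, with a
// separate majorant per region. A tentative collision that crosses a
// region boundary is clamped to the boundary and tracking restarts with
// the next region's majorant; this is valid by the memorylessness of the
// exponential distribution. Returns a one-sample transmittance estimate.
float TrDeltaTracking(std::mt19937 &rng) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    const float tMax = 2.f, boundary = 1.f;
    float t = 0.f;
    while (t < tMax) {
        bool first = t < boundary;
        float majorant  = first ? 1.f : 2.f;   // per-region bound on sigma_t
        float sigma_t   = first ? 0.5f : 1.5f; // actual extinction (made up)
        float regionEnd = first ? boundary : tMax;
        t += -std::log(1.f - u(rng)) / majorant;
        if (t >= regionEnd) { t = regionEnd; continue; }  // crossed boundary
        if (u(rng) < sigma_t / majorant) return 0.f;      // real collision
        // otherwise a null collision: keep tracking
    }
    return 1.f;  // escaped the medium
}
```

With a single global majorant of 2 instead, the first region would generate roughly twice as many tentative collisions for the same answer, which is the inefficiency their spatial data structure targets.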
③ The current sampling algorithm in the
GridDensityMedium is based purely on the accumulated
attenuation. While this is more effective than sampling uniformly, it
misses the fact that it is also desirable to sample scattering events at
points where the scattering coefficient is relatively large, since those
points contribute more to the overall result.
Kulla and Fajardo (2012) describe an approach based on sampling the medium
at a number of points along each ray and computing a PDF for the product of
the transmittance and the scattering coefficient. Sampling
from this distribution gives much better results than sampling based on the
transmittance alone.
Implement Kulla and Fajardo’s technique in pbrt, and compare the Monte
Carlo efficiency of their method to the method currently implemented in
GridDensityMedium. Are there scenes where their approach is less
effective?
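The tabulate-and-invert step of their method can be sketched as follows, under the simplifying assumptions of uniformly spaced samples and callback-based coefficient lookups (an illustrative stand-in for querying a GridDensityMedium):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Kulla & Fajardo (2012)-style distance sampling sketch: evaluate the
// product of transmittance and scattering coefficient at n points along
// the ray, build a piecewise-constant PDF from the samples, and invert
// its CDF to turn a uniform sample u into a scattering distance.
float SampleProductDistance(const std::function<float(float)> &sigma_t,
                            const std::function<float(float)> &sigma_s,
                            float tMax, int n, float u, float *pdf) {
    float dt = tMax / n, tr = 1.f;
    std::vector<float> f(n);
    std::vector<float> cdf(n + 1, 0.f);
    for (int i = 0; i < n; ++i) {
        float t = (i + 0.5f) * dt;
        tr *= std::exp(-sigma_t(t) * dt);   // accumulated transmittance
        f[i] = tr * sigma_s(t);             // product being importance sampled
        cdf[i + 1] = cdf[i] + f[i] * dt;
    }
    float target = u * cdf[n];
    int i = 0;
    while (i + 1 < n && cdf[i + 1] < target) ++i;  // linear search for clarity
    *pdf = f[i] / cdf[n];                           // per-unit-length PDF
    return i * dt + (target - cdf[i]) / std::max(f[i], 1e-8f);
}
```

In the degenerate case of zero extinction and constant scattering, this reduces to uniform sampling along the ray, which is a useful sanity check for an implementation.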
③ As described in Section 15.3.1,
the current VolPathIntegrator implementation spends unnecessary
effort computing ray–primitive intersections in scenes with optically
dense scattering media: a medium interaction closer than the surface will
often be sampled, in which case the surface intersection goes unused.
Modify the system so that medium
interactions are sampled before ray–primitive intersections are tested.
Reduce the ray’s tMax extent when a medium interaction is sampled
before performing primitive intersections. Measure the change in
performance for scenes with both optically thin and optically thick
participating media. (Use a fairly geometrically complex scene so that the
cost of ray–primitive intersections isn’t negligible.) If your results
show that the most efficient approach varies depending on the medium
scattering properties, implement an approach to automatically choose
between the two strategies at run time based on the medium’s
characteristics.
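The proposed reordering can be sketched for a homogeneous medium, where the interaction distance has a closed form; the struct and function names here are illustrative, not pbrt's:

```cpp
#include <cmath>

struct MediumSample {
    bool sampledMedium;  // true if a medium interaction was sampled
    float t;             // distance of the interaction (or unchanged tMax)
};

// Sample the medium interaction before any primitive intersection test and,
// if one occurs inside [0, tMax), shrink tMax so that the subsequent
// (expensive) ray-primitive intersection can terminate earlier.
MediumSample SampleMediumFirst(float sigma_t, float u, float *tMax) {
    float tMedium = -std::log(1.f - u) / sigma_t;  // exponential distance
    if (tMedium < *tMax) {
        *tMax = tMedium;  // clamp before the primitive tests run
        return {true, tMedium};
    }
    return {false, *tMax};  // ray exits the medium; test primitives as usual
}
```

For optically thin media, tMedium usually exceeds tMax and the clamp rarely helps, which is why the exercise asks you to measure both regimes.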
③ The Medium abstraction currently doesn’t make it
possible to represent emissive media, and the volume-aware integrators don’t
account for volumetric emission. Modify the system so that emission from a
3D volume can be described, and update one or more Integrator
implementations to account for emissive media in their lighting
calculations. For the code related to sampling incident radiance, it may
be worthwhile to read the paper by Villemin and Hery (2013) on Monte Carlo
sampling of 3D emissive volumes.
③ Compare rendering subsurface scattering with a BSSRDF to
brute force integration of the same underlying medium properties with the
VolPathIntegrator. (Recall that in high-albedo media, paths of
hundreds or thousands of bounces may be necessary to compute accurate
results.) Compare scenes with a variety of scattering properties,
including both low and high albedos. Render images that demonstrate cases
where the BSSRDF approximation introduces noticeable error but Monte Carlo
computes a correct result. How much slower is the Monte Carlo approach for
cases where the BSSRDF is accurate?
③ Donner et al. (2009) performed extensive numerical
simulation of subsurface scattering from media with a wide range of
scattering properties and then computed coefficients to fit an analytical
model to the resulting data. They showed that rendering with this
model is more efficient than full Monte Carlo integration, while still
handling well many cases where the approximations of most BSSRDF models
are unacceptable. For example, their model accounts for directional
variation in the scattered radiance and handles media with low and medium
albedos well. Read their paper and download the data files of coefficients.
Implement a new BSSRDF in pbrt that uses their model, and render
images showing cases where it gives better results than the current BSSRDF
implementation.