Further Reading

Sampling Theory and Aliasing

One of the best books on signal processing, sampling, reconstruction, and the Fourier transform is Bracewell’s The Fourier Transform and Its Applications (2000). Glassner’s Principles of Digital Image Synthesis (1995) has a series of chapters on the theory and application of uniform and nonuniform sampling and reconstruction to computer graphics. For an extensive survey of the history of and techniques for interpolation of sampled data, including the sampling theorem, see Meijering (2002). Unser (2000) also surveyed developments in sampling and reconstruction theory, including the move away from focusing purely on band-limited functions. For more recent work in this area, see Eldar and Michaeli (2009).

Crow (1977) first identified aliasing as a major source of artifacts in computer-generated images. Using nonuniform sampling to turn aliasing into noise was introduced by Cook (1986) and Dippé and Wold (1985); their work was based on experiments by Yellott (1983), who investigated the distribution of photoreceptors in the eyes of monkeys. Dippé and Wold also first introduced the pixel filtering equation to graphics and developed a Poisson sample pattern with a minimum distance between samples. Lee, Redner, and Uselton (1985) developed a technique for adaptive sampling based on statistical tests that computed images to a given error tolerance. Mitchell investigated sampling patterns for ray tracing extensively. His 1987 and 1991 SIGGRAPH papers on this topic have many key insights.

Heckbert (1990a) wrote an article that explains possible pitfalls when using floating-point coordinates for pixels and develops the conventions used here.

Mitchell (1996b) investigated how much better stratified sampling patterns are than random patterns in practice. In general, the smoother the function being sampled, the more effective they are. For very quickly changing functions (e.g., pixel regions overlapped by complex geometry), sophisticated stratified patterns perform no better than unstratified random patterns. Therefore, for scenes with complex variation in the high-dimensional image function, the advantages of fancy sampling schemes compared to a simple stratified pattern are reduced.

Chiu, Shirley, and Wang (1994) suggested a multijittered 2D sampling technique based on randomly shuffling the x and y coordinates of a canonical jittered pattern that combines the properties of stratified and Latin hypercube approaches. More recently, Kensler (2013) showed that using the same permutation for both dimensions with their method gives much better results than independent permutations; this approach gives lower discrepancy than the Sobol′ pattern while also maintaining the perceptual advantages of turning aliasing into noise due to using jittered samples.
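
To make the construction concrete, here is a minimal sketch of Chiu, Shirley, and Wang's approach; the Point2f struct and the use of the C++ standard library RNG are stand-ins rather than pbrt's own types. The canonical arrangement stratifies the samples on both an n x n grid and an n^2 x n^2 Latin hypercube grid, and the shuffles that follow permute x coordinates within each coarse column and y coordinates within each coarse row, which preserves both stratifications. Kensler's variant applies the same permutation to both dimensions rather than independent ones.

    #include <random>
    #include <utility>
    #include <vector>

    struct Point2f { float x, y; };

    // Generate n*n multi-jittered samples in [0,1)^2. Sample (i, j) starts in
    // coarse cell (i, j), fine x column i*n + j, and fine y row j*n + i.
    std::vector<Point2f> MultiJitter(int n, std::mt19937 &rng) {
        std::uniform_real_distribution<float> u(0.f, 1.f);
        std::vector<Point2f> s(n * n);
        float inv = 1.f / n, invSq = 1.f / (n * n);
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                s[j * n + i].x = i * inv + (j + u(rng)) * invSq;
                s[j * n + i].y = j * inv + (i + u(rng)) * invSq;
            }
        // Shuffle x coordinates among the samples of each coarse column and y
        // coordinates among the samples of each coarse row (Fisher-Yates).
        for (int i = 0; i < n; ++i)
            for (int j = n - 1; j > 0; --j) {
                int k = std::uniform_int_distribution<int>(0, j)(rng);
                std::swap(s[j * n + i].x, s[k * n + i].x);
            }
        for (int j = 0; j < n; ++j)
            for (int i = n - 1; i > 0; --i) {
                int k = std::uniform_int_distribution<int>(0, i)(rng);
                std::swap(s[j * n + i].y, s[j * n + k].y);
            }
        return s;
    }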

Lagae and Dutré (2008c) surveyed the state of the art in generating Poisson disk sample patterns and compared the quality of the point sets that various algorithms generated. Of recent work in this area, see in particular the papers by Jones (2005), Dunbar and Humphreys (2006), Wei (2008), Li et al. (2010), and Ebeida et al. (2011, 2012). We note, however, the importance of Mitchell’s (1991) observations that an n-dimensional Poisson disk distribution is not the ideal one for general integration problems in graphics; while it’s useful for the projection of the first two dimensions on the image plane to have the Poisson-disk property, it’s important that the other dimensions be more widely distributed than the Poisson-disk quality alone guarantees. Recently, Reinert et al. (2015) proposed a construction for n-dimensional Poisson disk samples that retain their characteristic sample separation under projection onto lower dimensional subsets.
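
As a point of reference for these algorithms, the classic dart-throwing construction makes the Poisson disk property explicit: uniformly distributed candidates are rejected if they lie closer than a minimum distance to any previously accepted point. The brute-force sketch below is only an illustration of the property; the papers cited above give far more efficient and higher-quality constructions.

    #include <random>
    #include <vector>

    struct Point2f { float x, y; };

    // Brute-force dart throwing in [0,1)^2: accept a candidate only if it is
    // at least minDist from every sample accepted so far; maxTries bounds the
    // number of candidates considered.
    std::vector<Point2f> DartThrow(float minDist, int maxTries, std::mt19937 &rng) {
        std::uniform_real_distribution<float> u(0.f, 1.f);
        std::vector<Point2f> pts;
        for (int t = 0; t < maxTries; ++t) {
            Point2f c{u(rng), u(rng)};
            bool farEnough = true;
            for (const Point2f &p : pts) {
                float dx = p.x - c.x, dy = p.y - c.y;
                if (dx * dx + dy * dy < minDist * minDist) {
                    farEnough = false;
                    break;
                }
            }
            if (farEnough) pts.push_back(c);
        }
        return pts;
    }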

pbrt doesn’t include samplers that perform adaptive sampling—taking more samples in parts of the image with large variation. Though adaptive sampling has been an active area of research, our experience with the resulting algorithms has been that while most work well in some cases, few are robust across a wide range of scenes. Since the initial work in adaptive sampling by Lee et al. (1985), Kajiya (1986), and Purgathofer (1987), a number of sophisticated and effective adaptive sampling methods have been developed. Notable work includes Hachisuka et al. (2008a), who adaptively sampled in the 5D domain of image location, time, and lens position, rather than just in image location, and introduced a novel multidimensional filtering method; Shinya (1993) and Egan et al. (2009), who developed adaptive sampling and reconstruction methods focused on rendering motion blur; and Overbeck et al. (2009), who developed adaptive sampling algorithms based on wavelets for image reconstruction. Recently, Belcour et al. (2013) computed covariance in the 5D (image, time, and lens defocus) domain and applied adaptive sampling and high-quality reconstruction, and Moon et al. (2014) applied local regression theory to this problem.

Kirk and Arvo (1991) identified a subtle problem with adaptive sampling algorithms: in short, if a set of samples is both used to decide whether more samples should be taken and added to the image, the end result is biased and doesn’t converge to the correct result in the limit. Mitchell (1987) observed that standard image reconstruction techniques fail in the presence of adaptive sampling: the contribution of a dense clump of samples in part of the filter’s extent may incorrectly have a large effect on the final value purely due to the number of samples taken in that region. He described a multi-stage box filter that addresses this issue.

Compressed sensing is a recent approach to sampling where the required sampling rate depends on the sparsity of the signal, not its frequency content. Sen and Darabi (2011) applied compressed sensing to rendering, allowing them to generate high-quality images at very low sampling rates.

Low-Discrepancy Sampling

Shirley (1991) first introduced the use of discrepancy to evaluate the quality of sample patterns in computer graphics. This work was built upon by Mitchell (1992), Dobkin and Mitchell (1993), and Dobkin, Eppstein, and Mitchell (1996). One important observation in Dobkin et al.’s paper is that the box discrepancy measure used in this chapter and in other work that applies discrepancy to pixel sampling patterns isn’t particularly appropriate for measuring a sampling pattern’s accuracy at randomly oriented edges through a pixel and that a discrepancy measure based on random edges should be used instead. This observation explains why some theoretically good low-discrepancy patterns do not perform as well as expected when used for image sampling.
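
For concreteness, the box (star) discrepancy referred to here measures the worst-case difference between the fraction of the N sample points that fall inside an axis-aligned box anchored at the origin and that box's area; in two dimensions, in standard notation,

    D_N^*(P) = \sup_{B = [0, v_1] \times [0, v_2]}
        \left| \frac{\#\{\, x_i \in B \,\}}{N} - v_1 v_2 \right|.

Dobkin et al.'s point is that replacing these axis-aligned boxes with regions bounded by randomly oriented edges through the pixel yields a measure that better predicts performance for image sampling.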

Mitchell’s first paper on discrepancy introduced the idea of using deterministic low-discrepancy sequences for sampling, removing all randomness in the interest of lower discrepancy (Mitchell 1992). Such quasi-random sequences are the basis of quasi–Monte Carlo methods, which will be described in Chapter 13. The seminal book on quasi-random sampling and algorithms for generating low-discrepancy patterns was written by Niederreiter (1992). For a more recent treatment, see Dick and Pillichshammer’s excellent book (2010).

Faure (1992) described a deterministic approach for computing permutations for scrambled radical inverses. The implementation of the ComputeRadicalInversePermutations() function in this chapter uses random permutations, which are simpler to implement and work nearly as well in practice. The algorithms used for computing sample indices within given pixels in Sections 7.4 and 7.7 were introduced by Grünschloß et al. (2012).
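
A simplified sketch of the scrambled radical inverse computation follows; it is not pbrt's code, but it shows how a digit permutation like the randomly generated tables mentioned above is applied. The digits of the index a in the given base are mirrored about the radix point with each digit remapped through the permutation, and the infinite tail of zero digits, all of which map to perm[0], contributes a geometric series.

    #include <cstdint>
    #include <vector>

    // Scrambled radical inverse of index a in the given base, using a digit
    // permutation perm of {0, ..., base-1}. The trailing zero digits of a all
    // remap to perm[0]; their contribution is the closed-form geometric series
    // added at the end.
    double ScrambledRadicalInverse(uint64_t a, int base, const std::vector<int> &perm) {
        double invBase = 1.0 / base, invBaseN = 1.0;
        uint64_t reversed = 0;
        while (a) {
            uint64_t next = a / base;
            int digit = int(a - next * base);
            reversed = reversed * base + perm[digit];
            invBaseN *= invBase;
            a = next;
        }
        return invBaseN * (reversed + invBase * perm[0] / (1 - invBase));
    }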

Keller and collaborators have investigated quasi-random sampling patterns for a variety of applications in graphics (Keller 1996, 1997, 2001). The (0, 2)-sequence sampling techniques used in the ZeroTwoSequenceSampler are based on a paper by Kollig and Keller (2002). (0, 2)-sequences are one instance of a general type of low-discrepancy sequence known as (t, s)-sequences and (t, m, s)-nets. These are discussed further by Niederreiter (1992) and Dick and Pillichshammer (2010). Some of Kollig and Keller’s techniques are based on algorithms developed by Friedel and Keller (2000). Keller (2001, 2006) argued that because low-discrepancy patterns tend to converge more quickly than others, they are the most efficient sampling approach for generating high-quality imagery.

The MaxMinDistSampler in Section 7.6 is based on generator matrices found by Grünschloß and collaborators (2008, 2009). Sobol′ (1967) introduced the family of generator matrices used in Section 7.7; Wächter’s Ph.D. dissertation discusses high-performance implementation of base-2 generator matrix operations (Wächter 2008). The Sobol′ generator matrices our implementation uses are improved versions derived by Joe and Kuo (2008).
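
In the base-2 setting, evaluating one dimension of such a sequence reduces to a handful of bitwise operations: the set bits of the sample index select columns of the generator matrix, the selected columns are combined with XOR, and the resulting bit vector is interpreted as a fixed-point fraction in [0, 1). The sketch below assumes the 32x32 matrix is stored as 32 column bit vectors, one uint32_t each; it illustrates the idea rather than reproducing pbrt's implementation.

    #include <algorithm>
    #include <cstdint>

    // Sample value for index a in one dimension of a base-2 digital sequence
    // with generator matrix C, stored as 32 columns of bits. Each set bit of a
    // selects a column; the columns are XORed together and scaled into [0, 1).
    float SampleGeneratorMatrix(const uint32_t C[32], uint32_t a) {
        uint32_t v = 0;
        for (int i = 0; a != 0; ++i, a >>= 1)
            if (a & 1) v ^= C[i];
        // Scale by 2^-32 and clamp so the result stays strictly below 1.
        return std::min(v * 0x1p-32f, 0.99999994f);
    }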

Filtering and Reconstruction

Cook (1986) first introduced the Gaussian filter to graphics. Mitchell and Netravali (1988) investigated a family of filters using experiments with human observers to find the most effective ones; the MitchellFilter in this chapter is the one they chose as the best. Kajiya and Ullner (1981) investigated image filtering methods that account for the reconstruction performed by the Gaussian brightness falloff of pixels on CRT displays, and, more recently, Betrisey et al. (2000) described Microsoft’s ClearType technology for display of text on LCDs. Alim (2013) has recently applied reconstruction techniques that attempt to minimize the error between the reconstructed image and the original continuous image, even in the presence of discontinuities.
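
For reference, the one-dimensional Mitchell-Netravali kernel is the piecewise cubic below, parameterized by B and C; B = C = 1/3 is the setting Mitchell and Netravali recommended on the basis of their experiments. pbrt's MitchellFilter evaluates a kernel from this family separably in x and y with the argument scaled by the filter radius; the function here is just the raw kernel on its natural domain, which is nonzero for |x| < 2.

    #include <cmath>

    // The Mitchell-Netravali piecewise-cubic filter kernel; B mainly controls
    // blurring and C mainly controls ringing. Nonzero only for |x| < 2.
    float MitchellNetravali1D(float x, float B = 1.f / 3.f, float C = 1.f / 3.f) {
        x = std::abs(x);
        if (x < 1)
            return ((12 - 9 * B - 6 * C) * x * x * x +
                    (-18 + 12 * B + 6 * C) * x * x +
                    (6 - 2 * B)) * (1.f / 6.f);
        else if (x < 2)
            return ((-B - 6 * C) * x * x * x +
                    (6 * B + 30 * C) * x * x +
                    (-12 * B - 48 * C) * x +
                    (8 * B + 24 * C)) * (1.f / 6.f);
        return 0.f;
    }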

There has been quite a bit of research into reconstruction filters for image resampling applications. Although this application is not the same as reconstructing nonuniform samples for image synthesis, much of this experience is applicable. Turkowski (1990a) reported that the Lanczos windowed sinc filter gives the best results of a number of filters for image resampling. Meijering et al. (1999) tested a variety of filters for image resampling by applying a series of transformations to images such that if perfect resampling had been done the final image would be the same as the original. They also found that the Lanczos window performed well (as did a few others) and that truncating the sinc without a window gave some of the worst results. Other work in this area includes papers by Möller et al. (1997) and Machiraju and Yagel (1996).
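
The Lanczos windowed sinc filter mentioned above multiplies the sinc function by one wide lobe of another sinc (the window) and truncates the product beyond tau lobes; tau = 3 is a common choice for resampling. A short sketch:

    #include <cmath>

    // Lanczos windowed sinc: sinc(x) multiplied by the window sinc(x / tau),
    // truncated to zero for |x| >= tau.
    float LanczosSinc(float x, float tau = 3.f) {
        const float Pi = 3.14159265358979323846f;
        auto sinc = [Pi](float v) {
            v = std::abs(v);
            return (v < 1e-5f) ? 1.f : std::sin(Pi * v) / (Pi * v);
        };
        x = std::abs(x);
        return (x < tau) ? sinc(x) * sinc(x / tau) : 0.f;
    }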

Even with a fixed sampling rate, clever reconstruction algorithms can be useful to improve image quality. See, for example, Reshetov (2009), who used image gradients to find edges across multiple pixels to estimate pixel coverage for antialiasing, and Guertin et al. (2014), who developed a filtering approach for motion blur.

Lee and Redner (1990) first suggested using a median filter, where the median of a set of samples is used to find each pixel’s value, as a noise reduction technique. More recently, Lehtinen et al. (2011, 2012), Kalantari and Sen (2013), Rousselle et al. (2012, 2013), Delbracio et al. (2014), Munkberg et al. (2014), and Bauszat et al. (2015) have developed filtering techniques to reduce noise in images rendered using Monte Carlo algorithms. Kalantari et al. (2015) applied machine learning to the problem of finding effective denoising filters and demonstrated impressive results.

Jensen and Christensen (1995) observed that it can be more effective to separate out the contributions to pixel values based on the type of illumination they represent; low-frequency indirect illumination can be filtered differently from high-frequency direct illumination, thus reducing noise in the final image. They developed an effective filtering technique based on this observation. An improvement to this approach was developed by Keller and collaborators with the discontinuity buffer (Keller 1998; Wald et al. 2002). In addition to filtering slowly varying quantities like indirect illumination separately from more quickly varying quantities like surface reflectance, the discontinuity buffer uses geometric quantities like the surface normal at nearby pixels to determine whether their corresponding values can be reasonably included at the current pixel. Kontkanen et al. (2004) built on these approaches to develop a filtering method for indirect illumination when using the irradiance caching algorithm.
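
A sketch of the kind of geometric test involved follows. The idea of comparing surface normals and depths is representative of this family of techniques, but the particular form and thresholds here are illustrative rather than taken from the cited papers; a filter along these lines would only blend a neighboring pixel's slowly varying indirect illumination into the current pixel when the test passes.

    #include <cmath>

    struct Vector3f { float x, y, z; };

    inline float Dot(const Vector3f &a, const Vector3f &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Decide whether the geometry at a neighboring pixel is similar enough to
    // the current pixel's for its indirect illumination to be shared. Normals
    // are assumed to be unit length; the thresholds are illustrative defaults.
    bool GeometryCompatible(const Vector3f &n0, float depth0,
                            const Vector3f &n1, float depth1,
                            float cosThreshold = 0.95f, float depthTol = 0.02f) {
        if (Dot(n0, n1) < cosThreshold) return false;  // normal discontinuity
        if (std::abs(depth0 - depth1) > depthTol * std::abs(depth0))
            return false;                              // depth discontinuity
        return true;
    }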

Lessig et al. (2014) proposed a general framework for constructing quadrature rules tailored to specific integration problems such as stochastic ray tracing, spherical harmonics projection, and scattering by surfaces. When targeting band-limited functions, their approach subsumes the frequency-space approach presented in this chapter. An excellent tutorial about the underlying theory of reproducing kernel bases is provided in the article’s supplemental material.

Perceptual Issues

A number of different approaches have been developed for mapping out-of-gamut colors to the displayable range; see Rougeron and Péroche’s survey article for discussion of this issue and references to various approaches (Rougeron and Péroche 1998). This topic was also covered by Hall (1989).

Tone reproduction—algorithms for displaying high-dynamic-range images on low-dynamic-range display devices—became an active area of research starting with the work of Tumblin and Rushmeier (1993). The survey article of Devlin et al. (2002) summarizes most of the work in this area through 2002, giving pointers to the original papers. See Reinhard et al.’s book (2010) on high dynamic range imaging, which includes comprehensive coverage of this topic through 2010. More recently, Reinhard et al. (2012) have developed tone reproduction algorithms that consider both accurate brightness and color reproduction together, also accounting for the display and viewing environment.

The human visual system generally causes surfaces to be perceived as having the color of their underlying reflectance, regardless of the illumination spectrum—for example, white paper is perceived to be white, even under the yellowish illumination of an incandescent lightbulb. A number of methods have been developed to process photographs to perform white balancing to eliminate the tinge of light source colors; see Gijsenij et al. (2011) for a survey. White balancing is challenging, since the only information available to white balancing algorithms is the final pixel values. In a renderer, the problem is easier, as information is available directly about the light sources and the surface reflection properties; Wilkie and Weidlich (2009) developed an efficient method to perform accurate white balancing in a renderer with limited computational overhead.

For background information on properties of the human visual system, Wandell’s book on vision is an excellent starting point (Wandell 1995). Ferwerda (2001) presented an overview of the human visual system for applications in graphics, and Malacara (2002) gave a concise overview of color theory and basic properties of how the human visual system processes color.

References

  1. Alim, U. R. 2013. Rendering in shift-invariant spaces. In Proceedings of Graphics Interface 2013, 189–96.
  2. Bauszat, P., M. Eisemann, E. Eisemann, and M. Magnor. 2015. General and robust error estimation and reconstruction for Monte Carlo rendering. Computer Graphics Forum (Proceedings of Eurographics 2015) 34 (2), 597–608.
  3. Belcour, L., C. Soler, K. Subr, N. Holzschuch, and F. Durand. 2013. 5D covariance tracing for efficient defocus and motion blur. ACM Transactions on Graphics 32 (3), 31:1–31:18.
  4. Betrisey, C., J. F. Blinn, B. Dresevic, B. Hill, G. Hitchcock, B. Keely, D. P. Mitchell, J. C. Platt, and T. Whitted. 2000. Displaced filtering for patterned displays. Society for Information Display International Symposium. Digest of Technical Papers 31, 296–99.
  5. Bracewell, R. N. 2000. The Fourier Transform and Its Applications. New York: McGraw-Hill.
  6. Chiu, K., P. Shirley, and C. Wang. 1994. Multi-jittered sampling. In P. Heckbert (Ed.), Graphics Gems IV, 370–74. San Diego: Academic Press.
  7. Cook, R. L. 1986. Stochastic sampling in computer graphics. ACM Transactions on Graphics 5 (1), 51–72.
  8. Crow, F. C. 1977. The aliasing problem in computer-generated shaded images. Communications of the ACM 20 (11), 799–805.
  9. Dammertz, S., and A. Keller. Image synthesis by rank-1 lattices. Monte Carlo and Quasi-Monte Carlo Methods 2006, 217–36.
  10. Delbracio, M., P. Musé, A. Buades, J. Chauvier, N. Phelps, and J.-M. Morel. 2014. Boosting Monte Carlo rendering by ray histogram fusion. ACM Transactions on Graphics 33 (1), 8:1–8:15.
  11. Devlin, K., A. Chalmers, A. Wilkie, and W. Purgathofer. 2002. Tone reproduction and physically based spectral rendering. In D. Fellner and R. Scopignio (Eds.), Proceedings of Eurographics 2002, 101–23. The Eurographics Association.
  12. Dick, J., and F. Pillichshammer. 2010. Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge: Cambridge University Press.
  13. Dippé, M. A. Z., and E. H. Wold. 1985. Antialiasing through stochastic sampling. Computer Graphics (SIGGRAPH ’85 Proceedings), 19, 69–78.
  14. Dobkin, D. P., and D. P. Mitchell. 1993. Random-edge discrepancy of supersampling patterns. In Proceedings of Graphics Interface 1993, Toronto, Ontario, 62–69. Canadian Information Processing Society.
  15. Dobkin, D. P., D. Eppstein, and D. P. Mitchell. 1996. Computing the discrepancy with applications to supersampling patterns. ACM Transactions on Graphics 15 (4), 354–76.
  16. Dorsey, J. O., F. X. Sillion, and D. P. Greenberg. 1991. Design and simulation of opera lighting and projection effects. In Computer Graphics (Proceedings of SIGGRAPH ’91), 25, 41–50.
  17. Dunbar, D., and G. Humphreys. 2006. A spatial data structure for fast Poisson-disk sample generation. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2006) 25 (3), 503–08.
  18. Ebeida, M., A. Davidson, A. Patney, P. Knupp, S. Mitchell, and J. D. Owens. 2011. Efficient maximal Poisson-disk sampling. ACM Transactions on Graphics 30 (4), 49:1–49:12.
  19. Ebeida, M., S. Mitchell, A. Patney, A. Davidson, and J. D. Owens. 2012. A simple algorithm for maximal Poisson-disk sampling in high dimensions. Computer Graphics Forum (Proceedings of Eurographics 2012) 31 (2), 785–94.
  20. Egan, K., Y.-T. Tseng, N. Holzschuch, F. Durand, and R. Ramamoorthi. 2009. Frequency analysis and sheared reconstruction for rendering motion blur. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2009) 28 (3), 93:1–93:13.
  21. Eldar, Y. C., and T. Michaeli. 2009. Beyond bandlimited sampling. IEEE Signal Processing Magazine 26 (3), 48–68.
  22. Faure, H. 1992. Good permutations for extreme discrepancy. Journal of Number Theory 42, 47–56.
  23. Ferwerda, J. A. 2001. Elements of early vision for computer graphics. IEEE Computer Graphics and Applications 21 (5), 22–33.
  24. Friedel, I., and A. Keller. 2000. Fast generation of randomized low discrepancy point sets. In Monte Carlo and Quasi-Monte Carlo Methods 2000, 257–73. Berlin: Springer-Verlag.
  25. Gershbein, R., and P. M. Hanrahan. 2000. A fast relighting engine for interactive cinematic lighting design. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, 353–58.
  26. Gijsenij, A., T. Gevers, and J. van de Weijer. 2011. Computational color constancy: survey and experiments. IEEE Transactions on Image Processing 20 (9), 2475–89.
  27. Glassner, A. 1995. Principles of Digital Image Synthesis. San Francisco: Morgan Kaufmann.
  28. Gortler, S. J., R. Grzeszczuk, R. Szeliski, and M. F. Cohen. 1996. The lumigraph. In Proceedings of SIGGRAPH ’96, Computer Graphics Proceedings, Annual Conference Series, 43–54.
  29. Grünschloß, L., and A. Keller. 2009. (t, m, s)-nets and maximized minimum distance, Part II. In P. L’Ecuyer and A. Owen (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2008.
  30. Grünschloß, L., J. Hanika, R. Schwede, and A. Keller. 2008. (t, m, s)-nets and maximized minimum distance. In A. Keller, S. Heinrich, and H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006. Berlin: Springer-Verlag.
  31. Grünschloß, L., M. Raab, and A. Keller. 2012. Enumerating quasi-Monte Carlo point sequences in elementary intervals. In H. Wozniakowski and L. Plaskota (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2010.
  32. Guertin, J.-P., M. McGuire, and D. Nowrouzezahrai. 2014. A fast and stable feature-aware motion blur filter. In Proceedings of High Performance Graphics 2014.
  33. Hachisuka, T., W. Jarosz, R. P. Weistroffer, K. Dale, G. Humphreys, M. Zwicker, and H. W. Jensen. 2008a. Multidimensional adaptive sampling and reconstruction for ray tracing. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2008) 27 (3), 33:1–33:10.
  34. Hall, R. 1989. Illumination and Color in Computer Generated Imagery. New York: Springer-Verlag.
  35. Heckbert, P. S. 1990a. What are the coordinates of a pixel? In A. S. Glassner (Ed.), Graphics Gems I, 246–48. San Diego: Academic Press.
  36. Jensen, H. W., and N. Christensen. 1995. Optimizing path tracing using noise reduction filters. In Proceedings of WSCG, 134–42.
  37. Joe, S., and F.-Y. Kuo. 2008. Constructing Sobol′ sequences with better two-dimensional projections. SIAM J. Sci. Comput. 30, 2635–54.
  38. Jones, T. 2005. Efficient generation of Poisson-disk sampling patterns. Journal of Graphics Tools 11 (2), 27–36.
  39. Kajiya, J. T. 1986. The rendering equation. In Computer Graphics (SIGGRAPH ’86 Proceedings), 20, 143–50.
  40. Kajiya, J., and M. Ullner. 1981. Filtering high quality text for display on raster scan devices. In Computer Graphics (Proceedings of SIGGRAPH ’81), 7–15.
  41. Kalantari, N. K., and P. Sen. 2013. Removing the noise in Monte Carlo rendering with general image denoising algorithms. Computer Graphics Forum (Proceedings of Eurographics 2013) 32 (2), 93–102.
  42. Kalantari, N. K., S. Bako, and P. Sen. 2015. A machine learning approach for filtering Monte Carlo noise. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015) 34 (4), 122:1–122:12.
  43. Keller, A. 2006. Myths of computer graphics. In Monte Carlo and Quasi-Monte Carlo Methods 2004, Berlin: Springer-Verlag, 217–43.
  44. Keller, A. Stratification by rank-1 lattices. Monte Carlo and Quasi-Monte Carlo Methods 2002. Berlin: Springer-Verlag.
  45. Keller, A. 1998. Quasi-Monte Carlo methods for photorealistic image synthesis. Ph.D. thesis, Shaker Verlag Aachen.
  46. Keller, A. 1996. Quasi-Monte Carlo radiosity. In X. Pueyo and P. Schröder (Eds.), Eurographics Rendering Workshop 1996, 101–10.
  47. Keller, A. 1997. Instant radiosity. In Proceedings of SIGGRAPH ’97, Computer Graphics Proceedings, Annual Conference Series, Los Angeles, 49–56.
  48. Keller, A. 2001. Strictly deterministic sampling methods in computer graphics. mental images Technical Report. Also in SIGGRAPH 2003 Monte Carlo Course Notes.
  49. Kensler, A. 2013. Correlated multi-jittered sampling. Pixar Technical Memo 13-01.
  50. Kirk, D. B., and J. Arvo. 1991. Unbiased sampling techniques for image synthesis. Computer Graphics (SIGGRAPH ’91 Proceedings), Volume 25, 153–56.
  51. Kollig, T., and A. Keller. 2002. Efficient multidimensional sampling. Computer Graphics Forum (Proceedings of Eurographics 2002), Volume 21, 557–63.
  52. Kontkanen, J., J. Räsänen, and A. Keller. 2004. Irradiance filtering for Monte Carlo ray tracing. Monte Carlo and Quasi-Monte Carlo Methods, 259–72.
  53. Lagae, A., and P. Dutré. 2008c. A comparison of methods for generating Poisson disk distributions. Computer Graphics Forum 27 (1), 114–29.
  54. Lee, M. E., R. A. Redner, and S. P. Uselton. 1985. Statistically optimized sampling for distributed ray tracing. In Computer Graphics (Proceedings of SIGGRAPH ’85), Volume 19, 61–67.
  55. Lee, M., and R. Redner. 1990. A note on the use of nonlinear filtering in computer graphics. IEEE Computer Graphics and Applications 10 (3), 23–29.
  56. Lehtinen, J., T. Aila, J. Chen, S. Laine, and F. Durand. 2011. Temporal light field reconstruction for rendering distribution effects. ACM SIGGRAPH 2011 Papers, 55:1–55:12.
  57. Lehtinen, J., T. Aila, S. Laine, and F. Durand. 2012. Reconstructing the indirect light field for global illumination. ACM Transactions on Graphics 31 (4), 51:1–51:10.
  58. Lessig, C., M. Desbrun, and E. Fiume. 2014. A constructive theory of sampling for image synthesis using reproducing kernel bases. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014) 33 (4), 55:1–55:14.
  59. Levoy, M., and P. M. Hanrahan. 1996. Light field rendering. In Proceedings of SIGGRAPH ’96, Computer Graphics Proceedings, Annual Conference Series, 31–42.
  60. Li, H., L.-Y. Wei, P. Sander, and C.-W. Fu. 2010. Anisotropic blue noise sampling. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2010) 29 (6), 167:1–167:12.
  61. Möller, T., R. Machiraju, K. Mueller, and R. Yagel. 1997. Evaluation and design of filters using a Taylor series expansion. IEEE Transactions on Visualization and Computer Graphics 3 (2), 184–99.
  62. Machiraju, R., and R. Yagel. 1996. Reconstruction error characterization and control: a sampling theory approach. IEEE Transactions on Visualization and Computer Graphics 2 (4).
  63. Malacara, D. 2002. Color Vision and Colorimetry: Theory and Applications. SPIE—The International Society for Optical Engineering.
  64. Meijering, E. 2002. A chronology of interpolation: from ancient astronomy to modern signal and image processing. In Proceedings of the IEEE 90 (3), 319–42.
  65. Meijering, E. H. W., W. J. Niessen, J. P. W. Pluim, and M. A. Viergever. 1999. Quantitative comparison of sinc-approximating kernels for medical image interpolation. In C. Taylor and A. Colchester (Eds.), Medical Image Computing and Computer-Assisted Intervention—MICCAI 1999, 210–17. Berlin: Springer-Verlag.
  66. Mitchell, D. P. 1987. Generating antialiased images at low sampling densities. Computer Graphics (SIGGRAPH ’87 Proceedings), Volume 21, 65–72.
  67. Mitchell, D. P. 1991. Spectrally optimal sampling for distributed ray tracing. Computer Graphics (SIGGRAPH ’91 Proceedings), Volume 25, 157–64.
  68. Mitchell, D. P. 1992. Ray tracing and irregularities of distribution. In Third Eurographics Workshop on Rendering, Bristol, United Kingdom, 61–69.
  69. Mitchell, D. P. 1996b. Consequences of stratified sampling in graphics. In Proceedings of SIGGRAPH ’96, Computer Graphics Proceedings, Annual Conference Series, New Orleans, Louisiana, 277–80.
  70. Mitchell, D. P., and A. N. Netravali. 1988. Reconstruction filters in computer graphics. Computer Graphics (SIGGRAPH ’88 Proceedings), Volume 22, 221–28.
  71. Moon, B., N. Carr, and S.-E. Yoon. 2014. Adaptive rendering based on weighted local regression. ACM Transactions on Graphics 33 (5), 170:1–170:14.
  72. Munkberg, J., K. Vaidyanathan, J. Hasselgren, P. Clarberg, and T. Akenine-Möller. 2014. Layered reconstruction for defocus and motion blur. Computer Graphics Forum 33, 81–92.
  73. Niederreiter, H. 1992. Random Number Generation and Quasi–Monte Carlo Methods. Philadelphia: Society for Industrial and Applied Mathematics.
  74. Overbeck, R., C. Donner, and R. Ramamoorthi. 2009. Adaptive wavelet rendering. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2009) 28 (5), 140:1–140:12.
  75. Perlin, K. 1985a. An image synthesizer. In Computer Graphics (SIGGRAPH ’85 Proceedings), Volume 19, 287–96.
  76. Purgathofer, W. 1987. A statistical method for adaptive stochastic sampling. Computers & Graphics 11 (2), 157–62.
  77. Reinert, B., T. Ritschel, H.-P. Seidel, and I. Georgiev. 2015. Projective blue-noise sampling. In Computer Graphics Forum.
  78. Reinhard, E., G. Ward, P. Debevec, S. Pattanaik, W. Heidrich, and K. Myszkowski. 2010. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. San Francisco: Morgan Kaufmann.
  79. Reinhard, E., T. Pouli, T. Kunkel, B. Long, A. Ballestad, and G. Damberg. 2012. Calibrated image appearance reproduction. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2012) 31 (6), 201:1–201:11.
  80. Reshetov, A. 2009. Morphological antialiasing. In Proceedings of High Performance Graphics 2009.
  81. Rougeron, G., and B. Péroche. 1998. Color fidelity in computer graphics: a survey. Computer Graphics Forum 17 (1), 3–16.
  82. Rousselle, F., C. Knaus, and M. Zwicker. 2012. Adaptive rendering with non-local means filtering. ACM Transactions on Graphics 31 (6), 195:1–195:11.
  83. Rousselle, F., M. Manzi, and M. Zwicker. 2013. Robust denoising using feature and color information. Computer Graphics Forum (Proceedings of Pacific Graphics) 32 (7), 121–30.
  84. Saito, T., and T. Takahashi. 1990. Comprehensible rendering of 3-D shapes. In Computer Graphics (Proceedings of SIGGRAPH ’90), Volume 24, 197–206.
  85. Sen, P., and S. Darabi. 2011. Compressive rendering: a rendering application of compressed sensing. IEEE Transactions on Visualization and Computer Graphics 17 (4), 487–99.
  86. Shade, J., S. J. Gortler, L. W. He, and R. Szeliski. 1998. Layered depth images. In Proceedings of SIGGRAPH 98, Computer Graphics Proceedings, Annual Conference Series, 231–42.
  87. Shinya, M. 1993. Spatial anti-aliasing for animation sequences with spatio-temporal filtering. In Proceedings of SIGGRAPH ’93, Computer Graphics Proceedings, Annual Conference Series, 289–96.
  88. Shirley, P. 1991. Discrepancy as a quality measure for sample distributions. Eurographics ’91, 183–94.
  89. Smith, A. R. 1995. A pixel is not a little square, a pixel is not a little square, a pixel is not a little square! (and a voxel is not a little cube). Microsoft Technical Memo 6.
  90. Sobol′, I. 1967. On the distribution of points in a cube and the approximate evaluation of integrals. Zh. vychisl. Mat. mat. Fiz. 7 (4), 784–802.
  91. Tumblin, J., and H. E. Rushmeier. 1993. Tone reproduction for realistic images. IEEE Computer Graphics and Applications 13 (6), 42–48.
  92. Turkowski, K. 1990a. Filters for common resampling tasks. In A. S. Glassner (Ed.), Graphics Gems I, 147–65. San Diego: Academic Press.
  93. Unser, M. 2000. Sampling—50 years after Shannon. In Proceedings of the IEEE 88 (4), 569–87.
  94. Wächter, C. A. 2008. Quasi Monte Carlo light transport simulation by efficient ray tracing. Ph.D. thesis, University of Ulm.
  95. Wald, I., T. Kollig, C. Benthin, A. Keller, and P. Slusallek. 2002. Interactive global illumination using fast ray tracing. In Rendering Techniques 2002: 13th Eurographics Workshop on Rendering, 15–24.
  96. Wandell, B. 1995. Foundations of Vision. Sunderland, Massachusetts: Sinauer Associates.
  97. Warren, H. 2006. Hacker’s Delight. Reading, Massachusetts: Addison-Wesley.
  98. Wei, L.-Y. 2008. Parallel Poisson disk sampling. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2008) 27 (3), 20:1–20:10.
  99. Wilkie, A., and A. Weidlich. 2009. A robust illumination estimate for chromatic adaptation in rendered images. Computer Graphics Forum (Proceedings of the 2009 Eurographics Symposium on Rendering) 28 (4), 1101–09.
  100. Yellott, J. I. 1983. Spectral consequences of photoreceptor sampling in the Rhesus retina. Science 221, 382–85.