6.1 Camera Model

The abstract Camera base class holds generic camera options and defines the interface that all camera implementations must provide. It is defined in the files core/camera.h and core/camera.cpp.

<<Camera Declarations>>= 
class Camera {
  public:
    <<Camera Interface>>
    <<Camera Public Data>>
};

The base Camera constructor takes several parameters that are appropriate for all camera types. One of the most important is the transformation that places the camera in the scene, which is stored in the CameraToWorld member variable. The Camera stores an AnimatedTransform (rather than just a regular Transform) so that the camera itself can be moving over time.
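As an illustration of how such a transformation might be put together, the following sketch builds an animated camera-to-world transform from two keyframe poses bracketing the shutter interval. It assumes pbrt's LookAt() function, which returns a world-to-camera transform, and the AnimatedTransform constructor that takes start and end transforms with their associated times; the particular poses and times are arbitrary values chosen for the example.

// Sketch: an animated camera-to-world transform from two keyframes.
Transform w2cStart = LookAt(Point3f(0, 0, -5), Point3f(0, 0, 0),
                            Vector3f(0, 1, 0));
Transform w2cEnd   = LookAt(Point3f(1, 0, -5), Point3f(0, 0, 0),
                            Vector3f(0, 1, 0));
// LookAt() gives world-to-camera transforms; invert them to get
// camera-to-world. The Transforms must outlive the AnimatedTransform,
// which stores pointers to them.
Transform c2wStart = Inverse(w2cStart), c2wEnd = Inverse(w2cEnd);
AnimatedTransform cameraToWorld(&c2wStart, 0.f, &c2wEnd, 1.f);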

Real-world cameras have a shutter that opens for a short period of time to expose the film to light. One result of this nonzero exposure time is motion blur: objects that are in motion relative to the camera during the exposure are blurred. All Cameras therefore store a shutter open and shutter close time and are responsible for generating rays with associated times at which to sample the scene. Given an appropriate distribution of ray times between the shutter open time and the shutter close time, it is possible to compute images that exhibit motion blur.

Cameras also contain a pointer to an instance of the Film class to represent the final image (Film is described in Section 7.9), and a pointer to a Medium instance to represent the scattering medium that the camera lies in (Medium is described in Section 11.3).

Camera implementations must pass along parameters that set these values to the Camera constructor. We will only show the constructor’s prototype here because its implementation just copies the parameters to the corresponding member variables.

<<Camera Interface>>= 
Camera(const AnimatedTransform &CameraToWorld, Float shutterOpen,
       Float shutterClose, Film *film, const Medium *medium);

<<Camera Public Data>>= 
AnimatedTransform CameraToWorld;
const Float shutterOpen, shutterClose;
Film *film;
const Medium *medium;

The first method that camera subclasses need to implement is Camera::GenerateRay(), which should compute the ray corresponding to a given sample. It is important that the direction component of the returned ray be normalized—many other parts of the system will depend on this behavior.

<<Camera Interface>>+=  
virtual Float GenerateRay(const CameraSample &sample, Ray *ray) const = 0;

The CameraSample structure holds all of the sample values needed to specify a camera ray. Its pFilm member gives the point on the film to which the generated ray carries radiance. The point on the lens the ray passes through is in pLens (for cameras that include the notion of lenses), and CameraSample::time gives the time at which the ray should sample the scene; implementations should use this value to linearly interpolate within the shutterOpen–shutterClose time range. (Choosing these various sample values carefully can greatly increase the quality of final images; this is the topic of much of Chapter 7.)

GenerateRay() also returns a floating-point value that affects how much the radiance arriving at the film plane along the generated ray will contribute to the final image. Simple camera models can just return a value of 1, but cameras that simulate real physical lens systems like the one in Section 6.4 set this value to indicate how much light the ray carries through the lenses based on their optical properties. (See Sections 6.4.7 and 13.6.6 for more information about how exactly this weight is computed and used.)

<<Camera Declarations>>+=  
struct CameraSample {
    Point2f pFilm;
    Point2f pLens;
    Float time;
};
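To make the interface concrete, here is a minimal sketch of how a hypothetical pinhole-style camera might implement GenerateRay(); it is not one of pbrt's actual camera models. The MinimalCamera class and its RasterToCamera member, a transform from raster (film) coordinates to camera space, are assumptions made for the example.

// A hedged sketch, not pbrt's actual code: a pinhole-style GenerateRay().
Float MinimalCamera::GenerateRay(const CameraSample &sample,
                                 Ray *ray) const {
    // Map the film sample point into camera space.
    Point3f pCamera = RasterToCamera(Point3f(sample.pFilm.x,
                                             sample.pFilm.y, 0));
    // All rays leave the camera-space origin; the direction is
    // normalized, as the interface requires.
    *ray = Ray(Point3f(0, 0, 0), Normalize(Vector3f(pCamera)));
    // Linearly interpolate the sample time across the shutter interval.
    ray->time = Lerp(sample.time, shutterOpen, shutterClose);
    ray->medium = medium;
    // CameraToWorld is an AnimatedTransform, so the ray is transformed
    // using the camera pose at ray->time.
    *ray = CameraToWorld(*ray);
    // A simple camera model contributes with unit weight.
    return 1;
}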

The GenerateRayDifferential() method computes a main ray like GenerateRay() but also computes the corresponding rays for pixels shifted one pixel in the x and y directions on the film plane. This information about how camera rays change as a function of position on the film helps give other parts of the system a notion of how much of the film area a particular camera ray’s sample represents, which is particularly useful for anti-aliasing texture map lookups.

<<Camera Method Definitions>>= 
Float Camera::GenerateRayDifferential(const CameraSample &sample,
        RayDifferential *rd) const {
    Float wt = GenerateRay(sample, rd);
    <<Find camera ray after shifting one pixel in the x direction>>
    <<Find camera ray after shifting one pixel in the y direction>>
    rd->hasDifferentials = true;
    return wt;
}

Finding the ray for one pixel over in x is just a matter of initializing a new CameraSample and copying the appropriate values returned by calling GenerateRay() into the RayDifferential structure. The implementation of the fragment <<Find camera ray after shifting one pixel in the y direction>> follows similarly and isn’t included here.

<<Find camera ray after shifting one pixel in the x direction>>= 
CameraSample sshift = sample;
sshift.pFilm.x++;
Ray rx;
Float wtx = GenerateRay(sshift, &rx);
if (wtx == 0) return 0;
rd->rxOrigin = rx.o;
rd->rxDirection = rx.d;
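As a usage note, callers typically generate the differential ray and then adjust it for the sampling rate. The following sketch is along the lines of pbrt's SamplerIntegrator, which shrinks the differentials when multiple samples are taken per pixel, since the film-plane spacing between samples is then less than one pixel; the camera and sampler pointers are assumed to be in scope.

RayDifferential ray;
Float rayWeight = camera->GenerateRayDifferential(cameraSample, &ray);
// With n samples per pixel, adjacent samples are roughly 1/sqrt(n)
// pixels apart, so the one-pixel differentials are scaled down.
ray.ScaleDifferentials(1 / std::sqrt((Float)sampler->samplesPerPixel));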

6.1.1 Camera Coordinate Spaces

We have already made use of two important modeling coordinate spaces, object space and world space. We will now introduce an additional coordinate space, camera space, which has the camera at its origin. We have:

  • Object space: This is the coordinate system in which geometric primitives are defined. For example, spheres in pbrt are defined to be centered at the origin of their object space.
  • World space: While each primitive may have its own object space, all objects in the scene are placed in relation to a single world space. Each primitive has an object-to-world transformation that determines where it is located in world space. World space is the standard frame that all other spaces are defined in terms of.
  • Camera space: A camera is placed in the scene at some world space point with a particular viewing direction and orientation. This camera defines a new coordinate system with its origin at the camera’s location. The z axis of this coordinate system is mapped to the viewing direction, and the y axis is mapped to the up direction. This is a handy space for reasoning about which objects are potentially visible to the camera. For example, if an object’s camera space bounding box is entirely behind the z = 0 plane (and the camera doesn’t have a field of view wider than 180 degrees), the object will not be visible to the camera; the sketch below illustrates this test.
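Here is a small sketch of that conservative visibility test, assuming pbrt's Bounds3f bounding box type and a world-to-camera Transform (the inverse of the camera-to-world transform at some fixed time); the PotentiallyVisible() function is hypothetical.

// pbrt's camera space looks down the +z axis, so a bound that lies
// entirely at z <= 0 is behind the camera and cannot be visible when
// the field of view is under 180 degrees.
bool PotentiallyVisible(const Bounds3f &worldBound,
                        const Transform &worldToCamera) {
    Bounds3f camBound = worldToCamera(worldBound);
    return camBound.pMax.z > 0;
}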