5.3 Spherical Camera

One advantage of ray tracing compared to scan line or rasterization-based rendering methods is that it is easy to employ unusual image projections. We have great freedom in how the image sample positions are mapped into ray directions, since the rendering algorithm does not depend on properties such as straight lines in the scene always projecting to straight lines in the image.

In this section, we will describe a camera model that traces rays in all directions around a point in the scene, giving a view of everything that is visible from that point. The SphericalCamera supports two spherical parameterizations from Section 3.8 to map points in the image to associated directions. Figure 5.16 shows this camera in action with the San Miguel model.

<<SphericalCamera Definition>>= 
class SphericalCamera : public CameraBase {
  public:
    <<SphericalCamera::Mapping Definition>>
    <<SphericalCamera Public Methods>>
  private:
    <<SphericalCamera Private Members>>
};

Figure 5.16: The San Miguel scene rendered with the SphericalCamera, which traces rays in all directions from the camera position. (a) Rendered using an equirectangular mapping. (b) Rendered with an equal-area mapping. (Scene courtesy of Guillermo M. Leal Llaguno.)

SphericalCamera does not derive from ProjectiveCamera since the projections that it uses are nonlinear and cannot be captured by a single 4 × 4 matrix.

<<SphericalCamera Public Methods>>= 
SphericalCamera(CameraBaseParameters baseParameters, Mapping mapping)
    : CameraBase(baseParameters), mapping(mapping) {
    <<Compute minimum differentials for SphericalCamera>>
}

The first mapping that SphericalCamera supports is the equirectangular mapping that was defined in Section 3.8.3. In the implementation here, θ values range from 0 at the top of the image to π at the bottom of the image, and φ values range from 0 to 2π, moving from left to right across the image.

The equirectangular mapping is easy to evaluate and has the advantage that lines of constant latitude and longitude on the sphere remain straight. However, it preserves neither area nor angles between curves on the sphere (i.e., it is not conformal). These issues are especially evident at the top and bottom of the image in Figure 5.16(a).

Therefore, the SphericalCamera also supports the equal-area mapping from Section 3.8.3. With this mapping, any finite solid angle of directions on the sphere maps to the same area in the image, regardless of where it is on the sphere. (This mapping is also used by the ImageInfiniteLight, which is described in Section 12.5.2, and so images rendered using this camera can be used as light sources.) The equal-area mapping’s use with the SphericalCamera is shown in Figure 5.16(b).

An enumeration reflects which mapping should be used.

<<SphericalCamera::Mapping Definition>>= 
enum Mapping { EquiRectangular, EqualArea };

<<SphericalCamera Private Members>>= 
Mapping mapping;

The main task of the GenerateRay() method is to apply the requested mapping. The rest of it follows the earlier GenerateRay() methods.

<<SphericalCamera Method Definitions>>= 
pstd::optional<CameraRay> SphericalCamera::GenerateRay(
        CameraSample sample, SampledWavelengths &lambda) const {
    <<Compute spherical camera ray direction>>
    Ray ray(Point3f(0, 0, 0), dir, SampleTime(sample.time), medium);
    return CameraRay{RenderFromCamera(ray)};
}

For both mappings, (u, v) coordinates in NDC space are found by dividing the raster-space sample location by the image's overall resolution. Then, after the mapping is applied, the y and z coordinates are swapped to account for the fact that both mappings are defined with z as the "up" direction, while y is "up" in camera space.

<<Compute spherical camera ray direction>>= 
Point2f uv(sample.pfilm.x / film.FullResolution().x,
           sample.pfilm.y / film.FullResolution().y);
Vector3f dir;
if (mapping == EquiRectangular) {
    <<Compute ray direction using equirectangular mapping>>
} else {
    <<Compute ray direction using equal-area mapping>>
}
pstd::swap(dir.y, dir.z);

For the equirectangular mapping, the (u, v) coordinates are scaled to cover the (θ, φ) range and the spherical coordinate formula is used to compute the ray direction.

<<Compute ray direction using equirectangular mapping>>= 
Float theta = Pi * uv[1], phi = 2 * Pi * uv[0];
dir = SphericalDirection(std::sin(theta), std::cos(theta), phi);

The (u, v) values for the CameraSample may be slightly outside of the range [0, 1]², due to the pixel sample filter function. A call to WrapEqualAreaSquare() takes care of handling the boundary conditions before EqualAreaSquareToSphere() performs the actual mapping.

<<Compute ray direction using equal-area mapping>>= 
uv = WrapEqualAreaSquare(uv);
dir = EqualAreaSquareToSphere(uv);