12.4 Area Lights

Area lights are defined by the combination of a Shape and a directional distribution of radiance at each point on its surface. In general, computing radiometric quantities related to area lights requires computing integrals over the surface of the light that often cannot be computed in closed form, though they are well suited to Monte Carlo integration. The reward for this complexity (and computational expense) is soft shadows and more realistic lighting effects, rather than the hard shadows and stark lighting that come from point lights. (See Figure 12.15, which shows the effect of varying the size of an area light source used to illuminate the dragon; compare its soft look to illumination from a point light in Figure 12.3.)
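The kind of Monte Carlo estimate involved can be illustrated with a small standalone sketch (none of these names are pbrt's): it estimates the irradiance at a point on the axis of a uniformly emitting disk by uniformly sampling points on the disk's area. For this particular configuration there happens to be a closed form, πLr²/(r² + d²), that the estimate converges to.

```cpp
#include <cmath>
#include <random>

// Hypothetical standalone sketch: Monte Carlo estimate of the irradiance at
// a point on the axis of a diffuse disk emitter (radius r, uniform radiance
// Le) at distance d, by uniformly sampling the disk's area.
double EstimateDiskIrradiance(double r, double d, double Le, int n,
                              unsigned seed) {
    const double kPi = 3.14159265358979323846;
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double area = kPi * r * r;  // uniform area sampling: pdf = 1 / area
    double sum = 0;
    for (int i = 0; i < n; ++i) {
        // Uniformly distributed point on the disk (sqrt warp in radius).
        double rad = r * std::sqrt(u(rng));
        double phi = 2 * kPi * u(rng);
        double x = rad * std::cos(phi), y = rad * std::sin(phi);
        // Squared distance from the on-axis receiving point.
        double dist2 = x * x + y * y + d * d;
        // For this geometry, cos(theta) at the receiver and at the light
        // are equal: both normals are parallel to the axis.
        double cosTheta = d / std::sqrt(dist2);
        sum += Le * cosTheta * cosTheta / dist2;
    }
    // Average and divide by the pdf (i.e., multiply by the area).
    return area * sum / n;
}
```

With r = d = Le = 1, the closed-form answer is π/2 ≈ 1.5708; the estimate approaches it as the sample count grows, with no closed-form evaluation of the integral required—which is the point, since for general shapes, emission distributions, and receiving points no closed form exists.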

Figure 12.15: Dragon Model Illuminated by Disk Area Lights. (a) The disk’s radius is relatively small; the shadow has soft penumbrae, but otherwise the image looks similar to the one with a point light. (b) The effect of using a much larger disk: not only have the penumbrae become much larger, to the point of nearly eliminating the shadow of the tail, for example, but note also how the shading on the body is smoother, with the specular highlights less visible due to illumination coming from a wider range of directions. (Dragon model courtesy of the Stanford Computer Graphics Laboratory.)

The DiffuseAreaLight class defines an area light where emission at each point on the surface has a uniform directional distribution.

<<DiffuseAreaLight Definition>>= 
class DiffuseAreaLight : public LightBase {
  public:
    <<DiffuseAreaLight Public Methods>>
  private:
    <<DiffuseAreaLight Private Members>>
    <<DiffuseAreaLight Private Methods>>
};

Its constructor, not included here, sets the following member variables from the parameters provided to it. If an alpha texture has been associated with the shape to cut away parts of its surface, it is used here so that there is no illumination from those parts of the shape. (Recall that alpha masking was introduced in Section 7.1.1.) The area of the emissive Shape is needed in a number of the following methods and so is cached in a member variable.

<<DiffuseAreaLight Private Members>>= 
Shape shape;
FloatTexture alpha;
Float area;

A number of parameters specify emission from DiffuseAreaLights. By default, emission is only on one side of the surface, where the surface normal is outward-facing. A scaling transform that flips the normal or the ReverseOrientation directive in the scene description file can be used to cause emission to be on the other side of the surface. If twoSided is true, then the light emits on both sides.

Emission that varies over the surface can be defined using an Image; if one is provided to the constructor, the surface will have spatially varying emission defined by its color values. Otherwise, spatially uniform emitted spectral radiance is given by a provided Lemit spectrum. For both methods of specifying emission, an additional scale factor in scale is applied to the returned radiance.

<<DiffuseAreaLight Private Members>>+= 
bool twoSided;
const DenselySampledSpectrum *Lemit;
Float scale;
Image image;
const RGBColorSpace *imageColorSpace;

Recall from Section 12.1 that the Light interface includes an L() method that area lights must implement to provide the emitted radiance at a specified point on their surface. This method is called if a ray happens to intersect an emissive surface, for example. DiffuseAreaLight’s implementation starts by checking a few cases in which there is no emitted radiance before calculating emission using the Image, if provided, and otherwise the specified constant radiance.

<<DiffuseAreaLight Public Methods>>= 
SampledSpectrum L(Point3f p, Normal3f n, Point2f uv, Vector3f w,
                  const SampledWavelengths &lambda) const {
    <<Check for zero emitted radiance from point on area light>>
    if (image) {
        <<Return DiffuseAreaLight emission using image>>
    } else
        return scale * Lemit->Sample(lambda);
}

Two cases allow immediately returning no emitted radiance: the first is if the light is one-sided and the outgoing direction ω faces away from the surface normal; the second is if the point on the light’s surface has been cut away by an alpha texture.

<<Check for zero emitted radiance from point on area light>>= 
if (!twoSided && Dot(n, w) < 0)
    return SampledSpectrum(0.f);
if (AlphaMasked(Interaction(p, uv)))
    return SampledSpectrum(0.f);

The AlphaMasked() method performs a stochastic alpha test for a point on the light.

<<DiffuseAreaLight Private Methods>>= 
bool AlphaMasked(const Interaction &intr) const {
    if (!alpha) return false;
    Float a = UniversalTextureEvaluator()(alpha, intr);
    if (a >= 1) return false;
    if (a <= 0) return true;
    return HashFloat(intr.p()) > a;
}
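To see why the hashed comparison works, here is a self-contained sketch of the same idea, with a made-up hash standing in for pbrt's HashFloat(): each point deterministically passes or fails, but across many points the masked fraction approaches 1 − α, so the light's expected emission is attenuated by exactly the alpha value.

```cpp
#include <cstdint>
#include <cstring>

// Sketch of a stochastic alpha test in the spirit of AlphaMasked(): hash the
// point's coordinates into a deterministic value u in [0,1) and mask the
// point when u > alpha. The same point always gets the same answer, yet over
// many points a fraction of roughly (1 - alpha) is masked. The hash here is
// an illustrative stand-in, not pbrt's HashFloat().
bool StochasticAlphaMasked(double px, double py, double pz, double alpha) {
    if (alpha >= 1) return false;  // fully opaque: never masked
    if (alpha <= 0) return true;   // fully cut away: always masked
    uint64_t bits[3], h = 0x9e3779b97f4a7c15ULL;
    std::memcpy(&bits[0], &px, sizeof(double));
    std::memcpy(&bits[1], &py, sizeof(double));
    std::memcpy(&bits[2], &pz, sizeof(double));
    for (uint64_t b : bits) {  // combine coordinates, then mix thoroughly
        h ^= b + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
        h ^= h >> 33;
        h *= 0xff51afd7ed558ccdULL;
        h ^= h >> 33;
    }
    double u = (h >> 11) * 0x1p-53;  // top 53 bits -> uniform in [0,1)
    return u > alpha;                // masked with probability ~(1 - alpha)
}
```

Using a hash rather than a random number generator is the key design choice: it keeps the test consistent, so repeated queries at the same point (e.g., from shadow rays and radiance lookups) always agree.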

If an Image has been provided to specify emission, then the emitted radiance is found by looking up an RGB value and converting it to the requested spectral samples. Note that the v coordinate is inverted before being passed to BilerpChannel(); in this way, the parameterization matches the image texture coordinate conventions that were described in Section 10.4.2. (See Figure 6.26 for a scene with an area light source with emission defined using an image.)

<<Return DiffuseAreaLight emission using image>>= 
RGB rgb;
uv[1] = 1 - uv[1];
for (int c = 0; c < 3; ++c)
    rgb[c] = image.BilerpChannel(uv, c);
RGBIlluminantSpectrum spec(*imageColorSpace, ClampZero(rgb));
return scale * spec.Sample(lambda);

For convenience, we will add a method to the SurfaceInteraction class that makes it easy to compute the emitted radiance at a surface point intersected by a ray.

<<SurfaceInteraction Method Definitions>>+= 
SampledSpectrum SurfaceInteraction::Le(Vector3f w,
                                       const SampledWavelengths &lambda) const {
    return areaLight ? areaLight.L(p(), n, uv, w, lambda)
                     : SampledSpectrum(0.f);
}

All the SampleLi() methods so far have been deterministic: because all the preceding light models have been defined in terms of Dirac delta distributions of either position or direction, there has only been a single incident direction along which illumination arrives at any point. This is no longer the case with area lights and we will finally make use of the uniform 2D sample u.

<<DiffuseAreaLight Method Definitions>>= 
pstd::optional<LightLiSample>
DiffuseAreaLight::SampleLi(LightSampleContext ctx, Point2f u,
                           SampledWavelengths lambda,
                           bool allowIncompletePDF) const {
    <<Sample point on shape for DiffuseAreaLight>>
    <<Check sampled point on shape against alpha texture, if present>>
    <<Return LightLiSample for sampled point on shape>>
}

The second variant of Shape::Sample(), which takes a receiving point and returns a point on the shape and PDF expressed with respect to solid angle at the receiving point, is an exact match for the Light SampleLi() interface. Therefore, the implementation starts by calling that method.
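The relationship between the two measures is worth making explicit. A density p_A with respect to surface area on the light converts to a density with respect to solid angle at the receiving point via p_ω = p_A · dist² / |cos θ_l|, where θ_l is measured from the light's surface normal. The following hypothetical helper (not part of pbrt's interfaces) performs that conversion:

```cpp
#include <cmath>

// Change of measure from an area-measure PDF on an emitter to a
// solid-angle-measure PDF at the receiving point:
//     p_omega = p_A * dist^2 / |cos(theta_l)|
// where theta_l is the angle between the light's surface normal and the
// direction back toward the receiver. Illustrative helper, not pbrt code.
double AreaToSolidAnglePdf(double pdfArea, double dist2,
                           double cosThetaLight) {
    double absCos = std::fabs(cosThetaLight);
    if (absCos == 0) return 0;  // edge-on: the point subtends no solid angle
    return pdfArea * dist2 / absCos;
}
```

For example, uniformly sampling a unit-area patch (p_A = 1) seen head-on from distance 2 gives p_ω = 4: the farther or more obliquely oriented the light, the smaller the solid angle it subtends, so each direction toward it carries a proportionally higher density.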

The astute reader will note that if an image is being used to define the light’s emission, leaving the sampling task to the shape alone may not be ideal. Yet, extending the Shape’s sampling interface to optionally take a reference to an Image or some other representation of spatially varying emission would be a clunky addition. pbrt’s solution to this problem is that BilinearPatch shapes (but no others) allow specifying an image to use for sampling. To have to specify this information twice in the scene description is admittedly not ideal, but it suffices to make the common case of a quadrilateral emitter with an image work out.

<<Sample point on shape for DiffuseAreaLight>>= 
ShapeSampleContext shapeCtx(ctx.pi, ctx.n, ctx.ns, 0 /* time */);
pstd::optional<ShapeSample> ss = shape.Sample(shapeCtx, u);
if (!ss || ss->pdf == 0 || LengthSquared(ss->intr.p() - ctx.p()) == 0)
    return {};
ss->intr.mediumInterface = &mediumInterface;

If the sampled point has been masked by the alpha texture, an invalid sample is returned.

<<Check sampled point on shape against alpha texture, if present>>= 
if (AlphaMasked(ss->intr))
    return {};

If the shape has generated a valid sample, the next step is to compute the emitted radiance at the sample point. If that is a zero-valued spectrum, then an unset sample value is returned; calling code can then avoid the expense of tracing an unnecessary shadow ray.

<<Return LightLiSample for sampled point on shape>>= 
Vector3f wi = Normalize(ss->intr.p() - ctx.p());
SampledSpectrum Le = L(ss->intr.p(), ss->intr.n, ss->intr.uv, -wi, lambda);
if (!Le) return {};
return LightLiSample(Le, wi, ss->pdf, ss->intr);

The PDF for sampling a given direction from a receiving point is also easily handled, again thanks to Shape providing a corresponding method.

<<DiffuseAreaLight Method Definitions>>+=  
Float DiffuseAreaLight::PDF_Li(LightSampleContext ctx, Vector3f wi,
                               bool allowIncompletePDF) const {
    ShapeSampleContext shapeCtx(ctx.pi, ctx.n, ctx.ns, 0 /* time */);
    return shape.PDF(shapeCtx, wi);
}

Emitted power from an area light with uniform emitted radiance over the surface can be computed in closed form: from Equation (4.1) it follows that it is π times the surface area times the emitted radiance. If an image has been specified for the emission, its average value is computed in a fragment that is not included here. That computation neglects the effect of any alpha texture and effectively assumes that there is no distortion in the surface’s (u, v) parameterization. If those assumptions do not hold, there will be error in the Φ value.

<<DiffuseAreaLight Method Definitions>>+=  
SampledSpectrum DiffuseAreaLight::Phi(SampledWavelengths lambda) const {
    SampledSpectrum L(0.f);
    if (image) {
        <<Compute average light image emission>>
    } else
        L = Lemit->Sample(lambda) * scale;
    return Pi * (twoSided ? 2 : 1) * area * L;
}