B Scene Description Interface
This appendix describes the application programming interface (API) that is used to describe the scene to be rendered to pbrt. Users of the renderer typically don’t call the functions in this interface directly but instead describe their scenes using the text file format described in documentation on the pbrt Web site (pbrt.org). The statements in these text files have a direct correspondence to the API functions described here.
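For example, a short scene description file in pbrt's text format might look like the following (the specific shapes, materials, and parameter values here are just illustrative):

```
LookAt 0 0 5   0 0 0   0 1 0
Camera "perspective" "float fov" [60]
WorldBegin
  LightSource "point" "point from" [0 5 0]
  Material "matte" "rgb Kd" [0.5 0.5 0.8]
  Shape "sphere" "float radius" [1]
WorldEnd
```

Each statement corresponds directly to one of the API functions described in this appendix: the `LookAt` line to `pbrtLookAt()`, the `Camera` line to `pbrtCamera()`, `WorldBegin` to `pbrtWorldBegin()`, and so forth, with quoted parameter lists passed along to the corresponding function.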
The need for such an interface to the renderer is clear: there must be a convenient way in which all of the properties of the scene to be rendered can be communicated to the renderer. The interface should be well defined and general purpose, so that future extensions to the system fit into its structure cleanly. It shouldn’t be too complicated, so that it’s easy to describe scenes, but it should be expressive enough that it doesn’t leave any of the renderer’s capabilities hidden.
A key decision to make when designing a rendering API is whether to expose the system’s internal algorithms and structures or offer a high-level abstraction for describing the scene. These have historically been the two main approaches to scene description in graphics: the interface may specify how to render the scene, configuring a rendering pipeline at a low level using deep knowledge of the renderer’s internal algorithms, or it may specify what the scene’s objects, lights, and material properties are and leave it to the renderer to decide how to transform that description into the best possible image.
The first approach has been successfully used for interactive graphics. In APIs such as OpenGL® or Direct3D®, it is not possible to just mark an object as a mirror and have reflections appear automatically; rather, the user must choose an algorithm for rendering reflections, render the scene multiple times (e.g., to generate an environment map), store those images in a texture, and then configure the graphics pipeline to use the environment map when rendering the reflective object. The advantage of this approach is that the full flexibility of the rendering pipeline is exposed to the user, making it possible to carefully control the actual computation being done and to use the pipeline very efficiently. Furthermore, because APIs like these impose a very thin abstraction layer between the user and the renderer, the user can be confident that unexpected inefficiencies won’t be introduced by the API.
The second approach to scene description, based on describing the geometry, materials, and lights at a higher level of abstraction, has been most successful for applications like high-quality offline rendering. There, users are generally willing to cede control of the low-level rendering details to the renderer in exchange for the ability to specify the scene’s properties at a high level. An important advantage of the high-level approach is that it gives the implementers of these renderers greater freedom to make major changes to the internal algorithms of the system, since the API exposes less of them.
For pbrt, we will use an interface based on the descriptive approach. Because pbrt is fundamentally physically based, the API is necessarily less flexible in some ways than APIs for many nonphysically based rendering packages. For example, it is not possible to have some lights illuminate only some objects in the scene.
Another key decision to make in graphics API design is whether to use an immediate mode or a retained mode style. In an immediate mode API, the user specifies the scene via a stream of commands that the renderer processes as they arrive. In general, the user cannot make changes to the scene description data already specified (e.g., “change the material of that sphere I described previously from plastic to glass”); once it has been given to the renderer, the information is no longer accessible to the user. Retained mode APIs give the user some degree of access to the data structures that the renderer has built to represent the scene. The user can then modify the scene description in a variety of ways before finally instructing the renderer to render the scene.
Immediate mode has been very successful for interactive graphics APIs since it allows graphics hardware to draw the objects in the scene as they are supplied by the user. Since they do not need to build data structures to store the scene and since they can apply techniques like immediately culling objects that are outside of the viewing frustum without worrying that the user will change the camera position before rendering, these APIs have been key to high-performance interactive graphics.
For ray-tracing-based renderers like pbrt, where the entire scene must be described and stored in memory before rendering can begin, some of these advantages of an immediate mode interface aren’t applicable. Nonetheless, we will use immediate mode semantics in our API, since it leads to a clean and straightforward scene description language. This choice makes it more difficult to use pbrt for applications like quickly rerendering a scene after making a small change to it (e.g., moving a light source), and it makes rendering animations less straightforward, since the entire scene must be redescribed for each frame. Adding a retained mode interface to pbrt would be a challenging but useful project.
pbrt’s rendering API consists of just over 40 carefully chosen functions, all of which are declared in the core/api.h header file. The implementation of these functions is in core/api.cpp. This appendix will focus on the general process of turning the API function calls into instances of the classes that represent scenes.