A.1 The Alias Method

If many samples need to be generated from a discrete distribution, using the approach implemented in the SampleDiscrete() function would be wasteful: each generated sample would require $O(n)$ computation. That approach could be improved to $O(\log n)$ time by computing a cumulative distribution function (CDF) table once and then using binary search to generate each sample, but there is another option that is even more efficient, requiring just $O(1)$ time for each sample; that approach is the alias method.
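For reference, the CDF-based approach might look like the following sketch. It is standalone C++ rather than pbrt code, and MakeCDF() and SampleCDF() are hypothetical helpers: the table is built once in $O(n)$ time, and each sample then costs a single $O(\log n)$ binary search.

#include <algorithm>
#include <vector>

// Build a table of prefix sums of the (unnormalized) weights.
std::vector<double> MakeCDF(const std::vector<double> &weights) {
    std::vector<double> cdf(weights.size());
    double sum = 0;
    for (size_t i = 0; i < weights.size(); ++i) {
        sum += weights[i];
        cdf[i] = sum;
    }
    return cdf;
}

// Sample an index by binary searching for the first entry whose prefix
// sum exceeds u times the total weight.
int SampleCDF(const std::vector<double> &cdf, double u) {
    double target = u * cdf.back();
    auto it = std::upper_bound(cdf.begin(), cdf.end(), target);
    return std::min<int>(it - cdf.begin(), int(cdf.size()) - 1);
}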

To understand how the alias method works, first consider the task of sampling from $n$ discrete outcomes, each with equal probability. In that case, computing the value $\lfloor n\xi \rfloor$ gives a uniformly distributed index between 0 and $n-1$ and the corresponding outcome can be selected; no further work is necessary. The alias method allows a similar searchless sampling approach when the outcomes have arbitrary probabilities $p_i$.
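In code, the equal-probability case is a single index computation; the following small standalone helper (hypothetical, not part of pbrt) mirrors the clamping used later in the Sample() method to guard against $\xi n$ rounding up to exactly $n$.

#include <algorithm>

// Map a uniform sample xi in [0, 1) to one of n equally probable outcomes.
int SampleUniformDiscrete(float xi, int n) {
    // The min() guards against xi * n rounding up to n in floating point.
    return std::min(int(xi * n), n - 1);
}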

The alias method is based on creating $n$ bins, one for each outcome. Bins are sampled uniformly, and then two values stored in each bin are used to generate the final sample: if the $i$th bin was sampled, then $q_i$ gives the probability of sampling the $i$th outcome; otherwise the bin's alias is chosen, which is the index of a single alternative outcome. Though we will not include the proof here, it can be shown that this representation (the $i$th bin associated with the $i$th outcome and no more than a single alias per bin) is sufficient to represent arbitrary discrete probability distributions.

With the alias method, if the probabilities are all the same, then each bin's probability $q_i$ is one, and it reduces to the earlier example with uniform probabilities. Otherwise, for outcomes $i$ where the associated probability $p_i$ is greater than the average probability, the outcome $i$ will be stored as the alias in one or more of the other bins. For outcomes $i$ where the associated $p_i$ is less than the average probability, $q_i$ will be less than one and the alias will point to one of the higher-probability outcomes.

For a specific example, consider the probabilities $p_i = \{1/2, 1/4, 1/8, 1/8\}$. A corresponding alias table is shown in Table A.1. It is possible to see that, for example, the first sample is chosen with probability $1/2$: there is a $1/4$ probability of choosing the first table entry, in which case the first sample is always chosen. Otherwise, there is a $1/4$ probability each of choosing the second and third table entries, and for each, there is a $1/2$ chance of choosing the alias, which gives an additional $1/4$ probability in total of choosing the first sample. The other probabilities can be verified similarly.
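Written out for the first sample, the verification is
$$P(X = 1) = \frac{1}{4}\cdot 1 + \frac{1}{4}\cdot\frac{1}{2} + \frac{1}{4}\cdot\frac{1}{2} = \frac{1}{2},$$
where each term is the probability of choosing a table entry times the probability that the entry yields the first sample.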

Table A.1: A Simple Alias Table. This alias table makes it possible to generate samples from the distribution of discrete probabilities $\{1/2, 1/4, 1/8, 1/8\}$. To generate a sample, an entry is first chosen with uniform probability. Given an entry $i$, its corresponding sample is chosen with probability $q_i$ and the sample corresponding to its alias index is chosen with probability $1 - q_i$.

Index   $q_i$   Alias Index
1       1       n/a
2       0.5     1
3       0.5     1
4       0.5     2

One way to interpret an alias table is that each bin represents $1/n$ of the total probability mass. If outcomes are first allocated to their corresponding bins, then the probability mass of outcomes with probability greater than $1/n$ must be distributed to other bins that have associated probabilities less than $1/n$. This idea is illustrated in Figure A.1, which corresponds to the example of Table A.1.

Figure A.1: Graphical Representation of the Alias Table in Table A.1. One bin is allocated for each outcome and is filled by the outcome's probability, up to $1/n$. Excess probability is allocated to other bins that have probabilities less than $1/n$ and thus extra space.

The AliasTable class implements algorithms for generating and sampling from alias tables. As with the other sampling code, its implementation is found in util/sampling.h and util/sampling.cpp.

<<AliasTable Definition>>=
class AliasTable {
  public:
    <<AliasTable Public Methods>>
    AliasTable() = default;
    AliasTable(Allocator alloc) : bins(alloc) {}
    AliasTable(pstd::span<const Float> weights, Allocator alloc = {});

    PBRT_CPU_GPU
    int Sample(Float u, Float *pmf = nullptr, Float *uRemapped = nullptr) const;
    std::string ToString() const;

    size_t size() const { return bins.size(); }
    Float PMF(int index) const { return bins[index].p; }

  private:
    <<AliasTable Private Members>>
    struct Bin {
        Float q, p;
        int alias;
    };
    pstd::vector<Bin> bins;
};

Its constructor takes an array of weights, not necessarily normalized, that give the relative probabilities for the possible outcomes.
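As a brief usage sketch (the include paths, the RNG-based driver, and the example weights here are assumptions about the surrounding pbrt code rather than part of the AliasTable interface itself):

#include <pbrt/util/rng.h>
#include <pbrt/util/sampling.h>

using namespace pbrt;

void AliasTableExample() {
    // Unnormalized weights; the constructor normalizes them.
    Float weights[] = {10, 5, 2.5, 2.5};
    AliasTable table(weights);

    RNG rng;
    int counts[4] = {};
    for (int i = 0; i < 100000; ++i) {
        Float pmf;
        int index = table.Sample(rng.Uniform<Float>(), &pmf);
        // pmf matches table.PMF(index): 0.5, 0.25, 0.125, or 0.125 here.
        ++counts[index];
    }
    // counts[] should be roughly proportional to the weights.
}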

<<AliasTable Method Definitions>>=
AliasTable::AliasTable(pstd::span<const Float> weights, Allocator alloc)
    : bins(weights.size(), alloc) {
    <<Normalize weights to compute alias table PDF>>
    Float sum = std::accumulate(weights.begin(), weights.end(), 0.);
    for (size_t i = 0; i < weights.size(); ++i)
        bins[i].p = weights[i] / sum;

    <<Create alias table work lists>>
    struct Outcome {
        Float pHat;
        size_t index;
    };
    std::vector<Outcome> under, over;
    for (size_t i = 0; i < bins.size(); ++i) {
        <<Add outcome i to an alias table work list>>
        Float pHat = bins[i].p * bins.size();
        if (pHat < 1)
            under.push_back(Outcome{pHat, i});
        else
            over.push_back(Outcome{pHat, i});
    }

    <<Process under and over work item together>>
    while (!under.empty() && !over.empty()) {
        <<Remove items un and ov from the alias table work lists>>
        Outcome un = under.back(), ov = over.back();
        under.pop_back();
        over.pop_back();

        <<Initialize probability and alias for un>>
        bins[un.index].q = un.pHat;
        bins[un.index].alias = ov.index;

        <<Push excess probability on to work list>>
        Float pExcess = un.pHat + ov.pHat - 1;
        if (pExcess < 1)
            under.push_back(Outcome{pExcess, ov.index});
        else
            over.push_back(Outcome{pExcess, ov.index});
    }

    <<Handle remaining alias table work items>>
    while (!over.empty()) {
        Outcome ov = over.back();
        over.pop_back();
        bins[ov.index].q = 1;
        bins[ov.index].alias = -1;
    }
    while (!under.empty()) {
        Outcome un = under.back();
        under.pop_back();
        bins[un.index].q = 1;
        bins[un.index].alias = -1;
    }
}

The Bin structure represents an alias table bin. It stores the probability q , the corresponding outcome’s probability p , and an alias.

<<AliasTable Private Members>>=
struct Bin {
    Float q, p;
    int alias;
};
pstd::vector<Bin> bins;

We have found that with large numbers of outcomes, especially when the magnitudes of their weights vary significantly, it is important to use double precision to compute their sum so that the alias table initialization algorithm works correctly. Therefore, here std::accumulate takes the double-precision value 0. as its initial value, which in turn causes all its computation to be in double precision. Given the sum of weights, the normalized probabilities can be computed.
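The following standalone snippet (not pbrt code) illustrates the failure mode that motivates this: with one large weight, single-precision accumulation can silently drop the smaller weights entirely, which would give them zero probability.

#include <cstdio>

int main() {
    float fsum = 1e8f;   // one large weight
    double dsum = 1e8;
    for (int i = 0; i < 1000; ++i) {
        fsum += 1.f;  // each addition rounds back to 1e8f and is lost
        dsum += 1.;   // accumulates exactly in double precision
    }
    // With IEEE arithmetic this prints 100000000.0 and 100001000.0.
    std::printf("float: %.1f  double: %.1f\n", fsum, dsum);
}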

<<Normalize weights to compute alias table PDF>>=
Float sum = std::accumulate(weights.begin(), weights.end(), 0.);
for (size_t i = 0; i < weights.size(); ++i)
    bins[i].p = weights[i] / sum;

The first stage of the alias table initialization algorithm is to split the outcomes into those that have probability less than the average and those that have probability higher than the average. Two std::vectors of the Outcome structure are used for this.

<<Create alias table work lists>>=
struct Outcome {
    Float pHat;
    size_t index;
};
std::vector<Outcome> under, over;
for (size_t i = 0; i < bins.size(); ++i) {
    <<Add outcome i to an alias table work list>>
    Float pHat = bins[i].p * bins.size();
    if (pHat < 1)
        under.push_back(Outcome{pHat, i});
    else
        over.push_back(Outcome{pHat, i});
}

Here and in the remainder of the initialization phase, we will scale the individual probabilities by the number of bins $n$, working in terms of $\hat{p}_i = n\,p_i$. Thus, the average $\hat{p}$ value is 1, which will be convenient in the following.
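Because the $p_i$ sum to one, the scaled values satisfy
$$\sum_{i=1}^{n} \hat{p}_i = \sum_{i=1}^{n} n\,p_i = n,$$
so their average is exactly 1, and the comparison against 1 in the fragment below is a comparison against the average probability.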

<<Add outcome i to an alias table work list>>=
Float pHat = bins[i].p * bins.size();
if (pHat < 1)
    under.push_back(Outcome{pHat, i});
else
    over.push_back(Outcome{pHat, i});

To initialize the alias table, one outcome is taken from under and one is taken from over. Together, they make it possible to initialize the element of bins that corresponds to the outcome from under. After that bin has been initialized, the outcome from over will still have some excess probability that is not yet reflected in bins. It is added to the appropriate work list and the loop executes again until under and over are empty. This algorithm runs in $O(n)$ time.

It is not immediately obvious that this approach will successfully initialize the alias table, or that it will necessarily terminate. We will not rigorously show that here, but informally, we can see that at the start, there must be at least one item in each work list unless they all have the same probability (in which case, initialization is trivial). Then, each time through the loop, we initialize one bin, which consumes $\hat{p} = 1$ worth of probability mass. With one less bin to initialize and that much less probability to distribute, we have the same average probability over the remaining bins. That brings us to the same setting as the starting condition: some of the remaining items in the list must be above the average and some must be below, unless they are all equal to it.
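One way to make this argument slightly more precise is with an invariant (stated informally here, ignoring round-off error): each loop iteration initializes one bin and consumes exactly one unit of scaled probability, $\hat{p}_{\mathrm{un}}$ from un plus $1 - \hat{p}_{\mathrm{un}}$ from ov, so at every step
$$\sum_{\text{work-list items}} \hat{p} \;=\; \text{number of uninitialized bins}.$$
Thus the remaining $\hat{p}$ values always average to 1; unless they are all exactly 1 (in which case only the cleanup pass remains), at least one is below 1 and at least one is above, so both work lists are nonempty and the loop can continue.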

<<Process under and over work item together>>=
while (!under.empty() && !over.empty()) {
    <<Remove items un and ov from the alias table work lists>>
    Outcome un = under.back(), ov = over.back();
    under.pop_back();
    over.pop_back();

    <<Initialize probability and alias for un>>
    bins[un.index].q = un.pHat;
    bins[un.index].alias = ov.index;

    <<Push excess probability on to work list>>
    Float pExcess = un.pHat + ov.pHat - 1;
    if (pExcess < 1)
        under.push_back(Outcome{pExcess, ov.index});
    else
        over.push_back(Outcome{pExcess, ov.index});
}
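As a concrete illustration, here is how this loop proceeds for the distribution of Table A.1, where the scaled probabilities are $\hat{p} = \{2, 1, 1/2, 1/2\}$ (using the 0-based indices of the code, with items popped from the back of each list). Initially, under holds $(1/2, 2)$ and $(1/2, 3)$ and over holds $(2, 0)$ and $(1, 1)$. The first iteration pairs un $= (1/2, 3)$ with ov $= (1, 1)$: bin 3 gets $q = 1/2$ and alias 1, and the excess $1/2$ for index 1 is pushed onto under. The second pairs $(1/2, 1)$ with $(2, 0)$: bin 1 gets $q = 1/2$ and alias 0, with excess $3/2$ for index 0 pushed onto over. The third pairs $(1/2, 2)$ with $(3/2, 0)$: bin 2 gets $q = 1/2$ and alias 0, with excess $1$ for index 0 pushed onto over. under is then empty, and the cleanup pass gives bin 0 $q = 1$ with no alias, matching Table A.1 (which uses 1-based indices).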

<<Remove items un and ov from the alias table work lists>>=
Outcome un = under.back(), ov = over.back();
under.pop_back();
over.pop_back();

The probability $\hat{p}_{\mathrm{un}}$ of un must be less than one. We can initialize its bin's q with $\hat{p}_{\mathrm{un}}$, as that is equal to the probability it should be sampled if its bin is chosen. In order to allocate the remainder of the bin's probability mass, the alias is set to ov. Because $\hat{p}_{\mathrm{ov}} \ge 1$, it certainly has enough probability to fill the remainder of the bin; we just need $1 - \hat{p}_{\mathrm{un}}$ of it.

<<Initialize probability and alias for un>>=
bins[un.index].q = un.pHat;
bins[un.index].alias = ov.index;

In initializing bins[un.index], we have consumed $\hat{p} = 1$ worth of the scaled probability mass. The remainder, un.pHat + ov.pHat - 1, is the as-yet unallocated probability for ov.index; it is added to the appropriate work list based on how much is left.

<<Push excess probability on to work list>>=
Float pExcess = un.pHat + ov.pHat - 1;
if (pExcess < 1)
    under.push_back(Outcome{pExcess, ov.index});
else
    over.push_back(Outcome{pExcess, ov.index});

Due to floating-point round-off error, there may be work items remaining on either of the two work lists with the other one empty. These items have probabilities slightly less than or slightly greater than one and should be given probability $q = 1$ in the alias table. The fragment that handles this, <<Handle remaining alias table work items>>, is not included in the book.

Given an initialized alias table, sampling is easy. As described before, an entry is chosen with uniform probability and then either the corresponding sample or its alias is returned. As with the SampleDiscrete() function, a new uniform random sample derived from the original one is optionally returned.

<<AliasTable Method Definitions>>+=
int AliasTable::Sample(Float u, Float *pmf, Float *uRemapped) const {
    <<Compute alias table offset and remapped random sample up>>
    int offset = std::min<int>(u * bins.size(), bins.size() - 1);
    Float up = std::min<Float>(u * bins.size() - offset, OneMinusEpsilon);

    if (up < bins[offset].q) {
        <<Return sample for alias table at offset>>
        if (pmf)
            *pmf = bins[offset].p;
        if (uRemapped)
            *uRemapped = std::min<Float>(up / bins[offset].q, OneMinusEpsilon);
        return offset;
    } else {
        <<Return sample for alias table at alias[offset]>>
        int alias = bins[offset].alias;
        if (pmf)
            *pmf = bins[alias].p;
        if (uRemapped)
            *uRemapped = std::min<Float>((up - bins[offset].q) / (1 - bins[offset].q),
                                         OneMinusEpsilon);
        return alias;
    }
}

The index for the chosen entry is found by multiplying the random sample by the number of entries. Because u was only used for the discrete sampling decision of selecting an initial entry, it is possible to derive a new uniform random sample from it. That computation is done here to get an independent uniform sample up that is used to decide whether to sample the alias at the current entry.

<<Compute alias table offset and remapped random sample up>>=
int offset = std::min<int>(u * bins.size(), bins.size() - 1);
Float up = std::min<Float>(u * bins.size() - offset, OneMinusEpsilon);

If the initial entry is selected, the various return values are easily computed.

<<Return sample for alias table at offset>>=
if (pmf)
    *pmf = bins[offset].p;
if (uRemapped)
    *uRemapped = std::min<Float>(up / bins[offset].q, OneMinusEpsilon);
return offset;

Otherwise the appropriate values for the alias are returned.

<<Return sample for alias table at alias[offset]>>=
int alias = bins[offset].alias;
if (pmf)
    *pmf = bins[alias].p;
if (uRemapped)
    *uRemapped = std::min<Float>((up - bins[offset].q) / (1 - bins[offset].q),
                                 OneMinusEpsilon);
return alias;
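In both cases the remapping rescales the conditional distribution of up back to $[0, 1)$: writing $q$ for bins[offset].q and $u'$ for up,
$$u_{\mathrm{remapped}} = \begin{cases} u'/q, & u' < q \\ (u' - q)/(1 - q), & \text{otherwise,} \end{cases}$$
which is uniform because $u'$ is uniform on $[0, q)$ in the first case and on $[q, 1)$ in the second.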

Beyond sampling, it is useful to be able to query the size of the table and the probability of a given outcome. These two operations are easily provided.

<<AliasTable Public Methods>>=
size_t size() const { return bins.size(); }
Float PMF(int index) const { return bins[index].p; }