1. Write a program that compares Monte Carlo and one or more alternative numerical integration techniques. Structure this program so that it is easy to replace the particular function being integrated. Verify that the different techniques compute the same result (given a sufficient number of samples for each of them). Modify your program so that it draws samples from distributions other than the uniform distribution for the Monte Carlo estimate, and verify that it still computes the correct result when the correct estimator, Equation (2.7), is used. (Make sure that any alternative distributions you use have nonzero probability of choosing any value of x where f(x) > 0.)
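One possible structure for such a program, sketched in Python rather than pbrt's C++: the integrand is an ordinary callable so it can be swapped freely, the trapezoidal rule serves as the deterministic baseline, and the non-uniform test uses p(x) = 2x with its inversion-method sampler X = sqrt(U). The test integrand x² and the choice of p are illustrative, not prescribed by the exercise.

```python
import math
import random

def trapezoid(f, a, b, n=1000):
    """Deterministic trapezoidal rule over [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def mc_uniform(f, a, b, n, rng):
    """Monte Carlo estimate of the integral with uniform samples on [a, b]."""
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

def mc_importance(f, sample, pdf, n, rng):
    """General estimator (1/n) * sum of f(X_i)/p(X_i) for X_i drawn from p."""
    total = 0.0
    for _ in range(n):
        x = sample(1.0 - rng.random())   # u in (0, 1] keeps pdf(x) > 0 here
        total += f(x) / pdf(x)
    return total / n

# The integrand is a plain callable, so replacing it is trivial.
f = lambda x: x * x
rng = random.Random(1)
ref = trapezoid(f, 0.0, 1.0)                                  # ~1/3
uni = mc_uniform(f, 0.0, 1.0, 100_000, rng)
# Non-uniform density p(x) = 2x, which is nonzero wherever f(x) > 0.
imp = mc_importance(f, math.sqrt, lambda x: 2.0 * x, 100_000, rng)
```

All three estimates should agree to within Monte Carlo error; with the uniform sampler removed in favor of p(x) = 2x, only the f/p form of the estimator keeps the result unbiased.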
  2. Write a program that computes unbiased Monte Carlo estimates of the integral of a given function. Compute an estimate of the variance of the estimates by performing a series of trials with successively more samples and computing the mean squared error for each one. Demonstrate numerically that variance decreases at a rate of O(1/n).
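A minimal sketch of such an experiment in Python, again using x² as a stand-in integrand: because the estimator is unbiased, the mean squared error across independent trials approximates its variance, so quadrupling the sample count should cut the MSE by roughly a factor of 4 if the O(1/n) falloff holds.

```python
import random

def mc_estimate(f, n, rng):
    """Unbiased uniform-sample estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mse(f, exact, n, trials, rng):
    """Mean squared error over independent trials; for an unbiased
    estimator this approximates the variance at sample count n."""
    return sum((mc_estimate(f, n, rng) - exact) ** 2
               for _ in range(trials)) / trials

f = lambda x: x * x          # exact integral is 1/3; swap in any integrand
rng = random.Random(7)
mses = {n: mse(f, 1 / 3, n, 300, rng) for n in (100, 400, 1600)}
# With O(1/n) variance, each ratio below should be near 4.
ratio1 = mses[100] / mses[400]
ratio2 = mses[400] / mses[1600]
```

The ratios are themselves noisy estimates (their accuracy improves with the trial count), so expect values near 4 rather than exactly 4.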
  3. The algorithm for sampling the linear interpolation function in Section 2.3.2 implicitly assumes that a, b ≥ 0 and thus that f(x) ≥ 0. If f is negative, then the importance sampling PDF should be proportional to |f(x)|. Generalize SampleLinear() and the associated PDF and inversion functions to handle the case where f is always negative as well as the case where it crosses zero due to a and b having different signs.
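One way to approach the generalization, shown as an illustrative Python reimplementation (the names mirror pbrt's SampleLinear() but this is a sketch, not pbrt's code): if a and b are both negative, |f| = -f is again linear, so the existing sampler applies to (-a, -b). If the signs differ, f crosses zero at x0 = a/(a - b); split [0, 1] there, pick a side with probability proportional to its share of the integral of |f|, and sample each side as a nonnegative linear function.

```python
import math

def sample_linear(u, a, b):
    """pbrt-style sampler for pdf proportional to (1-x)a + xb, with a, b >= 0."""
    if u == 0 and a == 0:
        return 0.0
    x = u * (a + b) / (a + math.sqrt((1 - u) * a * a + u * b * b))
    return min(x, 1.0)

def sample_linear_signed(u, a, b):
    """Sample x in [0, 1] with pdf proportional to |(1-x)a + xb|."""
    if a >= 0 and b >= 0:
        return sample_linear(u, a, b)
    if a <= 0 and b <= 0:
        return sample_linear(u, -a, -b)      # |f| = -f is again linear
    # a and b have different signs: f crosses zero at x0.
    x0 = a / (a - b)
    area_l = abs(a) * x0 / 2                 # integral of |f| over [0, x0]
    area_r = abs(b) * (1 - x0) / 2           # integral of |f| over [x0, 1]
    wl = area_l / (area_l + area_r)
    if u < wl:
        # Left piece: |f| falls linearly from |a| to 0; remap u to [0, 1).
        return x0 * sample_linear(u / wl, abs(a), 0.0)
    # Right piece: |f| rises linearly from 0 to |b|.
    return x0 + (1 - x0) * sample_linear((u - wl) / (1 - wl), 0.0, abs(b))

def linear_abs_pdf(x, a, b):
    """PDF that sample_linear_signed draws from, evaluated at x."""
    if a * b >= 0:
        total = abs(a + b) / 2               # |f| does not change sign
    else:
        total = (a * a + b * b) / (2 * abs(a - b))
    return abs((1 - x) * a + x * b) / total

def invert_linear_sample(x, a, b):
    """CDF of |f|: the u for which sample_linear_signed returns x."""
    if a * b >= 0:
        aa, bb = abs(a), abs(b)
        return x * (2 * aa + (bb - aa) * x) / (aa + bb)
    x0 = a / (a - b)
    area_l = abs(a) * x0 / 2
    wl = area_l / (area_l + abs(b) * (1 - x0) / 2)
    if x < x0:
        t = x / x0
        return wl * (2 - t) * t
    t = (x - x0) / (1 - x0)
    return wl + (1 - wl) * t * t
```

Reusing the remapped u for the per-piece sample keeps the method a single-uniform inversion, so invert_linear_sample() remains its exact inverse, which is the property the inversion functions in Section 2.3.2 rely on.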