Gaussian processes (GPs) play a pivotal role in many complex machine learning algorithms. For example, sequential decision-making strategies such as Bayesian optimization frequently use GPs to represent different actions’ possible outcomes. Actions are then chosen by maximizing the conditional expectation of a chosen reward functional with respect to the GP posterior. These expectations quickly become intractable when dealing with more expressive reward functions, but may be efficiently estimated via Monte Carlo methods.
Sampling from GP posteriors is usually accomplished using location-scale transforms. Given a set of test locations $\mathbf{X}_*$ at which to sample, we first compute the conditional distribution of $f(\mathbf{X}_*)$ given observations $\mathbf{y}$. We then use the mean and covariance of this Gaussian conditional to draw samples. While this generative procedure is relatively error-free, it incurs cubic complexity in the number of test points (see the sketch at the end of this section). Below, we detail a novel sampling scheme based on pathwise conditioning: rather than conditioning the prior as a distribution, we update the prior as realized in terms of sample paths. This approach conveys a number of immediate advantages.
- Its complexity is linear in the number of test points.
- It yields an actual function draw that we may freely evaluate and differentiate anywhere.
- Its discretization error is easy to understand and control.
The end result of this process, efficient sampling, uses the strengths of location-scale methods to counteract the weaknesses of popular Fourier-feature-based alternatives, and vice versa.
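To make the location-scale baseline concrete, here is a minimal sketch in NumPy. The kernel choice, the `rbf_kernel` helper, the noiseless-observation setup, and all names are illustrative assumptions rather than part of the method described above.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix; a stand-in for any kernel k."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def location_scale_sample(X_train, y_train, X_test, num_samples, jitter=1e-8):
    """Draw posterior samples f(X_test) | y via the conditional mean and covariance."""
    K_nn = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    K_sn = rbf_kernel(X_test, X_train)
    K_ss = rbf_kernel(X_test, X_test)

    mean = K_sn @ np.linalg.solve(K_nn, y_train)
    cov = K_ss - K_sn @ np.linalg.solve(K_nn, K_sn.T)

    # Cholesky factor of the test covariance: cubic in the number of test points.
    L = np.linalg.cholesky(cov + jitter * np.eye(len(X_test)))
    eps = np.random.randn(num_samples, len(X_test))
    return mean + eps @ L.T
```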
Pathwise updates for Gaussian process posteriors
A Gaussian process $f \sim \mathcal{GP}(0, k)$ is a distribution over functions with Gaussian marginals, meaning that $f(\mathbf{X}) = [f(x_1), \ldots, f(x_n)]^\top$ is multivariate normal for any finite set of locations $\mathbf{X}$. Given observations $\mathbf{y}$ of $f$ at training locations $\mathbf{X}$, the ensuing posterior is typically portrayed as a Gaussian distribution with moments

$$\boldsymbol{\mu}_{* \mid \mathbf{y}} = \mathbf{K}_{*,n} \mathbf{K}_{n,n}^{-1} \mathbf{y}, \qquad \mathbf{K}_{*,* \mid \mathbf{y}} = \mathbf{K}_{*,*} - \mathbf{K}_{*,n} \mathbf{K}_{n,n}^{-1} \mathbf{K}_{n,*},$$

where $\mathbf{K}_{n,n} = k(\mathbf{X}, \mathbf{X})$, $\mathbf{K}_{*,n} = k(\mathbf{X}_*, \mathbf{X})$, and $\mathbf{K}_{*,*} = k(\mathbf{X}_*, \mathbf{X}_*)$ for test locations $\mathbf{X}_*$.
This way of writing Gaussian posteriors mirrors the standard way of thinking about them: in terms of mean vectors and covariance matrices.1 A less familiar but equally valid way of expressing Gaussian conditionals is given by Matheron's rule: if $\mathbf{a}$ and $\mathbf{b}$ are jointly Gaussian random variables, then

$$(\mathbf{a} \mid \mathbf{b} = \boldsymbol{\beta}) \stackrel{\mathrm{d}}{=} \mathbf{a} + \mathrm{Cov}(\mathbf{a}, \mathbf{b})\,\mathrm{Cov}(\mathbf{b}, \mathbf{b})^{-1}(\boldsymbol{\beta} - \mathbf{b}).$$
Accordingly, we may sample $\mathbf{a} \mid \mathbf{b} = \boldsymbol{\beta}$ by updating a joint draw $(\mathbf{a}, \mathbf{b})$ to account for the residual $\boldsymbol{\beta} - \mathbf{b}$, thereby inducing a corresponding change in $\mathbf{a}$ by virtue of its covariance with $\mathbf{b}$. This procedure is illustrated below for the simple case of bivariate normal random variables.
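Matheron's rule translates directly into code. The sketch below assumes the joint mean and covariance blocks are supplied explicitly; the function name and arguments are hypothetical.

```python
import numpy as np

def matheron_sample(mu_a, mu_b, C_aa, C_ab, C_bb, beta, num_samples):
    """Sample a | b = beta by updating joint draws of (a, b) via Matheron's rule."""
    # Draw (a, b) jointly from the prior.
    mean = np.concatenate([mu_a, mu_b])
    cov = np.block([[C_aa, C_ab], [C_ab.T, C_bb]])
    joint = np.random.multivariate_normal(mean, cov, size=num_samples)
    a, b = joint[:, : len(mu_a)], joint[:, len(mu_a):]

    # Pathwise update: a + Cov(a, b) Cov(b, b)^{-1} (beta - b).
    update = np.linalg.solve(C_bb, (beta - b).T).T @ C_ab.T
    return a + update

# Bivariate example: condition a scalar a on observing b = 0.5.
samples = matheron_sample(
    mu_a=np.zeros(1), mu_b=np.zeros(1),
    C_aa=np.array([[1.0]]), C_ab=np.array([[0.8]]), C_bb=np.array([[1.0]]),
    beta=np.array([0.5]), num_samples=1000,
)
```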
Extending this concept to Gaussian process priors leads to a pathwise characterization of their posteriors, namely

$$(f \mid \mathbf{y})(\cdot) \stackrel{\mathrm{d}}{=} f(\cdot) + k(\cdot, \mathbf{X})\, \mathbf{K}_{n,n}^{-1} \big(\mathbf{y} - f(\mathbf{X})\big), \qquad f \sim \mathcal{GP}(0, k).$$
As in finite-dimensional cases, we may sample from the posterior via pathwise updating of draws from the prior. For sparse GPs, this process involves generating a separate draw $\mathbf{u} \sim q(\mathbf{u})$ from the inducing distribution. In addition to specifying an update rule, this pathwise representation decomposes GP posteriors as dependent sums of prior and update terms. Examining both terms, we see that the prior (when sampled via location-scale transforms) scales cubically with the number of test points, while the update scales linearly. Efficiently sampling from the posterior therefore reduces to efficiently sampling from the prior.
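As a sketch of this decomposition (assuming noiseless observations and an RBF kernel purely for illustration), the following draws the prior jointly at training and test locations and then applies the pathwise update. The prior draw here is still exact, and therefore still cubic; replacing it with a cheap approximation is the subject of the next section.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def pathwise_posterior_sample(X_train, y_train, X_test, num_samples, jitter=1e-8):
    """(f | y)(X_test) = f(X_test) + k(X_test, X) K^{-1} (y - f(X)),
    with the prior f drawn jointly at training and test locations."""
    X_all = np.concatenate([X_train, X_test])
    n = len(X_train)

    # Joint prior draw at training and test locations (exact, hence cubic here).
    K_all = rbf_kernel(X_all, X_all) + jitter * np.eye(len(X_all))
    L = np.linalg.cholesky(K_all)
    f_all = np.random.randn(num_samples, len(X_all)) @ L.T
    f_train, f_test = f_all[:, :n], f_all[:, n:]

    # Pathwise update driven by the residual y - f(X_train).
    K_nn = rbf_kernel(X_train, X_train) + jitter * np.eye(n)
    K_sn = rbf_kernel(X_test, X_train)
    v = np.linalg.solve(K_nn, (y_train - f_train).T)   # shape (n, num_samples)
    return f_test + (K_sn @ v).T
```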
Efficiently sampling from the prior
Different choices of prior afford different avenues for fast sampling. Here, we focus on the standard setting of stationary kernels $k(x, x') = k(x - x')$. In such cases, the kernel can be viewed as an inner product in a reproducing kernel Hilbert space $\mathcal{H}$, and approximated by random Fourier features2 such that

$$k(x, x') = \langle \varphi(x), \varphi(x') \rangle_{\mathcal{H}} \approx \boldsymbol{\phi}(x)^\top \boldsymbol{\phi}(x'),$$
where $\varphi$ is a feature map and $\boldsymbol{\phi}$ is an $\ell$-dimensional approximation thereof.3 If $w_i \sim \mathcal{N}(0, 1)$, then the right-hand side of

$$f(\cdot) \approx \sum_{i=1}^{\ell} w_i \phi_i(\cdot)$$
is a random function with Gaussian marginals whose covariance is approximately $k$. This means that realizations of this function are draws from an approximate GP prior. Unlike location-scale methods, this approximate draw is an actual function and exhibits linear time complexity in the number of test locations. Moreover, the error introduced by random Fourier feature methods is well-understood4 and controlled by the number of basis functions used in the approximation.
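Here is a minimal sketch of drawing approximate prior functions with random Fourier features, assuming an RBF kernel (whose spectral measure is Gaussian); the helper name and defaults are illustrative.

```python
import numpy as np

def sample_rff_prior(num_features, num_samples, lengthscale=1.0, dim=1):
    """Draw approximate prior functions f(.) = sum_i w_i phi_i(.) for an RBF kernel.

    phi_i(x) = sqrt(2 / L) * cos(theta_i . x + tau_i), where theta_i is drawn from
    the kernel's spectral measure (Gaussian for the RBF kernel) and tau_i ~ U(0, 2*pi).
    """
    theta = np.random.randn(num_features, dim) / lengthscale
    tau = np.random.uniform(0, 2 * np.pi, size=num_features)
    w = np.random.randn(num_samples, num_features)

    def f(X):
        # Evaluating a draw is linear in the number of test points.
        phi = np.sqrt(2.0 / num_features) * np.cos(X @ theta.T + tau)
        return w @ phi.T   # shape (num_samples, len(X))

    return f
```

Because `sample_rff_prior` returns a callable, each draw can be evaluated (and differentiated) anywhere, rather than only at a fixed grid of locations.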
Efficient sampling
Putting these ideas together, we obtain the efficient sampling approximation

$$(f \mid \mathbf{y})(\cdot) \approx \underbrace{\sum_{i=1}^{\ell} w_i \phi_i(\cdot)}_{\text{prior}} + \underbrace{\sum_{j=1}^{n} v_j \, k(\cdot, x_j)}_{\text{update}},$$
where $\mathbf{v} = \mathbf{K}_{n,n}^{-1}(\mathbf{y} - \boldsymbol{\Phi}\mathbf{w})$ is defined in terms of a feature matrix $\boldsymbol{\Phi}$ with rows $\boldsymbol{\phi}(x_j)^\top$. Here, we have chosen to explicitly represent the update as a sum over canonical basis functions $k(\cdot, x_j)$ to further emphasize that efficient sampling produces function draws. This is visualized below.
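Putting the two bases together, a rough end-to-end sketch might look as follows. As before, the RBF kernel, noiseless observations, and all names are assumptions made for illustration, not a definitive implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def efficient_sample(X_train, y_train, X_test, num_features=1000, num_samples=10,
                     lengthscale=1.0, jitter=1e-8):
    """Approximate posterior draws: Fourier-basis prior plus canonical-basis update."""
    n, dim = X_train.shape

    # Random Fourier features for the prior term.
    theta = np.random.randn(num_features, dim) / lengthscale
    tau = np.random.uniform(0, 2 * np.pi, size=num_features)
    w = np.random.randn(num_samples, num_features)
    phi = lambda X: np.sqrt(2.0 / num_features) * np.cos(X @ theta.T + tau)

    # Canonical-basis weights v = K_nn^{-1} (y - Phi w) for the update term.
    Phi = phi(X_train)                                   # feature matrix with rows phi(x_j)
    K_nn = rbf_kernel(X_train, X_train, lengthscale) + jitter * np.eye(n)
    v = np.linalg.solve(K_nn, (y_train - w @ Phi.T).T)   # shape (n, num_samples)

    # Evaluate the function draws: linear in the number of test points.
    prior = w @ phi(X_test).T                            # sum_i w_i phi_i(x)
    update = (rbf_kernel(X_test, X_train, lengthscale) @ v).T  # sum_j v_j k(x, x_j)
    return prior + update
```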
Unlike previous approximate GPs, this approach is specifically tailored for sampling. Just as the Fourier basis excels at representing the prior, the canonical basis excels at representing the data.5 Hence, using Matheron's rule to separate out the prior from the update allows us to obtain the best of both worlds by utilizing a suitable basis for each term.
Takeaways
Efficient sampling is a general-purpose technique for efficiently drawing functions from GP posteriors. In addition to the use cases outlined above, this technique can be employed as a plug-in approach to sampling from many common types of GP posteriors, such as those arising from sparse approximations or noisy Gaussian observations. These expressions are given in the paper. Together with its ease of use and pathwise differentiability, efficient sampling's linear time complexity makes it an ideal choice for GP-based Monte Carlo methods.
References
C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
A. Rahimi and B. Recht. Random features for large-scale kernel machines. NeurIPS, 2008.
Using the random Fourier feature method, we set $\phi_i(x) = \sqrt{2/\ell}\,\cos(\boldsymbol{\theta}_i^\top x + \tau_i)$, where $\boldsymbol{\theta}_i$ are drawn from the kernel's spectral measure, and $\tau_i \sim \mathrm{U}(0, 2\pi)$.
D. J. Sutherland and J. Schneider. On the error of random Fourier features. UAI, 2015.
D. R. Burt, C. E. Rasmussen, and M. van der Wilk. Rates of convergence for sparse variational Gaussian process regression. ICML, 2019.