Bayesian interpretation of regularization

In statistics and machine learning, a Bayesian interpretation of regularization for kernel methods is often useful. Kernel methods are central to both the regularization and the Bayesian points of view in machine learning. In regularization they are a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. In Bayesian probability they are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors while the output space is a space of scalars. More recently these methods have been extended to problems that deal with multiple outputs, such as in multi-task learning.[1]

In this article we analyze the connections between the regularization and Bayesian points of view for kernel methods in the case of scalar outputs. A mathematical equivalence between the two is easily proved in cases where the reproducing kernel Hilbert space is finite-dimensional; the infinite-dimensional case raises subtle mathematical issues, so we consider here the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, briefly introducing the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and exhibit the connection that ties them together.

 

Contents

  • 1 The Supervised Learning Problem
  • 2 A Regularization Perspective
    • 2.1 Reproducing Kernel Hilbert Space
    • 2.2 The Regularized Functional
    • 2.3 Derivation of the Estimator
  • 3 A Bayesian Perspective
    • 3.1 A Review of Bayesian Probability
    • 3.2 The Gaussian Process
    • 3.3 Derivation of the Estimator
  • 4 The Connection Between Regularization and Bayes
  • 5 References

 

The Supervised Learning Problem

The classical supervised learning problem requires estimating the output for some new input point \mathbf{x}' by learning a scalar-valued estimator \hat{f}(\mathbf{x}') on the basis of a training set S consisting of n input-output pairs, S = (\mathbf{X},\mathbf{Y}) = (\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_n,y_n).[2][3][4] Given a symmetric, positive-definite bivariate function k(\cdot,\cdot) called a kernel, one of the most popular estimators in machine learning is given by

                \hat{f}(\mathbf{x}') = \mathbf{k}^\top(\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y},        (1)

where \mathbf{K} \equiv k(\mathbf{X},\mathbf{X}) is the kernel matrix with entries \mathbf{K}_{ij} = k(\mathbf{x}_i,\mathbf{x}_j), \mathbf{k} = [k(\mathbf{x}_1,\mathbf{x}'),\ldots,k(\mathbf{x}_n,\mathbf{x}')]^\top is the vector of kernel evaluations at the test point, and \mathbf{Y} = [y_1,\ldots,y_n]^\top is the vector of training outputs. We will see how this estimator can be derived from both a regularization and a Bayesian perspective.
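As a concrete illustration of estimator (1), the following NumPy sketch fits a one-dimensional toy data set; the Gaussian kernel, the synthetic data, and the value of \lambda are assumptions made only for the example.

import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2); the Gaussian kernel is an assumed choice
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

# toy training set S = (X, Y): n noisy samples of a sine function (illustrative only)
rng = np.random.default_rng(0)
n = 30
X = rng.uniform(-3, 3, size=(n, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

lam = 0.1                                         # regularization parameter lambda (assumed)
K = gaussian_kernel(X, X)                         # kernel matrix K_ij = k(x_i, x_j)
c = np.linalg.solve(K + lam * n * np.eye(n), Y)   # c = (K + lambda n I)^{-1} Y

x_new = np.array([[0.5]])                         # test input x'
k_vec = gaussian_kernel(X, x_new)[:, 0]           # k = [k(x_1, x'), ..., k(x_n, x')]^T
f_hat = k_vec @ c                                 # estimator (1): k^T (K + lambda n I)^{-1} Y
print(f_hat)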

A Regularization Perspective

The main assumption in the regularization perspective is that the set of candidate functions \mathcal{F} is taken to be a reproducing kernel Hilbert space \mathcal{H}_k.[2][5][6][7]

Reproducing Kernel Hilbert Space

A reproducing kernel Hilbert space (RKHS) \mathcal{H}_k is a Hilbert space of functions defined by a symmetric, positive-definite function k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}, called the reproducing kernel, such that the function k(\mathbf{x},\cdot) belongs to \mathcal{H}_k for all \mathbf{x} \in \mathcal{X}.[8][9][10] Three main properties make an RKHS appealing:

1. The reproducing property, which gives the space its name,

f(\mathbf{x}) = \langle f,k(\mathbf{x},\cdot) \rangle_k, \quad \forall \ f \in \mathcal{H}_k,

where \langle \cdot,\cdot \rangle_k is the inner product in \mathcal{H}_k.

2. Functions in an RKHS lie in the closure of linear combinations of the kernel evaluated at given points,

f(\mathbf{x}) = \sum_i k(\mathbf{x}_i,\mathbf{x})c_i.

This allows the construction in a unified framework of both linear and generalized linear models.

3. The squared norm in an RKHS of a function of the above form can be written as

\|f\|_k^2 = \sum_{i,j} c_i c_j k(\mathbf{x}_i,\mathbf{x}_j)

and is a natural measure of how complex the function is; a small numerical check of this identity is given after the list.
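As that check (not part of the original exposition), note that the squared norm of a kernel expansion f = \sum_i c_i k(\mathbf{x}_i,\cdot) equals \mathbf{c}^\top \mathbf{K} \mathbf{c} and can be computed directly; the linear kernel and toy values below are assumptions.

import numpy as np

# toy points and coefficients for a kernel expansion f(.) = sum_i c_i k(x_i, .)
# (illustrative values; the linear kernel k(x, x') = <x, x'> is an assumed choice)
X = np.array([[1.0, 0.0], [0.5, 2.0], [-1.0, 1.0]])
c = np.array([0.5, -1.0, 0.3])

K = X @ X.T                   # K_ij = k(x_i, x_j) for the linear kernel
norm_sq = c @ K @ c           # ||f||_k^2 = sum_{i,j} c_i c_j k(x_i, x_j)
print(norm_sq)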

The Regularized Functional

The estimator is derived as the minimizer of the regularized functional

                \frac{1}{n} \sum_{i=1}^{n}(f(\mathbf{x}_i)-y_i)^2 + \lambda \|f\|_k^2,        (2)

where f \in \mathcal{H}_k and \|\cdot\|_k is the norm in \mathcal{H}_k. The first term in this functional, which measures the average of the squares of the errors between the f(\mathbf{x}_i) and the y_i, is called the empirical risk and represents the cost we pay by predicting f(\mathbf{x}_i) in place of the true value y_i. The second term is the squared norm in an RKHS multiplied by a weight \lambda; it serves to stabilize the problem[5][7] and imposes a trade-off between fitting the data and the complexity of the estimator.[2] The weight \lambda, called the regularization parameter, determines the degree to which instability and complexity of the estimator are penalized (the penalty grows as \lambda increases).
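For concreteness, the sketch below evaluates functional (2) for a candidate function written as a kernel expansion f = \sum_i c_i k(\mathbf{x}_i,\cdot), the form that, by the representer theorem discussed next, contains the minimizer. The linear kernel, the synthetic data, and the value of \lambda are assumptions made purely for illustration.

import numpy as np

def regularized_functional(c, K, Y, lam):
    # empirical risk: (1/n) * sum_i (f(x_i) - y_i)^2, with f(X) = K c for a kernel expansion
    n = len(Y)
    residual = K @ c - Y
    empirical_risk = residual @ residual / n
    # complexity term: lambda * ||f||_k^2 = lambda * c^T K c
    return empirical_risk + lam * (c @ K @ c)

# toy data and linear kernel (illustrative assumptions)
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
Y = rng.standard_normal(20)
K = X @ X.T                     # K_ij = <x_i, x_j>
c = rng.standard_normal(20)
print(regularized_functional(c, K, Y, lam=0.1))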

Derivation of the Estimator

The explicit form of the estimator in equation (1) is derived in two steps. First, the representer theorem[11][12][13] states that the minimizer of the functional (2) can always be written as a linear combination of the kernels centered at the training-set points,

                \hat{f}(\mathbf{x}') = \sum_{i=1}^n c_i k(\mathbf{x}_i,\mathbf{x}') = \mathbf{k}^\top \mathbf{c},        (3)

for some \mathbf{c} \in \mathbb{R}^n. The explicit form of the coefficients \mathbf{c} = [c_1,\ldots,c_n]^\top can be found by substituting for f(\cdot) in the functional (2). For a function of the form in equation (3), we have that

\begin{align}
\|f\|_k^2 & = \langle f,f \rangle_k, \\
& = \left\langle \sum_{i=1}^n c_i k(\mathbf{x}_i,\cdot), \sum_{j=1}^n c_j k(\mathbf{x}_j,\cdot) \right\rangle_k, \\
& = \sum_{i=1}^n \sum_{j=1}^n c_i c_j \langle k(\mathbf{x}_i,\cdot), k(\mathbf{x}_j,\cdot) \rangle_k, \\
& = \sum_{i=1}^n \sum_{j=1}^n c_i c_j k(\mathbf{x}_i,\mathbf{x}_j), \\
& = \mathbf{c}^\top \mathbf{K} \mathbf{c}.
\end{align}

We can rewrite the functional (2) as

\frac{1}{n} \| \mathbf{Y} - \mathbf{K} \mathbf{c} \|^2 + \lambda \mathbf{c}^\top \mathbf{K} \mathbf{c}.

This functional is convex in \mathbf{c} and therefore we can find its minimum by setting the gradient with respect to \mathbf{c} to zero,

\begin{align}
-\frac{1}{n} \mathbf{K} (\mathbf{Y} - \mathbf{K} \mathbf{c}) + \lambda \mathbf{K} \mathbf{c} & = 0, \\
(\mathbf{K} + \lambda n \mathbf{I}) \mathbf{c} & = \mathbf{Y}, \\
\mathbf{c} & = (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.
\end{align}

Substituting this expression for the coefficients in equation (3), we obtain the estimator stated previously in equation (1),

\hat{f}(\mathbf{x}') = \mathbf{k}^\top(\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.
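Numerically, the coefficients are obtained by solving the linear system (\mathbf{K} + \lambda n \mathbf{I})\mathbf{c} = \mathbf{Y} rather than by forming the inverse explicitly. The sketch below (the kernel, data, and \lambda are assumptions for illustration) solves this system and verifies that the gradient of the functional vanishes at the solution.

import numpy as np

rng = np.random.default_rng(2)
n, lam = 15, 0.05
X = rng.standard_normal((n, 2))
Y = rng.standard_normal(n)

K = X @ X.T + 1.0                                  # assumed kernel k(x, x') = <x, x'> + 1
c = np.linalg.solve(K + lam * n * np.eye(n), Y)    # solve (K + lambda n I) c = Y

# check: the gradient -(1/n) K (Y - K c) + lambda K c vanishes at the solution
grad = -(1.0 / n) * K @ (Y - K @ c) + lam * K @ c
print(np.max(np.abs(grad)))                        # ~ 0 up to round-off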

A Bayesian Perspective

The notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the Gaussian process.

A Review of Bayesian Probability

As part of the Bayesian framework, the Gaussian process specifies the prior distribution that describes the prior beliefs about the properties of the function being modeled. These beliefs are updated after taking into account observational data by means of a likelihood function that relates the prior beliefs to the observations. Taken together, the prior and likelihood lead to an updated distribution called the posterior distribution that is customarily used for predicting test cases.

The Gaussian Process

A Gaussian process (GP) is a stochastic process in which any finite collection of the random variables follows a joint Gaussian distribution.[14] The mean and covariance of these Gaussian distributions completely specify the GP. GPs are usually used as prior distributions over functions, and as such the mean and covariance can be viewed as functions; the covariance function is also called the kernel of the GP. Let a function f follow a Gaussian process with mean function m and kernel function k,

f \sim \mathcal{GP}(m,k).

In terms of the underlying Gaussian distribution, we have that for any finite set \mathbf{X} = \{\mathbf{x}_i\}_{i=1}^{n}, if we let f(\mathbf{X}) = [f(\mathbf{x}_1),\ldots,f(\mathbf{x}_n)]^\top, then

f(\mathbf{X}) \sim \mathcal{N}(\mathbf{m},\mathbf{K}),

where \mathbf{m} = m(\mathbf{X}) = [m(\mathbf{x}_1),\ldots,m(\mathbf{x}_n)]^\top is the mean vector and \mathbf{K} = k(\mathbf{X},\mathbf{X}) is the covariance matrix of the multivariate Gaussian distribution.
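This finite-dimensional characterization is also how GP samples are drawn in practice: evaluate m and k on a finite set of inputs and draw from the resulting multivariate Gaussian. The sketch below is a minimal illustration; the squared-exponential kernel, the zero mean function, and the added jitter are assumptions.

import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * length_scale^2)), an assumed covariance function
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

X = np.linspace(-3, 3, 50).reshape(-1, 1)          # finite set of inputs
m = np.zeros(len(X))                               # zero mean function (assumed)
K = sq_exp_kernel(X, X) + 1e-8 * np.eye(len(X))    # small jitter for numerical stability

rng = np.random.default_rng(3)
f_X = rng.multivariate_normal(m, K)                # one draw of f(X) ~ N(m, K)
print(f_X[:5])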

Derivation of the Estimator

In a regression context, the likelihood function is usually assumed to be a Gaussian distribution and the observations to be independent and identically distributed (iid),

p(y|f,\mathbf{x},\sigma^2) = \mathcal{N}(f(\mathbf{x}),\sigma^2).

This assumption corresponds to the observations being corrupted with zero-mean Gaussian noise with variance \sigma^2. The iid assumption makes it possible to factorize the likelihood function over the data points given the set of inputs \mathbf{X} and the variance of the noise \sigma^2, and thus the posterior distribution can be computed analytically. For a test input vector \mathbf{x}', given the training data S = \{\mathbf{X},\mathbf{Y}\}, the posterior distribution is given by

p(f(\mathbf{x}')|S,\mathbf{x}',\boldsymbol{\phi}) = \mathcal{N}(m(\mathbf{x}'),\sigma^2(\mathbf{x}')),

where \boldsymbol{\phi} denotes the set of hyperparameters, which includes the variance of the noise \sigma^2 and any parameters of the covariance function k, and where

\begin{align}
m(\mathbf{x}') & = \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{Y}, \\
\sigma^2(\mathbf{x}') & = k(\mathbf{x}',\mathbf{x}') - \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{k}.
\end{align}
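The two formulas above translate directly into code. The following sketch (with an assumed squared-exponential kernel and synthetic data) computes the posterior mean and variance at a single test input.

import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0):
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(4)
n, sigma2 = 25, 0.01                               # noise variance sigma^2 (assumed)
X = rng.uniform(-3, 3, size=(n, 1))
Y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(n)

x_new = np.array([[0.5]])                          # test input x'
K = sq_exp_kernel(X, X)
k_vec = sq_exp_kernel(X, x_new)[:, 0]

A = np.linalg.solve(K + sigma2 * np.eye(n), np.column_stack([Y, k_vec]))
post_mean = k_vec @ A[:, 0]                        # k^T (K + sigma^2 I)^{-1} Y
post_var = sq_exp_kernel(x_new, x_new)[0, 0] - k_vec @ A[:, 1]   # k(x',x') - k^T (K + sigma^2 I)^{-1} k
print(post_mean, post_var)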

The Connection Between Regularization and Bayes

A connection between regularization theory and Bayesian theory can only be achieved in the case of a finite-dimensional RKHS. Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction.[5][14]

In the finite-dimensional case, every RKHS can be described in terms of a feature map \Phi : \mathcal{X} \rightarrow \mathbb{R}^p such that[2]

k(\mathbf{x},\mathbf{x}') = \sum_{i=1}^p \Phi^i(\mathbf{x})\Phi^i(\mathbf{x}').

Functions in the RKHS with kernel k can then be written as

f_{\mathbf{w}}(\mathbf{x}) = \sum_{i=1}^p \mathbf{w}^i \Phi^i(\mathbf{x}) = \langle \mathbf{w},\Phi(\mathbf{x}) \rangle,

and we also have that

\|f_{\mathbf{w}} \|_k = \|\mathbf{w}\|.
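Both identities can be checked numerically for an explicit finite-dimensional feature map. The sketch below uses the quadratic polynomial feature map on \mathbb{R}^2 (an assumed example, with hypothetical helper functions phi and k) and verifies that \mathbf{c}^\top \mathbf{K} \mathbf{c} equals \|\mathbf{w}\|^2 when \mathbf{w} = \sum_i c_i \Phi(\mathbf{x}_i).

import numpy as np

def phi(x):
    # assumed quadratic feature map Phi: R^2 -> R^6 (hypothetical helper for illustration)
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, x2**2, np.sqrt(2) * x1 * x2])

def k(x, xp):
    # the kernel induced by phi: k(x, x') = <Phi(x), Phi(x')> = (1 + <x, x'>)^2
    return (1.0 + x @ xp) ** 2

rng = np.random.default_rng(5)
X = rng.standard_normal((4, 2))
c = rng.standard_normal(4)

K = np.array([[k(xi, xj) for xj in X] for xi in X])   # kernel matrix
norm_kernel_side = c @ K @ c                          # c^T K c

w = sum(ci * phi(xi) for ci, xi in zip(c, X))         # w = sum_i c_i Phi(x_i)
norm_feature_side = w @ w                             # ||w||^2

print(np.isclose(norm_kernel_side, norm_feature_side))   # True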

We can now build a Gaussian process by assuming  \mathbf{w} = [w^1,\ldots,w^p]^\top  to be distributed according to a multivariate Gaussian distribution with zero mean and identity covariance matrix,

\mathbf{w} \sim \mathcal{N}(\mathbf{0},\mathbf{I}), \qquad p(\mathbf{w}) \propto \exp\left(-\tfrac{1}{2}\|\mathbf{w}\|^2\right).

If we assume a Gaussian likelihood, we have

P(\mathbf{Y}|\mathbf{X},f) = \mathcal{N}(f(\mathbf{X}),\sigma^2 \mathbf{I}) \propto \exp\left(-\frac{1}{2\sigma^2} \| f_{\mathbf{w}}(\mathbf{X}) - \mathbf{Y} \|^2\right),

where f_{\mathbf{w}}(\mathbf{X}) = (\langle\mathbf{w},\Phi(\mathbf{x}_1)\rangle,\ldots,\langle\mathbf{w},\Phi(\mathbf{x}_n)\rangle). The resulting posterior distribution is then given by

P(f|\mathbf{X},\mathbf{Y}) \propto \exp\left(-\tfrac{1}{2}\left(\frac{1}{\sigma^2} \|f_{\mathbf{w}}(\mathbf{X}) - \mathbf{Y}\|^2 + \|\mathbf{w}\|^2\right)\right).

We can see that a maximum a posteriori (MAP) estimate of \mathbf{w} is equivalent to the minimization problem defining Tikhonov regularization: minimizing the negative log-posterior amounts to minimizing \frac{1}{n}\|f_{\mathbf{w}}(\mathbf{X}) - \mathbf{Y}\|^2 + \lambda \|\mathbf{w}\|^2, so in the Bayesian case the regularization parameter is determined by the noise variance through \lambda = \sigma^2/n.
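The equivalence can also be checked numerically: with an explicit feature map, the MAP (ridge) estimate of \mathbf{w} and the kernel estimator (1) with \lambda = \sigma^2/n produce identical predictions. In the sketch below the feature matrix, the test feature vector, and the noise level are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(6)
n, p, sigma2 = 40, 3, 0.25
Phi = rng.standard_normal((n, p))            # rows are feature vectors Phi(x_i) (synthetic)
w_true = rng.standard_normal(p)
Y = Phi @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)

phi_new = rng.standard_normal(p)             # feature vector of a test input x' (synthetic)

# MAP (ridge) solution in weight space: argmin_w ||Phi w - Y||^2 / sigma^2 + ||w||^2
w_map = np.linalg.solve(Phi.T @ Phi + sigma2 * np.eye(p), Phi.T @ Y)
pred_weight_space = phi_new @ w_map

# kernel (Tikhonov) solution with lambda = sigma^2 / n, so that lambda * n = sigma^2
K = Phi @ Phi.T                              # K_ij = <Phi(x_i), Phi(x_j)>
k_vec = Phi @ phi_new                        # k_i = <Phi(x_i), Phi(x')>
pred_kernel = k_vec @ np.linalg.solve(K + sigma2 * np.eye(n), Y)

print(np.isclose(pred_weight_space, pred_kernel))   # True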

From a philosophical perspective, the loss function in a regularization setting plays a different role than the likelihood function in the Bayesian setting. Whereas the loss function measures the error incurred when predicting f(\mathbf{x}) in place of y, the likelihood function measures how likely the observations are under the model assumed to be true in the generative process. From a mathematical perspective, however, the formulations of the regularization and Bayesian frameworks give the loss function and the likelihood function the same mathematical role of promoting the inference of functions f that approximate the labels y as closely as possible.

References

  1. Álvarez, Mauricio A.; Rosasco, Lorenzo; Lawrence, Neil D. (June 2011). "Kernels for Vector-Valued Functions: A Review". ArXiv e-prints.
  2. Vapnik, Vladimir (1998). Statistical Learning Theory. Wiley. ISBN 9780471030034.
  3. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction (2nd ed.). Springer. ISBN 9780387848570.
  4. Bishop, Christopher M. (2009). Pattern Recognition and Machine Learning. Springer. ISBN 9780387310732.
  5. Wahba, Grace (1990). Spline Models for Observational Data. SIAM.
  6. Schölkopf, Bernhard; Smola, Alexander J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press. ISBN 9780262194754.
  7. Girosi, F.; Poggio, T. (1990). "Networks and the best approximation property". Biological Cybernetics 63 (3): 169–176.
  8. Aronszajn, N. (May 1950). "Theory of Reproducing Kernels". Transactions of the American Mathematical Society 68 (3): 337–404.
  9. Schwartz, Laurent (1964). "Sous-espaces hilbertiens d'espaces vectoriels topologiques et noyaux associés (noyaux reproduisants)". Journal d'Analyse Mathématique 13 (1): 115–256.
  10. Cucker, Felipe; Smale, Steve (October 5, 2001). "On the mathematical foundations of learning". Bulletin of the American Mathematical Society 39 (1): 1–49.
  11. Kimeldorf, George S.; Wahba, Grace (1970). "A correspondence between Bayesian estimation on stochastic processes and smoothing by splines". The Annals of Mathematical Statistics 41 (2): 495–502. doi:10.1214/aoms/1177697089.
  12. Schölkopf, Bernhard; Herbrich, Ralf; Smola, Alex J. (2001). "A Generalized Representer Theorem". COLT/EuroCOLT 2001, LNCS 2111: 416–426. doi:10.1007/3-540-44581-1_27.
  13. De Vito, Ernesto; Rosasco, Lorenzo; Caponnetto, Andrea; Piana, Michele; Verri, Alessandro (October 2004). "Some Properties of Regularized Kernel Methods". Journal of Machine Learning Research 5: 1363–1390.
  14. Rasmussen, Carl Edward; Williams, Christopher K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press. ISBN 0-262-18253-X.