Question:

In Section 3.8.4 we derived an expression for the Gaussian posterior for a linear model in the context of the Olympic \(100\,\mathrm{m}\) data. Substituting \(\boldsymbol{\mu}_{0}=[0,0,\ldots,0]^{\top}\), we saw the similarity between the posterior mean

\[\boldsymbol{\mu}_{\mathbf{w}}=\frac{1}{\sigma^{2}}\left(\frac{1}{\sigma^{2}} \mathbf{X}^{\top} \mathbf{X}+\boldsymbol{\Sigma}_{0}^{-1}\right)^{-1} \mathbf{X}^{\top} \mathbf{t}\]

and the regularised least squares solution

\[\widehat{\mathbf{w}}=\left(\mathbf{X}^{\top} \mathbf{X}+N \lambda \mathbf{I}\right)^{-1} \mathbf{X}^{\top} \mathbf{t}\]

For this particular example, find the prior covariance matrix \(\boldsymbol{\Sigma}_{0}\) that makes the two identical. In other words, find \(\boldsymbol{\Sigma}_{0}\) in terms of \(\lambda\).
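One way to approach this (a sketch working only from the two expressions above, not the book's printed solution): pull the leading \(1/\sigma^{2}\) factor inside the inverse and compare term by term.

\[\boldsymbol{\mu}_{\mathbf{w}}=\frac{1}{\sigma^{2}}\left(\frac{1}{\sigma^{2}} \mathbf{X}^{\top} \mathbf{X}+\boldsymbol{\Sigma}_{0}^{-1}\right)^{-1} \mathbf{X}^{\top} \mathbf{t}=\left(\mathbf{X}^{\top} \mathbf{X}+\sigma^{2} \boldsymbol{\Sigma}_{0}^{-1}\right)^{-1} \mathbf{X}^{\top} \mathbf{t}\]

This matches \(\widehat{\mathbf{w}}\) exactly when \(\sigma^{2} \boldsymbol{\Sigma}_{0}^{-1}=N \lambda \mathbf{I}\), i.e. when

\[\boldsymbol{\Sigma}_{0}=\frac{\sigma^{2}}{N \lambda} \mathbf{I}.\]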

Data from Section 3.8.4

(The figures showing the Olympic 100 m data accompany this question in the original; they are not reproduced here.)


Step by Step Answer:
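Equating the two expressions (pulling the \(1/\sigma^{2}\) inside the inverse) gives \(\boldsymbol{\Sigma}_{0}=\frac{\sigma^{2}}{N\lambda}\mathbf{I}\). The identity can be checked numerically; the sketch below uses arbitrary synthetic data rather than the actual Olympic 100 m data, and the values of `sigma2` and `lam` are illustrative choices, not from the book.

```python
import numpy as np

# Numerical check: with Sigma_0 = (sigma^2 / (N * lambda)) * I, the
# Bayesian posterior mean equals the regularised least squares solution.
rng = np.random.default_rng(0)

N, D = 27, 2  # e.g. 27 Olympic years, linear model (intercept + slope)
X = np.column_stack([np.ones(N), rng.normal(size=N)])
t = rng.normal(size=N)
sigma2 = 0.05  # noise variance (arbitrary for this check)
lam = 0.1      # regularisation parameter (arbitrary for this check)

# Candidate prior covariance
Sigma0 = (sigma2 / (N * lam)) * np.eye(D)

# Posterior mean: (1/sigma^2) (X^T X / sigma^2 + Sigma0^{-1})^{-1} X^T t
mu_w = (1 / sigma2) * np.linalg.solve(
    X.T @ X / sigma2 + np.linalg.inv(Sigma0), X.T @ t
)

# Regularised least squares: (X^T X + N*lambda*I)^{-1} X^T t
w_hat = np.linalg.solve(X.T @ X + N * lam * np.eye(D), X.T @ t)

print(np.allclose(mu_w, w_hat))  # → True
```

Using `np.linalg.solve` rather than forming the full inverse of the bracketed matrix is the standard numerically stable choice; `np.linalg.inv(Sigma0)` is harmless here because \(\boldsymbol{\Sigma}_{0}\) is a scaled identity.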

Related book:

A First Course in Machine Learning, 2nd Edition
Simon Rogers and Mark Girolami
ISBN: 9781498738484