Question:
Take the linear model \(\boldsymbol{Y}=\mathbf{X} \boldsymbol{\beta}+\boldsymbol{\varepsilon}\), where \(\mathbf{X}\) is an \(n \times p\) model matrix, \(\mathbb{E}\, \boldsymbol{\varepsilon}=\mathbf{0}\), and \(\operatorname{Cov}(\boldsymbol{\varepsilon})=\sigma^{2} \mathbf{I}_{n}\). Let \(\mathbf{P}=\mathbf{X} \mathbf{X}^{+}\) be the projection matrix onto the columns of \(\mathbf{X}\).
(a) Using the properties of the pseudo-inverse (see Definition A.2), show that \(\mathbf{P P}^{\top}=\mathbf{P}\).
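As a numerical sanity check (not a substitute for the proof), the identity \(\mathbf{P P}^{\top}=\mathbf{P}\) can be verified for a random example matrix using NumPy's Moore–Penrose pseudo-inverse; the dimensions below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3                        # example dimensions (arbitrary)
X = rng.standard_normal((n, p))

# Projection matrix onto the column space of X, built from the pseudo-inverse.
P = X @ np.linalg.pinv(X)

# P is symmetric and idempotent, hence P P^T = P (up to floating-point error).
assert np.allclose(P, P.T)         # symmetry
assert np.allclose(P @ P, P)       # idempotence
assert np.allclose(P @ P.T, P)     # the identity in part (a)
```

The check mirrors the algebra: symmetry and idempotence of \(\mathbf{P}\) (both consequences of the defining properties of \(\mathbf{X}^{+}\)) together give \(\mathbf{P P}^{\top}=\mathbf{P P}=\mathbf{P}\).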
(b) Let \(\boldsymbol{E}=\boldsymbol{Y}-\widehat{\boldsymbol{Y}}\) be the (random) vector of residuals, where \(\widehat{\boldsymbol{Y}}=\mathbf{P} \boldsymbol{Y}\). Assuming Gaussian errors, show that the \(i\)-th residual has a normal distribution with expectation 0 and variance \(\sigma^{2}\left(1-\mathbf{P}_{i i}\right)\) (that is, \(\sigma^{2}\) times one minus the \(i\)-th leverage).
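Since \(\boldsymbol{E}=(\mathbf{I}-\mathbf{P})\boldsymbol{Y}\), its covariance matrix is \((\mathbf{I}-\mathbf{P})\,\sigma^2\mathbf{I}\,(\mathbf{I}-\mathbf{P})^{\top}=\sigma^2(\mathbf{I}-\mathbf{P})\). A Monte Carlo sketch (illustrative only; dimensions, \(\boldsymbol{\beta}\), and \(\sigma\) are arbitrary example values) compares the empirical residual variances against the diagonal of \(\sigma^2(\mathbf{I}-\mathbf{P})\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 6, 2, 0.5            # example values (arbitrary)
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0])
P = X @ np.linalg.pinv(X)

# Simulate many realizations of Y = X beta + eps with Gaussian errors,
# and form the residuals E = (I - P) Y row by row.
reps = 200_000
eps = sigma * rng.standard_normal((reps, n))
Y = X @ beta + eps
E = Y - Y @ P.T                    # row-wise (I - P) @ y

# Empirical variance of each residual vs. sigma^2 * (1 - P_ii).
emp = E.var(axis=0)
theory = sigma**2 * (1 - np.diag(P))
assert np.allclose(emp, theory, atol=1e-2)
assert np.allclose(E.mean(axis=0), 0, atol=1e-2)   # expectation 0
```

The agreement (up to Monte Carlo error) reflects that each residual is a linear combination of Gaussian errors, hence normal, with the stated mean and variance.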
(c) Show that \(\sigma^{2}\) can be unbiasedly estimated via \[ \begin{equation*} S^{2}:=\frac{1}{n-p}\left\|\boldsymbol{Y}-\widehat{\boldsymbol{Y}}\right\|^{2}=\frac{1}{n-p}\left\|\boldsymbol{Y}-\mathbf{X} \widehat{\boldsymbol{\beta}}\right\|^{2} \tag{5.44} \end{equation*} \]
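Unbiasedness of \(S^2\) can likewise be illustrated by simulation: averaging \(S^2\) over many realizations should recover \(\sigma^2\). The sketch below uses arbitrary example dimensions and a full-rank \(\mathbf{X}\), so \(n-p\) is the correct number of degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 10, 3, 1.5           # example values (arbitrary)
X = rng.standard_normal((n, p))    # full rank p with probability 1
beta = rng.standard_normal(p)
P = X @ np.linalg.pinv(X)

# Monte Carlo estimate of E[S^2], where S^2 = ||Y - PY||^2 / (n - p).
reps = 100_000
eps = sigma * rng.standard_normal((reps, n))
Y = X @ beta + eps
E = Y - Y @ P.T                    # residuals per realization
S2 = (E**2).sum(axis=1) / (n - p)

# The sample mean of S^2 should be close to sigma^2 = 2.25.
assert abs(S2.mean() - sigma**2) < 0.05
```

This matches the algebraic argument: \(\mathbb{E}\|\boldsymbol{E}\|^2 = \sigma^2\operatorname{tr}(\mathbf{I}-\mathbf{P}) = \sigma^2(n-p)\), so dividing by \(n-p\) gives an unbiased estimator.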
Data Science And Machine Learning Mathematical And Statistical Methods
ISBN: 9781118710852
1st Edition
Authors: Dirk P. Kroese, Thomas Taimre, Radislav Vaisman, Zdravko Botev