Local Smoothing in R. The goal of this homework is to help you better understand the statistical properties and computational challenges of local smoothing methods such as loess, Nadaraya-Watson (NW) kernel smoothing, and spline smoothing. For this purpose, we will compute empirical biases and empirical variances based on m = 1000 Monte Carlo runs, where in each run we simulate a data set of n = 101 observations from the additive model

    Y_i = f(x_i) + \epsilon_i,    (1)

with the famous Mexican hat function

    f(x) = (1 - x^2) \exp(-0.5 x^2),    -2\pi \le x \le 2\pi,    (2)

where \epsilon_1, \ldots, \epsilon_n are independent and identically distributed (iid) N(0, 0.2^2). This function is known to pose a variety of estimation challenges, and below we explore the difficulties inherent in estimating it.

(1) Let us first consider the deterministic (fixed) design with equi-distant points in [-2\pi, 2\pi].

(a) For each of m = 1000 Monte Carlo runs, generate a data set of the form {(x_i, Y_i)} with x_i = 2\pi(-1 + 2\frac{i-1}{n-1}), i = 1, \ldots, n = 101, and Y_i drawn from the model in (1). Denote the data set from the j-th Monte Carlo run by D_j, for j = 1, \ldots, m = 1000.

(b) For each data set D_j, i.e., for each Monte Carlo run, compute the three different local smoothing estimates at every point x_i in D_j: loess (with span = 0.75), Nadaraya-Watson (NW) kernel smoothing with a Gaussian kernel and bandwidth = 0.2, and spline smoothing with the default tuning parameter.

(c) At each point x_i, for each local smoothing method, compute the empirical bias Bias{\hat f(x_i)} and the empirical variance Var{\hat f(x_i)} based on the m = 1000 Monte Carlo runs, where

    Bias{\hat f(x_i)} = \bar f_m(x_i) - f(x_i),  with  \bar f_m(x_i) = \frac{1}{m} \sum_{j=1}^{m} \hat f_{D_j}(x_i),

    Var{\hat f(x_i)} = \frac{1}{m} \sum_{j=1}^{m} \left( \hat f_{D_j}(x_i) - \bar f_m(x_i) \right)^2.

(d) Plot these quantities against x_i for all three local smoothing estimators: loess, NW kernel, and spline smoothing (a minimal R sketch of steps (a)-(d) is given after this problem).

(e) Provide a thorough analysis of what the plots suggest, e.g., which method is better or worse in terms of bias, variance, and mean squared error (MSE)? Do you think this is a fair comparison between the three methods? Why or why not?
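As a starting point, here is a minimal R sketch of how steps (a)-(d) of part (1) could be implemented with base R's loess(), ksmooth(), and smooth.spline(). The random seed, object names, and plotting choices are illustrative assumptions rather than part of the assignment, and note that ksmooth() uses its own bandwidth parameterization (quartiles of the kernel at +/- 0.25 * bandwidth), so "bandwidth = 0.2" below is taken at face value from the problem statement.

```r
set.seed(79)                      # assumed seed, only for reproducibility

m <- 1000                         # number of Monte Carlo runs
n <- 101                          # observations per run
x <- 2 * pi * (-1 + 2 * (0:(n - 1)) / (n - 1))   # equi-distant fixed design on [-2*pi, 2*pi]
fx <- (1 - x^2) * exp(-0.5 * x^2)                # true Mexican hat function

## Matrices to hold the m estimates at the n design points for each method
fit_loess <- fit_nw <- fit_spline <- matrix(NA_real_, nrow = m, ncol = n)

for (j in 1:m) {
  y <- fx + rnorm(n, sd = 0.2)                          # one simulated data set D_j
  fit_loess[j, ]  <- predict(loess(y ~ x, span = 0.75)) # loess with span = 0.75
  fit_nw[j, ]     <- ksmooth(x, y, kernel = "normal",
                             bandwidth = 0.2, x.points = x)$y  # NW with Gaussian kernel
  fit_spline[j, ] <- predict(smooth.spline(x, y), x)$y  # smoothing spline, default tuning
}

## Empirical bias, variance, and MSE at each design point
summarize <- function(fit_mat) {
  fbar <- colMeans(fit_mat)                    # Monte Carlo average estimate at each x_i
  bias <- fbar - fx                            # empirical bias
  vars <- colMeans(sweep(fit_mat, 2, fbar)^2)  # empirical variance (1/m scaling)
  list(bias = bias, var = vars, mse = bias^2 + vars)
}
res <- lapply(list(loess = fit_loess, nw = fit_nw, spline = fit_spline), summarize)

## Example plot: empirical bias against x for the three methods
matplot(x, sapply(res, `[[`, "bias"), type = "l", lty = 1:3, col = 1:3,
        xlab = "x", ylab = "empirical bias")
legend("topright", legend = names(res), lty = 1:3, col = 1:3)
```

Analogous matplot() calls with res[[method]]$var and res[[method]]$mse produce the variance and MSE curves needed for parts (d) and (e).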