
Question

Problem 3 (25%). Robust Linear Regression

Suppose we have the generative linear regression model $y = X\theta^* + \varepsilon$, where $\varepsilon$ is the error term, $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, and $X$ has full column rank. The maximum likelihood estimator for $\theta$ is:

$$\hat{\theta}_{\mathrm{OLS}} = \operatorname*{argmin}_{\theta \in \mathbb{R}^d} \|X\theta - y\|_2^2 = (X^\top X)^{-1} X^\top y.$$

(a) Suppose the error term $\varepsilon = [\varepsilon_1, \varepsilon_2, \dots, \varepsilon_n]$ follows the Laplace distribution, i.e. $\varepsilon_i \overset{\text{iid}}{\sim} \mathcal{L}(0, b)$, $i = 1, 2, \dots, n$, with probability density function
$$p(\varepsilon) = \frac{1}{2b} \exp\!\Bigl(-\frac{|\varepsilon|}{b}\Bigr)$$
for some $b > 0$. Under the MLE principle, what is the learning problem? Please write out the derivation process.

Figure 1: PDF of the Laplace distribution. (image not transcribed)

(b) Huber smoothing. The L1-norm minimization $\hat{\theta}_{L1} = \operatorname*{argmin}_{\theta} \|X\theta - y\|_1$ is one possible approach to robust regression, but it is nondifferentiable. We use a smoothing technique to solve the L1-norm minimization approximately; the Huber function is one possibility. Its definition and sketch are shown below:
$$h_\mu(z) = \begin{cases} \dfrac{z^2}{2\mu}, & |z| \le \mu, \\[4pt] |z| - \dfrac{\mu}{2}, & |z| > \mu. \end{cases}$$
Then, $H_\mu(z) = \sum_{j=1}^{n} h_\mu(z_j)$. By using Huber smoothing, the approximation of the L1-norm optimization becomes
$$\min_{\theta}\; H_\mu(X\theta - y).$$
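Part (a) asks for the learning problem under Laplace noise. A sketch of the standard derivation (not necessarily the intended grading route): writing $x_i^\top$ for the $i$-th row of $X$, the likelihood of $\theta$ and its negative log are

```latex
L(\theta) = \prod_{i=1}^{n} \frac{1}{2b}
            \exp\!\Bigl(-\frac{|y_i - x_i^\top \theta|}{b}\Bigr),
\qquad
-\log L(\theta) = n \log(2b)
                  + \frac{1}{b} \sum_{i=1}^{n} \bigl| y_i - x_i^\top \theta \bigr|.
```

Since $n\log(2b)$ and the factor $1/b$ do not depend on $\theta$, maximizing the likelihood is equivalent to $\hat{\theta} = \operatorname*{argmin}_{\theta} \|X\theta - y\|_1$, which is exactly the L1 problem that part (b) smooths.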

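A minimal NumPy sketch of part (b): minimizing the Huber-smoothed objective $H_\mu(X\theta - y)$ by plain gradient descent. The step size, $\mu$, iteration count, and synthetic data (including the injected outliers) are illustrative choices, not part of the problem.

```python
import numpy as np

def huber(z, mu):
    """Elementwise Huber smoothing of |z|: quadratic for |z| <= mu, linear beyond."""
    a = np.abs(z)
    return np.where(a <= mu, z ** 2 / (2 * mu), a - mu / 2)

def huber_grad(z, mu):
    """Gradient of h_mu: z/mu in the quadratic region, sign(z) in the linear tails."""
    return np.clip(z / mu, -1.0, 1.0)

def robust_fit(X, y, mu=0.1, n_iter=20000):
    """Minimize H_mu(X @ theta - y) by gradient descent with a 1/L step size."""
    theta = np.zeros(X.shape[1])
    # The gradient of H_mu(X theta - y) is Lipschitz with constant ||X||_2^2 / mu,
    # so step = mu / ||X||_2^2 guarantees monotone decrease.
    step = mu / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        theta -= step * (X.T @ huber_grad(X @ theta - y, mu))
    return theta

# Synthetic data with a few gross outliers (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.laplace(scale=0.3, size=200)
y[:10] += 15.0  # corrupt 10 responses

theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares, pulled by outliers
theta_hub = robust_fit(X, y)                      # Huber-smoothed L1 estimate
```

The comparison of `theta_ols` and `theta_hub` against `theta_true` shows why the L1/Huber objective is called robust: the quadratic loss lets the corrupted responses drag the fit, while the linear tails of $h_\mu$ cap each residual's influence.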
