
Question


Dropout and Tikhonov regularization. Consider a simple version of dropout for linear regression with squared-error loss. We have an $n \times d$ regression matrix $X = (X_{ij}) \in \mathbb{R}^{n \times d}$, a label $n$-vector $y \in \mathbb{R}^n$, and a weight vector $w \in \mathbb{R}^d$. For simplicity, assume all variables are centered so we can ignore intercepts. Consider the following random least-squares criterion:

$$L_I(w) = \frac{1}{n} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{d} X_{ij} I_{ij} w_j \Big)^2,$$

where the $I_{ij}$ are independent, identically distributed (i.i.d.) variables with

$$I_{ij} = \begin{cases} 0 & \text{with probability } p, \\ 1/(1-p) & \text{with probability } 1 - p. \end{cases}$$

(This particular form is used so that $E(I_{ij}) = 1$.) Using simple probability, compute the expected objective gradient $E[\nabla_w L_I(w)]$, where the expectation is taken with respect to the $I_{ij}$. Write down the solution to the equation $E[\nabla_w L_I(w)] = 0$, i.e., the expression for the weight vector $w$ that satisfies it.
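For reference, here is a sketch of the computation the question asks for, using only the definitions above; the diagonal matrix $D$ below is our shorthand for $\mathrm{diag}(X^\top X)$ and is not part of the original statement. Since the $I_{ij}$ are independent with $E[I_{ij}] = 1$ and $\mathrm{Var}(I_{ij}) = \frac{1}{1-p} - 1 = \frac{p}{1-p}$, each squared term decomposes into a squared bias plus a variance:

```latex
% Sketch: expected objective, expected gradient, and its zero.
% Uses E[I_{ij}] = 1, Var(I_{ij}) = p/(1-p), and D = diag(X^T X) (our shorthand).
\begin{align*}
E[L_I(w)]
  &= \frac{1}{n}\sum_{i=1}^{n}\Big(y_i - \sum_{j} X_{ij} w_j\Big)^{2}
   + \frac{1}{n}\sum_{i=1}^{n}\mathrm{Var}\Big(\sum_{j} X_{ij} I_{ij} w_j\Big) \\
  &= \frac{1}{n}\,\lVert y - Xw\rVert_2^2
   + \frac{1}{n}\,\frac{p}{1-p}\sum_{j=1}^{d} w_j^{2}\sum_{i=1}^{n} X_{ij}^{2}
   = \frac{1}{n}\,\lVert y - Xw\rVert_2^2 + \frac{1}{n}\,\frac{p}{1-p}\, w^{\top} D\, w .
\end{align*}
Differentiating (the expectation is a finite sum, so $E$ and $\nabla_w$ commute),
\[
E[\nabla_w L_I(w)]
  = \frac{2}{n}\Big( X^{\top} X\, w - X^{\top} y + \frac{p}{1-p}\, D\, w \Big),
\]
and $E[\nabla_w L_I(w)] = 0$ is solved by the Tikhonov-regularized estimate
\[
\hat{w} = \Big( X^{\top} X + \frac{p}{1-p}\, D \Big)^{-1} X^{\top} y .
\]
```

Because $D$ scales each coordinate's penalty by its column's squared norm, this is Tikhonov (ridge-type) regularization, as the problem title suggests; with standardized columns ($\sum_i X_{ij}^2 = n$ for every $j$) it reduces to ordinary ridge regression with $\lambda = np/(1-p)$.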

Step by Step Solution

There are 3 steps involved in it.

Step 1: [blurred solution image]

Step 2: [blurred solution image]

Step 3: [blurred solution image]

