Question:
5.17 Consider a squared loss function of the form

$$E = \frac{1}{2} \iint \{y(\mathbf{x}, \mathbf{w}) - t\}^2 \, p(\mathbf{x}, t) \, d\mathbf{x} \, dt \tag{5.193}$$

where $y(\mathbf{x}, \mathbf{w})$ is a parametric function such as a neural network. The result (1.89) shows that the function $y(\mathbf{x}, \mathbf{w})$ that minimizes this error is given by the conditional expectation of $t$ given $\mathbf{x}$. Use this result to show that the second derivative of $E$ with respect to two elements $w_r$ and $w_s$ of the vector $\mathbf{w}$ is given by

$$\frac{\partial^2 E}{\partial w_r \partial w_s} = \int \frac{\partial y}{\partial w_r} \frac{\partial y}{\partial w_s} \, p(\mathbf{x}) \, d\mathbf{x}. \tag{5.194}$$
Note that, for a finite sample from p(x), we obtain (5.84).
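The finite-sample statement can be checked numerically: at the minimizing weights, the Hessian of the empirical squared loss should be well approximated by the sample average of the outer products of gradients of $y$ (the form of (5.84)). The sketch below uses an illustrative nonlinear model $y(x, \mathbf{w}) = \tanh(w_0 x) + w_1$ and synthetic data; the model, data distribution, and step size are assumptions made for the demonstration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (an assumption, not from the text): y(x, w) = tanh(w0 * x) + w1
def y(x, w):
    return np.tanh(w[0] * x) + w[1]

def grad_y(x, w):
    # dy/dw0 = x * sech^2(w0 * x),  dy/dw1 = 1
    s = 1.0 / np.cosh(w[0] * x) ** 2
    return np.stack([x * s, np.ones_like(x)])  # shape (2, N)

# Synthetic data with t = E[t|x] + noise, where E[t|x] = y(x, w_true)
w_true = np.array([0.7, -0.3])
N = 200_000
x = rng.uniform(-2.0, 2.0, N)
t = y(x, w_true) + rng.normal(0.0, 0.5, N)

def E(w):
    # Empirical version of the squared loss (5.193)
    return 0.5 * np.mean((y(x, w) - t) ** 2)

# Full Hessian of the empirical loss at w_true, via central differences
eps = 1e-3
H = np.zeros((2, 2))
for r in range(2):
    for c in range(2):
        wpp = w_true.copy(); wpp[r] += eps; wpp[c] += eps
        wpm = w_true.copy(); wpm[r] += eps; wpm[c] -= eps
        wmp = w_true.copy(); wmp[r] -= eps; wmp[c] += eps
        wmm = w_true.copy(); wmm[r] -= eps; wmm[c] -= eps
        H[r, c] = (E(wpp) - E(wpm) - E(wmp) + E(wmm)) / (4 * eps ** 2)

# Outer-product approximation: average of grad_y grad_y^T over the sample
G = grad_y(x, w_true)
H_op = (G @ G.T) / N

print("full Hessian:\n", H)
print("outer-product approximation:\n", H_op)
print("max abs difference:", np.abs(H - H_op).max())
```

With a large sample, the residual term involving $(y - t)$ averages close to zero at the minimizing weights, so the two matrices agree closely; for weights away from the minimum, or for small samples, the discrepancy grows.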
Step by Step Answer:
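No worked answer appears on this page; a sketch of the standard argument, using the notation of the question, goes as follows. Differentiating (5.193) twice with respect to the weights gives

```latex
\frac{\partial^2 E}{\partial w_r \partial w_s}
= \iint \left\{ \frac{\partial y}{\partial w_r} \frac{\partial y}{\partial w_s}
+ (y - t)\, \frac{\partial^2 y}{\partial w_r \partial w_s} \right\}
p(\mathbf{x}, t) \, d\mathbf{x} \, dt.
```

By (1.89), the minimizing function satisfies $y(\mathbf{x}, \mathbf{w}) = \mathbb{E}[t \mid \mathbf{x}]$. Integrating $t$ out of the second term using $p(\mathbf{x}, t) = p(t \mid \mathbf{x})\, p(\mathbf{x})$ gives

```latex
\iint (y - t)\, \frac{\partial^2 y}{\partial w_r \partial w_s}\, p(\mathbf{x}, t) \, d\mathbf{x} \, dt
= \int \frac{\partial^2 y}{\partial w_r \partial w_s}
\left\{ y(\mathbf{x}, \mathbf{w}) - \mathbb{E}[t \mid \mathbf{x}] \right\} p(\mathbf{x}) \, d\mathbf{x} = 0,
```

so this term vanishes, leaving exactly (5.194). Replacing the integral over $p(\mathbf{x})$ by an average over a finite sample $\{\mathbf{x}_n\}$ then yields the outer-product approximation (5.84).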
Related book: Pattern Recognition and Machine Learning, 1st Edition, Christopher M. Bishop. ISBN 9780387310732.