Question
Exercise 11.6 EM for a finite scale mixture of Gaussians (Source: Jaakkola)

Consider the graphical model in Figure 11.23, which defines the following:

p(x \mid \theta) = \sum_{j=1}^{m} \sum_{k=1}^{K} p_j \, q_k \, \mathcal{N}(x; \mu_j, \sigma_k^2) \qquad (11.125)

where \theta = \{p_1, \ldots, p_m, \mu_1, \ldots, \mu_m, q_1, \ldots, q_K, \sigma_1^2, \ldots, \sigma_K^2\} are all the parameters. Here p_j \triangleq P(J = j) and q_k \triangleq P(K = k) are the equivalent of mixture weights. We can think of this as a mixture of m non-Gaussian components, where each component distribution is a scale mixture,

p(x \mid J = j, \theta) = \sum_{k=1}^{K} q_k \, \mathcal{N}(x; \mu_j, \sigma_k^2),

combining Gaussians with different variances (scales). We will now derive a generalized EM algorithm for this model. (Recall that in generalized EM, we do a partial update in the M step, rather than finding the exact maximum.)

a. Derive an expression for the responsibilities, r_n^{jk} \triangleq P(J_n = j, K_n = k \mid x_n, \theta), needed for the E step.

b. Write out a full expression for the expected complete-data log-likelihood, Q(\theta^{\text{new}}, \theta^{\text{old}}). (11.126)

c. Solving the M step would require us to jointly optimize the means \mu_1, \ldots, \mu_m and the variances \sigma_1^2, \ldots, \sigma_K^2. It turns out to be simpler to first solve for the \mu's given fixed \sigma^2's, and subsequently solve for the \sigma^2's given the new \mu's. For brevity, we will just do the first part. Derive an expression for the maximizing \mu's given fixed \sigma_{1:K}^2, i.e., solve \partial Q / \partial \mu_j = 0.
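A sketch of how the derivation typically goes, using the notation above (this is the standard EM algebra for this model, not a verified transcription of the textbook's official solution):

a. By Bayes' rule, the responsibilities are

r_n^{jk} = \frac{p_j \, q_k \, \mathcal{N}(x_n; \mu_j, \sigma_k^2)}{\sum_{j'} \sum_{k'} p_{j'} \, q_{k'} \, \mathcal{N}(x_n; \mu_{j'}, \sigma_{k'}^2)}

b. Plugging the complete-data log-likelihood into the expectation gives

Q(\theta^{\text{new}}, \theta^{\text{old}}) = \sum_n \sum_j \sum_k r_n^{jk} \left[ \log p_j + \log q_k - \tfrac{1}{2}\log(2\pi\sigma_k^2) - \frac{(x_n - \mu_j)^2}{2\sigma_k^2} \right]

where the r_n^{jk} are computed using \theta^{\text{old}}.

c. Setting \partial Q / \partial \mu_j = \sum_n \sum_k r_n^{jk} (x_n - \mu_j) / \sigma_k^2 = 0 and solving yields a precision-weighted mean:

\mu_j = \frac{\sum_n \sum_k r_n^{jk} \, x_n / \sigma_k^2}{\sum_n \sum_k r_n^{jk} / \sigma_k^2}

A minimal NumPy sketch of these two updates follows; the function and variable names (e_step, m_step_mu, sigma2, and so on) are illustrative choices, not identifiers from the source.

import numpy as np

def e_step(x, p, q, mu, sigma2):
    """Responsibilities r[n, j, k] = P(J_n = j, K_n = k | x_n, theta).

    x: (N,) data; p: (m,) weights over means; q: (K,) weights over scales;
    mu: (m,) means; sigma2: (K,) variances.
    """
    diff = x[:, None, None] - mu[None, :, None]              # (N, m, 1)
    dens = np.exp(-0.5 * diff**2 / sigma2[None, None, :])    # (N, m, K)
    dens /= np.sqrt(2 * np.pi * sigma2)[None, None, :]       # Gaussian normalizer
    r = p[None, :, None] * q[None, None, :] * dens           # unnormalized joint
    return r / r.sum(axis=(1, 2), keepdims=True)             # normalize per x_n

def m_step_mu(x, r, sigma2):
    """Partial M step (generalized EM): update the means with the variances
    held fixed, using the precision-weighted average from part (c)."""
    w = r / sigma2[None, None, :]                            # r[n,j,k] / sigma_k^2
    return (w * x[:, None, None]).sum(axis=(0, 2)) / w.sum(axis=(0, 2))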
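A hypothetical usage loop under the same assumptions (the weight updates p_j = (1/N) \sum_{n,k} r_n^{jk} and q_k = (1/N) \sum_{n,j} r_n^{jk} are the standard maximizers of Q over the mixture weights):

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
m, K = 2, 2
p, q = np.full(m, 1.0 / m), np.full(K, 1.0 / K)
mu, sigma2 = np.array([-1.0, 1.0]), np.array([0.5, 2.0])
for _ in range(50):
    r = e_step(x, p, q, mu, sigma2)
    mu = m_step_mu(x, r, sigma2)       # partial M step: means only (GEM)
    p = r.sum(axis=(0, 2)) / len(x)    # mixture weights over means
    q = r.sum(axis=(0, 1)) / len(x)    # mixture weights over scales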