Question:

5.18 (Estimating equations) A generalization of the WLS (see Example 5.8) is the following. Let $Y$ denote the vector of observations and $\theta$ a vector of parameters of interest. Consider an estimator of $\theta$, say, $\hat\theta$, which is a solution to the equation $W(\theta)u(Y,\theta) = 0$, where $W(\theta)$ is a matrix depending on $\theta$ and $u(Y,\theta)$ is a vector-valued function of $Y$ and $\theta$ satisfying $E\{u(Y,\theta)\} = 0$ if $\theta$ is the true parameter vector (in other words, the estimating equation is unbiased). Write $M(\theta) = W(\theta)u(Y,\theta)$.
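As a point of reference, here is a minimal sketch (the data-generating setup, variable names, and weights are my own assumptions, not the book's) of how the WLS case of Example 5.8 fits this framework, with $u(Y,\theta) = Y - X\theta$ and $W(\theta) = X'V^{-1}$ for a working covariance matrix $V$:

```python
import numpy as np
from scipy.optimize import root

# Hypothetical illustration (not from the book): the WLS special case of an
# estimating equation.  Here u(Y, theta) = Y - X theta has mean zero at the
# true theta, and W(theta) = X' V^{-1} for a chosen working covariance V.
rng = np.random.default_rng(0)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([1.0, -0.5])
sigma2 = 0.5 + rng.uniform(size=n)          # heteroscedastic variances
Y = X @ theta_true + rng.normal(scale=np.sqrt(sigma2))

V = np.diag(sigma2)                         # working covariance of Y
W = X.T @ np.linalg.inv(V)                  # W(theta): constant in theta here

def M(theta):
    """M(theta) = W(theta) u(Y, theta) with u(Y, theta) = Y - X theta."""
    return W @ (Y - X @ theta)

theta_hat = root(M, x0=np.zeros(p)).x       # solve W(theta) u(Y, theta) = 0
theta_wls = np.linalg.solve(W @ X, W @ Y)   # closed-form WLS solution
print(theta_hat, theta_wls)                 # the two agree for this linear u
```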

Then, under some regularity conditions, we have by the Taylor expansion,
$$0 = M(\hat\theta) \approx M(\theta) + \frac{\partial M}{\partial\theta}(\hat\theta - \theta),$$
where $\theta$ represents the true parameter vector. Thus, we have
$$\hat\theta - \theta \approx -\left(\frac{\partial M}{\partial\theta}\right)^{-1} M(\theta) \approx -\left\{E\left(\frac{\partial M}{\partial\theta}\right)\right\}^{-1} M(\theta).$$

Here, the approximation means that the neglected term is of lower order in a suitable sense. This leads to the following approximation (whose justification, of course, requires some regularity conditions):

$$\mathrm{Var}(\hat\theta) \approx \left\{E\left(\frac{\partial M}{\partial\theta}\right)\right\}^{-1} \mathrm{Var}\{M(\theta)\} \left\{E\left(\frac{\partial M'}{\partial\theta}\right)\right\}^{-1} \equiv V(\theta).$$
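As a sanity check on this sandwich form, here is a minimal Monte Carlo sketch (the linear setup and all names are my own assumptions): for $M(\theta) = W(Y - X\theta)$, the Jacobian $\partial M/\partial\theta = -WX$ is non-random and $\mathrm{Var}\{M(\theta)\} = W\,\mathrm{Cov}(Y)\,W'$, so $V(\theta)$ can be computed exactly and compared with the empirical covariance of $\hat\theta$:

```python
import numpy as np

# Hypothetical numeric check of the sandwich formula (setup is my own, not the
# book's).  For the linear case M(theta) = W (Y - X theta):
#   dM/dtheta = -W X  (non-random),  Var{M(theta)} = W Cov(Y) W',
# so V(theta) = (W X)^{-1} W Cov(Y) W' (W X)^{-T}.
rng = np.random.default_rng(1)
n, p, n_sim = 200, 2, 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([1.0, -0.5])
true_var = 0.5 + rng.uniform(size=n)        # true Var(Y_i)
work_var = np.ones(n)                       # deliberately misspecified weights
W = X.T @ np.diag(1.0 / work_var)           # W(theta) = X' V^{-1}, V = working cov

A = -W @ X                                  # E(dM/dtheta)
B = W @ np.diag(true_var) @ W.T             # Var{M(theta)}
V_sandwich = np.linalg.inv(A) @ B @ np.linalg.inv(A).T

# Monte Carlo: empirical covariance of theta_hat over repeated samples
est = np.empty((n_sim, p))
for s in range(n_sim):
    Y = X @ theta_true + rng.normal(scale=np.sqrt(true_var))
    est[s] = np.linalg.solve(W @ X, W @ Y)  # root of W u(Y, theta) = 0
print(np.cov(est, rowvar=False))            # close to V_sandwich
print(V_sandwich)
```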

Using a similar argument to that in the proof of Lemma 5.1, show that the best estimator $\hat\theta$ corresponds to the estimating equation $W^*(\theta)u(Y,\theta) = 0$, where $W^*(\theta) = E(\partial u'/\partial\theta)\{\mathrm{Var}(u)\}^{-1}$, in the sense that, for any $W(\theta)$,
$$V(\theta) \geq V^*(\theta) = \left\{E\left(\frac{\partial M^*}{\partial\theta}\right)\right\}^{-1} \mathrm{Var}\{M^*(\theta)\} \left\{E\left(\frac{\partial M^{*\prime}}{\partial\theta}\right)\right\}^{-1} = [\mathrm{Var}\{M^*(\theta)\}]^{-1},$$
where $M^*(\theta) = W^*(\theta)u(Y,\theta)$. Here, we assume that $W^*(\theta)$ does not depend on parameters other than $\theta$ (why?). Otherwise, a procedure similar to the EBLUE is necessary (see Example 5.8).
