Questions and Answers of Statistical Techniques in Business
Nonexistence of unbiased procedures. Let X1,..., Xn be independently distributed with density (1/a) f ((x − ξ)/a), and let θ = (ξ, a). Then no estimator of ξ exists which is unbiased with
Median unbiasedness. (i) A real number m is a median for the random variable Y if P{Y ≥ m} ≥ 1/2 and P{Y ≤ m} ≥ 1/2. Then all real a1, a2 such that m ≤ a1 ≤ a2 or m ≥ a1 ≥ a2 satisfy E|Y
Unbiasedness in point estimation. Suppose that γ is a continuous real-valued function defined over Ω which is not constant in any open subset of Ω, and that the expectation h(θ) = Eθδ(X) is a
The following distributions arise on the basis of assumptions similar to those leading to (1.1)–(1.3). (i) Independent trials with constant probability p of success are carried out until a
15.17. Reproduce the Bayes estimates for the ignorable model in Table 15.10 by data augmentation, and provide a histogram of draws. Use five draws of the missing data under DA to create multiple
15.16. Reproduce the EM estimates for the ignorable model in Table 15.10. Use the bootstrap to compute the standard error of the estimated proportion voting yes.
15.15. Redo Table 15.8 for the data in Table 15.7 with m1 and m2 multiplied by a factor of ten.
15.14. Verify the five sets of estimated cell probabilities in Table 15.8. State for each model which parameters (if any) are inestimable.
15.13. For suitable parameterizations of the models, write down factored likelihoods for the models {Y1Y2, Y1M}, {Y1Y2, Y2M}, {Y1M, Y2M}, {Y1, Y2M} in Example
15.12. For the pattern-mixture model (15.18) where missingness of Y2 depends on Y1 + λY2, show that the ML estimate of c1μ1 + c2μ2 is c1ȳ1 + c2ȳ2 + (c1 + c2β̂(λ̂)21·1)(μ̂1 − ȳ1). Hence
15.11. Show that if λ = β(0)12·2, substituting the ML estimate of β(0)12·2 in Eqs. (15.27)–(15.29) yields complete-case estimates. That is, if λ = β(0)12·2 is thought to be more
15.10. Fill in the details leading to the ML estimates (15.23)–(15.25) for the pattern-mixture model (15.18) under restrictions (15.22), and Eqs. (15.27)–(15.29) for the pattern-mixture model
15.9. Section 15.5.2 shows that for the pattern-mixture model (15.18) with MAR restrictions (15.21), the ML estimate of μ2 is the same as for the ignorable selection model in Section 7.2.1. Show
15.8. Derive the expressions for the posterior mean and variance of θ in Example 15.10. What is the posterior mean and variance of variable 32D when c1 = c2 = 0.5?
15.7. Suppose that for the model of Example 15.7 a random subsample of nonrespondents to Y1 is followed up, and values of Y1 obtained. Write down the loglikelihood for the resulting data and describe
15.6. Consider the selection model of Example 15.7 when xi = (xi1, zi), where zi is a single binary variable predictive of selection but with coefficient zero in the regression of Y1 on X. The
15.5. Review the two-step fitting method for the model of Example 15.7 of Heckman (1976). Contrast the assumptions made by that method and by the ML fitting procedure in Example 15.7 (see, e.g.,
15.4. Justify the M step computations given at the end of Example 15.7. In particular, why is the estimate of σ1² not given simply from the regression of Y1 on X?
15.3. Derive the expressions for the E step in Example 15.7.
15.2. Derive the expressions for the E step in Example 15.4. Also display the M step for this example explicitly.
15.1. Carry out the integrations needed to derive the E step in Example 15.3.
14.12. Derive Eq. (14.18) from Eqs. (14.1) and (14.2), and hence express the parameters {γc0, γcj} as functions of the general location model parameters θ = (Π, Γ, Ω). Consider the impact of
14.11. Derive Eq. (14.17) from Eqs. (14.1) and (14.2), and hence express the parameters {βc0, βj, σ²} as functions of the general location model parameters θ = (Π, Γ, Ω). Consider the impact of
14.10. Describe the Bayesian analog of the simplified ML algorithm in Section 14.3.5.
14.9. Derive the maximized loglikelihood of the data in Problem 14.7 for the models of Problems 14.7 and 14.8, and hence derive the likelihood ratio chi-squared statistic for testing independence of
14.8. Repeat (b) of Problem 14.7, with the restriction that the variables race and sex are independent.
14.7. A survey of 20 graduates of a university class five years after graduation yielded the following results for the variables sex (1 = male, 2 = female), race (1 = white, 2 = other), and
14.6. Derive the expressions in Section 14.2.3 for the conditional expectations of wim xij and xij xik given xobs,i, Si, and θ(t), from properties of the general location model.
14.5. Using Bayes theorem, show that Eq. (14.9) follows from the definition of the general location model, Eqs. (14.1) and (14.2).
14.4. Compare the properties of discriminant analysis and logistic regression for classifying observations into groups on the basis of known covariates. (See, for example, Press and Wilson (1978);
14.3. Suppose that in Problem 14.2, X is fully observed and Y has missing values. Show that ML estimates for the general location model cannot be found by factoring the likelihood, because the
14.2. Using the factored likelihood methods of Chapter 7, derive ML estimates of the general location model for the special case of one fully observed categorical variable Y and one continuous
14.1. Show that Eq. (14.4) provides ML estimates of the parameters for the complete-data loglikelihood Eq. (14.3).
13.17. Consider bivariate monotone data as in Section 13.2, and suppose the data are MCAR. (a) Show that cj in Eq. (13.7) is of smaller order than other terms in the expression. (b) Show that Eq.
13.16. Why can starting values including zero probabilities disrupt proper performance of EM? (Hint: Consider the loglikelihood.)
13.15. Compute ML estimates for the model {SP, SC} for the full data in Table 13.8, with the counts in the supplemental Table 13.8b increased by a factor of 10.
13.14. Using results from Problem 13.13, derive the estimates in Table 13.9 for the models {SPC}, {SC, PC}, and {SP, SC}.
13.13. Display explicit ML estimates for all the models in Table 13.7 except for {12, 23, 31}.
13.12. Show that in Example 13.10, the factors in the factored likelihood are distinct for models {SP, SC, PC} and {SP, SC}, but are not distinct for {SC, PC}.
13.11. Compute the EM algorithm for the data in Table 13.5 with the values superscripted a, b and c, d in the supplemental margins interchanged. Compare the ML estimate of the odds ratio π11π22π12⁻¹π21⁻¹
13.10. Redo Example 13.4 assuming that the coarsely classified data in Table 13.4 were summarized as "Improvement" or "No Improvement" (stationary or worse).
13.9. Replicate the calculations of Example 13.4 for estimates of p12.
13.8. Fill in the details in the derivation of Eq. (13.7).
13.7. State in words the assumption about the missing-data mechanism under which the estimates in Table 13.4c are ML for Example 13.4.
13.6. Suppose that in Example 13.3 there are no cases with pattern d. Which parameters are inestimable, in that they do not appear in the likelihood? Estimate the cell probabilities, assuming specific
13.5. Calculate the expected cell frequencies in the first column of data in Table 13.3b, and compare the answers with those obtained from complete cases.
13.4. Compute the fraction of missing information in Example 13.2, using the methods of Section 9.1.
13.3. Verify the results of the chi-squared test for the MCAR assumption in Example 13.2.
13.2. Derive ML estimates and associated variances for the likelihood (13.1). (Hint: Remember the constraint that the cell probabilities sum to 1.)
13.1. Show that for complete data the Poisson and multinomial models for multiway counted data yield the same likelihood-based inferences for the cell probabilities. Show that the result continues to
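The factorization behind this equivalence (standard, stated from first principles rather than quoted from the text): if the cell counts n_k are independent Poisson with means λ_k, then

L_{\mathrm{Pois}}(\lambda)=\prod_k \frac{e^{-\lambda_k}\lambda_k^{n_k}}{n_k!}
=\frac{e^{-\lambda_+}\lambda_+^{n_+}}{n_+!}\;\cdot\;\frac{n_+!}{\prod_k n_k!}\prod_k \pi_k^{n_k},
\qquad \lambda_+=\sum_k\lambda_k,\quad n_+=\sum_k n_k,\quad \pi_k=\lambda_k/\lambda_+ ,

where the first factor involves only the total λ_+ and the second is exactly the multinomial likelihood in the cell probabilities π_k, so likelihood-based inferences about the π_k coincide under the two models.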
12.8. Derive the E step equations in Section 12.3.2.
12.7. Derive the weighting functions (12.11) and (12.13) for the models of Section 12.3.
12.6. Extend the contaminated normal and t models with known df to the case of simple linear regression of X on a fixed covariate Z. Derive EM algorithms for these models. Do cases with Z observed
12.5. Explore ML estimation for the contaminated normal model of Example 12.1 with (a) π known and λ unknown and estimated by ML, and (b) π unknown and estimated by ML and λ known. (Does the case
12.4. Simulate data from the contaminated normal model and explore sensitivity of inferences to different choices of true and assumed values of π and λ.
12.3. Write a program to compute ML estimates for the contaminated normal model of Example 12.1.
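A minimal sketch of such a program, assuming the common two-component parameterization in which a contaminated observation has its variance inflated by a known factor λ > 1, with μ, σ², and the contamination probability π estimated by ML via EM; the function name and the parameterization are illustrative and may differ in detail from Example 12.1.

```python
import numpy as np
from scipy.stats import norm

def em_contaminated_normal(y, lam=9.0, n_iter=200):
    """EM for y_i ~ (1-pi) N(mu, sigma2) + pi N(mu, lam*sigma2), lam known."""
    y = np.asarray(y, float)
    n = len(y)
    mu, sigma2, pi = y.mean(), y.var(), 0.1          # crude starting values
    for _ in range(n_iter):
        # E step: posterior probability w_i that observation i is contaminated
        f0 = norm.pdf(y, mu, np.sqrt(sigma2))
        f1 = norm.pdf(y, mu, np.sqrt(lam * sigma2))
        w = pi * f1 / (pi * f1 + (1 - pi) * f0)
        u = (1 - w) + w / lam                        # precision weights
        # M step: weighted complete-data ML estimates
        mu = np.sum(u * y) / np.sum(u)
        sigma2 = np.sum(u * (y - mu) ** 2) / n
        pi = w.mean()
    return mu, sigma2, pi
```

Treating the contamination indicators as missing data gives closed-form, precision-weighted M-step updates, which is the design choice that makes EM attractive here.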
12.2. Write down the data augmentation algorithm for Example 12.1.
12.1. Derive the weighting function (12.5) for the model of Example 12.1.
11.14. Review the GEM algorithm in Jennrich and Schluchter (1986) for ML estimation for the model of Eq. (11.20). Under what circumstances is the GEM algorithm an ECM algorithm, as defined in Section
11.13. Develop a Gibbs sampler for simulating the posterior distributions of the parameters and predictions of the zi for Example 11.9. Compare the posterior distributions for the predictions
11.12. For Example 11.8, extend the results of Problem 11.11 to compute the means, variances, and covariance of yj+1 and yj+2 given yj, yj+3, and θ, for a sequence where yj and yj+3 are observed,
11.11. Fill in the details leading to the expressions for the mean and variance of yj+1 given yj, yj+2, and θ in Example 11.8. Comment on the form of the expected value of yj+1 as β ↑ 1 and β ↓ 0.
11.10. Examine Beale and Little's (1975) approximate method for estimating the covariance matrix of estimated slopes in Section 11.4.2, for a single predictor X, and data with (a) Y completely
11.9. Derive the EM algorithm for the model of Example 11.4 extended with the specification that μ ~ N(0, τ²), where μ is treated as missing data. Then consider the case where τ² → ∞, yielding a flat
11.8. Review the discussion in Rubin and Thayer (1978, 1982, 1983) and Bentler and Tanaka (1983) on EM for factor analysis.
11.7. Prove the statement before Eq. (11.9) that complete-data ML estimates of Σ are obtained from C by simple averaging. (Hint: Consider the covariance matrix of the four variables U1 = Y1 + Y2 +
11.6. For bivariate data, find the ML estimate of the correlation ρ for (a) a bivariate sample of size r, with means (μ1, μ2) and variances (σ1², σ2²) assumed known, and (b) a bivariate sample
11.5. Derive the expression for the expected information matrix in Section 11.2.2, for the special case of bivariate data.
11.4. Describe the EM algorithm for bivariate normal data with means (μ1, μ2), correlation ρ, and common variance σ², and an arbitrary pattern of missing values. If you did Problem 11.2, modify the
11.3. Write a computer program for generating draws from the posterior distribution of the parameters, for bivariate normal data with an arbitrary pattern of missing values, and a noninformative
11.2. Write a computer program for the EM algorithm for bivariate normal data with an arbitrary pattern of missing values.
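One possible sketch of such a program, assuming missing values are coded as NaN and starting from available-case moments; the routine name, starting values, and toy data are illustrative rather than taken from the text.

```python
import numpy as np

def em_bivariate_normal(Y, n_iter=200):
    """EM for a bivariate normal with values missing (NaN) in either column.

    Returns ML estimates of the mean vector and covariance matrix.
    """
    Y = np.asarray(Y, dtype=float)
    n = Y.shape[0]
    mu = np.array([np.nanmean(Y[:, 0]), np.nanmean(Y[:, 1])])
    sigma = np.diag([np.nanvar(Y[:, 0]), np.nanvar(Y[:, 1])])
    for _ in range(n_iter):
        s1, s2 = np.zeros(2), np.zeros((2, 2))       # expected sufficient stats
        for y in Y:
            obs = ~np.isnan(y)
            ey = y.copy()
            if obs.all():                             # fully observed row
                eyy = np.outer(y, y)
            elif obs.any():                           # one coordinate missing
                o, m = np.where(obs)[0][0], np.where(~obs)[0][0]
                beta = sigma[m, o] / sigma[o, o]      # regression of missing on observed
                cond_mean = mu[m] + beta * (y[o] - mu[o])
                cond_var = sigma[m, m] - beta * sigma[o, m]
                ey[m] = cond_mean
                eyy = np.zeros((2, 2))
                eyy[o, o] = y[o] ** 2
                eyy[o, m] = eyy[m, o] = y[o] * cond_mean
                eyy[m, m] = cond_mean ** 2 + cond_var
            else:                                     # both coordinates missing
                ey = mu.copy()
                eyy = sigma + np.outer(mu, mu)
            s1 += ey
            s2 += eyy
        # M step: complete-data ML estimates from the expected statistics
        mu = s1 / n
        sigma = s2 / n - np.outer(mu, mu)
    return mu, sigma

# small illustrative check on simulated data with ~30% of Y2 missing
rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=200)
data[rng.random(200) < 0.3, 1] = np.nan
print(em_bivariate_normal(data))
```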
11.1. Show that the available-case estimates of the means and variances of an incomplete multivariate sample, discussed in Section 3.4, are ML when the data are multivariate normal with unrestricted
10.5. Modify the multiple imputation approach of Problem 10.4 to give the correct answer for large r and N/r. (Hint: For example, add sR r^(−1/2) zd to the imputed value for observation i, where the zd
10.3. Suppose in Problem 10.2, imputations are randomly drawn with replacement from the r respondents' values. (a) Show that ȳ* is unbiased for the population mean Ȳ. (b) Show that conditional
10.2. Consider a simple random sample of size n with r respondents and m = n − r nonrespondents, and let ȳR and sR² be the sample mean and variance of the respondents' data, and ȳNR and sNR² the
10.1. Reproduce the posterior distributions in Figure 10.1, and compare the posterior mean and variance with that given in Table 10.1. Recalculate the posterior distribution of θ using the improper
9.8 Using the reasons given at the end of Section 9.2.1, explain why SEM is more computationally stable than simply numerically differentiating ℓ(θ|Yobs) twice.
9.7 Suppose the model is misspecified, but the ML estimate found by EM is a consistent estimate of the parameter θ. Which method of estimating the large-sample covariance matrix of (θ̂ − θ) is
9.6 Suppose PX-EM is used to find the ML estimate of θ in Example 8.10. Further suppose that SEM is applied to the sequence of PX-EM iterates, assuming the algorithm were EM. Would the resulting
9.5 The SEM algorithm can be extended to the SECM algorithm when ECM is used rather than EM. Details are provided in Van Dyk, Meng and Rubin (1995), but it is more complicated than SEM. Describe how
9.4 Compute standard errors for the data in Table 7.1 using the bootstrap, and compare the results with the standard errors in Table 9.1.
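Since the data of Table 7.1 are not reproduced here, the following is only a generic sketch of the nonparametric bootstrap over cases; `data` and `estimator` are placeholders, where the estimator would be the ML fit (e.g., the EM routine sketched below under Problem 11.2).

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=1000, seed=0):
    """Nonparametric bootstrap standard errors for a row-resampled estimator.

    `estimator` maps an (n, p) data array to a 1-D array of parameter estimates.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    reps = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return reps.std(axis=0, ddof=1)   # bootstrap SE of each parameter
```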
9.3 Suppose ECM is used to find the ML estimate of θ in Example 8.6. Are the iterates likely to converge more quickly or more slowly than EM? Further suppose that SEM is applied to the sequence of
9.2 Apply SEM to Example 9.2, but without the normalizing transformations on σ22 and ρ. Compare the confidence intervals for σ22 and ρ based on the results of Example 9.2 and the results in this
9.1 In Example 9.1, show how the EM and SEM answers were obtained for logit(θ̂). Compare the interval estimates for θ using EM/SEM on the raw and logit scales.
8.16. Suppose that (a) X is Bernoulli with Pr(X = 1) = 1 − Pr(X = 0) = π, and (b) Y given X = j is normal with mean μj, variance σ², a simple form of the discriminant analysis model.
8.15. Prove the PX-E and PX-M steps in Example 8.10.
8.14. Write down the complete-data loglikelihood in Example 8.6, and verify the two CM steps Eqs. (8.34) and (8.35) in that example.
8.13. For the censored exponential sample in the second part of Example 6.22, suppose y1, ..., yr are observed and yr+1, ..., yn are censored at c. Show that the complete-data sufficient statistic
8.12. Write down the large sample variance of the ML estimate of θ in Example 8.2, and compare it with the variance of the ML estimate when the first and third counts (namely, 38 and 125) are
8.11. Verify the E and M steps in Example 8.4.
8.10. Write down the loglikelihood of θ for the observed data in Example 8.2. Show directly by differentiating this function that I(θ|Yobs) = 435.3, as found in Example 8.5.
8.9. By hand calculation, carry out the multivariate normal EM algorithm for the data set in Table 7.1, with initial estimates based on the complete observations. Hence verify that for this pattern
8.8. Suppose values yi in Problem 8.7 are missing if and only if yi > c, for some known censoring point c. Explore the E step of the EM algorithm for estimating (a) β1, ..., βJ when k is assumed
8.7. Suppose Y = (y1, ..., yn)ᵀ are independent gamma random variables with unknown index k and mean μi = g(Σj βj xij), where g is a known function, β = (β1, ..., βJ) are unknown
8.6. Show that Eqs. (8.20) and (8.21) are the E and M steps for the regular exponential family (8.19).
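For orientation, the standard E and M steps for a regular exponential family with complete-data sufficient statistic s(Y), which Eqs. (8.20) and (8.21) presumably express in the book's notation, are:

\text{E step:}\quad s^{(t)}=E\{s(Y)\mid Y_{\mathrm{obs}},\theta^{(t)}\};\qquad
\text{M step:}\quad \text{solve } E\{s(Y)\mid\theta\}=s^{(t)} \text{ for } \theta^{(t+1)},

i.e., the M step is complete-data ML estimation with s(Y) replaced by its conditional expectation given the observed data and the current parameter value.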
8.5. Review results concerning the convergence of EM.
8.4. Show how Corollaries 1 and 2 follow from Theorem 8.1.
8.3. Prove that the loglikelihood in Example 8.3 is linear in the statistics in Eq. (8.12).
8.2. Describe in words the function of the E and M steps of the EM algorithm.
8.1. Show that for a scalar parameter, the Newton–Raphson algorithm converges in one step if the loglikelihood is quadratic.
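A one-step verification (standard calculus, not taken from the text): for a quadratic loglikelihood

\ell(\theta)=a+b\theta-\tfrac{c}{2}\theta^{2},\ c>0:\qquad
\theta^{(1)}=\theta^{(0)}-\frac{\ell'(\theta^{(0)})}{\ell''(\theta^{(0)})}
=\theta^{(0)}+\frac{b-c\,\theta^{(0)}}{c}=\frac{b}{c}=\hat{\theta},

so a single Newton–Raphson update lands exactly at the maximizer, whatever the starting value.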