Questions and Answers of Principles of Managerial Statistics
Task 10: Using the data in Table 3.3, what are the posterior odds of someone using a laptop in class (compared to not using one), given that they passed the exam?
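Tasks 9 and 10 both reduce to arithmetic on a 2 × 2 contingency table. Table 3.3 is not reproduced here, so the counts below are hypothetical stand-ins, but the recipe is general: a conditional probability is a cell count divided by the relevant total, and the posterior odds given a pass are the ratio of the two posterior probabilities, which collapses to a ratio of counts.

```python
# Sketch of Tasks 9-10 with HYPOTHETICAL counts standing in for Table 3.3.
counts = {
    ("laptop", "pass"): 30,
    ("laptop", "fail"): 20,
    ("no laptop", "pass"): 40,
    ("no laptop", "fail"): 10,
}

# p(laptop | pass): among those who passed, the proportion who used a laptop.
total_pass = counts[("laptop", "pass")] + counts[("no laptop", "pass")]
p_laptop_given_pass = counts[("laptop", "pass")] / total_pass

# Posterior odds of laptop vs. no laptop, given a pass: the ratio of the
# two posterior probabilities, whose common denominator cancels.
posterior_odds = counts[("laptop", "pass")] / counts[("no laptop", "pass")]
```

With these made-up counts, p(laptop|pass) = 30/70 ≈ 0.43 and the posterior odds are 30/40 = 0.75; with the real Table 3.3 you would substitute its four cell counts.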
Task 9: From the data in Table 3.3, what is the conditional probability that someone used a laptop in class, given that they passed the exam, p(laptop|pass)? What is the conditional probability
Task 8: Various studies have shown that students who use laptops in class often do worse on their modules (Payne-Carter, Greenberg, & Waller, 2016; Sana, Weston, & Cepeda, 2013). Table 3.3 shows
Task 7: Describe what you understand by the term ‘Bayes factor’.
Task 6: What is meta-analysis?
Task 5: What is the difference between a confidence interval and a credible interval?
Task 4: What are the problems with null hypothesis significance testing?
Task 3: Calculate and interpret Cohen’s d for the difference in the mean duration of the celebrity marriages in Chapter 1 (Task 9) and mine and my friends’ marriages in Chapter 2 (Task 13).
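Task 3 asks for Cohen’s d: the difference between two group means divided by a standard deviation, here a pooled one. A minimal sketch, using invented duration data rather than the chapters’ actual values:

```python
import statistics

def cohens_d(x, y):
    """Cohen's d using a pooled standard deviation (illustrative helper)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

# Invented marriage durations in days (NOT the book's data):
group_a = [240, 144, 143, 72, 30]
group_b = [3650, 2920, 5110, 4380, 7300]
d = cohens_d(group_a, group_b)   # large negative: group_a marriages much shorter
```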
Task 2: In Chapter 1 (Task 8) we looked at an example of how many games it took a sportsperson before they hit the ‘red zone’, then in Chapter 2 we looked at data from a rival club. Compute and
Task 1: What is an effect size and how is it measured?
What are the problems with NHST?
Task 13: In Chapter 1 (Task 9) we looked at the length in days of 11 celebrity marriages. Here are the lengths in days of eight marriages, one being mine and the other seven being those of some of
Task 12: At a rival club to the one I support, they similarly measured the number of consecutive games it took their players before they reached the red zone. The data are: 6, 17, 7, 3, 8, 9,
Task 11: In Chapter 1 (Task 8), we looked at an example of how many games it took a sportsperson before they hit the ‘red zone’. Calculate the standard error and confidence interval for those
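Task 11’s two quantities follow directly from a sample: the standard error is the sample standard deviation divided by √n, and a normal-approximation 95% confidence interval is the mean ± 1.96 standard errors. A sketch with made-up ‘games until red zone’ counts (the task’s real data are in Chapter 1):

```python
import statistics

def mean_se_ci(data, z=1.96):
    """Mean, standard error, and normal-approximation 95% CI (sketch)."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / len(data) ** 0.5   # s / sqrt(n)
    return m, se, (m - z * se, m + z * se)

# Hypothetical counts, not the book's:
games = [10, 16, 8, 9, 6, 17, 18, 4, 13, 11]
m, se, ci = mean_se_ci(games)
```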
Task 10: Figure 2.18 shows a similar study to the one above, but the means were 10 (singing) and 10.01 (conversation), the standard deviations in both groups were 3, and each group contained 1
Task 9: Figure 2.17 shows two experiments that looked at the effect of singing versus conversation on how much time a woman would spend with a man. In both experiments the means were 10 (singing)
Task 8: What is statistical power?
Task 7: What are Type I and Type II errors?
Task 6: What is a test statistic and what does it tell us?
Task 5: What do the sum of squares, variance and standard deviation represent? How do they differ?
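Task 5’s three quantities are one computation viewed at three stages: sum the squared deviations from the mean (total error), divide by the degrees of freedom to get the variance (average error), then take the square root to return to the original units (the standard deviation). Using the first five treadmill times from the neighbouring task purely as illustration:

```python
data = [18, 16, 18, 24, 23]                # first five treadmill times, illustration only
n = len(data)
mean = sum(data) / n                       # 19.8
ss = sum((x - mean) ** 2 for x in data)    # sum of squared deviations: 48.8
variance = ss / (n - 1)                    # sample variance: 12.2
sd = variance ** 0.5                       # standard deviation, back in original units
```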
Task 4: In Chapter 1 we used an example of the time taken for 21 heavy smokers to fall off a treadmill at the fastest setting (18, 16, 18, 24, 23, 22, 22, 23, 26, 29, 32, 34, 34, 36, 36, 43,
Task 3: What’s the difference between the standard deviation and the standard error?
Task 2: What is the mean and how do we tell if it’s representative of our data?
Task 1: Why do we use samples?
What do the differences in values between Tasks 9 and 10 tell us about the influence of unusual scores on these measures?
How does this affect the mean, median, range, interquartile range, and standard deviation?
Task 10: Repeat Task 9 but excluding Jennifer Aniston and Brad Pitt’s marriage.
Task 9: Celebrities always seem to be getting divorced. The (approximate) lengths of some celebrity marriages in days are: 240 (J-Lo and Cris Judd), 144 (Charlie Sheen and Donna Peele), 143 (Pamela
Task 8: Sports scientists sometimes talk of a ‘red zone’, which is a period during which players in a team are more likely to pick up injuries because they are fatigued. When a player hits the
Task 7: In this chapter we used an example of the time taken for 21 heavy smokers to fall off a treadmill at the fastest setting (18, 16, 18, 24, 23, 22, 22, 23, 26, 29, 32, 34, 34, 36, 36, 43, 42,
Task 6: In 2011 I got married and we went to Disney World in Florida for our honeymoon. We bought some bride and groom Mickey Mouse hats and wore them around the parks. The staff at Disney are really
Task 5: Sketch the shape of a normal distribution, a positively skewed distribution and a negatively skewed distribution.
Task 4: Say I own 857 CDs. My friend has written a computer program that uses a webcam to scan the shelves in my house where I keep my CDs and measure how many I have. His program says that I have
Task 3: What is the level of measurement of the following variables?
The number of downloads of different bands’ songs on iTunes
The names of the bands that were downloaded
Their positions in the
What are (broadly speaking) the five stages of the research process?
Task 2: What is the fundamental difference between experimental and correlational research?
15.21. This exercise is related to Example 15.10. (i) Verify expression (15.42). (ii) Derive an expression for f(v, z2, z3|u) by integrating out α and β over [0, 1] × [0, 1] from (15.42). (iii)
15.19. Recall the definition of F below (15.24). Show that F ≤ 1.
15.18. Continue with the previous exercise. (i) Verify expressions (15.26) and (15.27). (ii) Verify the identity (15.29).
15.17. This exercise is regarding the measure d_f defined by the square root of (15.25). (i) Show that d_f is not a distance [the definition of a distance is given by the requirements 1–4 below … (ii) Show that d_f is a stronger measure of discrepancy than the L1-distance, in the sense that ‖p − f‖_{L1} ≤ d_f(p, f), where ‖p − f‖_{L1} = ∫ |p(x) − f(x)| dx is the L1-distance
15.16. Recall the open question given at the end of Section 15.4 regarding the consistency of MLE in a very simple case of the GLMM. The conjectured answer to the question is yes. Here is evidence
15.15. Specify the right side of (15.23) for the mixed logistic model of Example 15.9. Also specify the importance weights w_kl by replacing f{y|ψ^(k)} by 1, which does not affect the result of the
15.14. Regarding the M-H algorithm described below (15.20), show that the acceptance probability of the M-H algorithm [i.e., (15.12)] can be expressed as (15.21).
15.13. Verify that under the Gaussian mixed model (15.15), the log-likelihood function can be expressed as (15.16), where c does not depend on the data or parameters. Also verify the expressions of
15.12. This exercise is related to some of the details of the rejection sampling chain of Example 15.8 as a special case of the M-H algorithm. (i) Show that the rejection sampling scheme described
15.11. Show that the M-H algorithm, described below (15.12), generates a Markov chain X_t whose transition kernel is given by (15.10), with α defined by (15.12) if f(x)q(x, y) > 0, and α = 1 otherwise. Furthermore, the transition kernel is reversible with respect to f, the target density, in the sense of (15.11). [For simplicity, you may assume that
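For readers tackling 15.10–15.14, a concrete M-H chain helps fix ideas. The sketch below is my own minimal implementation, not the book’s notation: with a symmetric random-walk proposal, q(x, y) = q(y, x) cancels and the acceptance probability min{1, f(y)q(y, x)/(f(x)q(x, y))} reduces to min{1, f(y)/f(x)}, so the target f only needs to be known up to a normalising constant.

```python
import math
import random

def mh_chain(f, steps=5000, x0=0.0, scale=1.0, seed=1):
    """Random-walk Metropolis-Hastings sampler (minimal sketch).

    With a symmetric Gaussian proposal, the M-H acceptance probability
    min{1, f(y)q(y, x) / (f(x)q(x, y))} reduces to min{1, f(y)/f(x)}.
    """
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)            # propose
        if rng.random() < min(1.0, f(y) / f(x)):
            x = y                                # accept; otherwise keep x
        chain.append(x)
    return chain

# Target: unnormalised standard normal density.
chain = mh_chain(lambda t: math.exp(-t * t / 2))
```

The chain’s empirical mean and variance should settle near 0 and 1, the moments of the target.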
15.10. Show that (15.11) implies that f is the stationary distribution with respect to K.
15.6. Regarding Examples 15.2 and 15.3, show the following: (i) f_X(x), or π_X(B) = ∫_B f_X(x) dx, is a stationary distribution for the Markov chain X_t. (ii) f_Y(y), or π_Y(B) = ∫_B f_Y(y) dy, is a
15.4. Regarding rejection sampling, introduced in Section 15.1, show that the pdf of the drawn θ conditional on u ≤ p(θ)/bq(θ) is p(θ). (Hint: Use Bayes’ theorem; see Appendix A.3).
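Exercise 15.4’s setup is easy to simulate, which also makes the result plausible before you prove it: draw θ from the proposal q, draw u ~ Uniform(0, 1), and keep θ when u ≤ p(θ)/bq(θ). The sketch below is illustrative code, not from the book; it targets a Beta(2, 2) density with a uniform proposal, so q ≡ 1 and b = 1.5 bounds p/q.

```python
import random

def rejection_sample(p, b, n, seed=2):
    """Rejection sampling from density p with a Uniform(0, 1) proposal.

    Accept a proposal theta when u <= p(theta) / (b * q(theta)); here
    q(theta) = 1 on [0, 1], and b must satisfy p(theta) <= b * q(theta).
    """
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        theta = rng.random()                 # draw theta from q
        if rng.random() <= p(theta) / b:     # accept w.p. p(theta)/(b q(theta))
            out.append(theta)
    return out

# Target: Beta(2, 2) density p(theta) = 6 theta (1 - theta), maximised at 1.5.
draws = rejection_sample(lambda t: 6 * t * (1 - t), b=1.5, n=2000)
```

The accepted draws should have mean near 1/2, the mean of Beta(2, 2), which is what the exercise’s conditional-pdf claim predicts.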
15.2. This exercise is in every book, chapter, or section about Bayesian inference. Suppose that the distribution of y depends on a single parameter, θ. The conditional distribution of the
14.30. This exercise is related to Example 14.11. (i) Simplify the expressions for g1(θ) and g2(θ). (ii) It is conjectured that g3 can be expressed as s + o(m_*^{-1}), where s depends only on the
14.29. Consider a special case of the nested error regression model (14.85) with x_ij′β = μ and n_i = k, 1 ≤ i ≤ m, where μ is an unknown mean and k ≥ 2. Suppose that the random effects v_i are i.i.d. with an unknown distribution F that has mean 0 and finite moment of any … e, which do not require normality (see Section 12.2), are given by σ̂_v^2 = (MSA − MSE)/k and σ̂_e^2 = MSE, where MSA = SSA/(m − 1) with SSA = k Σ_{i=1}^m (ȳ_i· − ȳ··)^2 and ȳ_i· = k^{-1} Σ … v, which you may … throughout this exercise for simplicity. The EBLUP for the random effect v_i is given by v̂_i = kσ̂_v^2 (σ̂_e^2 + kσ̂_v^2)^{-1} (ȳ_i· − ȳ··). Suppose that m → ∞ while k is fixed. Show that MSPE(v̂_i) = E(v̂_i − v_i)^2 can be expressed as a + o(m^{-1}), where a depends only on the second and fourth moments of F and G.
14.28. Show that (14.80) and d2/n → 0 imply (14.81).
14.27. Show that, in Example 14.10, the coverage probability of Ĩ_{2,i} is 1 − α + o(1); that is, as n → ∞, P(μ̂ − z_{α/2}√Â ≤ ζ_i ≤ μ̂ + z_{α/2}√Â) = 1 − α + o(1).
14.26. This exercise is regarding the special case of block-bootstrapping the sample mean, discussed in Section 14.5. (i) Show that the influence function IF defined in (A2) [above (14.72)] is given
14.25. Interpret expression (14.72) of the asymptotic variance σ^2.
14.24. Verify the two equivalent expressions of (14.69) — that is, (14.70) and (14.71).
14.23. This exercise has three parts. (i) Interpret the expression of the asymptotic variance τ^2 in (14.66) given below the equation. (ii) Show that (14.67) is equivalent to P*{n^{-1/2} Σ_{t=1}^n (X*_t
14.22. Is the plug-in principle used in the sieve bootstrap [see Section 14.5, below (14.64)] the same as Efron’s plug-in principle [see Section 14.5, below (14.63)]? Why?
14.21. Show that the coefficients φ_j, j = 0, 1, ..., in (14.64) are functions of F, the joint distribution of the X’s and ε’s. (You may impose some regularity conditions, if necessary.)
14.20. Regarding the plug-in principle of bootstrap summarized below (14.63), what are X, F, and R(X, F) for bootstrapping under the dynamic model (14.60)? What are X*, F̂, and R(X*, F̂) in this
14.19. In this exercise you are encouraged to study the large-sample behavior of the bootstrap through some simulation studies. Two cases will be considered, as follows. In each case, consider n =
(i) F = Uniform[0, 1], θ = the median of F (which is 1/2), θ̂ = the sample median of X1, ..., Xn, and θ̂* = the sample median of X*1, ..., X*n, the bootstrap sample. Make a histogram based
(iii) Make a histogram of the true distribution of θ̂ for case (i). This can be done by drawing 2000 samples of X1, ..., Xn and computing θ̂ for each sample. Compare this histogram with that of
(iv) Make a histogram of the true distribution of θ̂ for case (ii). This can be done by drawing 2000 samples of X1, ..., Xn and computing θ̂ for each sample. Compare this histogram with that of
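The parts of 14.19 all follow the same recipe: resample with replacement, recompute the statistic, and compare the bootstrap histogram with the “true” (Monte Carlo) one. Below is a minimal sketch of the bootstrap half for the Uniform[0, 1] median case; the sample size, repetition counts, and seeds are my own choices, not the exercise’s.

```python
import random
import statistics

def bootstrap_medians(x, reps=1000, seed=3):
    """Resample x with replacement and record the sample median each time."""
    rng = random.Random(seed)
    return [statistics.median(rng.choices(x, k=len(x))) for _ in range(reps)]

# Case (i): F = Uniform[0, 1], theta = median of F = 1/2.
rng = random.Random(4)
x = [rng.random() for _ in range(100)]   # one observed sample
boot = bootstrap_medians(x)              # bootstrap distribution of the median
```

Plotting a histogram of `boot` against one built from many fresh samples of size 100 is exactly the comparison the exercise asks for.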
14.18. Continue with the previous exercise. (i) Give an example of a sequence a_n, n ≥ 0, of positive integers that is strictly increasing and satisfies (14.58) for every k ≥ 0. (ii) Show that for
(iv) Using the above result, argue that (14.54) and (14.55) follow if (14.59) holds for any b > 0.
(v) Show that the distribution of ζ_n is the same as that of a_n(X_{n, a_n−a_{n−1}} − X_{n−k+1, a_n−a_{n−1}}), where X_{r,n} is the rth order statistic of X1, ..., Xn. Furthermore, show that for any r
14.17. This exercise is related to some of the details in Example 14.9. (i) Show that ξ_n = n{θ − X_(n)}/θ converges weakly to Exponential(1). (ii) Show that for any k ≥ 1, P{X*_(n) <
14.16. Let X1, ..., Xn be i.i.d. with cdf F and pdf f. For any 0 < p < 1, … > 0. Show that for any sequence m = m_n such that m/n = p + o(n^{-1/2}), we have √n{X_(m) − ν_p} →d N(0, σ_p^2), where X_(i) is the ith
14.15. This exercise involves some of the details in Example 14.8. (i) Show that the functional θ is a special case of (14.48). (ii) Verify that θ(F_n) = n^{-2} Σ_{i=1}^n Σ_{j=1}^n 1(X_i + X_j > 0), which is a V
14.14. Show that, with the functional h defined by (14.48), the derivative ψ is given by (14.49). [Hint: Use (14.47). You do not need to justify it rigorously.]
14.13. Regarding Remark 1 below (14.32), show the following: (i) That the M-estimators ψ̂_{−i}, 0 ≤ i ≤ m, are c.u. μ_ω at rate m^{−d} implies that they are c.u. at rate m^{−d}. (ii) Conversely,
14.12. Prove Theorem 14.2. [Hint: Consider a neighborhood of ψ and show that the values of the function l at the boundary of the neighborhood are greater than that at the center with high
14.11. Regarding Example 14.7, show that the REML estimator of ψ satisfies (14.29) (according to the definition under non-Gaussian linear mixed models; see Section 12.2), where the f_j’s are … V^{-1} = V^{-1}X(X′V^{-1}X)^{-1}X′V^{-1} + A(A′VA)^{-1}A′, where V = V(θ) and A is any n × (n − p) matrix of full rank (n is the dimension of y = (y_j)_{1≤j≤n}) such that A′X =
14.10. Regarding Example 14.6, show that the MLE of ψ satisfies (14.29) (under regularity conditions that you may need to specify) with a(ψ) = 0 and f_j(ψ, y_j), 1 ≤ j ≤ p + q, specified in
14.9. This exercise involves some details in Example 14.5. (i) Show that MSPE(ζ̃) = 1 − B. (ii) Show that MSPE(ζ̂) = 1 − B + 2B/m + o(m^{-1}). (iii) Show that E(ζ̂*_{−1} − ζ̂)^2 = A(1 − B)
14.8. Show that, in Example 14.4 [continued following (14.22)], the MSPE of ζ̃ is equal to σ_v^2 σ_e^2/(σ_e^2 + n_i σ_v^2), which is the same as var(ζ|y_i).
14.7. This exercise involves some further details regarding the outline of the proof of Theorem 14.1. (i) Show that w_s ≤ 1, s ∈ S_r. (ii) Use the technique of unspecified c (see Section 3.5) to
14.6. Regarding the outline of the proof of Theorem 14.1 (near the end of Section 14.2), show the following: (n−p choose d−1)^{-1}(n−1 choose d−1) = O(d/n); (n−p choose d−1)^{-1}{(n choose d) − (n−p choose d
14.5. Show that when d = 1, (14.12) reduces to (14.16), which is the weighted delete-1 jackknife estimator of Var(β̂).
14.4. Verify the representation (14.13) and show that β̂_ij is the OLS estimator of β based on the following regression model: y_i = α + βx_i + e_i, y_j = α + βx_j + e_j.
14.3. Consider the example of sample mean discussed below (14.1). Show that, in this case, the right side of (14.8) is equal to s^2/n, where s^2 = (n − 1)^{-1} Σ_{i=1}^n (X_i − X̄)^2 is the sample
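The identity in 14.3 can also be checked numerically: for the sample mean, the delete-1 jackknife variance estimator equals s^2/n exactly, not just asymptotically. The helper below is generic illustration code, not the book’s (14.8) verbatim.

```python
import statistics

def jackknife_var(estimator, x):
    """Delete-1 jackknife variance: (n-1)/n * sum (theta_i - theta_bar)^2."""
    n = len(x)
    thetas = [estimator(x[:i] + x[i + 1:]) for i in range(n)]  # leave one out
    tbar = sum(thetas) / n
    return (n - 1) / n * sum((t - tbar) ** 2 for t in thetas)

x = [3.1, 4.5, 2.2, 5.0, 3.8]
jack = jackknife_var(statistics.mean, x)
classic = statistics.variance(x) / len(x)      # s^2 / n
```

For the mean the two agree to floating-point precision; for nonlinear statistics they differ, which is what makes the jackknife interesting.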
14.2. Continue with the previous exercise. Suppose that X_i has the Uniform[0, 1] distribution. Show that X_(i) ∼ Beta(i, n − i + 1), 1 ≤ i ≤ n, and therefore obtain the mean and variance of
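The Beta(i, n − i + 1) result in 14.2 implies E X_(i) = i/(n + 1), which a quick simulation can corroborate. The function below is an illustrative check; the repetition count and seed are my own choices.

```python
import random

def mean_order_stat(i, n, reps=4000, seed=5):
    """Empirical mean of the i-th order statistic of n Uniform(0, 1) draws.

    Theory: X_(i) ~ Beta(i, n - i + 1), so the mean should be near i/(n + 1).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        total += sorted(rng.random() for _ in range(n))[i - 1]
    return total / reps

emp = mean_order_stat(2, 4)    # theory predicts a value near 2/5 = 0.4
```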
14.1. Let X1, ..., Xn be an i.i.d. sample from a distribution with cdf F and pdf f, and X_(1) < ··· < X_(n) be the order statistics. Then the cdf of X_(i) is given by G_(i)(x) = Σ_{k=i}^n (n choose k){F(x)}^k{1 −
13.26. Show that the estimator given by (13.104) is design-unbiased for μ_i^2; that is, E_d(μ̂_i^2) = μ_i^2. [Hint: Use the index set I_i; see a derivation below (13.103)
13.25. Derive the expression of MSPE (13.100) in the case that A is unknown. [Hint: First, note that ζ̃_M = y − R(y − Xβ). Also note that E(e′Ry) = E{e′R(μ + v + e)} = E(e′Re) = tr(RD).]
13.24. Consider Example 13.4. (i) Recall that the best model in terms of BP is the one that maximizes C1(M), defined below (13.95). Show that C1(M) = (49/640)a^2 m if M = M1, and 0 if M =
13.23. Continue with the previous exercise. Show that if there is a true model M ∈ M, then a true model with minimal p is the optimal model under both the GIC and BP.
13.22. This exercise involves some details in Section 13.5. (i) Verify that the likelihood function under M is given by (13.89). (ii) Verify (13.93). (iii) Show that the BP of ζ under M, in the sense
13.21. This exercise is related to the arguments in Section 13.4 that show d2 → ∞. (i) Show that (13.84) holds for some constant c > 0 and determine this constant.
13.20. Establish inequality (13.83).