Questions and Answers of Statistics
Again, suppose we have a random sample X1, ..., Xn from (1/σ)f((x - θ)/σ), a location-scale pdf, but we are now interested in estimating σ². We can consider three ...
Let X1, ..., Xn be independent random variables with pdfs f(xi|θ) (the individual densities are not reproduced here), where θ > 0. Find a two-dimensional sufficient statistic for θ.
Let X1, ..., Xn be a random sample from a gamma(α, β) population. Find a two-dimensional sufficient statistic for (α, β).
Let f(x, y|θ1, θ2, θ3, θ4) be the bivariate pdf for the uniform distribution on the rectangle with lower left corner (θ1, θ2) and upper right corner (θ3, θ4) in ℜ². The parameters satisfy ...
For each of the following distributions (parts (a)-(e), whose pdfs are not reproduced here) let X1, ..., Xn be a random sample. Find a minimal sufficient statistic for θ.
One observation is taken on a discrete random variable X with pmf f(x|θ), where θ ∈ {1, 2, 3}. Find the MLE of θ.
The independent random variables X1, ..., Xn have the common distribution ..., where the parameters α and β are positive. a. Find a two-dimensional sufficient statistic for (α, β). b. Find the MLEs ...
Let X1, ..., Xn be iid with pdf f(x|θ) = θx^(θ-1), 0 ≤ x ≤ 1, 0 < θ < ∞. a. Find the MLE of θ, and show that its variance → 0 as n → ∞. b. Find the method of moments estimator of θ.
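For part (a), a sketch of the standard likelihood calculation, using only the stated pdf:

```latex
L(\theta \mid \mathbf{x}) = \theta^{n}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\theta-1},
\qquad
\frac{d}{d\theta}\log L = \frac{n}{\theta} + \sum_{i=1}^{n}\log x_i = 0
\;\Longrightarrow\;
\hat{\theta} = \frac{-n}{\sum_{i=1}^{n}\log X_i}.
```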
Let X1, ..., Xn be a random sample from a population with pmf Pθ(X = x) = θ^x (1 - θ)^(1-x), x = 0 or 1, 0 ≤ θ ≤ 1/2. a. Find the method of moments estimator and MLE of θ. b. Find the mean ...
Let X1, ..., Xn be a sample from a population with double exponential pdf f(x|θ) = (1/2)e^(-|x-θ|), -∞ < x < ∞, -∞ < θ < ∞. Find the MLE of θ.
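A sketch of the key step: the log likelihood is

```latex
\log L(\theta \mid \mathbf{x}) = -n\log 2 - \sum_{i=1}^{n} |x_i - \theta|,
```

so maximizing L is equivalent to minimizing Σ|xi - θ|, which is achieved by any sample median of x1, ..., xn.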
Let X1, X2, ..., Xn be a sample from the inverse Gaussian pdf ... a. Show that the MLEs of μ and λ are ... b. Tweedie (1957) showed that μ̂n and λ̂n are independent, μ̂n having an inverse ...
The Borel Paradox (Miscellanea 4.9.3) can also arise in inference problems. Suppose that X1 and X2 are iid exponential(θ) random variables. a. If we observe only X2, show that the MLE of θ is θ̂ = ...
Let (X1, Y1), ..., (Xn, Yn) be iid bivariate normal random variables (pairs) where all five parameters are unknown. a. Show that the method of moments estimators for μX, μY, ...
Suppose that the random variables Y1, ..., Yn satisfy Yi = βxi + εi, i = 1, ..., n, where x1, ..., xn are fixed constants, and ε1, ..., εn are iid n(0, σ²), σ² unknown. a. Find a ...
Let X1, ..., Xn be a random sample from a gamma(α, β) population. a. Find the MLE of β, assuming α is known. b. If α and β are both unknown, there is no explicit formula for the MLEs of α and β ...
Consider Y1, ..., Yn as defined in Exercise 7.19. a. Show that ΣYi/Σxi is an unbiased estimator of β. b. Calculate the exact variance of ΣYi/Σxi and compare it to the variance of the MLE.
Again, let Y1, ..., Yn be as defined in Exercise 7.19. a. Show that (1/n)Σ(Yi/xi) is also an unbiased estimator of β. b. Calculate the exact variance of (1/n)Σ(Yi/xi) and compare it to the variances ...
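A quick Monte Carlo sketch comparing the three unbiased estimators of β that these exercises treat (the MLE ΣxiYi/Σxi², the ratio ΣYi/Σxi, and the mean of ratios). The xi, β, and σ values below are illustrative choices, not from the text:

```python
# Monte Carlo comparison of three unbiased estimators of beta in the model
# Y_i = beta * x_i + eps_i,  eps_i ~ n(0, sigma^2)  (Exercises 7.19-7.21).
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # fixed constants (assumed values)
beta, sigma, reps = 2.0, 1.0, 100_000

Y = beta * x + sigma * rng.standard_normal((reps, x.size))

mle    = (Y * x).sum(axis=1) / (x**2).sum()   # MLE: sum(x_i Y_i) / sum(x_i^2)
ratio  = Y.sum(axis=1) / x.sum()              # sum(Y_i) / sum(x_i)
ratios = (Y / x).mean(axis=1)                 # (1/n) sum(Y_i / x_i)

for name, est in [("MLE", mle), ("sum ratio", ratio), ("mean of ratios", ratios)]:
    print(f"{name:15s} mean {est.mean():.4f}  var {est.var():.5f}")
```

All three sample means land on β ≈ 2, while the MLE shows the smallest variance, which is the ordering the exercises establish analytically.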
This exercise will prove the assertions in Example 7.2.16, and more. Let X1, ..., Xn be a random sample from a n(θ, σ²) population, and suppose that the prior distribution on θ is n(μ, τ²). Here ...
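For reference, the standard normal/normal conjugate result that this exercise develops, with X̄ the sample mean:

```latex
\theta \mid \bar{x} \sim \mathrm{n}\!\left(
\frac{\tau^{2}}{\tau^{2}+\sigma^{2}/n}\,\bar{x}
+ \frac{\sigma^{2}/n}{\tau^{2}+\sigma^{2}/n}\,\mu,\;
\frac{\tau^{2}\,\sigma^{2}/n}{\tau^{2}+\sigma^{2}/n}\right).
```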
If S² is the sample variance based on a sample of size n from a normal population, we know that (n - 1)S²/σ² has a χ² distribution with n - 1 degrees of freedom. The conjugate prior for σ² is the inverted gamma ...
Let X1, ..., Xn be iid Poisson(λ), and let λ have a gamma(α, β) distribution, the conjugate family for the Poisson. a. Find the posterior distribution of λ. b. Calculate the posterior mean and ...
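A sketch of part (a), assuming the scale parameterization of the gamma (prior mean αβ): combining the Poisson likelihood with the gamma prior gives

```latex
\pi(\lambda \mid \mathbf{x}) \propto \lambda^{\alpha + \sum_i x_i - 1}\,
e^{-\lambda\,(n + 1/\beta)}
\;\Longrightarrow\;
\lambda \mid \mathbf{x} \sim \mathrm{gamma}\!\left(\alpha + \sum_i x_i,\;
\frac{\beta}{n\beta + 1}\right),
\qquad
E[\lambda \mid \mathbf{x}] = \frac{\bigl(\alpha + \sum_i x_i\bigr)\beta}{n\beta + 1}.
```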
We examine a generalization of the hierarchical (Bayes) model considered in Example 7.2.16 and Exercise 7.22. Suppose that we observe X1, ..., Xn, where Xi|θi ~ n(θi, σ²), i = 1, ..., n, ...
In Example 7.2.16 we saw that the normal distribution is its own conjugate family. It is sometimes the case, however, that a conjugate prior does not accurately reflect prior knowledge, and a ...
Refer to Example 7.2.17. a. Show that the maximum likelihood estimators from the complete-data likelihood (7.2.11) are given by (7.2.12). b. Show that the limit of the EM sequence in (7.2.23) satisfies ...
An alternative to the model of Example 7.2.17 is the following, where we observe (Yi, Xi), i = 1, 2, ..., n, where Yi ~ Poisson(mβτi) and (X1, ..., Xn) ~ ...
Given a random sample X1,..., Xn from a population with pdf f(x|θ), show that maximizing the likelihood function, L(θ|x), as a function of θ is equivalent to maximizing log L(θ|x).
Prove Theorem 7.2.20. a. Show that, using (7.2.19), we can write ... and, since θ^(r+1) is a maximum, ... When is the inequality an equality? b. Now use Jensen's inequality to show that ..., which together with part (a) ...
In Example 7.3.5 the MSE of the Bayes estimator of a success probability was calculated (the estimator was derived in Example 7.2.14). Show that the choice α = β = √(n/4) yields a constant MSE ...
The Pitman Estimator of Location (see Lehmann and Casella 1998, Section 3.1, or the original paper by Pitman 1939) is given by ..., where we observe a random sample X1, ..., Xn from f(x - θ).
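For reference, the Pitman location estimator referred to above is usually written as

```latex
d_P(\mathbf{X}) =
\frac{\displaystyle\int_{-\infty}^{\infty} \theta \prod_{i=1}^{n} f(X_i - \theta)\, d\theta}
{\displaystyle\int_{-\infty}^{\infty} \prod_{i=1}^{n} f(X_i - \theta)\, d\theta}.
```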
Let X1, ..., Xn be a random sample from a population with pdf f(x|θ) = 1/(2θ), -θ < x < θ, θ > 0. Find, if one exists, a best unbiased estimator of θ.
For each of the following distributions, let X1, ..., Xn be a random sample. Is there a function of θ, say g(θ), for which there exists an unbiased estimator whose variance attains the Cramer-Rao Lower Bound ...
Prove Lemma 7.3.11.
Let X1, ..., Xn be iid Bernoulli(p). Show that the variance of X̄ attains the Cramer-Rao Lower Bound, and hence X̄ is the best unbiased estimator of p.
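A sketch of the verification: the Fisher information for one Bernoulli(p) observation is

```latex
I(p) = E\!\left[\left(\frac{\partial}{\partial p}\log f(X \mid p)\right)^{2}\right]
= \frac{1}{p(1-p)},
\qquad
\mathrm{Var}(\bar{X}) = \frac{p(1-p)}{n} = \frac{1}{n\,I(p)},
```

so the variance of X̄ equals the bound exactly.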
Let X1, ..., Xn be a random sample from a population with mean μ and variance σ². (a) Show that the estimator Σ aiXi is an unbiased estimator of μ if Σ ai = 1. (b) Among all ...
Exercise 7.42 established that the optimal weights are q*i = (1/σ²i)/(Σj 1/σ²j). A result due to Tukey (see Bloch and Moses 1988) states that if W = Σi qiWi ...
Let X1, ..., Xn be iid n(θ, 1). Show that the best unbiased estimator of θ² is X̄² - (1/n). Calculate its variance (use Stein's Identity from Section 3.6), and show that it is greater than the ...
Let X1, X2, ..., Xn be iid from a distribution with mean μ and variance σ², and let S² be the usual unbiased estimator of σ². In Example 7.3.4 we saw that, under ...
Let X1, X2, and X3 be a random sample of size three from a uniform(θ, 2θ) distribution, where θ > 0. (a) Find the method of moments estimator of θ. (b) Find the MLE, θ̂, and find a constant k ...
Suppose that when the radius of a circle is measured, an error is made that has a n(0, σ²) distribution. If n independent measurements are made, find an unbiased estimator of the area of the circle.
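One route (a sketch): with X1, ..., Xn the measured radii, Xi ~ n(r, σ²), and S² the sample variance,

```latex
E[\bar{X}^{2}] = (E\bar{X})^{2} + \mathrm{Var}(\bar{X}) = r^{2} + \frac{\sigma^{2}}{n},
\qquad E[S^{2}] = \sigma^{2}
\;\Longrightarrow\;
E\!\left[\pi\left(\bar{X}^{2} - \frac{S^{2}}{n}\right)\right] = \pi r^{2},
```

so π(X̄² - S²/n) is unbiased for the area (with σ² known, π(X̄² - σ²/n) works as well).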
Suppose that Xi, i = 1, ..., n, are iid Bernoulli(p). (a) Show that the variance of the MLE of p attains the Cramer-Rao Lower Bound. (b) For n ≥ 4, show that the product X1X2X3X4 is an unbiased estimator of ...
Let X1, ..., Xn be iid exponential(λ). (a) Find an unbiased estimator of λ based only on Y = min{X1, ..., Xn}. (b) Find a better estimator than the one in part (a). Prove that it is better. (c) The ...
Consider estimating the binomial parameter k as in Example 7.2.9. a. Prove the assertion that the integer that satisfies the inequalities and is the MLE is the largest integer less than or equal to ...
Let X1, ..., Xn be iid n(θ, θ²), θ > 0. For this model both X̄ and cS are unbiased estimators of θ, where ... (a) Prove that for any number a the estimator ...
Gleser and Healy (1976) give a detailed treatment of the estimation problem in the n(θ, aθ²) family, where a is a known constant (of which Exercise 7.50 is a special case). We explore a small part ...
Let X1, ..., Xn be iid Poisson(λ), and let X̄ and S² denote the sample mean and variance, respectively. We now complete Example 7.3.8 in a different way. There we used the Cramer-Rao Bound; now we use ...
Finish some of the details left out of the proof of Theorem 7.3.20. Suppose W is an unbiased estimator of τ(θ), and U is an unbiased estimator of 0 (zero). Show that if, for some θ = θ0, Covθ0(W, U) ...
For each of the following pdfs, let X1, ..., Xn be a sample from that distribution. In each case, find the best unbiased estimator of θ^r. (a) f(x|θ) = ... (b) ...
Prove the assertion made in the text preceding Example 7.3.24: If T is a complete sufficient statistic for a parameter θ, and h(X1, ..., Xn) is any unbiased estimator of τ(θ), then ϕ(T) = E(h(X1, ..., Xn) | T) is the best unbiased estimator of τ(θ).
Let X1, ..., Xn+1 be iid Bernoulli(p), and define the function h(p) by ..., the probability that the first n observations exceed the (n + 1)st. (a) Show that ... is an unbiased estimator of h(p). (b) Find the ...
Let X1, ..., Xn+1 be iid n(μ, σ²). Find the best unbiased estimator of σ^p, where p is a known positive constant, not necessarily an integer.
Let X1, ..., Xn be a random sample from the pdf f(x|θ) = θx^(-2), 0 < θ < x < ∞. a. What is a sufficient statistic for θ? b. Find the MLE of θ. c. Find the method of moments estimator of θ.
Show that the log of the likelihood function for estimating σ², based on observing S² ~ σ²χ²(ν)/ν, can be written in the form ..., where K1, K2, and K3 are constants, not dependent on ...
Let X ~ n(μ, 1). Let δπ be the Bayes estimator of μ for squared error loss. Compute and graph the risk functions, R(μ, δπ), for π(μ) ~ n(0, 1) and π(μ) ~ n(0, 10). Comment on how the prior ...
A loss function investigated by Zellner (1986) is the LINEX (LINear-EXponential) loss, a loss function that can handle asymmetries in a smooth way. The LINEX loss is given by L(θ, a) = ...
The jackknife is a general technique for reducing bias in an estimator (Quenouille, 1956). A one-step jackknife estimator is defined as follows. Let X1, ..., Xn be a random sample, and let Tn = ...
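A minimal sketch of the one-step jackknife bias correction, applied to a demo statistic (the biased plug-in variance); the function name and data are illustrative, not from the text:

```python
# One-step jackknife: n*T_n - ((n-1)/n) * sum of leave-one-out statistics.
import numpy as np

def jackknife(stat, x):
    """Return the one-step jackknifed version of stat evaluated on x."""
    x = np.asarray(x)
    n = x.size
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # leave-one-out values
    return n * stat(x) - (n - 1) / n * loo.sum()

rng = np.random.default_rng(1)
x = rng.normal(size=30)
plug_in = lambda s: np.mean((s - s.mean())**2)   # biased: E = ((n-1)/n) * sigma^2
print(plug_in(x), jackknife(plug_in, x))         # jackknife reproduces the unbiased S^2
```

For this particular statistic the jackknife removes the bias exactly, which is the classical motivating example.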
Let X1, ..., Xn be iid with one of two pdfs. If θ = 0, then ..., while if θ = 1, then ... Find the MLE of θ.
One observation, X, is taken from a n(0, σ²) population. a. Find an unbiased estimator of σ². b. Find the MLE of σ. c. Discuss how the method of moments estimator of σ might be found.
Let X1, ..., Xn be iid with pdf f(x|θ) = 1/θ, 0 ≤ x ≤ θ, θ > 0. Estimate θ using both the method of moments and maximum likelihood. Calculate the means and variances of the two estimators.
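A quick simulation check of the two estimators, the method of moments 2X̄ and the MLE max(Xi); the θ and n values are illustrative:

```python
# Uniform(0, theta): compare method of moments (2*Xbar) with the MLE (max X_i).
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 3.0, 20, 200_000
X = rng.uniform(0, theta, size=(reps, n))

mom = 2 * X.mean(axis=1)   # unbiased: E = theta, Var = theta^2/(3n) = 0.15 here
mle = X.max(axis=1)        # biased:   E = n*theta/(n+1) = 2.857, Var ~ 0.0186 here

print("MoM:", mom.mean(), mom.var())
print("MLE:", mle.mean(), mle.var())
```

The simulation reproduces the textbook trade-off: the MoM estimator is unbiased but more variable, while the MLE is slightly biased with much smaller variance.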
In 1,000 tosses of a coin, 560 heads and 440 tails appear. Is it reasonable to assume that the coin is fair? Justify your answer.
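A quick normal-approximation check: under fairness the head count is approximately n(500, 250), so

```latex
z = \frac{560 - 500}{\sqrt{1000 \times 0.5 \times 0.5}} = \frac{60}{15.81} \approx 3.79,
```

giving a two-sided p-value of roughly 2(1 - Φ(3.79)) ≈ 0.00015, so fairness is hard to sustain.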
Let X1, ..., Xn be iid Poisson(λ), and let λ have a gamma(α, β) distribution, the conjugate family for the Poisson. In Exercise 7.24 the posterior distribution of λ was found, including the posterior ...
In Exercise 7.23 the posterior distribution of σ², the variance of a normal population, given S², the sample variance based on a sample of size n, was found using a conjugate prior for σ² (the ...
For samples of size n = 1, 4, 16, 64, 100 from a normal population with mean μ and known variance σ², plot the power function of the following LRTs. Take α = .05. (a) H0: μ ≤ 0 versus H1: μ > 0 (b) ...
Let X1, X2 be iid uniform(θ, θ + 1). For testing H0: θ = 0 versus H1: θ > 0, we have two competing tests: ... (a) Find the value of C so that ...
For a random sample X1, ..., Xn of Bernoulli(p) variables, it is desired to test H0: p = .49 versus H1: p = .51. Use the Central Limit Theorem to determine, approximately, the sample size needed so that ...
Show that for a random sample X1, ..., Xn from a n(0, σ²) population, the most powerful test of H0: σ = σ0 versus H1: σ = σ1, where ...
One very striking abuse of α levels is to choose them after seeing the data and to choose them in such a way as to force rejection (or acceptance) of a null hypothesis. To see what the true Type I ...
Suppose that X1, ..., Xn are iid with a beta(μ, 1) pdf and Y1, ..., Ym are iid with a beta(θ, 1) pdf. Also assume that the Xs are independent of the Ys. (a) Find an LRT of H0: θ = μ versus ...
Let X1, ..., Xn be a random sample from a n(θ, σ²) population, σ² known. An LRT of H0: θ = θ0 versus H1: θ ≠ θ0 is a test that rejects H0 if |X̄ - θ0|/(σ/√n) > c. (a) Find an expression, ...
The random variable X has pdf f(x) = e^(-x), x > 0. One observation is obtained on the random variable Y = X^θ, and a test of H0: θ = 1 versus H1: θ = 2 needs to be constructed. Find the UMP level α ...
In a given city it is assumed that the number of automobile accidents in a given year follows a Poisson distribution. In past years the average number of accidents per year was 15, and this year it ...
Let X be a random variable whose pmf under H0 and H1 is given by ... Use the Neyman-Pearson Lemma to find the most powerful test for H0 versus H1 with size α = .04. Compute the probability ...
Let X1, ..., X10 be iid Bernoulli(p). (a) Find the most powerful test of size α = .0547 of the hypotheses H0: p = 1/2 versus H1: p = 1/4. Find the power of this test. (b) For testing H0: p ≤ 1/2 ...
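A sketch verifying the usual Neyman-Pearson answer: since H1 puts p = 1/4 < 1/2, the MP test rejects for small S = ΣXi, and S ≤ 2 gives exactly the stated size .0547 (= 56/1024); the power then follows from the binomial(10, 1/4) cdf:

```python
# Size and power of the test that rejects H0 when sum(X_i) <= 2.
from scipy.stats import binom

size  = binom.cdf(2, 10, 0.50)   # P(S <= 2 | p = 1/2) = 56/1024 = 0.0546875
power = binom.cdf(2, 10, 0.25)   # P(S <= 2 | p = 1/4) ~ 0.5256
print(size, power)
```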
Suppose X is one observation from a population with beta(θ, 1) pdf. (a) For testing H0: θ ≤ 1 versus H1: θ > 1, find the size and sketch the power function of the test that rejects H0 if X > ...
Find the LRT of a simple H0 versus a simple H1. Is this test equivalent to the one obtained from the Neyman-Pearson Lemma? (This relationship is treated in some detail by Solomon 1975.)
Show that each of the following families has an MLR. (a) n(θ, σ²) family with σ² known (b) Poisson(θ) family (c) binomial(n, θ) family with n known
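A sketch for part (b): for θ2 > θ1 > 0 the likelihood ratio

```latex
\frac{f(x \mid \theta_2)}{f(x \mid \theta_1)}
= e^{-(\theta_2 - \theta_1)} \left(\frac{\theta_2}{\theta_1}\right)^{x}
```

is increasing in x because θ2/θ1 > 1, which is exactly the MLR property; parts (a) and (c) go the same way.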
(a) Show that if a family of pdfs {f(x|θ): θ ∈ Θ} has an MLR, then the corresponding family of cdfs is stochastically increasing in θ. (See the Miscellanea section.) (b) Show that the converse ...
Suppose g(t|θ) = h(t)c(θ)e^(w(θ)t) is a one-parameter exponential family for the random variable T. Show that this family has an MLR if w(θ) is an increasing function of θ. Give three examples of ...
Let f(x|θ) be the logistic location pdf ...
Let X be one observation from a Cauchy(θ) distribution. (a) Show that this family does not have an MLR. (b) Show that the test ... is most powerful of its size for testing H0: θ = ...
Here, the LRT alluded to in Example 8.2.9 will be derived. Suppose that we observe m iid Bernoulli(θ) random variables, denoted by Y1, ..., Ym. Show that the LRT of H0: θ ≤ θ0 versus H1: θ > θ0 ...
Let f(x|θ) be the Cauchy scale pdf ... (a) Show that this family does not have an MLR. (b) If X is one observation from f(x|θ), show that |X| is sufficient for θ and ...
Let X1, ..., Xn be iid Poisson(λ). (a) Find a UMP test of H0: λ ≤ λ0 versus H1: λ > λ0. (b) Consider the specific case H0: λ ≤ 1 versus H1: λ > 1. Use the Central Limit Theorem to determine ...
Let X1, ..., Xn be a random sample from the uniform(θ, θ + 1) distribution. To test H0: θ = 0 versus H1: θ > 0, use the test: reject H0 if Yn ≥ 1 or Y1 > k, where k is a constant, Y1 = min{X1, ..., Xn}, and Yn = max{X1, ..., Xn}.
The usual t distribution, as derived in Section 5.3.2, is also known as a central t distribution. It can be thought of as the pdf of a random variable of the form T = n(0, 1)/√(χ²(ν)/ν), where ...
Let X1, ..., Xn be a random sample from a n(θ, σ²) population. Consider testing H0: θ ≤ θ0 versus H1: θ > θ0. (a) If σ² is known, show that the test that ...
Let X1, ..., Xn be iid n(θ, σ²), where θ0 is a specified value of θ and σ² is unknown. We are interested in testing H0: θ = θ0 ...
Let (X1, Y1), ..., (Xn, Yn) be a random sample from a bivariate normal distribution with parameters μX, μY, σ²X, σ²Y, and ρ. We are interested in testing H0: μX = μY versus ...
Prove the assertion made in the text after Definition 8.2.1: if f(x|θ) is the pmf of a discrete random variable, then the numerator of λ(x), the LRT statistic, is the maximum probability of the ...
Let X1, ..., Xn be a random sample from a n(μX, σ²X) population, and let Y1, ..., Ym be an independent random sample from a n(μY, σ²Y) population. We are interested in ...
The assumption of equal variances, which was made in Exercise 8.41, is not always tenable. In such a case, the distribution of the statistic is no longer a t. Indeed, there is doubt as to the wisdom ...
Sprott and Farewell (1993) note that in the two-sample t test, a valid t statistic can be derived as long as the ratio of variances is known. Let X1, ..., Xn1 be a sample from a n(μ1, ...
Verify that Test 3 in Example 8.3.20 is an unbiased level α test.
Let X1, ..., Xn be a random sample from a n(θ, σ²) population. Consider testing H0: θ ≤ θ0 versus H1: θ > θ0. Let X̄m denote the sample mean of the first m observations, X1, ..., Xm, for m = ...
Consider two independent normal samples with equal variances, as in Exercise 8.41. Consider testing H0: μX - μY ≤ -δ or μX - μY ≥ δ ...
In each of the following situations, calculate the p-value of the observed data. (a) For testing H0: θ ≤ 1/2 versus H1: θ > 1/2, 7 successes are observed out of 10 Bernoulli trials. (b) For ...
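For part (a), the exact binomial computation:

```latex
p\text{-value} = P\!\left(X \ge 7 \,\middle|\, \theta = \tfrac{1}{2}\right)
= \frac{\binom{10}{7} + \binom{10}{8} + \binom{10}{9} + \binom{10}{10}}{2^{10}}
= \frac{120 + 45 + 10 + 1}{1024} = \frac{176}{1024} \approx 0.172.
```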
A random sample, X1, ..., Xn, is drawn from a Pareto population with pdf ... (a) Find the MLEs of θ and ν. (b) Show that the LRT of H0: θ = 1, ν unknown, versus H1: θ ...
Let X1, ..., Xn be iid n(θ, σ²), σ² known, and let θ have a double exponential distribution, that is, π(θ) = e^(-|θ|/a)/(2a), a known. A Bayesian test of the hypotheses H0: θ ≤ 0 versus H1: θ > ...
Here is another common interpretation of p-values. Consider a problem of testing H0 versus H1. Let W(X) be a test statistic. Suppose that for each α, 0 ≤ α ≤ 1, a critical value cα can be chosen ...
In Example 8.2.7 we saw an example of a one-sided Bayesian hypothesis test. Now we will consider a similar situation, but with a two-sided test. We want to test H0: θ = 0 versus H1: θ ≠ 0, and we ...
The discrepancies between p-values and Bayes posterior probabilities are not as dramatic in the one-sided problem, as is discussed by Casella and Berger (1987) and also mentioned in the Miscellanea ...
Consider testing H0: μ ≤ 0 versus H1: μ > 0 using 0-1 loss, where X ~ n(μ, 1). Let δc be the test that rejects H0 if X > c. For every test in this problem, there is a δc in the class of tests ...