Questions and Answers: Statistical Techniques in Business
Let X1,..., Xm; Y1,..., Yn be independently, normally distributed with means ξ and η, and variances σ² and τ² respectively, and consider the hypothesis H : τ ≤ σ against K : σ < τ. (i)
Let X1,..., Xm and Y1,..., Yn be independent samples from N(ξ, 1) and N(η, 1), and consider the hypothesis H : η ≤ ξ against K : η > ξ. There exists a UMP test, and it rejects the hypothesis
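This UMP test is the one-sided two-sample z-test: with unit variances, Ȳ − X̄ has standard deviation √(1/m + 1/n), so the test rejects when Ȳ − X̄ exceeds z₁₋α √(1/m + 1/n). A minimal sketch; the sample sizes, means, and level below are illustrative assumptions, not part of the problem:

```python
# Sketch of the UMP test for H: eta <= xi vs K: eta > xi with unit variances.
import numpy as np
from scipy.stats import norm

def ump_two_sample(x, y, alpha=0.05):
    m, n = len(x), len(y)
    se = np.sqrt(1 / m + 1 / n)        # sd of ybar - xbar under unit variances
    return np.mean(y) - np.mean(x) > norm.ppf(1 - alpha) * se

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=20)      # xi = 0
y = rng.normal(0.5, 1.0, size=25)      # eta = 0.5 > xi, so K holds
print(ump_two_sample(x, y))
```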
Sufficient statistics with nuisance parameters.(i) A statistic T is said to be partially sufficient for θ in the presence of a nuisance parameter η if the parameter space is the direct product of
Let X and Y be the number of successes in two sets of n binomial trials with probabilities p1 and p2 of success. (i) The most powerful test of the hypothesis H : p2 ≤ p1 against an alternative (p1,
A counterexample. Typically, as α varies, the most powerful level α tests for testing a hypothesis H against a simple alternative are nested in the sense that the associated rejection regions, say
Confidence bounds for a median. Let X1,..., Xn be a sample from a continuous cumulative distribution function F. Let ξ be the unique median of F if it exists, or more generally let ξ = inf{ξ :
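A classical answer uses an order statistic X(k) as the lower bound: for continuous F with median ξ, P{X(k) ≤ ξ} = P{Bin(n, 1/2) ≥ k}, so the largest k keeping this probability at least 1 − α gives the tightest distribution-free bound. A minimal sketch; the sample size, level, and data are illustrative assumptions:

```python
# Distribution-free lower confidence bound for the median via order statistics.
import numpy as np
from scipy.stats import binom

def median_lower_bound(x, alpha=0.05):
    n = len(x)
    xs = np.sort(x)
    # coverage of X(k) as a lower bound: P{Bin(n, 1/2) >= k} >= 1 - alpha
    k = max(k for k in range(1, n + 1) if binom.sf(k - 1, n, 0.5) >= 1 - alpha)
    return xs[k - 1]

rng = np.random.default_rng(6)
print(median_lower_bound(rng.normal(10, 2, size=50)))
```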
Let the variables Xi (i = 1,...,s) be independently distributed with Poisson distribution P(λi). For testing the hypothesis H : Σλj ≤ a (for example, that the combined radioactivity of a number of
Let f , g be two probability densities with respect to μ. For testing the hypothesis H : θ ≤ θ0 or θ ≥ θ1(0 < θ0 < θ1 < 1) against the alternatives θ0
For testing the hypothesis H : θ1 ≤ θ ≤ θ2 (θ1 ≤ θ2) against the alternatives θ < θ1 or θ > θ2, or the hypothesis θ = θ0 against the alternatives θ ≠ θ0, in an exponential family or
Extension of Theorem 3.7.1. The conclusions of Theorem 3.7.1 remain valid if the density of a sufficient statistic T (which without loss of generality will be taken to be X), say pθ(x), is STP3 and
STP3. Let θ and x be real-valued, and suppose that the probability densities pθ(x) are such that pθ′(x)/pθ(x) is strictly increasing in x for θ < θ′. Then the following two conditions are
Exponential families. The exponential family (3.19) with T(x) = x and Q(θ) = θ is STP∞, with Ω the natural parameter space and X = (−∞, ∞). [That the determinant |e^(θi xj)|, i, j = 1,..., n,
Totally positive families. A family of distributions with probability densities pθ(x), θ and x real-valued and varying over Ω and X, respectively, is said to be totally positive of order r (TPr) if
For a random variable X with binomial distribution b(p, n), determine the constants Ci, γi (i = 1, 2) in the UMP test (3.33) for testing H : p ≤ 0.2 or p ≥ 0.7 when α = 0.1 and n = 15. Find the
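The constants can be found numerically: the test rejects for C1 < X < C2, with randomization γ1, γ2 at the endpoints, chosen so the rejection probability equals α = 0.1 at both p = 0.2 and p = 0.7. A brute-force sketch (the search strategy is mine, not the book's):

```python
# Search for C1 < C2 and gamma1, gamma2 with E_{0.2} phi = E_{0.7} phi = 0.1
# for X ~ b(p, 15), where phi rejects for C1 < X < C2 and randomizes at C1, C2.
import numpy as np
from scipy.stats import binom

alpha, n = 0.1, 15
f0 = binom.pmf(np.arange(n + 1), n, 0.2)
f1 = binom.pmf(np.arange(n + 1), n, 0.7)

for c1 in range(n):
    for c2 in range(c1 + 1, n + 1):
        inner0 = f0[c1 + 1:c2].sum()   # null mass strictly between C1 and C2
        inner1 = f1[c1 + 1:c2].sum()
        A = np.array([[f0[c1], f0[c2]], [f1[c1], f1[c2]]])
        if abs(np.linalg.det(A)) < 1e-12:
            continue
        g = np.linalg.solve(A, [alpha - inner0, alpha - inner1])
        if np.all((g >= 0) & (g <= 1)):
            print(f"C1={c1}, C2={c2}, gamma1={g[0]:.4f}, gamma2={g[1]:.4f}")
```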
Let F1,..., Fm+1 be real-valued functions defined over a space U. A sufficient condition for u0 to maximize Fm+1 subject to Fi(u) ≤ ci (i = 1,..., m) is that it satisfies these side conditions, that
The following example shows that Corollary 3.6.1 does not extend to a countably infinite family of distributions. Let pn be the uniform probability density on [0, 1 + 1/n], and p0 the uniform density
Optimum selection procedures. On each member of a population n measurements (X1,..., Xn) = X are taken, for example the scores of n aptitude tests which are administered to judge the qualifications
If β(θ) denotes the power function of the UMP test of Corollary 3.4.1, and if the function Q of (3.19) is differentiable, then β′(θ) > 0 for all θ for which Q′(θ) > 0. [To show that β′(θ0) > 0,
that Eθ[L(θ, θ̲)] = ∫ L(θ, u) dF(u) ≤ ∫ L(θ, u) dF∗(u) = Eθ[L(θ, θ̲∗)], where F and F∗ denote the distributions of θ̲ and θ̲∗.]
Confidence bounds with minimum risk. Let L(θ, θ̲) be nonnegative and nonincreasing in its second argument for θ̲ < θ, and equal to 0 for θ̲ ≥ θ. If θ̲ and θ̲∗ are two lower confidence bounds
(i) Suppose U1,..., Un are i.i.d. U(0, 1) and let U(k) denote the kth largest value (or kth order statistic). Find the density of U(k) and show that P{U(k) ≤ p} = ∫₀ᵖ [n!/((k − 1)!(n − k)!)] u^(k−1) (1 − u)^(n−k) du
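The stated integral is the c.d.f. of a Beta(k, n − k + 1) distribution, so the claim can be checked numerically against simulation (the particular n, k, p below are arbitrary choices):

```python
# Check P{U(k) <= p} = Beta(k, n-k+1) c.d.f. by simulation.
import numpy as np
from scipy.stats import beta

n, k, p = 10, 3, 0.4
rng = np.random.default_rng(1)
u_k = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, k - 1]  # kth order stat
print(beta.cdf(p, k, n - k + 1))   # closed form
print(np.mean(u_k <= p))           # simulated frequency, should agree
```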
(i) For n = 5, 10 and 1 − α = 0.95, graph the upper confidence limits p̄ and p̄∗ of Example 3.5.2 as functions of t = x + u. (ii) For the same values of n and α1 = α2 = 0.05, graph the lower
In Example 3.5.2, what is an explicit formula for the uniformly most accurate upper bound at level 1 − α when X = 0 and U = u? Compare it to the Clopper-Pearson bound in the same situation.
Typically, lower confidence bounds θ̲(X) satisfying (3.21) also satisfy Pθ{θ̲(X) < θ} ≥ 1 − α for all θ, so that θ is strictly greater than θ̲(X) with probability ≥ 1 − α. A similar issue
Let f(x)/[1 − F(x)] be the “mortality” of a subject at time x given that it has survived to this time. A c.d.f. F is said to be smaller than G in the hazard ordering if g(x)/[1 − G(x)] ≤ f(x)/[1 − F(x)]
Let F and G be two continuous, strictly increasing c.d.f.s, and let k(u) = G[F⁻¹(u)], 0 < u < 1. (i) Show F and G are stochastically ordered, say F(x) ≤ G(x) for all x, if and only if k(u) ≥ u
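For a concrete check, take F = Exp(1) and G = Exp(2) (illustrative choices, not from the problem), so F(x) ≤ G(x) everywhere and k(u) = G(F⁻¹(u)) = 1 − (1 − u)² ≥ u on (0, 1):

```python
# Verify k(u) = G(F^{-1}(u)) >= u when F = Exp(1) <= G = Exp(2) pointwise.
import numpy as np

u = np.linspace(0.01, 0.99, 99)
F_inv = -np.log1p(-u)              # F^{-1}(u) for Exp(rate 1)
k = 1 - np.exp(-2 * F_inv)         # G(F^{-1}(u)) for Exp(rate 2)
print(np.all(k >= u))              # True
```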
Extension of Lemma 3.4.2. Let P0 and P1 be two distributions with densities p0, p1 such that p1(x)/p0(x) is a nondecreasing function of a real-valued statistic T (x).(i) If T = T (X) has probability
Let X1, ··· , Xn be a sample from a location family with common density f (x − θ), where the location parameter θ ∈ R and f (·) is known. Consider testing the null hypothesis that θ = θ0
Let X1,..., Xn be a sample from the inverse Gaussian distribution I(μ, τ) with density √(τ/(2πx³)) exp(−τ(x − μ)²/(2xμ²)), x > 0, τ, μ > 0. Show that there exists a UMP test for testing (i) H :
Consider a single observation X from W(1, c).(i) The family of distributions does not have a monotone likelihood ratio in x.(ii) The most powerful test of H : c = 1 against c = 2 rejects when X < k1
A random variable X has the Weibull distribution W(b, c) if its density is (c/b)(x/b)^(c−1) e^(−(x/b)^c), x > 0, b, c > 0. Show that this defines a probability density. If X1,..., Xn is a sample from W(b,
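The normalization is easy to confirm numerically; the W(b, c) density above coincides with scipy's weibull_min with shape c and scale b (the parameter values below are arbitrary):

```python
# Check the W(b, c) density integrates to 1 and matches scipy.stats.weibull_min.
import numpy as np
from scipy.integrate import quad
from scipy.stats import weibull_min

b, c = 2.0, 1.5
pdf = lambda x: (c / b) * (x / b) ** (c - 1) * np.exp(-((x / b) ** c))
print(quad(pdf, 0, np.inf)[0])                      # ~1.0
print(pdf(1.3), weibull_min.pdf(1.3, c, scale=b))   # agree
```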
Let X1,..., Xn be a sample from the gamma distribution Γ(g, b) with density [1/(Γ(g)b^g)] x^(g−1) e^(−x/b), 0 < x, 0 < b, 0 < g. Show that there exist UMP tests for testing (i) H : b ≤ b0 against b > b0 when g is known; (ii) H : g ≤ g0 against g > g0 when b is known. In each case give the
Suppose a time series X0, X1, X2,... evolves in the following way. The process starts at 0, so X0 = 0. For any i ≥ 1, conditional on X0,..., Xi−1, Xi = ρX_(i−1) + εi, where the εi are i.i.d.
Let Xi be independently distributed as N(iΔ, 1), i = 1,..., n. Show that there exists a UMP test of H : Δ ≤ 0 against K : Δ > 0, and determine it as explicitly as possible.
Let X be a single observation from the Cauchy density given at the end of Section 3.4.(i) Show that no UMP test exists for testing θ = 0 against θ > 0.(ii) Determine the totality of different
Let X = (X1,..., Xn) be a sample from the uniform distribution U(θ, θ + 1).(i) For testing H : θ ≤ θ0 against K : θ > θ0 at level α, there exists a UMP test which rejects when min(X1,...,
When a Poisson process with rate λ is observed for a time interval of length τ , the number X of events occurring has the Poisson distribution P(λτ ).Under an alternative scheme, the process is
the distribution of [Σ_(i=1)^r Yi + (n − r)Yr]/θ was found to be χ² with 2r degrees of freedom.]
Let X1,..., Xn be independently distributed with density (2θ)⁻¹ e^(−x/(2θ)), x ≥ 0, and let Y1 ≤ ··· ≤ Yn be the ordered X’s. Assume that Y1 becomes available first, then Y2, and so on, and
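The fact quoted in the fragment above, that [Σ_(i=1)^r Yi + (n − r)Yr]/θ is χ² with 2r degrees of freedom, is quick to confirm by simulation (the values of θ, n, r below are arbitrary):

```python
# Simulate the total-time-on-test statistic and compare with chi2(2r).
import numpy as np
from scipy.stats import chi2

theta, n, r = 1.5, 10, 4
rng = np.random.default_rng(2)
y = np.sort(rng.exponential(scale=2 * theta, size=(50_000, n)), axis=1)
t = y[:, :r].sum(axis=1) + (n - r) * y[:, r - 1]
print(np.mean(t / theta <= chi2.ppf(0.9, 2 * r)))   # ~0.9
```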
Let the probability density pθ of X have monotone likelihood ratio in T (x), and consider the problem of testing H : θ ≤ θ0 against θ > θ0. If the distribution of T is continuous, the p-value
(i) A necessary and sufficient condition for densities pθ(x) to have monotone likelihood ratio in x, if the mixed second derivative ∂² log pθ(x)/∂θ∂x exists, is that this derivative is ≥ 0
Let X be the number of successes in n independent trials with probability p of success, and let φ(x) be the UMP test (3.16) for testing p ≤ p0 against p > p0 at the level of significance α.(i)
(i) If p̂ is uniform on (0, 1), show that −2 log(p̂) has the Chi-squared distribution with 2 degrees of freedom. (ii) Suppose p̂1,..., p̂s are i.i.d. uniform on (0, 1). Let F = −2 log(p̂1
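Part (i) underlies Fisher's method of combining p-values: summing s independent χ²₂ variables gives χ²₂ₛ, which a quick simulation confirms (s and the replication count are arbitrary):

```python
# Check -2 * sum(log p_i) ~ chi2(2s) for i.i.d. uniform p-values.
import numpy as np
from scipy.stats import chi2, kstest

s = 5
rng = np.random.default_rng(3)
p = rng.uniform(size=(100_000, s))
F = -2 * np.log(p).sum(axis=1)
print(kstest(F, chi2(2 * s).cdf).pvalue)   # large p-value: good fit
```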
Under the setup of Lemma 3.3.1, show that there exists a real-valued statistic T(X) so that the rejection region is necessarily of the form (3.47). [Hint: Let T(X) = −p̂.]
Under the setup of Lemma 3.3.1, suppose the rejection regions are defined by Rα = {X : T(X) ≥ k(α)} (3.47) for some real-valued statistic T(X) and k(α) satisfying sup_(θ∈H) Pθ{T(X) ≥ k(α)}
(i) Show that if Y is any random variable with c.d.f. G(·), then P{G(Y) ≤ u} ≤ u for all 0 ≤ u ≤ 1. If G⁻(t) = P{Y < t}, then show P{1 − G⁻(Y) ≤ u} ≤ u for all 0 ≤ u ≤ 1
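For discrete Y the inequality can be strict; a Bernoulli example (my choice, for illustration) makes this concrete, since G(Y) then takes only the values G(0) = 0.7 and G(1) = 1:

```python
# P{G(Y) <= u} <= u for Y ~ Bernoulli(0.3): G(0) = 0.7, G(1) = 1.
import numpy as np

g_vals = np.array([0.7, 1.0])      # attainable values of G(Y)
p_vals = np.array([0.7, 0.3])      # their probabilities
for u in (0.2, 0.7, 0.9):
    print(u, p_vals[g_vals <= u].sum())   # 0, 0.7, 0.7 -- each <= u
```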
In Example 3.21, show that the p-value is indeed given by p̂ = p̂(X) = (11 − X)/10. Also, graph the c.d.f. of p̂ under H and show that the last inequality in (3.15) is an equality if and only if u is
is admissible.
Let fθ, θ ∈ Ω, denote a family of densities with respect to a measure μ. (We assume Ω is endowed with a σ-field so that the densities fθ(x) are jointly measurable in θ and x.) Consider the
Suppose X1,..., Xn are i.i.d. N(ξ, σ²) with σ known. For testing ξ = 0 versus ξ ≠ 0, the average power of a test φ = φ(X1,..., Xn) is given by ∫_(−∞)^(∞) Eξ(φ) dΛ(ξ), where Λ is a probability
Under the setup of Theorem 3.2.1, show that there always exist MP tests that are nested in the sense of Problem 3.17(iii).
Based on X with distribution indexed by θ ∈ Ω, the problem is to test θ ∈ ω versus θ ∉ ω. Suppose there exists a test φ such that Eθ[φ(X)] ≤ β for all θ in ω, where β < α. Show
it is sufficient for P.]
Fully informative statistics. A statistic T is fully informative if for every decision problem the decision procedures based only on T form an essentially complete class. If P is dominated and T is
If the sample space X is Euclidean and P0, P1 have densities with respect to Lebesgue measure, there exists a nonrandomized most powerful test for testing P0 against P1 at every significance level
The following example shows that the power of a test can sometimes be increased by selecting a random rather than a fixed sample size even when the randomization does not depend on the observations.
Let X1,..., Xn be independently distributed, each uniformly over the integers 1, 2,..., θ. Determine whether there exists a UMP test for testing H : θ = θ0, at level 1/θ0ⁿ against the
(i) For testing H0 : θ = 0 against H1 : θ = θ1 when X is N(θ, 1), given any 0 < α < 1 and any 0 < π < 1 (in the notation of the preceding problem), there exists θ1 and x such that (a) H0 is
In the notation of Section 3.2, consider the problem of testing H0 : P = P0 against H1 : P = P1, and suppose that known probabilities π0 = π and π1 = 1 − π can be assigned to H0 and H1 prior to
Let X be distributed according to Pθ, θ ∈ Ω, and let T be sufficient for θ. If ϕ(X) is any test of a hypothesis concerning θ, then ψ(T) given by ψ(t) = E[ϕ(X) | t] is a test depending on T
to obtain UMP tests of (a) H : τ = τ0 against τ ≠ τ0 when b is known; (b) H : c = c0, τ = τ0 against c > c0, τ < τ0.
A random variable X has the Pareto distribution P(c, τ) if its density is cτ^c/x^(c+1), 0 < τ < x, 0 < c. (i) Show that this defines a probability density. (ii) If X has distribution P(c, τ), then Y
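The normalization of the P(c, τ) density, and the standard fact that log(X/τ) is then exponential with rate c (how part (ii) is usually completed, though the preview is cut off), can both be checked numerically:

```python
# Check the Pareto density integrates to 1 and log(X/tau) ~ Exp(rate c).
import numpy as np
from scipy.integrate import quad
from scipy.stats import expon, kstest

c, tau = 2.5, 1.0
pdf = lambda x: c * tau**c / x ** (c + 1)
print(quad(pdf, tau, np.inf)[0])                    # ~1.0

rng = np.random.default_rng(4)
x = tau * rng.uniform(size=100_000) ** (-1 / c)     # inverse-c.d.f. sampling
print(kstest(np.log(x / tau), expon(scale=1 / c).cdf).pvalue)
```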
Let the distribution of X be given by

x            0    1     2          3
Pθ(X = x)    θ    2θ    0.9 − 2θ   0.1 − θ

where 0 < θ < 0.1. For testing H : θ = 0.05 against θ > 0.05 at level α = 0.05, determine which of the
Let P0, P1, P2 be the probability distributions assigning to the integers 1,..., 6 the following probabilities:

x     1      2      3      4      5      6
P0    0.03   0.02   0.02   0.01   0      0.92
P1    0.06   0.05   0.08   0.02   0.01   0.78
P2    0.09   0.05   0.12   …
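With the fully listed rows P0 and P1, the Neyman–Pearson most powerful test of P0 against P1 is built by rejecting at the points with the largest likelihood ratio p1/p0 until the level is spent (α = 0.05 below is an assumption for illustration; the P2 row is left out because the preview truncates it):

```python
# Most powerful test of P0 vs P1 at level alpha via the Neyman-Pearson lemma.
import numpy as np

p0 = np.array([0.03, 0.02, 0.02, 0.01, 0.00, 0.92])
p1 = np.array([0.06, 0.05, 0.08, 0.02, 0.01, 0.78])
lr = np.divide(p1, p0, out=np.full(6, np.inf), where=p0 > 0)

alpha, spent = 0.05, 0.0
phi = np.zeros(6)
for i in np.argsort(-lr):              # largest likelihood ratio first
    if p0[i] == 0:
        phi[i] = 1.0                   # costs no level, adds power
        continue
    phi[i] = min(1.0, max(0.0, (alpha - spent) / p0[i]))
    spent += phi[i] * p0[i]
print(phi, "power:", phi @ p1)         # size 0.05, power 0.16
```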
In the proof of Theorem 3.2.1(i), consider the set of c satisfying α(c) ≤ α ≤ α(c − 0). If there is only one such c, c is unique; otherwise, there is an interval of such values [c1, c2]. Argue
UMP test for exponential densities. Let X1,..., Xn be a sample from the exponential distribution E(a, b) of Problem 1.18, and let X(1) = min(X1,..., Xn). (i) Determine the UMP test for testing H : a =
UMP test for U(0, θ). Let X = (X1,..., Xn) be a sample from the uniform distribution on (0, θ). (i) For testing H : θ ≤ θ0 against K : θ > θ0 any test is UMP at level α for which Eθ0 φ(X) = α
Let Ω be the natural parameter space of the exponential family (2.35), and for any fixed t_(r+1),..., t_k (r < k) let Ω′_(θ1...θr) be the natural parameter space of the family of conditional distributions
For any θ which is an interior point of the natural parameter space, the expectations and covariances of the statistics Tj in the exponential family (2.35) are given by E[Tj(X)] = −∂ log C(θ)/∂θj and Cov(Ti, Tj) = −∂² log C(θ)/(∂θi ∂θj).
Life testing. Let X1,..., Xn be independently distributed with exponential density (2θ)⁻¹ e^(−x/(2θ)) for x ≥ 0, and let the ordered X’s be denoted by Y1 ≤ Y2 ≤ ··· ≤ Yn. It is assumed
Let Xi (i = 1,...,s) be independently distributed with Poisson distribution P(λi), and let T0 = ΣXj, Ti = Xi, λ = Σλj. Then T0 has the Poisson distribution P(λ), and the conditional
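The conditional distribution being described is multinomial; in particular, given T0 = t0, each Xi is binomial(t0, λi/λ), which a simulation confirms (the λ values and t0 below are arbitrary):

```python
# Given the total T0 = t0, check X1 | T0 = t0 ~ Binomial(t0, lambda_1 / lambda).
import numpy as np
from scipy.stats import binom

lam = np.array([1.0, 2.0, 3.0])
rng = np.random.default_rng(5)
x = rng.poisson(lam, size=(200_000, 3))
t0 = 6
sel = x.sum(axis=1) == t0
print(np.mean(x[sel, 0] == 2))                  # empirical P{X1 = 2 | T0 = 6}
print(binom.pmf(2, t0, lam[0] / lam.sum()))     # binomial prediction
```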
For a decision problem with a finite number of decisions, the class of procedures depending on a sufficient statistic T only is essentially complete. [For Euclidean sample spaces this follows from
If a statistic T is sufficient for P, then for every function f which is (A, Pθ)-integrable for all θ ∈ Ω there exists a determination of the conditional expectation function Eθ[f(X) | t] that
that dP0/dλ = [dP0/d(Σ_(j=0)^n cj Pj)] · [d(Σ_(j=0)^n cj Pj)/dλ] is also A0-measurable. (ii): Let λ = Σ_(j=1)^∞ cj Pθj be equivalent to P. Then pairwise sufficiency of T implies for any θ0 that dPθ0/(d
Pairwise sufficiency. A statistic T is pairwise sufficient for P if it is sufficient for every pair of distributions in P.(i) If P is countable and T is pairwise sufficient for P, then T is
Sufficiency of likelihood ratios. Let P0, P1 be two distributions with densities p0, p1. Then T (x) = p1(x)/p0(x) is sufficient for P = {P0, P1}. [This follows from the factorization criterion by
is sufficient. (iii) Let X1,..., Xn be identically and independently distributed according to a continuous distribution P ∈ P, and suppose that the distributions of P are symmetric with respect to
Symmetric distributions. (i) Let P be any family of distributions of X = (X1,..., Xn) which are symmetric in the sense that P{(X_(i1),..., X_(in)) ∈ A} = P{(X1,..., Xn) ∈ A} for all Borel sets A and all
Let X = Y × T, and suppose that P0, P1 are two probability distributions given by dP0(y, t) = f(y)g(t) dμ(y) dν(t), dP1(y, t) = h(y, t) dμ(y) dν(t), where h(y, t)/[f(y)g(t)] < ∞. Then
(i) Let P be any family of distributions of X = (X1,..., Xn) such that P{(Xi, X_(i+1),..., Xn, X1,..., X_(i−1)) ∈ A} = P{(X1,..., Xn) ∈ A} for all Borel sets A and all i = 1,..., n. For any sample point
Let (X , A) be a measurable space, and A0 a σ-field contained in A.Suppose that for any function T , the σ-field B is taken as the totality of sets B such that T −1(B) ∈ A. Then it is not
If f(x) > 0 for all x ∈ S and μ is σ-finite, then ∫_S f dμ = 0 implies μ(S) = 0. [Let Sn be the subset of S on which f(x) ≥ 1/n. Then μ(S) ≤ Σ μ(Sn) and μ(Sn) ≤ n ∫_(Sn) f dμ ≤ n ∫_S f dμ = 0.]
Radon–Nikodym derivatives. (i) If λ and μ are σ-finite measures over (X, A) and μ is absolutely continuous with respect to λ, then ∫ f dμ = ∫ f (dμ/dλ) dλ for any μ-integrable function f. (ii) If
Monotone class. A class F of subsets of a space is a field if it contains the whole space and is closed under complementation and under finite unions; a class M is monotone if the union and
(i) Let X1,..., Xn be a sample from the uniform distribution U(0, θ), 0 < θ < ∞, and let T = max(X1,..., Xn). Show that T is sufficient, once by using the definition of sufficiency and once by
In n independent trials with constant probability p of success, let Xi = 1 or 0 as the ith trial is a success or not. Then Σ_(i=1)^n Xi is minimal sufficient. [Let T = ΣXi and suppose that U = f(T) is
need not hold when G is infinite follows by comparing the best invariant estimates of (i) with the estimate δ1(x) which is X + 1 when X < 0 and X − 1 when X ≥ 0.
(i) Let X take on the values θ − 1 and θ + 1 with probability 1/2 each. The problem of estimating θ with loss function L(θ, d) = min(|θ − d|, 1) remains invariant under the transformation gX =
Admissibility of invariant procedures. If a decision problem remains invariant under a finite group, and if there exists a procedure δ0 that uniformly minimizes the risk among all invariant
Admissibility of unbiased procedures. (i) Under the assumptions of Problem 1.10, if among the unbiased procedures there exists one with uniformly minimum risk, it is admissible. (ii) That in general
(i) Let X1,..., Xn be a sample from N(ξ, σ²), and consider the problem of deciding between ω0 : ξ < 0 and ω1 : ξ ≥ 0. If x̄ = Σxi/n and C = (a1/a0)^(2/n), the likelihood ratio procedure takes
(i) Let X have probability density pθ(x) with θ one of the values θ1,..., θn, and consider the problem of determining the correct value of θ, so that the choice lies between the n decisions d1 =
Invariance and minimax. Let a problem remain invariant relative to the groups G, Ḡ, and G∗ over the spaces X, Ω, and D, respectively. Then a randomized procedure Yx is defined to be invariant if
Unbiasedness and minimax. Let Ω = Ω0 ∪ Ω1 where Ω0, Ω1 are mutually exclusive, and consider a two-decision problem with loss function L(θ, di) = ai for θ ∈ Ωj (j ≠ i) and L(θ, di) = 0 for θ ∈ Ωi (i =
(i) As an example in which randomization reduces the maximum risk, suppose that a coin is known to be either standard (HT) or to have heads on both sides (HH). The nature of the coin is to be decided
Structure of Bayes solutions. (i) Let Θ be an unobservable random quantity with probability density ρ(θ), and let the probability density of X be pθ(x) when Θ = θ. Then δ is a Bayes solution of a
Unbiasedness in interval estimation. Confidence intervals I = (L, L̄) are unbiased for estimating θ with loss function L(θ, I) = (θ − L)² + (L̄ − θ)² provided E[(L + L̄)/2] = θ for all
Relation of unbiasedness and invariance.(i) If δ0 is the unique (up to sets of measure 0) unbiased procedure with uniformly minimum risk, it is almost invariant.(ii) If G¯ is transitive and G∗
Let C be any class of procedures that is closed under the transformations of a group G in the sense that δ ∈ C implies g∗δg−1 ∈ C for all g ∈ G. If there exists a unique procedure δ0