Questions and Answers of Introduction To Statistical Investigations
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\) with continuous and bounded density \(f\), that is assumed to be symmetric
Prove that\[\int_{-\infty}^{\infty} f^{2}(x) d x\]equals \(\frac{1}{2} \pi^{-1 / 2}, 1, \frac{1}{4}, \frac{1}{6}\), and \(\frac{2}{3}\) for the \(\mathrm{N}(0,1),
Prove that the square efficacy of the \(t\)-test equals \(1, 12, \frac{1}{2}, 3 \pi^{-2}\), and 6 for the \(\mathrm{N}(0,1), \operatorname{Uniform}\left(-\frac{1}{2}, \frac{1}{2}\right),
Prove that the square efficacy of the sign test equals \(2 \pi^{-1}, 4, 1, \frac{1}{4}\), and 4 for the \(\mathrm{N}(0,1), \operatorname{Uniform}\left(-\frac{1}{2}, \frac{1}{2}\right)\), Laplace
Consider the density \(f(x)=\frac{3}{20} 5^{-1 / 2}\left(5-x^{2}\right) \delta\left\{x ;\left(-5^{1 / 2}, 5^{1 / 2}\right)\right\}\). Prove that \(E_{V}^{2} E_{T}^{-2} \simeq 0.864\), which is a lower
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a discrete distribution with distribution function \(F\) and probability distribution function
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a discrete distribution with distribution function \(F\) and probability distribution function
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\) with continuous density \(f\). Prove that the histogram estimate with fixed
Prove that the mean integrated squared error can be written as the sum of the integrated square bias and the integrated variance. That is, prove that \(\operatorname{MISE}\left(\bar{f}_{n},
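The identity asked for here is the usual bias–variance decomposition of the mean integrated squared error; a sketch in the exercise's notation (the key step, expanding about \(E[\bar{f}_{n}(x)]\), makes the cross term vanish pointwise):

```latex
% Expand [\bar{f}_n(x) - f(x)]^2 about E[\bar{f}_n(x)]; E[\bar{f}_n(x) - E\bar{f}_n(x)] = 0
% kills the cross term, leaving squared bias plus variance under the integral.
\begin{aligned}
\operatorname{MISE}(\bar{f}_{n}, f)
  &= \int_{-\infty}^{\infty} E\left\{\left[\bar{f}_{n}(x)-f(x)\right]^{2}\right\} dx \\
  &= \int_{-\infty}^{\infty} \left\{E\left[\bar{f}_{n}(x)\right]-f(x)\right\}^{2} dx
   + \int_{-\infty}^{\infty} V\left[\bar{f}_{n}(x)\right] dx .
\end{aligned}
```

Interchanging the expectation and the integral is justified by Tonelli's theorem since the integrand is non-negative.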
Using the fact that the pointwise bias of the histogram is given by\[\operatorname{Bias}\left[\bar{f}_{n}(x)\right]=\frac{1}{2} f^{\prime}(x)\left[h-2\left(x-g_{i}\right)\right]+O\left(h^{2}\right),\]as \(h
Let \(f\) be a density with at least two continuous and bounded derivatives and let \(g_{i}
Given that the asymptotic mean integrated squared error for the histogram with bin width \(h\) is given by\[\operatorname{AMISE}\left(\bar{f}_{n}, f\right)=(n h)^{-1}+\frac{1}{12} h^{2}
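The intended calculation is presumably the standard one: writing \(R(f^{\prime})=\int (f^{\prime})^{2}\) for the truncated factor, minimize the AMISE over \(h\):

```latex
% Treat AMISE(h) = (nh)^{-1} + h^2 R(f')/12 and set the h-derivative to zero:
\frac{d}{dh}\operatorname{AMISE} = -n^{-1} h^{-2} + \tfrac{1}{6} h R(f^{\prime}) = 0
\quad\Longrightarrow\quad
h^{3} = \frac{6}{n R(f^{\prime})}
\quad\Longrightarrow\quad
h_{\mathrm{opt}} = \left[\frac{6}{n R(f^{\prime})}\right]^{1/3}.
```

Substituting \(h_{\mathrm{opt}}\) back shows the optimal histogram AMISE decays at the rate \(n^{-2/3}\).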
Let \(K\) be any non-decreasing right-continuous function such that\[\begin{gathered}\lim _{t \rightarrow \infty} K(t)=1 \\\lim _{t \rightarrow-\infty} K(t)=0\end{gathered}\]and\[\int_{-\infty}^{\infty}
Use the fact that the pointwise bias of the kernel density estimator with bandwidth \(h\) is given by\[\operatorname{Bias}\left[\tilde{f}_{n, h}(x)\right]=\frac{1}{2} h^{2} f^{\prime \prime}(x)
Using the fact that the asymptotic mean integrated squared error of the kernel estimator with bandwidth \(h\) is given by\[\operatorname{AMISE}\left(\tilde{f}_{n, h}, f\right)=(n h)^{-1}
Consider the Epanechnikov kernel given by \(k(t)=\frac{3}{4}\left(1-t^{2}\right) \delta\{t ;[-1,1]\}\). Prove that \(\sigma_{k} R(k)=3 /(5 \sqrt{5})\).
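With the Epanechnikov kernel \(k(t)=\frac{3}{4}(1-t^{2})\) on \([-1,1]\), a sketch of the two integrals behind the claimed value:

```latex
% R(k) = \int k^2 and \sigma_k^2 = \int t^2 k(t) dt, both over [-1, 1].
R(k) = \int_{-1}^{1} \tfrac{9}{16}\left(1-t^{2}\right)^{2} dt
     = \tfrac{9}{16} \cdot \tfrac{16}{15} = \tfrac{3}{5},
\qquad
\sigma_{k}^{2} = \int_{-1}^{1} t^{2} \cdot \tfrac{3}{4}\left(1-t^{2}\right) dt
     = \tfrac{3}{4}\left(\tfrac{2}{3}-\tfrac{2}{5}\right) = \tfrac{1}{5},
```

so \(\sigma_{k} R(k) = 5^{-1/2} \cdot \frac{3}{5} = 3/(5\sqrt{5})\), as claimed.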
Compute the efficiency of each of the kernel functions given below relative to the Epanechnikov kernel.a. The Biweight kernel function, given by \(\frac{15}{16}\left(1-t^{2}\right)^{2} \delta\{t
Let \(\hat{f}_{n}(t)\) denote a kernel density estimator with kernel function \(k\) computed on a sample \(X_{1}, \ldots, X_{n}\). Prove that\[E\left[\int_{-\infty}^{\infty} \hat{f}_{n}(t) f(t) d
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\) with mean \(\theta\). Let \(R_{n}(\hat{\theta}, \theta)=n^{1 /
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\) with mean \(\mu\). Define \(\theta=g(\mu)=\mu^{2}\), and let
Let \(\mathbf{X}_{1}, \ldots, \mathbf{X}_{n}\) be a set of two-dimensional independent and identically distributed random vectors from a distribution \(F\) with mean vector \(\boldsymbol{\mu}\). Let
In the context of the development of the bias corrected and accelerated bootstrap confidence interval, prove
Write a program in \(\mathrm{R}\) that simulates 1000 samples of size \(n\) from a distribution \(F\) with location parameter \(\theta\), where \(n, F\) and \(\theta\) are specified below. For each
Write a program in \(\mathrm{R}\) that simulates five samples of size \(n\) from a distribution \(F\), where \(n\) and \(F\) are specified below. For each sample compute a histogram estimate of the
Write a program in \(\mathrm{R}\) that simulates five samples of size \(n\) from a distribution \(F\), where \(n\) and \(F\) are specified below. For each sample compute a kernel density estimate of
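These three simulation exercises ask for programs in R; for illustration of the workflow in the kernel density exercise, here is a minimal sketch in Python instead. The choices \(F=\mathrm{N}(0,1)\), \(n=100\), and scipy's Gaussian kernel with its default Scott's-rule bandwidth are all placeholders for the settings the exercise specifies:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(7)
n = 100  # placeholder sample size; the exercise specifies n and F

for rep in range(5):  # five simulated samples, as in the exercise
    x = rng.normal(size=n)       # F = N(0,1) chosen as a stand-in
    kde = gaussian_kde(x)        # Gaussian kernel, Scott's-rule bandwidth
    grid = np.linspace(-3, 3, 61)
    # sup-norm distance between the estimate and the true density on the grid
    err = np.max(np.abs(kde(grid) - norm.pdf(grid)))
    print(f"sample {rep + 1}: max |fhat - f| on grid = {err:.3f}")
```

In R the analogous steps would be `rnorm(n)` followed by `density()`; the structure of the loop is the same.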
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) be a sequence of real numbers defined by\[x_{n}=\left\{\begin{array}{rl}-1 & n=1+3(k-1), k \in \mathbb{N} \\0 & n=2+3(k-1), k \in \mathbb{N} \\1 &
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) be a sequence of real numbers defined by\[x_{n}=\frac{n}{n+1}-\frac{n+1}{n},\]for all \(n \in \mathbb{N}\). Compute\[\liminf _{n \rightarrow \infty}
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) be a sequence of real numbers defined by \(x_{n}=n^{(-1)^{n}-n}\) for all \(n \in \mathbb{N}\). Compute\[\liminf _{n \rightarrow \infty} x_{n},\]and\[\limsup
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) be a sequence of real numbers defined by \(x_{n}=n 2^{-n}\), for all \(n \in \mathbb{N}\). Compute\[\liminf _{n \rightarrow \infty} x_{n}\]and\[\limsup _{n
Each of the sequences given below converges to zero. Specify the smallest value of \(n_{\varepsilon}\) so that \(\left|x_{n}\right|<\varepsilon\) for all \(n>n_{\varepsilon}\) as a function of \(\varepsilon\).a.
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that\[\lim _{n \rightarrow \infty} x_{n}=x\]and\[\lim _{n \rightarrow \infty}
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that \(x_{n} \leq y_{n}\) for all \(n \in \mathbb{N}\). Prove that if the limit
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that\[\lim _{n \rightarrow \infty}\left(x_{n}+y_{n}\right)=s\]and\[\lim _{n \rightarrow
Find the supremum and infimum limits for each sequence given below.a. \(x_{n}=(-1)^{n}\left(1+n^{-1}\right)\)b. \(x_{n}=(-1)^{n}\)c. \(x_{n}=(-1)^{n} n\)d. \(x_{n}=n^{2} \sin ^{2}\left(\frac{1}{2} n
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) be a sequence of real numbers.a. Prove that\[\inf _{n \in \mathbb{N}} x_{n} \leq \liminf _{n \rightarrow \infty} x_{n} \leq \limsup _{n \rightarrow \infty} x_{n}
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that \(x_{n} \leq y_{n}\) for all \(n \in \mathbb{N}\). Prove that\[\liminf _{n
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that\[\left|\limsup _{n \rightarrow \infty} x_{n}\right|
Let \(\left\{x_{n}\right\}_{n=1}^{\infty}\) and \(\left\{y_{n}\right\}_{n=1}^{\infty}\) be sequences of real numbers such that \(x_{n}>0\) and \(y_{n}>0\) for all \(n \in \mathbb{N}\),\[0
Let \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) and \(\left\{g_{n}(x)\right\}_{n=1}^{\infty}\) be sequences of real valued functions that converge pointwise to the real functions \(f\) and \(g\),
Let \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) and \(\left\{g_{n}(x)\right\}_{n=1}^{\infty}\) be sequences of real valued functions that converge uniformly on \(\mathbb{R}\) to the real functions \(f\)
Let \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) be a sequence of real functions defined by \(f_{n}(x)=\frac{1}{2} n \delta\left\{x ;\left(n-n^{-1}, n+n^{-1}\right)\right\}\) for all \(n \in
Let \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) be a sequence of real functions defined by \(f_{n}(x)=\left(1+n^{-1}\right) \delta\{x ;(0,1)\}\) for all \(n \in \mathbb{N}\).a. Prove that\[\lim _{n
Let \(g(x)=\exp (-|x|)\) and define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=g(x) \delta\{|x| ;(n, \infty)\}\), for all \(n \in \mathbb{N}\).a.
Define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=n^{2} x(1-x)^{n}\) for \(x \in \mathbb{R}\) and for all \(n \in \mathbb{N}\).a. Calculate\[f(x)=\lim _{n
Define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=n^{2} x(1-x)^{n}\) for \(x \in[0,1]\). Determine whether\[\lim _{n \rightarrow \infty} \int_{0}^{1} f_{n}(x) d x\]
Suppose that \(f\) is a quadratic polynomial. Prove that for \(\delta \in \mathbb{R}\),\[f(x+\delta)=f(x)+\delta f^{\prime}(x)+\frac{1}{2} \delta^{2} f^{\prime \prime}(x) .\]
Suppose that \(f\) is a cubic polynomial. Prove that for \(\delta \in \mathbb{R}\),\[f(x+\delta)=f(x)+\delta f^{\prime}(x)+\frac{1}{2} \delta^{2} f^{\prime \prime}(x)+\frac{1}{6} \delta^{3} f^{\prime
Prove that if \(f\) is a polynomial of degree \(p\) then\[f(x+\delta)=\sum_{i=0}^{p} \frac{\delta^{i} f^{(i)}(x)}{i !}\]
Prove Theorem 1.13 using induction. That is, assume that\[E_{1}(x, \delta)=\int_{x}^{x+\delta}(x+\delta-t) f^{\prime \prime}(t) d t\]which has been shown to be true, and that\[E_{p}(x,
Given that \(E_{p}(x, \delta)\) from Theorem 1.13 can be written as\[E_{p}(x, \delta)=\frac{1}{p !} \int_{x}^{x+\delta}(x+\delta-t)^{p} f^{(p+1)}(t) d t\]show that \(E_{p}(x, \delta)=\delta^{p+1}
Use Theorem 1.13 with \(p=1,2\) and 3 to find approximations for each of the functions listed below for small values of \(\delta\).a. \(f(\delta)=1 /(1+\delta)\)b. \(f(\delta)=\sin ^{2}(\pi /
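The expansions for part (a) can be generated symbolically; a minimal Python/sympy sketch for \(f(\delta)=1/(1+\delta)\) about \(\delta=0\) (part (a) only; the other parts follow the same pattern):

```python
import sympy as sp

d = sp.symbols('delta')
f = 1 / (1 + d)  # part (a) of the exercise

# Taylor polynomials of order p = 1, 2, 3 about delta = 0
for p in (1, 2, 3):
    poly = sp.series(f, d, 0, p + 1).removeO()
    print(f"p={p}: {sp.expand(poly)}")
```

For small \(\delta\) these give the familiar approximations \(1-\delta\), \(1-\delta+\delta^{2}\), and \(1-\delta+\delta^{2}-\delta^{3}\).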
Prove that the \(p^{\text {th }}\)-order Taylor expansion of a function \(f(x)\) has the same derivatives of order \(1, \ldots, p\) as \(f(x)\). That is, show that\[\left.\frac{d^{j}}{d \delta^{j}}
Show, by taking successive derivatives of the standard normal density, that \(H_{3}(x)=x^{3}-3 x, H_{4}(x)=x^{4}-6 x^{2}+3\), and \(H_{5}(x)=x^{5}-10 x^{3}+15 x\).
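These three identities can be checked symbolically; a Python/sympy sketch, assuming the probabilists' convention \(H_{k}(x)=(-1)^{k} \phi^{(k)}(x)/\phi(x)\) where \(\phi\) is the standard normal density:

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)  # standard normal density

def hermite_prob(k):
    # H_k(x) = (-1)^k phi^{(k)}(x) / phi(x): k-th derivative of phi, normalized
    return sp.expand(sp.simplify((-1)**k * sp.diff(phi, x, k) / phi))

for k in (3, 4, 5):
    print(f"H_{k}(x) = {hermite_prob(k)}")
```

The exponential factors cancel in the ratio, leaving the polynomials stated in the exercise.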
Use Theorem 1.13 (Taylor) to find fourth and fifth order polynomials that are approximations to the standard normal distribution function \(\Phi(x)\). Is there a difference between the
Prove Part 1 of Theorem 1.14 using induction. That is, prove that for any non-negative integer \(k\),\[H_{k}(x)=\sum_{i=0}^{\lfloor k / 2\rfloor}(-1)^{i} \frac{(2 i) !}{2^{i} i
Prove Part 2 of Theorem 1.14. That is, prove that for any non-negative integer \(k \geq 2\),\[H_{k}(x)=x H_{k-1}(x)-(k-1) H_{k-2}(x) .\]The simplest approach is to use Definition 1.6. Theorem 1.14.
Prove Part 3 of Theorem 1.14 using only Definition 1.6. That is, prove that for any non-negative integer \(k\),\[\frac{d}{d x} H_{k}(x)=k H_{k-1}(x)\]Do not use the result of Part 1 of Theorem 1.14.
The Hermite polynomials are often called a set of orthogonal polynomials. Consider the Hermite polynomials up to a specified order \(d\). Let \(\mathbf{h}_{k}\) be a vector in \(\mathbb{R}^{d}\)
In Theorem 1.15 prove that \(E_{p}(x, \delta)=o\left(\delta^{p}\right)\), as \(\delta \rightarrow 0\). Theorem 1.15. Let f be a function that has p + 1 bounded and continuous derivatives in the interval
Consider approximating the normal tail integral\[\bar{\Phi}(z)=\int_{z}^{\infty} \phi(t) d t\]for large values of \(z\) using integration by parts as discussed in Example 1.24. Use repeated
Using integration by parts, show that the exponential integral\[\int_{z}^{\infty} t^{-1} e^{-t} d t\]has asymptotic expansion\[z^{-1} e^{-z}-z^{-2} e^{-z}+2 z^{-3} e^{-z}-6 z^{-4}
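The partial sums of this expansion can be checked numerically; a Python sketch using scipy's `exp1`, which computes exactly this exponential integral (the choice \(z=10\) and four terms are illustrative):

```python
import numpy as np
from math import factorial
from scipy.special import exp1  # exp1(z) = integral from z to inf of t^{-1} e^{-t} dt

def asymptotic(z, terms=4):
    # Partial sum of z^{-1} e^{-z} (1 - 1!/z + 2!/z^2 - 3!/z^3 + ...)
    s = sum((-1)**k * factorial(k) / z**k for k in range(terms))
    return np.exp(-z) / z * s

z = 10.0
approx = asymptotic(z)
exact = exp1(z)
rel_err = abs(approx - exact) / exact
print(f"exact {exact:.6e}, 4-term approx {approx:.6e}, rel err {rel_err:.2e}")
```

As with any asymptotic (divergent) expansion, adding terms helps only up to a point for fixed \(z\); the accuracy improves as \(z\) grows.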
Prove the second and third results of Theorem 1.18. That is, let \(\left\{a_{n}\right\}_{n=1}^{\infty}\), \(\left\{b_{n}\right\}_{n=1}^{\infty},\left\{c_{n}\right\}_{n=1}^{\infty}\), and
Prove the remaining three results of Theorem 1.19. That is, consider two real sequences \(\left\{a_{n}\right\}_{n=1}^{\infty}\) and \(\left\{b_{n}\right\}_{n=1}^{\infty}\) and positive integers \(k\) and
For each specified pair of functions \(G(t)\) and \(g(t)\), determine the value of \(\alpha\) and \(c\) so that \(G(t) \asymp c t^{\alpha-1}\) as \(t \rightarrow \infty\) and determine if there is a
Consider a real function \(f\) that can be approximated with the asymptotic expansion\[f_{n}(x)=\pi x+\frac{1}{2} n^{-1 / 2} \pi^{2} x^{1 / 2}-\frac{1}{3} n^{-1} \pi^{3} x^{1 / 4}+O\left(n^{-3 /
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the function, along with the three
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the error terms \(E_{1}(x, \delta), E_{2}(x,
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the error terms \(E_{2}(x, \delta)\) and
Consider the approximation for the normal tail integral \(\bar{\Phi}(z)\) studied in Example 1.24 given by\[\bar{\Phi}(z) \simeq z^{-1} \phi(z)\left(1-z^{-2}+3 z^{-4}-15 z^{-6}+105 z^{-8}\right) .\]A
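The quality of this expansion can be examined numerically; a Python sketch comparing it against the exact tail probability (scipy's `norm.sf` gives \(\bar{\Phi}\); the grid of \(z\) values is illustrative):

```python
import numpy as np
from scipy.stats import norm

def tail_approx(z):
    # The five-term expansion from Example 1.24:
    # Phi-bar(z) ~ z^{-1} phi(z) (1 - z^{-2} + 3 z^{-4} - 15 z^{-6} + 105 z^{-8})
    series = 1 - z**-2 + 3 * z**-4 - 15 * z**-6 + 105 * z**-8
    return norm.pdf(z) / z * series

for z in (2.0, 3.0, 4.0):
    exact = norm.sf(z)  # the exact tail integral
    approx = tail_approx(z)
    print(f"z={z}: exact {exact:.4e}, approx {approx:.4e}, "
          f"rel err {abs(approx / exact - 1):.1e}")
```

The expansion is very accurate for large \(z\) but degrades badly for moderate \(z\), which is the usual caution with asymptotic series.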
Verify that \(\mathcal{F}=\left\{\emptyset, \omega_{1}, \omega_{2}, \omega_{3}, \omega_{1} \cup \omega_{2}, \omega_{1} \cup \omega_{3}, \omega_{2} \cup \omega_{3}, \omega_{1} \cup \omega_{2} \cup
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of monotonically increasing events from a \(\sigma\)-field \(\mathcal{F}\) of subsets of a sample space \(\Omega\). Prove that the sequence
Consider a probability space \((\Omega, \mathcal{F}, P)\) where \(\Omega=(0,1) \times(0,1)\) is the unit square and \(P\) is a bivariate extension of Lebesgue measure. That is, if \(R\) is a
Prove Theorem 2.4. That is, prove that\[P\left(\bigcup_{i=1}^{n} A_{i}\right) \leq \sum_{i=1}^{n} P\left(A_{i}\right)\]The most direct approach is based on mathematical induction using the general
Prove Theorem 2.6 (Markov) for the case when \(X\) is a discrete random variable on \(\mathbb{N}\) with probability distribution function \(p(x)\). Theorem 2.6 (Markov). Consider a random variable X
Prove Theorem 2.7 (Tchebysheff). That is, prove that if \(X\) is a random variable such that \(E(X)=\mu\) and \(V(X)=\sigma^{2}\), then \(P(|X-\mu| \geq \delta) \leq \delta^{-2} \sigma^{2}\) for every \(\delta>0\). Theorem 2.7 (Tchebysheff).
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from a \(\sigma\)-field \(\mathcal{F}\) of subsets of a sample space \(\Omega\). Prove that if \(A_{n+1} \subset A_{n}\) for all \(n
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=(0,1)\), defined by\[A_{n}= \begin{cases}\left(\frac{1}{3},
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=\mathbb{R}\), defined by \(A_{n}=\left(-1-n^{-1},
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=(0,1)\), defined by\[A_{n}= \begin{cases}B & \text { if } n
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=\mathbb{R}\), defined by\[A_{n}=
Consider a probability space \((\Omega, \mathcal{F}, P)\) where \(\Omega=(0,1), \mathcal{F}=\mathcal{B}\{(0,1)\}\) and \(P\) is Lebesgue measure on \((0,1)\). Let
Consider tossing a fair coin repeatedly and define \(H_{n}\) to be the event that the \(n^{\text {th }}\) toss of the coin yields a head. Prove that\[P\left(\limsup _{n \rightarrow \infty}
Consider the case where \(\left\{A_{n}\right\}_{n=1}^{\infty}\) is a sequence of independent events that all have the same probability \(p \in(0,1)\). Prove that\[P\left(\limsup _{n \rightarrow \infty}
Let \(\left\{U_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent \(\operatorname{Uniform}(0,1)\) random variables. For each definition of \(A_{n}\) given below, calculate\[P\left(\limsup _{n
Let \(X\) be a random variable that has moment generating function \(m(t)\) that converges on some radius \(|t| \leq b\) for some \(b>0\). Using induction, prove
Let \(X\) be a \(\operatorname{Poisson}(\lambda)\) random variable.a. Prove that the moment generating function of \(X\) is \(\exp \{\lambda[\exp (t)-1]\}\).b. Prove that the characteristic function of
Let \(Z\) be a \(\mathrm{N}(0,1)\) random variable.a. Prove that the moment generating function of \(Z\) is \(\exp \left(\frac{1}{2} t^{2}\right)\).b. Prove that the characteristic function of \(Z\)
Let \(Z\) be a \(\mathrm{N}(0,1)\) random variable and define \(X=\mu+\sigma Z\) for some \(\mu \in \mathbb{R}\) and \(0
Let \(X\) be a \(\mathrm{N}\left(\mu, \sigma^{2}ight)\) random variable. Using the moment generating function, derive the first three moments of \(X\). Repeat the process using the characteristic
Let \(X\) be a \(\operatorname{Uniform}(\alpha, \beta)\) random variable.a. Prove that the moment generating function of \(X\) is \([t(\beta-\alpha)]^{-1}[\exp (t \beta)-\exp (t \alpha)]\).b.
Let \(X\) be a random variable. Prove that the characteristic function of \(X\) is real valued if and only if \(X\) has the same distribution as \(-X\).
Prove Theorem 2.24. That is, suppose that \(X\) is a random variable with moment generating function \(m_{X}(t)\) that exists and is finite for \(|t|<b\) for some \(b>0\). Suppose that \(Y\) is a new random variable
Prove Theorem 2.32. That is, suppose that \(X\) is a random variable with characteristic function \(\psi(t)\). Let \(Y=\alpha X+\beta\) where \(\alpha\) and \(\beta\) are real constants. Prove that
Prove Theorem 2.33. That is, suppose that \(X_{1}, \ldots, X_{n}\) is a sequence of independent random variables where \(X_{i}\) has characteristic function \(\psi_{i}(t)\), for \(i=1, \ldots, n\).
Let \(X_{1}, \ldots, X_{n}\) be a sequence of independent random variables where \(X_{i}\) has a \(\operatorname{Gamma}\left(\alpha_{i}, \beta\right)\) distribution for \(i=1, \ldots, n\).
Suppose that \(X\) is a discrete random variable that takes on non-negative integer values and has characteristic function \(\psi(t)=\exp \{\theta[\exp (i t)-1]\}\). Use Theorem 2.29 to find the
Suppose that \(X\) is a discrete random variable that takes on the values \(\{0,1\}\) and has characteristic function \(\psi(t)=\cos (t)\). Use Theorem 2.29 to find the probability that \(X\) equals
Suppose that \(X\) is a discrete random variable that takes on positive integer values and has characteristic function\[\psi(t)=\frac{p \exp (i t)}{1-(1-p) \exp (i t)}\]Use Theorem 2.29 to find the
Suppose that \(X\) is a continuous random variable that takes on real values and has characteristic function \(\psi(t)=\exp (-|t|)\). Use Theorem 2.28 to find the density of \(X\). Theorem 2.28.
Suppose that \(X\) is a continuous random variable that takes on values in \((0,1)\) and has characteristic function \(\psi(t)=[\exp (i t)-1] / i t\). Use Theorem 2.28 to find the density of \(X\).
Showing 1100 - 1200 of 1982