Questions and Answers of Regression Analysis
8.8 This exercise generalizes Example 8.1 in two ways. First, the stationary covariance function σ(h) is replaced by a limiting intrinsic covariance function. If the range parameter φ tends to
8.7 Suppose the mean of the random field in Example 8.1 is assumed known. Show that the kriging variance takes the same form as in (8.15), but without the final term. Deduce that the kriging variance
8.6 Consider a tensor product covariance model for a zero-mean bivariate stationary spatial process on ℝ1 where cov(Xu(t), Xv(t + h)) = cuv exp(−0.5|h|), u, v = 1, 2, h ∈ ℝ1, and C
8.5 (Mardia and Goodall, 1993). Using the notation from Section 8.5, assume the tensor product model (8.4) for processes X1(t), ..., Xq(t). Given data on each process at sites t1, ..., tn, confirm
8.4 Suppose the signal S in Exercise 8.2 comes from a stationary Gaussian process with mean μ and covariance function σ(h), observed at sites t1, ..., tn, with σ(0) = σ². Show that the
8.3 In the same setting as Exercise 8.2, suppose it is desired to predict a new signal S0 given observations Z = [Z1, ..., Zn]ᵀ. Here, it is assumed that [S0, Sᵀ]ᵀ are jointly multivariate normal with
8.2 Let S ∼ Nn(μ, Σ) be a multivariate normal latent “signal,” and, given S, consider independent Poisson distributed observations Zi|S = s ∼ P(λi), where log λi = si, i = 1, ...,
8.1 Let Y ∼ Nn(μ, Σ) follow a multivariate normal distribution and set Xi = exp(Yi), i = 1, ..., n. The purpose of this exercise is to find the moments of X. They are most easily calculated
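A minimal numerical sketch of the standard lognormal moment formulas that this exercise works toward, derived from the normal moment generating function; the parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the exercise itself).
mu = np.array([0.1, -0.3])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 0.5]])

# Lognormal moments via the normal MGF:
# E[X_i] = exp(mu_i + Sigma_ii / 2)
# E[X_i X_j] = exp(mu_i + mu_j + (Sigma_ii + Sigma_jj)/2 + Sigma_ij)
mean_X = np.exp(mu + np.diag(Sigma) / 2)
cross = np.exp(mu[:, None] + mu[None, :]
               + (np.diag(Sigma)[:, None] + np.diag(Sigma)[None, :]) / 2
               + Sigma)
cov_X = cross - np.outer(mean_X, mean_X)

# Monte Carlo check.
Y = rng.multivariate_normal(mu, Sigma, size=500_000)
X = np.exp(Y)
print(mean_X, X.mean(axis=0))            # agree to ~2-3 decimals
print(cov_X, np.cov(X.T), sep="\n")
```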
7.9 The ordinary kriging predictor and kriging variance for a stationary random field in Theorem 7.1 have been expressed in terms of the covariance function σ(h). These formulas can also be
7.8 Let U(t) be a stationary process, t ∈ ℝ2, with a covariance function σ(h) and an unknown mean μ. Consider n = 3 sites lying on an equilateral triangle, with coordinates given by the rows t1ᵀ, t2ᵀ,
7.7 Verify the formulas for Bayesian kriging prediction in Section 7.12. In particular, using the Woodbury formula for the inverse of a matrix, (Ω + FΔFᵀ)⁻¹ = Ω⁻¹ − Ω⁻¹FD⁻¹FᵀΩ⁻¹,
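A quick numerical check of the Woodbury identity as stated; the exercise's definition of D is cut off in this preview, so the standard capacitance matrix D = Δ⁻¹ + FᵀΩ⁻¹F is assumed, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2

# Random SPD Omega (n x n) and Delta (k x k), arbitrary F (n x k).
A = rng.standard_normal((n, n)); Omega = A @ A.T + n * np.eye(n)
B = rng.standard_normal((k, k)); Delta = B @ B.T + k * np.eye(k)
F = rng.standard_normal((n, k))

Oi = np.linalg.inv(Omega)
# Assumed capacitance matrix (its definition is truncated in the preview above).
D = np.linalg.inv(Delta) + F.T @ Oi @ F

lhs = np.linalg.inv(Omega + F @ Delta @ F.T)
rhs = Oi - Oi @ F @ np.linalg.inv(D) @ F.T @ Oi
print(np.allclose(lhs, rhs))  # True
```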
7.6 Exercise 7.5 can also be extended to unequally spaced time points t1 < ··· < tn. Show that in this case B is a tridiagonal matrix with diagonal elements bii = 1∕{2(t2 − t1)} for i =
7.5 Consider the linear intrinsic covariance function σI(h) = −|h|, h ∈ ℝ, in one dimension. Consider equally spaced sites ti = i, i = 1, ..., n. Hence, ΣI
7.4 Let A be a symmetric positive definite n × n matrix and let b be an n-vector. Consider the minimization problem: minimize xᵀAx such that bᵀx = 1, over x ∈ ℝn. Show that the solution is given
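The closed form this exercise is presumably after is the standard Lagrange-multiplier solution x = A⁻¹b∕(bᵀA⁻¹b), with minimum value 1∕(bᵀA⁻¹b); a numerical sketch checking it against a constrained optimizer, with invented A and b:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)  # SPD
b = rng.standard_normal(n)

# Lagrangian stationarity: 2Ax = lambda*b  =>  x = A^{-1} b / (b^T A^{-1} b).
Ainv_b = np.linalg.solve(A, b)
x_closed = Ainv_b / (b @ Ainv_b)

# Numerical check with a generic constrained optimizer.
res = minimize(lambda x: x @ A @ x, x0=np.ones(n),
               constraints={"type": "eq", "fun": lambda x: b @ x - 1})
print(np.allclose(res.x, x_closed, atol=1e-4))   # True, up to solver tolerance
print(res.fun, 1 / (b @ Ainv_b))                 # minimum value 1/(b^T A^{-1} b)
```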
7.3 (a) Let U(t) be a one-dimensional stationary process with an unknown mean μ and with a covariance function σ(h) = σ²ρ(h), where ρ(h) is a specified correlation function. Consider
7.2 The easiest way to prove that M⁻¹ has the form in (7.46) is by rotating to the starred coordinates in Section 7.6.2. Show that M∗ and the stated form for (M∗)⁻¹ reduce to M∗
7.1 In the setting of ordinary kriging, show that the maximum likelihood estimator of μ based on data x = [x1, ..., xn]ᵀ, also known as the generalized least squares estimator, is given by μ̂
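The estimator in question is the standard GLS form for a constant mean, μ̂ = (1ᵀΣ⁻¹x)∕(1ᵀΣ⁻¹1); a small simulation sketch, with an exponential covariance chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
t = np.arange(n)

# Illustrative exponential covariance and one simulated data vector.
Sigma = np.exp(-0.3 * np.abs(t[:, None] - t[None, :]))
mu_true = 2.0
x = mu_true + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

# GLS estimator of a constant mean: mu_hat = (1' Sigma^{-1} x) / (1' Sigma^{-1} 1).
one = np.ones(n)
w = np.linalg.solve(Sigma, one)
mu_hat = (w @ x) / (w @ one)
print(mu_hat)  # close to mu_true; under Gaussianity this is also the MLE
```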
6.11 Consider the Matérn process restricted to the integer lattice t ∈ ℤd, and suppose data are observed on a rectangular region D ⊂ ℤd of size n1 × ··· × nd. The purpose of this
6.10 Another circulant approximation to the covariance matrix for a circular CAR(1) model at n sites is given by Σcirc = σ² circ(1, ρ, ρ², ..., ρᵐ, ρᵐ, ..., ρ², ρ), n odd, m = (n
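Circulant matrices are diagonalized by the discrete Fourier transform, which is what makes such approximations useful; a sketch building the stated first row (parameter values invented) and recovering the eigenvalues by FFT:

```python
import numpy as np
from scipy.linalg import circulant

n, m = 7, 3             # n odd, m = (n - 1) / 2 assumed from the truncated line
rho, sigma2 = 0.5, 1.0  # illustrative parameter values

# First column as stated: (1, rho, ..., rho^m, rho^m, ..., rho).
first = sigma2 * np.array([1.0] + [rho**k for k in range(1, m + 1)]
                          + [rho**k for k in range(m, 0, -1)])
Sigma_circ = circulant(first)

# A circulant matrix is diagonalized by the DFT, so its eigenvalues are
# simply the FFT of the first column (real here, by symmetry).
eig_fft = np.fft.fft(first).real
eig_np = np.linalg.eigvalsh(Sigma_circ)
print(np.allclose(np.sort(eig_fft), np.sort(eig_np)))  # True
```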
6.9 Information matrix for MLE from circular CAR(1) process with nugget effect. Following on from Exercise 6.7, include a nugget effect in the CAR(1) model. Recall that a CAR(1) process with
6.8 Information matrix for composite MLE from circular CAR(1) process. Consider again the circulant CAR(1) model of the previous exercise. The purpose of this exercise is to investigate the accuracy
6.7 Information matrix for MLE from circular CAR(1) process. The simplest nontrivial stationary Gaussian process on the line is the AR(1) model or equivalently the CAR(1) model. The spectral density
6.6 Guyon (1982). This exercise looks in more detail at the unbiased sample covariance function used in Section 6.5. Suppose D is a rectangular lattice of length n in each direction so that it
6.5 A regression interpretation of the moment estimator for a CAR model. Let D be a finite domain in ℤd and let N be a finite symmetric neighborhood of the origin, with half-neighborhood N†.
6.4 Consider n equally spaced data sites in one dimension from the exponential covariance function. The covariance matrix Σ was given in (6.21). Show that the inverse of Σ is given by Ψ(exact) in
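The book's Ψ(exact) is not reproduced in this preview, but the key structural fact can be checked directly: for the exponential (AR(1)-type) covariance Σij = ρ^|i−j| (assumed here to match (6.21)), the inverse is tridiagonal:

```python
import numpy as np

n, rho = 8, 0.6
i = np.arange(n)
Sigma = rho ** np.abs(i[:, None] - i[None, :])  # exponential covariance on a 1-d grid

Psi = np.linalg.inv(Sigma)
# Entries more than one step off the diagonal vanish: the inverse is tridiagonal.
off = np.abs(i[:, None] - i[None, :]) > 1
print(np.allclose(Psi[off], 0))  # True
# Interior diagonal entries are (1 + rho^2)/(1 - rho^2); off-diagonals -rho/(1 - rho^2).
print(Psi[1, 1] * (1 - rho**2), 1 + rho**2)
print(Psi[0, 1] * (1 - rho**2), -rho)
```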
6.3 (a) Using the ideas from Sections 4.8.2 and 4.6.1, show that the AR(1) and CAR(1) models in Section 6.3 have the spectral densities given in (6.13) and (6.17). (b) Show that the spectral densities
6.2 The quadratic form Q(W) in the Whittle log-likelihood for a CAR model is specified as a linear combination of terms involving the biased sample covariance function. Show that it can also be
6.1 Given observations xt, t ∈ D, where D is a rectangular region in ℤd as in (6.4), let x̄ denote the sample mean of the data and define the centered
5.7 The purpose of this exercise is to show that the two forms of the REML log-likelihood (5.31) and (5.41) are the same, up to an additive constant not depending on the data or the parameters. Let y
5.6 (Marshall and Mardia, 1985) This exercise develops the principle of MINQUE for certain spatial processes for which the mean vanishes and the covariance function is linear in the unknown
5.5 Consider the setting of Section 5.9 with n = 3. Let X ∼ N3(0, Σ) and suppose the covariance matrix Σ satisfies σ11 = σ22 = σ33 = 1. In the notation of this section, show that the
5.4 The n × n Helmert matrix H, n ≥ 2, is an orthogonal matrix whose rows are defined as follows: for j = 1, ..., n − 1, the jth row is given by (1, ..., 1, −j, 0, ..., 0)∕√(j(j + 1)), where
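A construction sketch. The preview truncates before the remaining row is defined; the standard convention, assumed here, is that it is the constant row (1, ..., 1)∕√n:

```python
import numpy as np

def helmert(n: int) -> np.ndarray:
    """Helmert matrix as described: rows j = 1..n-1 are
    (1, ..., 1, -j, 0, ..., 0)/sqrt(j(j+1)) with j leading ones."""
    H = np.zeros((n, n))
    for j in range(1, n):
        H[j - 1, :j] = 1.0
        H[j - 1, j] = -j
        H[j - 1] /= np.sqrt(j * (j + 1))
    H[n - 1] = 1.0 / np.sqrt(n)  # assumed final row (truncated in the preview)
    return H

H = helmert(5)
print(np.allclose(H @ H.T, np.eye(5)))  # True: H is orthogonal
```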
5.3 (a) Let G = [G1 G2] be an n × n orthogonal matrix partitioned into two blocks, with n1 and n2 columns, respectively, n1 + n2 = n, so that G1ᵀG1 = I, G2ᵀG2 = I, G1ᵀG2 = 0. Let B be an n × n
5.2 This exercise looks at the regularity of the Matérn covariance function for small lags. This behavior is important for the study of infill asymptotics in Section 5.14. Suppose the real index
5.1 The spherical scheme is defined by the covariance function σ(h; σ², a) = σ²{1 − (3∕2)|h∕a| + (1∕2)|h∕a|³} for |h| ≤ a, and 0 for |h| > a, which is positive definite in dimensions d = 1, 2, 3. The
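A small sketch implementing the spherical covariance and checking positive definiteness empirically on random one-dimensional sites (parameter values invented):

```python
import numpy as np

def spherical_cov(h, sigma2=1.0, a=1.0):
    """Spherical covariance: sigma2*(1 - 1.5|h/a| + 0.5|h/a|^3) for |h| <= a, else 0."""
    r = np.abs(h) / a
    return np.where(r <= 1, sigma2 * (1 - 1.5 * r + 0.5 * r**3), 0.0)

# The covariance matrix over any site configuration should be positive
# semidefinite in d = 1, 2, 3; check the smallest eigenvalue in d = 1.
rng = np.random.default_rng(4)
t = rng.uniform(0, 5, size=40)
K = spherical_cov(t[:, None] - t[None, :], sigma2=2.0, a=1.5)
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True
```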
4.17 The purpose of this exercise is to show how a {0,1}-valued Markov mesh model can be recast as a Markov random field. For simplicity, restrict attention to the one-dimensional case and suppose
4.16 Verify the conditional distribution for Xt|X∖t in Example 4.9.
4.15 For the auto-logistic model (4.83), show that log{pt(1|x∖t)∕pt(0|x∖t)} = αt + ∑s∈N(t) βst xs, and hence deduce (4.84).
4.14 The purpose of this exercise is to confirm that Eqs. (4.74)–(4.76) imply (4.81). Let xT be a possible value of the random field and define yT by yt = 0 and y∖t = x∖t. Writing pt(xt|x∖t) =
4.13 The purpose of this exercise is to give some examples of cliques based on Figures 4.3 and 4.4. Let T = ℤ2 and consider the neighborhood structures generated by translates N(basic,1)t = t +
4.12 Verify the proof of Theorem 4.9.2 for the subset expansion of a negative potential function.
4.11 (Brook expansion (Brook, 1964)). Verify the expansion (4.73) and hence confirm that the full conditional probability functions in (4.72) determine the joint probability function pT(xT).
4.10 Consider the QAR model for a stationary Gaussian random field in Example 4.6. If c = −ab, show that the spectral density takes the form f(ω) = σ²|1 − a e^{iω[1]} − b e^{iω[2]} +
4.9 Consider random variables Xij, i, j ∈ ℤ, such that (a) for each i, the random variables {Xij, j ∈ ℤ} follow an AR(1) process with parameter φ, so that E{Xij|Xij′ ∶ j′ < j} =
4.8 Verify Eq. (4.68) to show that the three types of conditioning on the past are equivalent for a Gaussian QAR model, E[Xt|(Xt−s, s ∈ )] = ∑s∈ as Xt−s, E[Xt|(Xt−s, s ∈
4.7 Regarding ℤ2 ⊂ ℝ2 as part of the complex plane ℂ, fix an angle θ and define the set {t ∈ ℤ2 ∶ t ≠ 0, θ ≤ arg t < θ + π}, where arg t is shorthand for arg(t[1] + it[2]). Note
4.6 Nonuniqueness of the mean for a CAR model. Let {Xt} be a stationary AR(1) process in one dimension, with mean μ, autoregression parameter 0
4.5 In one dimension, consider a stationary CAR model (4.31), E[Xt|X∖t] = ∑s=−S,…,S, s≠0 φs Xt−s, var[Xt|X∖t] = σ²ε, where 1 − φ̃(ω) = 1 − 2∑s=1,…,S φs cos(sω)
4.4 Consider a d-dimensional AR model for a stationary Gaussian random field {Xt} given by ∑s∈K ds Xt−s = εt, t ∈ ℤd, where {εt} is white noise, and K ⊂ ℤd is a finite set. This
4.3 In d = 1 dimension, consider the SAR model ∑s=−S,…,S ds Xt−s = εt, where d0 > 0, ds = d−s and {εt} is a white noise process. Assume that ∑ ds e^{iωs} ≠ 0 for all ω ∈ [−π,
4.2 Consider the following two autoregressions in one dimension: (i) 6Xt − 5Xt−1 + Xt−2 = εt, (ii) 2Xt − 7Xt−1 + 3Xt−2 = εt, where in each case {εt} is a white noise process with
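The stationarity question here turns on the roots of each characteristic polynomial in the backshift variable z; a quick check (illustrative, not from the book):

```python
import numpy as np

# Characteristic polynomials phi(z):
# (i)  6 - 5z + z^2,   (ii) 2 - 7z + 3z^2.
# np.roots takes coefficients from the highest power down.
for label, coeffs in [("(i)", [1, -5, 6]), ("(ii)", [3, -7, 2])]:
    roots = np.roots(coeffs)
    print(label, roots, "all outside unit circle:", np.all(np.abs(roots) > 1))
# (i)  roots 3 and 2   -> a causal stationary solution exists
# (ii) roots 2 and 1/3 -> the root inside the unit circle rules out causality
```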
4.1 (Tower rule for conditional expectations). Let (U, V1, V2, ..., Vn) denote a collection of jointly distributed random variables. The notation E[U|V1, ..., Vn] denotes the expected value of U
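A Monte Carlo illustration of the tower rule E[U] = E[E[U|V]], with a simple invented joint model:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1_000_000

# Simple joint model: V ~ N(0,1), U | V ~ N(V^2, 1).
V = rng.standard_normal(N)
U = V**2 + rng.standard_normal(N)

# Tower rule: E[U] = E[ E[U | V] ].  Here E[U | V] = V^2, so both sides
# should be close to E[V^2] = 1.
print(U.mean(), (V**2).mean())  # both approximately 1
```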
3.20 The purpose of this exercise is to show that negativity of the dispersion variance σ²(V1|V2) in Exercise 3.19 cannot arise under the additional assumption that V2 can be partitioned as a union
3.19 The purpose of this exercise is to construct a counterexample showing that it is possible that σ²(V1|V2) < 0 even when V1 ⊂ V2. Let X(t) = √2 cos(t + Φ) denote a random cosine wave in one
3.18 (a) Show that the dispersion variance σ²(0|V) as defined in Eq. (3.60) continues to make sense for an intrinsic random field with semivariogram γ(h), and is given in this case by σ²(0|V) =
3.17 Let V be an open bounded region in ℝd. If {X(t)} is a stationary random field with covariance function σ(h), define the (continuous) sample mean within V by X̄(V) = |V|⁻¹ ∫V X(t) dt. Show
3.16 The purpose of this exercise is to investigate the extent to which Exercise 3.15 can be extended to other values of α. (a) If g(h) is a function of h ∈ ℝd, the Laplacian is defined by Δg(h) = ∑j=1,…,d ∂²g(h)∕∂h[j]². If g(h) = g#(r), say, with |h| = r, is
3.15 Consider the function fα,d(ω) = |ω|^(−d−2α), ω ∈ ℝd, where α ∈ ℝ is a real parameter. Then fα,d can be viewed as the spectral density of an isotropic stochastic process. In Exercise 3.10, it was noted that the corresponding process is self-similar for all values of α. The purpose of this exercise is to confirm the representation (3.45) for 0
3.14 Repeat Exercise 3.12 with the same test function φ(h), but this time using the de Wijsian generalized intrinsic covariance function σGI(h) = −log h, h > 0, where σGI(h) is defined up to
3.13 Repeat Exercise 3.12 with the same test function φ(h), but this time using the linear intrinsic covariance function σI(h) = −|h|, where σI(h) is defined up to an additive constant. As
3.12 In one dimension, d = 1, consider a stationary Gaussian random field X(t) with the exponential covariance function σ(h) = exp(−|h|). Consider the indicator test function φ(u) = I[|u| ≤
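The indicator's bound is cut off in this preview; assuming φ(u) = I[|u| ≤ 1] purely for illustration, the quadratic functional ∫∫ φ(s) φ(t) σ(t − s) ds dt has the closed form 2(1 + e⁻²), which a numerical sketch can confirm:

```python
import numpy as np
from scipy import integrate

# Assumed bound: phi(u) = I[|u| <= 1] (the actual bound is truncated above).
val, _ = integrate.dblquad(lambda s, t: np.exp(-abs(s - t)),
                           -1, 1, lambda t: -1, lambda t: 1)
# Closed form: int_{-1}^{1} int_{-1}^{1} e^{-|s-t|} ds dt = 2(1 + e^{-2}).
print(val, 2 * (1 + np.exp(-2)))  # both ~ 2.2707
```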
3.11 Let D be a bounded region in ℝd and let φ(t) = I[t ∈ D] be an indicator function on D. Show that the Fourier transform φ̃(ω) of φ satisfies the following: (a) φ̃(ω) is
3.10 The purpose of this exercise is to confirm that the registered intrinsic random field XR(t) in (3.20) has the covariance function σR(s, t) in (3.21).
3.9 Show that ℋk, the space of homogeneous polynomials of degree k in ℝd, and 𝒫k, the space of all polynomials of degree ≤ k in ℝd, have dimensions pH(k) = dim(ℋk) = (k+d−1 choose k), pF(k)
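A brute-force combinatorial check of these dimension formulas; pF(k) = (k+d choose d) is stated here from the standard stars-and-bars count, since the preview truncates before it:

```python
from math import comb
from itertools import product

def dim_homogeneous(k: int, d: int) -> int:
    """Count monomials x1^a1 ... xd^ad with a1 + ... + ad = k by enumeration."""
    return sum(1 for a in product(range(k + 1), repeat=d) if sum(a) == k)

k, d = 3, 4
print(dim_homogeneous(k, d), comb(k + d - 1, k))       # p_H(k) = C(k+d-1, k)
print(sum(dim_homogeneous(j, d) for j in range(k + 1)),
      comb(k + d, d))                                  # p_F(k) = C(k+d, d)
```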
3.8 (a) If σ(s, t) is an ordinary continuous positive semidefinite function, show that σG(φ, ψ) = ∫∫ φ(s) ψ(t) σ(s, t) ds dt defines a positive semidefinite generalized bilinear
3.7 If g(h) satisfies the integral representation (3.11) with A = 0, show that g(h)∕|h|² → 0 as |h| → ∞.
3.6 If f(s) is an arbitrary finite real-valued function of s ∈ ℝd, show that σ(s, t) = f(s)f(t) is positive semidefinite. Hint: For coefficients a1, ..., an and sites t1, ..., tn note the
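The hint presumably continues with the quadratic-form factorization; written out in full, the one-line identity is:

```latex
\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j\,\sigma(t_i, t_j)
  = \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j\, f(t_i) f(t_j)
  = \Bigl(\sum_{i=1}^{n} a_i f(t_i)\Bigr)^{2} \;\ge\; 0 .
```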
3.5 Let {XI(t) ∶ t ∈ ℝd} be an intrinsic random field (of intrinsic order k = 0) with semivariogram γ(h). Given a fixed vector h0 ∈ ℝd, define a new random field by Y(t) = XI(t) − XI(t
3.4 If a semivariogram γ(h) is replaced by γc(h) = γ(h) + c for any constant c ∈ ℝ, show that formulae (3.4) and (3.5) remain valid.
3.3 If −γ(h) is a conditionally positive semidefinite function with γ(0) = 0, show that σ(s, t) = γ(s) + γ(t) − γ(t − s) is positive semidefinite. Further show that σ(s, t)
3.2 Verify the formula var{∑i=1,…,n ai XI(ti)} = −∑i,j=1,…,n ai aj γ(ti − tj) for an intrinsic random field {XI(t)} of order 0 with semivariogram γ(h), where a (n × 1) is an increment vector
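A numerical sanity check of this variance formula using Brownian motion (covariance min(s, t), semivariogram γ(h) = |h|∕2) and a random increment vector with ∑ai = 0; the details are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
t = np.sort(rng.uniform(0, 1, n))
a = rng.standard_normal(n); a -= a.mean()   # increment vector: sum(a) = 0

# Brownian motion: cov(X(s), X(t)) = min(s, t), semivariogram gamma(h) = |h|/2.
Cov = np.minimum.outer(t, t)
lhs = a @ Cov @ a                                     # var of the increment combination
rhs = -a @ (np.abs(t[:, None] - t[None, :]) / 2) @ a  # -sum a_i a_j gamma(t_i - t_j)
print(np.allclose(lhs, rhs))  # True
```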
3.1 Let γ(h), h ∈ ℝd, be a valid semivariogram and suppose it tends to a finite limit for large lags, γ(h) → c as |h| → ∞. Show that σ(h) = c − γ(h) defines a valid covariance