7.21 For the NYSE returns, say, rt, analyzed in Chapter 5, Example 5.4: (a) Estimate the spectrum of the rt. Does the spectral estimate appear to support the hypothesis that the returns are white? (b)
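A minimal sketch in R for getting started on part (a), assuming the returns have been read into a numeric vector; the file name nyse.dat is an assumption and may differ from the file used in Chapter 5.

    rt <- scan("nyse.dat")   # hypothetical file name; adjust to the actual data file
    # Nonparametric spectral estimate: smoothed periodogram with light tapering.
    # If the returns are white, the estimate should be roughly flat, with no
    # dominant peaks at any frequency.
    spec.pgram(rt, spans = c(11, 11), taper = 0.1, log = "no",
               main = "Smoothed periodogram of the returns")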
7.20 Repeat the analysis of Example 7.18 on BNRF1 of herpesvirus saimiri (the data file is bnrf1hvs.dat), and compare the results with the results obtained for Epstein–Barr.
7.19 Verify, as stated in (7.182), the imaginary part of a k×k spectral matrix, f^im(ω), is skew symmetric, and then show β′ f^im_yy(ω) β = 0 for a real k×1 vector, β.
7.18 Extend the EM algorithm for classical factor analysis, (7.161)-(7.166), to the time series case of maximizing ln L(B(ωj), D(ωj)) in (7.177). Then, for the data used in Example 7.16, find the
7.17 In the factor analysis model (7.155), let p = 3, q = 1, and Σ = [1, .4, .9; .4, 1, .7; .9, .7, 1]. Show there is a unique choice for B and D, but δ^2_3 < 0, so the choice is invalid.
7.16 For this problem, consider the first three earthquake series listed in eq+exp.dat. (a) Estimate and compare the spectral density of the P component and then of the S component for each individual
7.15 The data set ch5fmri.dat contains data from other stimulus conditions in the fMRI experiment, as discussed in Example 7.14 (one location—Caudate—was left out of the analysis for brevity).
7.14 Assume the same additive signal plus noise representations as in the previous problem, except the signal is now a random process with a zero mean and covariance matrix σ^2_s I. Derive the
7.13 The problem of detecting a signal in noise can be considered using the model xt = st + wt, t= 1, . . . , n, for p1(x) when a signal is present and the model xt = wt, t= 1, . . . , n, for p2(x)
7.12 Show the ratio of the two smoothed spectra in (7.104) has the indicated F-distribution when f1(ω) = f2(ω). When the spectra are not equal, show the variable is proportional to an
7.11 Verify the forms of the linear compounds involving the mean given in (7.91) and (7.92), using (7.89) and (7.90).
7.10 Suppose we have I = 2 groups and the models y1jt = μt + α1t + v1jt for the j = 1, . . . , N observations in group 1 and y2jt = μt + α2t + v2jt for the j = 1, . . . , N observations in group 2.
7.9 For the random coefficient model, verify the expected mean square of the regression power component is ... Recall, the underlying frequency domain model is Y(ωk) = Z(ωk)B(ωk) + V(ωk), where
7.8 Consider the estimator (7.68) as applied in the context of the random coefficient model (7.66). Prove the filter coefficients for the minimum mean square estimator can be determined from (7.69)
7.7 Consider a linear model with mean value function μt and a signal αt delayed by an amount τj on each sensor, i.e., ... Show the estimators (7.43) for the mean and the signal are the Fourier
7.6 Consider estimating the function ... by a linear filter estimator of the form ... where βt is defined by (7.43). Show a sufficient condition for ψ̂t to be an unbiased estimator, i.e., E(ψ̂t) = ψt, is
7.5 Consider the complex regression model (7.29) in the form Y = XB+V , where Y = (Y1, Y2, . . . YL) denotes the observed DFTs after they have been re-indexed and X = (X1,X2, . . . ,XL) is a matrix
7.4 Consider the predicted series ŷt = ..., where βr satisfies (7.14). Show the ordinary coherence between yt and ŷt is exactly the multiple coherence (7.21).
7.3 Verify (7.19) and (7.20) for the mean-squared prediction error MSE in (7.12). Use the orthogonality principle, which implies ... and gives a set of equations involving the autocovariance functions.
7.2 Prove f in (7.6) maximizes the log likelihood (7.5) by minimizing the negative of the log likelihood ... where the λi values correspond to the eigenvalues in a simultaneous diagonalization of the
7.1 Consider the complex Gaussian distribution for the random variable X = Xc − iXs, as defined in (7.1)-(7.3), where the argument ωk has been suppressed. Now, the 2p × 1 real random variable Z =
6.25 In a small pilot study, a psychiatrist wanted to examine the effects of the drug lithium on bulimics (bulimics have continuous abnormal hunger and frequently go on eating binges). Although
6.24 Fit a stochastic volatility model to the returns of one (or more) of the four financial time series available in the R datasets package as EuStockMarkets.
6.23 Verify (6.175) and (6.182).
6.22 Verify (6.169) and (6.170).
6.21 Use the material presented in Example 6.18 to perform a Bayesian analysis of the model for the Johnson & Johnson data presented in Example 6.10.
6.20 Argue that a switching model is reasonable in explaining the behavior of the number of sunspots (see Figure 4.31) and then fit a switching model to the sunspot data.
6.19 Repeat the bootstrap analysis of Example 6.12 on the entire three-month treasury bills and rate of inflation data set of 110 observations. Do the conclusions of Example 6.12—that the dynamics
6.18 Verify Property P6.6.
6.17 Verify Property P6.5.
6.16 Use Property P6.6 to complete the following exercises. (a) Write a univariate AR(1) model, yt = φyt−1 + vt, in state-space form. Verify your answer is indeed an AR(1). (b) Repeat (a) for an
6.15 Using Example 6.10 as a guide, fit a structural model to the Federal Reserve Board Production Index data and compare it with the model fit in Example 3.43.
6.14 The data set labeled ar1miss.dat is n = 100 observations generated from an AR(1) process, xt = φxt−1 +wt, with φ = .9 and σw = 1, where 10% of the data has been zeroed out at random.
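One possible starting point in R is to treat the zeroed-out values as missing and let arima() handle them through its Kalman filter; the assumption that the missing entries are exactly zero follows the problem description.

    x <- scan("ar1miss.dat")
    x[x == 0] <- NA                      # flag the zeroed-out observations as missing
    # Fit a zero-mean AR(1); maximum likelihood via the state-space form allows NAs.
    fit <- arima(x, order = c(1, 0, 0), include.mean = FALSE, method = "ML")
    fit                                  # compare the estimates with phi = .9, sigma_w = 1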
6.13 As an example of the way the state-space model handles the missing data problem, suppose the first-order autoregressive process xt = φxt−1 + wt has an observation missing at t = m, leading to
6.12 Continuing with the previous problem, consider the evaluation of the Hessian matrix and the numerical evaluation of the asymptotic variance–covariance matrix of the parameter estimates. The
6.11 In §6.3, we discussed that it is possible to obtain a recursion for the gradient vector, −∂ ln LY(Θ)/∂Θ. Assume the model is given by (6.1) and (6.2) and At is a known design matrix that
6.10 To explore the stability of the filter, consider a univariate state-space model. That is, for t = 1, 2, . . ., the observations are yt = xt +vt and the state equation is xt = φxt−1 + wt,
6.9 Develop the EM algorithm for the model with inputs, (6.3) and (6.4).
6.8 Consider the model yt = xt + vt, where vt is Gaussian white noise with variance σ2 v, xt are independent Gaussian random variables with mean zero and var(xt) = rtσ2x with xt independent of vt,
6.7 Let yt represent the land-based global temperature series shown in Figure 6.2. The data file for this problem is HL.dat on the website. (a) Using regression, fit a third-degree polynomial in time
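A minimal sketch for part (a), assuming HL.dat contains the temperature series as a single column of numbers (the file layout is an assumption).

    y <- ts(scan("HL.dat"))
    tt <- time(y)                              # simple time index 1, 2, ..., n
    fit <- lm(y ~ poly(tt, 3, raw = TRUE))     # third-degree polynomial in time
    summary(fit)
    plot(y, type = "o", ylab = "temperature deviation")
    lines(tt, fitted(fit), lwd = 2)            # overlay the fitted cubic trend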
6.6 (a) Consider the univariate state-space model given by state conditions x0 = w0, xt = xt−1 + wt and observations yt = xt + vt, t = 1, 2, . . ., where wt and vt are independent, Gaussian, white
6.5 Derivation of Property P6.2 Based on the Projection Theorem. Throughout this problem, we use the notation of Property P6.2 and of the Projection Theorem given in Appendix B, where H is L2. If
6.4 Suppose the vector z = (x, y), where x (p×1) and y (q×1) are jointly distributed with mean vectors μx and μy and with covariance matrix ... Consider projecting x on M = sp{1, y}, say, x̂ = b +
6.3 Simulate n = 100 observations from the following state-space model: xt = .8xt−1 + wt and yt = xt + vt, where x0 ∼ N(0, 2.78), wt ∼ iid N(0, 1), and vt ∼ iid N(0, 1) are all mutually independent.
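A direct simulation sketch in R; the random seed is arbitrary.

    set.seed(90210)                      # arbitrary seed for reproducibility
    n <- 100
    w <- rnorm(n)                        # wt ~ iid N(0, 1)
    v <- rnorm(n)                        # vt ~ iid N(0, 1)
    x0 <- rnorm(1, 0, sqrt(2.78))        # x0 ~ N(0, 2.78)
    x <- numeric(n)
    x[1] <- .8 * x0 + w[1]
    for (t in 2:n) x[t] <- .8 * x[t - 1] + w[t]
    y <- x + v                           # observations yt = xt + vt
    plot.ts(cbind(x, y), main = "simulated state and observations")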
6.2 Consider the state-space model presented in Example 6.3. Let xt−1 t =E(xt|yt−1, . . . , y1) and let Pt−1 t = E(xt − xt−1 t )2. The innovation sequence or residuals are t = yt −
6.1 Consider a system process given by xt = −.9xt−2 + wt t = 1, . . . , n where x0 ∼ N(0, σ2 0), x−1 ∼ N(0, σ2 1), and wt is Gaussian white noise with variance σ2w. The system process is
5.13 Consider the data set containing quarterly U.S. unemployment, U.S. GNP, consumption, and government and private investment from 1948-III to 1988-II. The seasonal component has been removed from
5.12 Consider predicting the transformed flows It = log it from transformed precipitation values Pt = √pt using a transfer function model of the form (1 − B^12)It = α(B)(1 − B^12)Pt + nt, where we
5.11 The file labeled clim-hyd has 454 months of measured values for the climatic variables air temperature, dew point, cloud cover, wind speed, precipitation (pt), and inflow (it), at Shasta Lake.
5.10 Consider the correlated regression model, defined in the text by (5.53), say, y = Zβ + x, where x has mean zero and covariance matrix Γ. In this case, we know that the weighted least squares
5.9 Let St represent the monthly sales data listed in sales.dat (n = 150), and let Lt be the leading indicator listed in lead.dat. Fit the regression model ∇St = β0 + β1∇Lt−3 + xt, where xt
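One way to set up this regression in R, assuming sales.dat and lead.dat each hold a single column of values, is to align the differenced series with ts.intersect (the column names below are arbitrary).

    S <- ts(scan("sales.dat"))
    L <- ts(scan("lead.dat"))
    # Align the differenced sales with the differenced leading indicator lagged by 3.
    d <- ts.intersect(dS = diff(S), dL3 = stats::lag(diff(L), -3), dframe = TRUE)
    fit <- lm(dS ~ dL3, data = d)
    summary(fit)
    acf(resid(fit))   # examine xt; an ARMA model for the residuals may be warranted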
5.8 The sunspot data are plotted in Chapter 4, Figure 4.31. From a time plot of the data, discuss why it is reasonable to fit a threshold model to the data, and then fit a threshold model.
5.7 The 2×1 gradient vector, l^(1)(α0, α1), given for an ARCH(1) model was displayed in (5.41). Verify (5.41) and then use the result to calculate the 2 × 2 Hessian matrix l^(2)(α0, α1) = (∂^2 l/∂αi ∂αj).
5.6 The stats package of R contains the daily closing prices of four major European stock indices; type help(EuStockMarkets) for details. Fit a GARCH model to the returns of these series and discuss
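A minimal sketch using the tseries package (the choice of package is an assumption; fGarch or similar would also work), shown for the DAX series; the other indices are handled the same way.

    library(tseries)
    dax <- EuStockMarkets[, "DAX"]
    r.dax <- diff(log(dax))                  # log returns
    fit <- garch(r.dax, order = c(1, 1))     # GARCH(1, 1)
    summary(fit)                             # coefficients and residual diagnostics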
5.5 Investigate whether the growth rate of the monthly Oil Prices exhibits GARCH behavior. If so, fit an appropriate model to the growth rate.
5.4 Investigate whether the monthly returns of a stock dividend yield listed in the file sdyr.dat exhibit GARCH behavior. If so, fit an appropriate model to the returns. The data are monthly returns
5.3 Compute the sample ACF of the absolute values of the NYSE returns displayed in Figure 1.4 up to lag 200 and comment on whether the ACF indicates long memory. Fit an ARFIMA model to the absolute
5.2 The data in globtemp2.dat are annual global temperature deviations from 1880 to 2004 (there are three columns in the data file; work with the annual means and not the 5-year smoothed data). The
5.1 The data set labeled fracdiff.dat is n = 1000 simulated observations from a fractionally differenced ARIMA(1, 1, 0) model with φ = .75 and d = .4. (a) Plot the data and comment. (b) Plot the
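A possible starting point in R, using the fracdiff package to estimate d and φ (the package choice is an assumption; any ARFIMA fitting routine could be substituted).

    library(fracdiff)
    x <- scan("fracdiff.dat")
    plot.ts(x)                       # (a) plot the data
    acf(x, lag.max = 100)            # a slowly decaying ACF is typical of long memory
    fit <- fracdiff(x, nar = 1)      # long-memory model with one AR term
    summary(fit)                     # compare the estimates of d and phi with .4 and .75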
4.43 For the zero-mean complex random vector z = xc − ixs, with cov(z) = Σ = C − iQ, with Σ = Σ∗, define w = 2Re(a∗z), where a = ac − ias is an arbitrary non-zero complex vector. Prove
4.42 Finish the proof of Theorem C.5.
4.41 Prove Lemma C.4.
4.40 Show that condition (4.41) implies (C.19) under the assumption that wt ∼ wn(0, σ2w).
4.39 Let wt be a Gaussian white noise series with variance σ2w. Prove that the results of Theorem C.4 hold without error for the DFT of wt.
4.38 Consider the two-dimensional linear filter given as the output (4.154). (a) Express the two-dimensional autocovariance function of the output, say, γy(h1, h2), in terms of an infinite sum
4.37 Consider the same model as in the preceding problem. (a) Prove the optimal smoothed estimator of the form ... (c) Compare the mean square error of the estimator in part (b) with that of the optimal finite
4.36 Consider the model yt = xt + vt, where xt = φ1xt−1 + wt, such that vt is Gaussian white noise and independent of xt with var(vt) = σ^2_v, and wt is Gaussian white noise and independent of vt,
4.35 Consider the signal plus noise model yt = xt + vt, where the signal and noise series, xt and vt, are both stationary with spectra fx(ω) and fv(ω), respectively. Assuming that xt and vt are independent of each other
4.34 Figure 4.33 contains 454 months of measured values for the climatic variables air temperature, dew point, cloud cover, wind speed, precipitation, and inflow at Shasta Lake in California. We
4.33 Prove the squared coherence ρ^2_y·x(ω) = 1 for all ω when yt = Σ_{r=−∞}^{∞} ar x_{t−r}, that is, when xt and yt can be related exactly by a linear filter.
4.32 Consider the problem of approximating the filter output ... for t = M/2 − 1, M/2, . . . , n − M/2, where xt is available for t = 1, . . . , n, and ... with ωk = k/M. Prove ...
4.31 Using Examples 4.20-4.22 as a guide, perform a dynamic Fourier analysis and wavelet analyses (dwt and waveshrink analysis) on the event of unknown origin that took place near the Russian nuclear
4.30 Repeat the wavelet analyses of Examples 4.21 and 4.22 on all earthquake and explosion series in the data file eq+exp.dat. Do the conclusions about the difference between earthquakes and
4.29 Repeat the dynamic Fourier analysis of Example 4.20 on the remaining seven earthquakes and seven explosions in the data file eq+exp.dat. Do the conclusions about the difference between
4.28 Suppose we wish to test the noise alone hypothesis H0 : xt = nt against the signal-plus-noise hypothesis H1 : xt = st + nt, where st and nt are uncorrelated zero-mean stationary processes with
4.27 Suppose a sample time series with n = 256 points is available from the first-order autoregressive model. Furthermore, suppose a sample spectrum computed with L = 3 yields the estimated value
4.26 Fit an autoregressive spectral estimator to the Recruitment series and compare it to the results of Example 4.11.
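A minimal sketch, assuming the Recruitment series is available as a ts object named rec (the object name, and the smoothing used for the comparison periodogram, are assumptions rather than the settings of Example 4.11).

    # Parametric (autoregressive) spectral estimate; the AR order is chosen by AIC.
    spec.ar(rec, log = "no", main = "AR spectral estimate, Recruitment")
    # Nonparametric estimate for comparison with Example 4.11.
    spec.pgram(rec, spans = c(9, 9), taper = 0.1, log = "no",
               main = "Smoothed periodogram, Recruitment")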
4.25 Often, the periodicities in the sunspot series are investigated by fitting an autoregressive spectrum of sufficiently high order. The main periodicity is often stated to be in the neighborhood
4.24 Suppose we are given a stationary zero-mean series xt with spectrum fx(ω) and then construct the derived series yt = ayt−1 + xt, t = ±1, ±2, ... . (a) Show how the theoretical fy(ω) is
4.23 Suppose xt is a stationary series, and we apply two filtering operations in succession, say, yt = Σ_r ar x_{t−r} and then zt = Σ_s bs y_{t−s}. (a) Show the spectrum of the output is fz(ω) = |A(ω)|^2 |B(ω)|^2 fx(ω), where A(ω) and B(ω)
4.22 Let xt = cos(2πωt), and consider the output ... where |A(ω)| and φ(ω) are the amplitude and phase of the filter, respectively. Interpret the result in terms of the relationship between the input
4.21 Determine the theoretical power spectrum of the series formed by combining the white noise series wt to form yt = wt−2 + 4wt−1 + 6wt + 4wt+1 + wt+2. Determine which frequencies are present by
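A quick numerical sketch of the resulting spectrum, taking σ^2_w = 1; the closed form used in the comments, A(ω) = 6 + 8 cos(2πω) + 2 cos(4πω), follows from summing the filter weights against complex exponentials and should be verified as part of the problem.

    # Frequency response of the symmetric filter with weights (1, 4, 6, 4, 1):
    # A(omega) = 6 + 8*cos(2*pi*omega) + 2*cos(4*pi*omega), so the output
    # spectrum is f_y(omega) = |A(omega)|^2 * sigma_w^2.
    omega <- seq(0, 0.5, length.out = 500)
    A <- 6 + 8 * cos(2 * pi * omega) + 2 * cos(4 * pi * omega)
    plot(omega, A^2, type = "l", xlab = "frequency", ylab = "power")
    # Low frequencies dominate; the power drops to zero as frequency approaches 1/2.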
4.20 Consider the bivariate time series records containing monthly U.S. production as measured monthly by the Federal Reserve Board Production Index and unemployment as given in Figure 3.22. (a)
4.19 For the processes in Problem 4.18,(a) Compute the phase between xt and yt.(b) Simulate n = 1024 observations from xt and yt for φ = .9, σ2 = 1, and D = 1. Then estimate and plot the phase
4.18 Consider two processes xt = wt and yt = φxt−D + vt where wt and vt are independent white noise processes with common variance σ^2, φ is a constant, and D is a fixed integer delay. (a) Compute
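A simulation sketch for part (b) of Problem 4.19, under the model of Problem 4.18 with φ = .9, σ^2 = 1, and D = 1; the seed and smoothing spans are arbitrary.

    set.seed(1)
    n <- 1024; phi <- .9; D <- 1
    w <- rnorm(n + D)                    # white noise driving xt
    v <- rnorm(n)                        # observation noise
    x <- w[(D + 1):(n + D)]              # xt = wt
    y <- phi * w[1:n] + v                # yt = phi * x_{t-D} + vt
    # Cross-spectral estimate; the phase component estimates the phase spectrum.
    sr <- spec.pgram(cbind(x, y), spans = c(7, 7), taper = 0.1, plot = FALSE)
    plot(sr$freq, sr$phase[, 1], type = "l",
         xlab = "frequency", ylab = "estimated phase")
    # The phase should be roughly linear in frequency, with slope tied to the delay D.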
4.17 Analyze the coherency between the temperature and salt data discussed in Problem 4.9. Discuss your findings.
4.16 Consider two time series ... formed from the white noise series wt with variance σ^2_w = 1. (a) Are xt and yt jointly stationary? Recall the cross-covariance function must also be a function only of
4.15 Use Property P4.1 to verify (4.63). Then verify (4.66) and (4.67).
4.14 The periodic behavior of a time series induced by echoes can also be observed in the spectrum of the series; this fact can be seen from the results stated in Problem 4.6(a). Using the notation
4.13 Repeat Problem 4.9 using a nonparametric spectral estimation procedure. In addition to discussing your findings in detail, comment on your choice of a spectral estimate with regard to smoothing
4.12 Repeat Problem 4.8 using a nonparametric spectral estimation procedure. In addition to discussing your findings in detail, comment on your choice of a spectral estimate with regard to smoothing
4.11 Prove the convolution property of the DFT, namely, ... for t = 1, 2, . . . , n, where dA(ωk) and dx(ωk) are the discrete Fourier transforms of at and xt, respectively, and we assume that xt = xt+n
4.10 Let the observed series xt be composed of a periodic signal and noise so it can be written as xt = β1 cos(2πωkt) + β2 sin(2πωkt) + wt, where wt is a white noise process with variance σ2w.
4.9 The levels of salt concentration known to have occurred over rows, corresponding to the average temperature levels for the soil science data considered in Figures 1.15 and 1.16, are shown in
4.8 Figure 4.31 shows the biyearly smoothed (12-month moving average) number of sunspots from June 1749 to December 1978 with n = 459 points that were taken twice per year. With Example 4.9 as a
4.7 Suppose xt and yt are stationary zero-mean time series with xt independent of ys for all s and t. Consider the product series zt = xtyt. Prove the spectral density for zt can be written as fz(ω) =
4.6 In applications, we will often observe series containing a signal that has been delayed by some unknown time D, i.e., xt = st + Ast−D + nt, where st and nt are stationary and independent with
4.5 A first-order autoregressive model is generated from the white noise series wt using the generating equations xt = φxt−1 + wt, where |φ| < 1. (a) Show the power spectrum of xt is given
4.4 A time series was generated by first drawing the white noise series wt from a normal distribution with mean zero and variance one. The observed series xt was generated from xt = wt − θwt−1,
4.3 Verify (4.5).