Questions and Answers of Legal Research Analysis
4.2 With reference to equations (4.2) and (4.3), let Z1 = U1 and Z2 = −U2 be independent, standard normal variables. Consider the polar coordinates of the point (Z1, Z2), that is, A² = Z1² + Z2²
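A quick simulation sketch (mine, in Python with numpy, not part of the original problem set) suggesting what the polar coordinates should look like: the squared radius A² behaves like a chi-squared variable with 2 df (mean 2), and the angle is uniform on (−π, π].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z1 = rng.standard_normal(n)   # Z1
z2 = rng.standard_normal(n)   # Z2

# Polar coordinates of (Z1, Z2): squared radius and angle.
a2 = z1**2 + z2**2            # should look chi-squared(2), i.e. mean 2
theta = np.arctan2(z2, z1)    # should look uniform on (-pi, pi]

print(round(a2.mean(), 2))    # near 2
print(round(theta.mean(), 2)) # near 0
```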
4.1 Repeat the simulations and analyses in Examples 4.1 and 4.2 with the following changes:(a) Change the sample size to n = 128 and generate and plot the same series as in Example 4.1:xt1 = 2
3.43 Prove Property P3.2.
3.42 Prove Theorem B.2.
3.41 Use Theorem B.2 and B.3 to verify (3.105).
3.40 Consider the series xt = wt − wt−1, where wt is a white noise process with mean zero and variance σ²w. Suppose we consider the problem of predicting xn+1, based on only x1, . . . , xn. Use
3.39 Use the Projection Theorem to derive the Innovations Algorithm, Property P3.6, equations (3.68)-(3.70). Then, use Theorem B.2 to derive the m-step-ahead forecast results given in (3.71) and
3.38 Suppose xt = Σ_{j=1}^p φj xt−j + wt, where φp ≠ 0 and wt is white noise such that wt is uncorrelated with {xk; k < t}. Show that, for n > p, the BLP of xn+1 on sp{xk, k ≤ n} is x̂n+1 = Σ_{j=1}^p φj xn+1−j.
3.37 Fit an appropriate seasonal ARIMA model to the log-transformed Johnson and Johnson earnings series of Example 1.1. Use the estimated model to forecast the next 4 quarters.
3.36 Fit a seasonal ARIMA model of your choice to the U.S. Live Birth Series (birth.dat). Use the estimated model to forecast the next 12 months.
3.35 Fit a seasonal ARIMA model of your choice to the unemployment data displayed in Figure 3.22. Use the estimated model to forecast the next 12 months.
3.34 Sketch the ACF of the seasonal ARMA(0, 1) × (1, 0)_{12} model with Φ = .8 and θ = .5.
3.33 Consider the ARIMA model xt = wt + Θwt−2.(a) Identify the model using the notation ARIMA(p, d, q) × (P, D, Q)s.(b) Show that the series is invertible for |Θ| < 1.(c) Develop equations for the
3.32 One of the series collected along with particulates, temperature, and mortality described in Example 2.2 is the sulfur dioxide series. Fit an ARIMA(p,d, q) model to the data, performing all of
3.31 The second column in the data file globtemp2.dat are annual global temperature deviations from 1880 to 2004. The data are an update to the Hansen-Lebedeff global temperature data and the url of
3.30 Using the gas price series described in Problem 2.9, fit an ARIMA(p,d, q)model to the data, performing all necessary diagnostics. Comment.
3.29 In Example 3.36, we presented the diagnostics for the MA(2) fit to the GNP growth rate series. Using that example as a guide, complete the diagnostics for the AR(1) fit.
3.28 For the logarithm of the glacial varve data, say, xt, presented in Example 3.31, use the first 100 observations and calculate the EWMA, x̃_{t+1}^t, given in (3.134) for t = 1, . . . , 100, using λ
3.27 Verify that the IMA(1,1) model given in (3.131) can be inverted and written as (3.132).
3.26 Suppose yt = β0 + β1t + · · · + βq t^q + xt, βq ≠ 0, where xt is stationary. First, show that ∇^k xt is stationary for any k = 1, 2, . . . , and then show that ∇^k yt is not stationary for
3.25 Forecasting with estimated parameters: Let x1, x2, . . . , xn be a sample of size n from a causal AR(1) process, xt = φxt−1 + wt. Let φ̂ be the Yule–Walker estimator of φ.(a) Show φ̂ −
3.24 A problem of interest in the analysis of geophysical time series involves a simple model for observed data containing a signal and a reflected version of the signal with unknown amplification
3.23 Consider the stationary series generated by xt = α + φxt−1 + wt + θwt−1, where E(xt) = μ, |θ| < 1.(a) Determine the mean as a function of α for the above model. Find the autocovariance
3.22 Using Example 3.30 as your guide, find the Gauss–Newton procedure for estimating the autoregressive parameter, φ, from the AR(1) model, xt = φxt−1+wt, given data x1, . . . , xn. Does this
3.21 Generate n = 50 observations from a Gaussian AR(1) model with φ = .99 and σw = 1. Using an estimation technique of your choice, compare the approximate asymptotic distribution of your estimate
3.20 Generate 10 realizations of length n = 200 of a series from an ARMA(1,1) model with φ1 = .90, θ1 = .2 and σ² = .25. Fit the model by nonlinear least squares or maximum likelihood in each case
3.19 Generate n = 500 observations from the ARMA model given by xt = .9xt−1 + wt − .9wt−1, with wt ∼ iid N(0, 1). Plot the simulated data, compute the sample ACF and PACF of the simulated
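The point of 3.19 is parameter redundancy: with φ1 = .9 and θ1 = −.9 the AR and MA polynomials share the factor (1 − .9z), so the model collapses to white noise and the sample ACF and PACF should be near zero. A minimal Python sketch (my own numpy translation; the book works in R):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
w = rng.standard_normal(n + 1)  # w[0] plays the role of w_0

# x_t = .9 x_{t-1} + w_t - .9 w_{t-1}: the AR and MA polynomials
# share the factor (1 - .9z), so the process is really white noise.
x = np.zeros(n)
x[0] = w[1] - 0.9 * w[0]
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + w[t + 1] - 0.9 * w[t]

def sample_acf(y, h):
    y = y - y.mean()
    return float(np.dot(y[:-h], y[h:]) / np.dot(y, y))

# Sample ACF at small lags should sit near zero, inside ~2/sqrt(n) = .09.
print([round(sample_acf(x, h), 2) for h in (1, 2, 3)])
```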
3.18 Suppose x1, . . . , xn are observations from an AR(1) process with μ = 0.(a) Show the backcasts can be written as x_t^n = φ^{1−t} x1, for t ≤ 1.(b) In turn, show, for t ≤ 1, the backcasted
3.17 Let Mt represent the cardiovascular mortality series discussed in Chapter 2, Example 2.2. Fit an AR(2) model to the data using linear regression and using Yule–Walker.(a) Compare the parameter
3.16 Verify statement (3.78), that for a fixed sample size, the ARMA prediction errors are correlated.
3.15 Consider the ARMA(1,1) model discussed in Example 3.6, equation (3.26);that is, xt = .9xt−1 + .5wt−1 + wt. Show that truncated prediction as defined in (3.81) is equivalent to truncated
3.14 For an AR(1) model, determine the general form of the m-step-ahead forecast x̂_{t+m}^t and show that E[(x_{t+m} − x̂_{t+m}^t)²] = σ²w (1 − φ^{2m}) / (1 − φ²).
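The closed form asked for in 3.14 comes from writing the m-step error of a causal AR(1) as a finite geometric sum: x_{t+m} − φ^m x_t = Σ_{j=0}^{m−1} φ^j w_{t+m−j}, so its variance is σ²w Σ_{j<m} φ^{2j}. A throwaway numeric check of that identity (my sketch; the values φ = .8, σ²w = 1 are illustrative, not from the text):

```python
# Check that the geometric-sum form of the forecast-error variance
# matches the closed form sigma_w^2 * (1 - phi^(2m)) / (1 - phi^2).
phi, sigw2 = 0.8, 1.0
for m in (1, 2, 5, 20):
    series = sigw2 * sum(phi ** (2 * j) for j in range(m))
    closed = sigw2 * (1 - phi ** (2 * m)) / (1 - phi ** 2)
    assert abs(series - closed) < 1e-12
print("geometric sum matches closed form for m = 1, 2, 5, 20")
```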
3.13 Suppose we wish to find a prediction function g(x) that minimizes MSE = E[(y − g(x))2], where x and y are jointly distributed random variables with density function f(x, y).(a) Show that MSE
3.12 Suppose xt is stationary with zero mean and recall the definition of the PACF given by (3.49) and (3.50). That is, let xh − Σ_{j=1}^{h−1} aj xj and x0 − Σ_{j=1}^{h−1} bj xj be the two residuals, where {a1, . . . , ah−1} and {b1, . . . , bh−1}
3.11 In the context of equation (3.56), show that, if γ(0) > 0 and γ(h) → 0 as h→∞, then Γn is positive definite.
3.10 Consider the MA(1) series xt = wt + θwt−1, where wt is white noise with variance σ²w.(a) Derive the minimum mean square error one-step forecast based on the infinite past, and determine the
3.9 Let Mt represent the cardiovascular mortality series discussed in Chapter 2, Example 2.2.(a) Fit an AR(2) to Mt using linear regression as in Example 3.16.(b) Assuming the fitted model in (a) is
3.8 Generate n = 100 observations from each of the three models discussed in Problem 3.7. Compute the sample ACF for each model and compare it to the theoretical values. Compute the sample PACF for
3.7 Verify the calculations for the autocorrelation function of an ARMA(1, 1)process given in Example 3.11. Compare the form with that of the ACF for the ARMA(1, 0) and the ARMA(0, 1) series. Plot
3.6 For the AR(2) autoregressive series shown below, determine a set of difference equations that can be used to find ψj, j = 0, 1, . . . in the representation(3.24) and the autocorrelation function
3.5 For the AR(2) model given by xt = −.9xt−2 + wt, find the roots of the autoregressive polynomial, and then sketch the ACF, ρ(h).
3.4 Verify the causal conditions for an AR(2) model given in (3.27). That is, show that an AR(2) is causal if and only if (3.27) holds.
3.3 Identify the following models as ARMA(p, q) models (watch out for parameter redundancy), and determine whether they are causal and/or invertible:(a) xt = .80xt−1 − .15xt−2 + wt −
3.2 Let wt be white noise with variance σ²w and let |φ| < 1 be a constant. Consider the process(a) Find the mean and the variance of {xt, t = 1, 2, . . .}. Is xt stationary?(b) Show, for h ≥ 0.(c) Argue that for
3.1 For an MA(1), xt = wt + θwt−1, show that |ρx(1)| ≤ 1/2 for any number θ. For which values of θ does ρx(1) attain its maximum and minimum?
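Since ρx(1) = θ/(1 + θ²) for an MA(1), the bound in 3.1 can be eyeballed numerically before proving it. A quick Python sweep (mine, not from the text) suggests the extremes ±1/2 occur at θ = ±1:

```python
import numpy as np

# rho_x(1) = theta / (1 + theta^2) for an MA(1); sweep a grid of theta.
theta = np.linspace(-10, 10, 200001)
rho1 = theta / (1 + theta**2)

print(round(rho1.max(), 3), round(theta[rho1.argmax()], 3))  # max near .5 at theta near 1
print(round(rho1.min(), 3), round(theta[rho1.argmin()], 3))  # min near -.5 at theta near -1
```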
2.12 Use a smoothing technique described in §2.4 to estimate the trend in the global temperature series displayed in Figure 1.2. Use the entire data set(see Example 2.1 for details).
2.11 For the data plotted in Figure 1.5, let St denote the SOI index series, and let Rt denote the Recruitment series.(a) Draw a lag plot similar to the one in Figure 2.7 for Rt and comment.(b)
2.10 In this problem, we will explore the periodic nature of St, the SOI series displayed in Figure 1.5.(a) Detrend the series by fitting a regression of St on time t. Is there a significant trend in
2.9 Consider the two time series representing average wholesale U.S. gas and oil prices over 180 months, beginning in July 1973 and ending in December 1987. Analyze the data using some of the
2.8 The glacial varve record plotted in Figure 2.6 exhibits some nonstationarity that can be improved by transforming to logarithms and some additional nonstationarity that can be corrected by
2.7 Show (2.28) is stationary.
2.6 Consider a process consisting of a linear trend with an additive noise term consisting of independent random variables wt with zero means and variances σ²w, that is, xt = β0 + β1t + wt, where
2.5 Model Selection. Both selection criteria (2.18) and (2.19) are derived from information theoretic arguments, based on the well-known Kullback–Leibler discrimination information numbers (see
2.4 Kullback-Leibler Information. Given the random vector y, we define the information for discriminating between two densities in the same family, indexed by a parameter θ, say f(y; θ1) and f(y;
2.3 Generate a random walk with drift, (1.4), of length n = 500 with δ = .1 and σw = 1. Call the data xt for t = 1, . . . , 500. Fit the regression xt = βt + wt using least squares. Plot the data,
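A minimal numpy version of the simulation in 2.3 (my own sketch; the book uses R). The instructive part is that the fitted slope hovers around δ but is far more variable than ordinary regression theory suggests, because the residuals are a random walk rather than white noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta, sigw = 500, 0.1, 1.0

# Random walk with drift: x_t = delta + x_{t-1} + w_t, x_0 = 0.
x = np.cumsum(delta + sigw * rng.standard_normal(n))
t = np.arange(1, n + 1)

# Least-squares fit of x_t = beta * t + w_t (no intercept term).
beta_hat = np.dot(t, x) / np.dot(t, t)
print(round(float(beta_hat), 3))  # roughly near delta = .1, but highly variable
```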
2.2 For the mortality data examined in Example 2.2:(a) Add another component to the regression in (2.24) that accounts for the particulate count four weeks prior; that is, add Pt−4 to the
2.1 For the Johnson & Johnson data, say yt, for t = 1, . . . , 84, shown in Figure 1.1, let xt = ln(yt).(a) Fit the regression model xt = βt + α1Q1(t) + α2Q2(t) + α3Q3(t) + α4Q4(t) + wt where
1.31 Let {xt; t = 0,±1,±2, . . .} be iid (0, σ²).(a) For h ≥ 1 and k ≥ 1, show that xtxt+h and xsxs+k are uncorrelated for all s ≠ t.(b) For fixed h ≥ 1, show that the h × 1
1.30 For a linear process of the form xt = Σ_{j=0}^∞ φ^j wt−j, where {wt} satisfies the conditions of Theorem A.7 and |φ| < 1, show that √n (ρ̂x(1) − ρx(1)) / √(1 − ρx(1)²) →d N(0, 1), and
1.29 Let xt be a linear process of the form (A.44)–(A.45). If we define γ̃(h) = n⁻¹ Σ_{t=1}^n (xt+h − μx)(xt − μx), show that n^{1/2} (γ̃(h) − γ̂(h)) = op(1). Hint: The Markov Inequality P{|x|
1.28 (a) Suppose xt is a weakly stationary time series with mean zero and with absolutely summable autocovariance function, γ(h), such that Σ_{h=−∞}^∞ γ(h) = 0. Prove that √n x̄ →p 0, where x̄
1.27 Suppose xt = β0 + β1t, where β0 and β1 are constants. Prove that, as n → ∞, ρ̂x(h) → 1 for fixed h, where ρ̂x(h) is the sample ACF (1.37).
1.26 A concept used in geostatistics, see Journel and Huijbregts (1978) or Cressie (1993), is that of the variogram, defined for a spatial process xs, s = (s1, s2), for s1, s2 = 0,±1,±2, ..., as
1.25 Consider a collection of time series x1t, x2t, . . . , xNt that are observing some common signal μt observed in noise processes e1t, e2t, . . . , eNt, with a model for the j-th observed series
1.24 A real-valued function g(t), defined on the integers, is non-negative definite if and only if Σ_{s=1}^n Σ_{t=1}^n a_s g(s − t) a_t ≥ 0 for all positive integers n and for all vectors a = (a1, a2, . . .
1.23 For the time series yt described in Example 1.23, verify the stated result that ρy(1) = −.47 and ρy(h) = 0 for h > 1.
1.22 Simulate a series of n = 500 observations from the signal-plus-noise model presented in Example 1.12 with σ²w = 1. Compute the sample ACF to lag 100 of the data you generated and comment.
1.21 Although the model in Problem 1.2 is not stationary (Why?), the sample ACF can be informative. For the data you generated in that problem, calculate and plot the sample ACF, and then comment.
1.20 (a) Simulate a series of n = 500 moving average observations as in Example 1.9 and compute the sample ACF, ρ̂(h), to lag 20. Compare the sample ACF you obtain to the actual ACF, ρ(h). [Recall
1.19 (a) Simulate a series of n = 500 Gaussian white noise observations as in Example 1.8 and compute the sample ACF, ρ̂(h), to lag 20. Compare the sample ACF you obtain to the actual ACF, ρ(h).
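For 1.19(a), a minimal white-noise-plus-sample-ACF sketch in Python (numpy stands in for the book's R; the ±2/√n band is the usual large-sample 95% limit):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
w = rng.standard_normal(n)  # Gaussian white noise

def sample_acf(x, max_lag):
    x = x - x.mean()
    d = np.dot(x, x)
    return [float(np.dot(x[:-h], x[h:]) / d) for h in range(1, max_lag + 1)]

acf = sample_acf(w, 20)
# The true ACF of white noise is 0 for h >= 1; the sample ACF should
# mostly stay inside the +/- 2/sqrt(n) band (about +/- .09 here).
inside = sum(abs(r) < 2 / np.sqrt(n) for r in acf)
print(inside, "of 20 lags inside the 95% band")
```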
1.18 Suppose that xt is a linear process of the form (1.31) satisfying the absolute summability condition (1.32). Prove Σ_{h=−∞}^∞ |γ(h)| < ∞.
1.17 Suppose we have the linear process xt generated by xt = wt − θwt−1, t = 0, 1, 2, . . ., where {wt} is independent and identically distributed with characteristic function φw(·), and θ is
1.16 Consider the series xt = sin(2πUt), t = 1, 2, . . ., where U has a uniform distribution on the interval (0, 1).(a) Prove xt is weakly stationary.(b) Prove xt is not strictly stationary. [Hint:
1.15 Let wt, for t = 0,±1,±2, . . . , be a normal white noise process, and consider the series xt = wt wt−1. Determine the mean and autocovariance function of xt, and state whether it is stationary.
1.14 Let xt be a stationary normal process with mean μx and autocovariance function γ(h). Define the nonlinear time series yt = exp{xt}.(a) Express the mean function E(yt) in terms of μx and
1.13 Consider the two series xt = wt and yt = wt − θwt−1 + ut, where wt and ut are independent white noise series with variances σ²w and σ²u, respectively, and θ is an unspecified constant.(a)
1.12 For two weakly stationary series xt and yt, verify (1.30).
1.11 Consider the linear process defined in (1.31).(a) Verify that the autocovariance function of the process is given by(1.33). Use the result to verify your answer to Problem 1.7.(b) Show that xt
1.10 Suppose we would like to predict a single stationary series xt with zero mean and autocorrelation function γ(h) at some time in the future, say, t + ℓ, for ℓ > 0.(a) If we predict using only xt and
1.9 A time series with a periodic component can be constructed from xt = U1 sin(2πω0t) + U2 cos(2πω0t), where U1 and U2 are independent random variables with zero means and E(U1²) = E(U2²) =
1.8 Consider the random walk with drift model xt = δ + xt−1 + wt, for t = 1, 2, . . . , with x0 = 0, where wt is white noise with variance σ²w.(a) Show that the model can be written as xt = δt
1.7 For a moving average process of the form xt = wt−1 + 2wt + wt+1, where wt are independent with zero means and variance σ²w, determine the autocovariance and autocorrelation functions as a
1.6 Consider the time series xt = β1 + β2t + wt, where β1 and β2 are known constants and wt is a white noise process with variance σ²w.(a) Determine whether xt is stationary.(b) Show that the
1.5 For the two series, xt, in Problem 1.2 (a) and (b):(a) compute and sketch the mean functions μx(t); for t = 1, . . . , 200.(b) calculate the autocovariance functions, γx(s, t), for s, t = 1, .
1.4 Show that the autocovariance function can be written as γ(s, t) = E[(xs − μs)(xt − μt)] = E(xs xt) − μs μt, where E[xt] = μt.
1.3 (a) Generate n = 100 observations from the autoregression xt = −.9xt−2 + wt with σw = 1, using the method described in Example 1.10. Next, apply the moving average filter vt = (xt + xt−1 +
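A sketch of the generate-then-filter step in 1.3, assuming the filter averages four consecutive values, vt = (xt + xt−1 + xt−2 + xt−3)/4, and using a burn-in to kill the start-up transient (my own numpy version; the book's Example 1.10 works in R):

```python
import numpy as np

rng = np.random.default_rng(4)
n, burn = 100, 50

# Generate from x_t = -.9 x_{t-2} + w_t, discarding a burn-in so the
# start-up transient dies out before we keep observations.
w = rng.standard_normal(n + burn)
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = -0.9 * x[t - 2] + w[t]
x = x[burn:]

# Apply the 4-point moving average filter (assumed form of the filter).
v = np.convolve(x, np.ones(4) / 4, mode="valid")
print(len(x), len(v))  # 100 and 97
```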
1.2 Consider a signal plus noise model of the general form xt = st + wt, where wt is Gaussian white noise with σ²w = 1. Simulate and plot n = 200 observations from each of the following two models
1.1 To compare the earthquake and explosion signals, plot the data displayed in Figure 1.7 on the same graph using different colors or different line types and comment on the results.
c. If the monthly return for VLO was 3.4%, given the regression equation, what would be your best estimate of the monthly return for TSO?
b. What is the r-squared for the regression line? How is this number interpreted?
a. Calculate the least-squares regression line.
5. Plot the monthly returns using TSO as the dependent variable and VLO as the independent variable.
4. What is the correlation coefficient for VLO and TSO? Explain what the correlation coefficient means.
3. Calculate the standard deviation of the monthly return for both VLO and TSO using the data provided in Question 2. Explain how the standard deviation relates to risk.
b. Calculate the following for each of these stocks: i. Arithmetic mean of the monthly returns; ii. Median monthly return; iii. Geometric mean monthly return. c. Explain why the mode is not a meaningful statistic for this data set.
Showing 100 - 200 of 1549