
Question



  • If you had to choose between the ETS and the SARIMA model to forecast several years into the future, which one would you go with? Why? (See the R sketch after this list for one way to make that comparison concrete.)

  • If you had the resources to construct models other than the ETS and the SARIMA, would you keep searching? Give at least two reasons if you would; if you wouldn't, explain why not.
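The first bullet can be explored directly in R. The sketch below is a minimal illustration, not the graded solution: it assumes the forecast package and the assignment objects train.en (the training portion) and en.ts (the full monthly series) that appear in the output further down, and the 36-month horizon is simply an assumed stand-in for "several years".

    # Minimal sketch: compare the ETS and SARIMA candidates several years ahead.
    # Assumes the forecast package plus train.en (training series) and en.ts
    # (full monthly series) from the assignment are in the workspace.
    library(forecast)

    h <- 36  # assumed horizon: three years of monthly data

    fit.ets   <- ets(train.en, model = "MAM", damped = TRUE)  # ETS(M,Ad,M), as in Fig. 2
    fit.arima <- auto.arima(train.en, stepwise = FALSE)       # SARIMA candidate, as in Fig. 2

    # Long-horizon point forecasts and prediction intervals
    fc.ets   <- forecast(fit.ets,   h = h)
    fc.arima <- forecast(fit.arima, h = h)

    # Test-set accuracy (MAPE, MASE, Theil's U, ...) against the held-out data
    accuracy(fc.ets,   en.ts)
    accuracy(fc.arima, en.ts)

    # Rolling-origin cross-validation errors by horizon (slow, since each model
    # is re-fitted at every origin, but it shows how accuracy degrades with h)
    cv.ets   <- tsCV(en.ts, function(y, h) forecast(ets(y, model = "MAM", damped = TRUE), h = h), h = h)
    cv.arima <- tsCV(en.ts, function(y, h) forecast(auto.arima(y), h = h), h = h)
    sqrt(colMeans(cv.ets^2,   na.rm = TRUE))  # RMSE per horizon, ETS
    sqrt(colMeans(cv.arima^2, na.rm = TRUE))  # RMSE per horizon, SARIMA

Comparing the per-horizon RMSE curves, rather than a single one-step error, is what matters when the goal is a forecast several years out.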

The file titled "NuclearEnergy" on Blackboard contains data on the amount of electricity generated in the US over different months (Fig. 1).

a. (30 pts) Figure 2 shows the best models from the ETS, SARIMA, and neural network categories, Fig. 3 shows some accuracy metrics, Fig. 4 shows some residual-analysis checks, Fig. 5 runs some tests on the linearity assumption, and Fig. 6 shows rolling-window cross-validation errors.

Figure 1: Nuclear energy production in the US (monthly time series plot, mid-1970s through about 2000).

Figure 2: Computer-generated "best" candidates (R console output):
  - ETS: best.ets <- ets(train.en) selects ETS(M,Ad,M) with smoothing parameters alpha = 0.9384, beta = 1e-04, gamma = 1e-04, phi = 0.98; initial states l = 58.8607, b = 4.8875; AIC = 3784.754, AICc = 3707.889, BIC = 3772.128.
  - SARIMA: best.arima <- auto.arima(train.en, stepwise = FALSE) selects ARIMA(1,2,3)(0,1,1)[12] with drift; ar1 = 0.9598, seasonal MA(1) = -0.6862, drift = 1.7817; sigma^2 estimated as 395, log likelihood = -1323.56, AIC = 2661.12, AICc = 2661.5.
  - Neural network: best.neural <- nnetar(train.en) selects NNAR(1,1,2)[12], an average of 20 networks, each a 2-2-1 network with 9 weights and linear output units; sigma^2 estimated as 701.7.

Figure 3: Accuracy measures from accuracy(forecast(best.ets), en.ts) and accuracy(forecast(best.arima), en.ts):

                        ME      RMSE     MAE     MPE    MAPE   MASE    ACF1  Theil's U
  ETS, training      1.078    19.630  15.254   0.881   4.726  0.440   0.842         NA
  ETS, test         14.715    28.615  22.982   2.115   3.627  0.668   0.128      0.465
  SARIMA, training   0.023    19.294  14.886  -0.135   4.537  0.429  -0.010         NA
  SARIMA, test       6.625    19.463  16.915   0.809   2.720  0.488  -0.094      0.295

Figure 4: Residual checks from checkresiduals():
  - ETS(M,Ad,M) residuals: Ljung-Box Q* = 77.354, df = 7, p-value = 4.763e-14 (model df 17, 24 lags used).
  - ARIMA(1,0,3)(0,1,1)[12] with drift residuals: Ljung-Box Q* = 64.071, df = 18, p-value = 4.426e-07 (model df 6, 24 lags used).

Figure 5: Linearity checks from nonlinearityTest(train.en):
  - Teraesvirta's neural network test (H0: linearity in "mean"): X-squared = 27.058, df = 2, p-value = 1.33e-06.
  - White neural network test (H0: linearity in "mean"): X-squared = 19.665, df = 2, p-value = 5.37e-05.
  - Keenan's one-degree test for nonlinearity (H0: the series follows some AR process): F = 4.664, p-value = 0.0316.
  - McLeod-Li test (H0: the series follows some ARIMA process): maximum p-value = 0.
  - Tsay's test for nonlinearity (H0: the series follows some AR process): F = 1.744, p-value = 0.00069.
  - Likelihood ratio test for threshold nonlinearity (H0: AR process; H1: TAR process): X-squared = 19.079, p-value = 0.369.

Figure 6: Rolling-window cross-validation on the nuclear energy series: rolling-window errors (roughly -20 to 20) for BestMAMdamp, BestSARIMA, and BestNNET, plotted against the size of the training set (about 5 to 20).
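For reference, a rough sketch of the R pipeline behind Figures 2 through 6 follows. It assumes the forecast and nonlinearTseries packages and the same train.en / en.ts objects as above; the exact calls, seeds, and rolling-window loop used to produce the figures are not shown in the excerpt, so this is an approximation of the workflow rather than a reproduction of it.

    # Rough sketch of the workflow summarized in Figures 2-6 (assumes the
    # forecast and nonlinearTseries packages, plus train.en and en.ts).
    library(forecast)
    library(nonlinearTseries)   # provides nonlinearityTest()

    # Figure 2: computer-selected "best" candidate in each class
    best.ets    <- ets(train.en)                           # -> ETS(M,Ad,M)
    best.arima  <- auto.arima(train.en, stepwise = FALSE)  # -> seasonal ARIMA with drift
    best.neural <- nnetar(train.en)                        # -> NNAR(1,1,2)[12]

    # Figure 3: training- and test-set accuracy
    accuracy(forecast(best.ets),   en.ts)
    accuracy(forecast(best.arima), en.ts)

    # Figure 4: residual diagnostics, including the Ljung-Box test
    checkresiduals(best.ets)
    checkresiduals(best.arima)

    # Figure 5: linearity checks (Teraesvirta, White, Keenan, McLeod-Li, Tsay, TLRT)
    nonlinearityTest(train.en)

    # Figure 6 would come from a rolling-window cross-validation loop over the
    # three candidates, e.g. built around tsCV() or a manual re-fitting loop.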
