
SECTION A
ETF3231 Exam 2019: solutions
Write about a quarter of a page each on any FOUR of the following topics. (Clearly state if you agree or disagree with each statement. No marks will be given without any justification.)
Deduct marks for each major thing missed, and for each wrong statement. In general, be relatively generous if the answer makes sense and contains the main ideas.
(a) The trouble with statistical methods of forecasting is that they assume the patterns in the past will continue in the future.
• Partly true – will accept both agree and disagree – see how the arguments follow to give marks.
• Statistical methods of forecasting do assume some aspects of the past patterns will continue. However, the better models assume that the way patterns have changed over time will continue to occur into the future. So it is not so much that the past patterns will repeat, but that the past patterns will continue to change at the same rate as they have in the past.
• Of course, this may not be true, and then the forecasts will not be good. But for short-term forecasts, it is usually a reasonable assumption.
• Some forecasting methods make strong assumptions about the past/future. e.g., linear time trends. Others are highly adaptive and so make relatively weak assumptions. e.g., ETS models.
(b) A time series decomposition into trend, seasonal and remainder terms is only useful when there are no cycles in the data.
• False.
• In time series decomposition, trend and cycle are combined to give the “trend” component.
• The decomposition is useful whenever there is some (possibly changing) seasonality in the data.
• It can be used for better understanding the data, or for forecasting components separately.
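The mechanics of separating a series into components can be sketched numerically. Below is a minimal classical additive decomposition (a simpler relative of STL): a centred 2×12 moving average for the trend-cycle, then seasonal means of the detrended series. The helper function and synthetic series are made up for illustration, not taken from the exam.

```python
import numpy as np

def classical_decompose(y, m=12):
    """Classical additive decomposition of y into trend-cycle, seasonal and remainder."""
    n = y.size
    # 2 x m-MA weights: half-weight on the two endpoints so the average is centred
    kernel = np.r_[0.5, np.ones(m - 1), 0.5] / m
    trend = np.convolve(y, kernel, mode="valid")            # length n - m
    trend = np.r_[[np.nan] * (m // 2), trend, [np.nan] * (m // 2)]
    detrended = y - trend
    # seasonal index for each month: mean of the detrended values for that month
    seasonal = np.array([np.nanmean(detrended[i::m]) for i in range(m)])
    seasonal = seasonal - seasonal.mean()                    # force mean zero
    remainder = y - trend - np.tile(seasonal, n // m + 1)[:n]
    return trend, seasonal, remainder

# synthetic monthly series: linear trend plus a stable period-12 pattern
t = np.arange(120)
y = 0.3 * t + 2.0 * np.sin(2 * np.pi * t / 12)
trend, seasonal, remainder = classical_decompose(y)
```

For this noise-free series the moving average recovers the linear trend exactly (away from the endpoints), so the remainder is numerically zero; on real data the remainder holds whatever the trend-cycle and seasonal components do not capture.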
(c) Some ETS models are not always suitable and should be avoided.
• This is true.
• We have models that are numerically unstable because the state equations divide through by a state, namely ETS(A,N,M), ETS(A,A,M) and ETS(A,Ad,M).
• Multiplicative error models are only useful for strictly positive data and are numerically
unstable for data containing zeros or negative values because of the relative error.
(d) The random walk model is a non-stationary process.
• This is true.

• We showed the following several times in class. A random walk model is yt = yt−1 + εt with εt ∼ NID(0, σ2).
• Iterating backwards, yt = y0 + ε1 + ε2 + ⋯ + εt.
• Conditional mean: E(yt | yt−1) = yt−1.
• Unconditional mean: E(yt) = y0.
• Unconditional variance: var(yt) = var(y0) + var(ε1) + ⋯ + var(εt) = tσ2, taking y0 as fixed and noting that all covariance terms are zero because the errors are independent.
• Hence a non-stationary process due to its variance being time dependent.
• We can be lenient here. Give marks if they just describe this (probably not full marks).
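The var(yt) = tσ2 result is easy to check by simulation. The sketch below (all numbers arbitrary) simulates many independent random walks and compares the cross-sectional variance at a fixed time against tσ2.

```python
import numpy as np

# Simulate many random walks y_t = y_{t-1} + e_t with e_t ~ N(0, sigma^2),
# starting from y_0 = 0, and check that the variance across paths grows like t*sigma^2.
rng = np.random.default_rng(1)
sigma, T, n_paths = 2.0, 200, 20000
eps = rng.normal(0.0, sigma, size=(n_paths, T))
y = np.cumsum(eps, axis=1)           # y_t = e_1 + ... + e_t for each path

t = 150
empirical_var = y[:, t - 1].var()    # sample variance across the simulated paths
theoretical_var = t * sigma**2       # t * sigma^2 = 600
print(empirical_var, theoretical_var)
```

The empirical variance lands within a few percent of tσ2, and grows linearly in t, which is exactly the non-stationarity the derivation identifies.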
(e) The combination of AR and MA components guide long-run ARIMA forecasts.
• This is not true. The combination of c and d guide these.
• If c = 0 then for d = 0, d = 1 and d = 2, long-term ARIMA forecasts will respectively go to zero, go to a non-zero constant, and follow a straight line.
• If c ≠ 0 then for d = 0, d = 1 and d = 2, long-term forecasts will respectively go to the mean of the data, follow a straight line, and follow a quadratic trend (don’t do it).
• We will be lenient with this marking. Probably close to full marks if some are wrong.
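The role of c and d can be seen by iterating the forecast difference equations directly. The sketch below covers only the pure differencing cases (all AR/MA terms zero), with made-up starting values; future errors are set to zero, as usual for point forecasts.

```python
import numpy as np

def forecast_path(y, d, c, h):
    """h-step point forecasts for ARIMA(0,d,0) with constant c,
    obtained by iterating the difference equation with future errors set to 0."""
    y = list(y)
    for _ in range(h):
        if d == 1:
            y.append(y[-1] + c)                # (1-B)y_t = c  ->  add c each step
        elif d == 2:
            y.append(c + 2 * y[-1] - y[-2])    # (1-B)^2 y_t = c
        else:                                   # d == 0: forecasts revert to the constant
            y.append(c)
    return np.array(y[-h:])

hist = [10.0, 11.0, 12.5]                       # made-up history
f1 = forecast_path(hist, d=1, c=0.5, h=5)       # straight line with slope c
f2 = forecast_path(hist, d=2, c=0.0, h=5)       # straight line with the last observed slope
f3 = forecast_path(hist, d=2, c=0.2, h=5)       # quadratic: second differences equal c
```

Checking the differences of each forecast path confirms the claims above: f1 and f2 have constant first differences (straight lines), while f3 has constant second differences (a quadratic trend).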
(f) Linear regression models are simplistic because the real world is nonlinear.
• True, but that doesn’t make linear models not useful. A linear regression model is often a useful but simple approximation to reality that works well.
• If there is not enough data to estimate the nonlinearity, particularly if the nonlinearity is too mild to estimate well, then a linear model is often the best approach.
• The linearity is in the parameters, not in the functional form. So nonlinear relationships can be modelled using a linear model.
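The last point is worth making concrete: “linear” refers to the parameters, so a curve such as y = b0 + b1 x + b2 x² is still a linear regression, fitted by ordinary least squares with x² as an extra column of the design matrix. A minimal sketch with made-up data:

```python
import numpy as np

# Fit a quadratic (nonlinear in x, linear in the coefficients) by least squares.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 100)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.1, size=x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix: 1, x, x^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # close to the true coefficients [1.0, 0.5, -2.0]
```

The same trick covers logs, interactions and piecewise (spline) terms: any transformation of the predictors keeps the model linear in the parameters.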
— END OF SECTION A —
[Total: 20 marks]

SECTION B
(a)
• the series is trending.
• strong seasonality with seasonal variation increasing with the level of the series, i.e.,
multiplicative seasonality.
• The trough is during summer in the US, July and August.
• A bit harder to see, but there is another dip in January, especially visible in the early years. Notice for example that Jan 2000 (2001) in the season plot is below Dec 1999 (2000). This becomes more visible in the STL decomposition that follows. (Give 3.5 marks for the previous points and 0.5 extra here if anyone picks this up – give 3/4 if they only consider August as the trough without commenting on July as well.)
(b)
• This is an STL decomposition with the three panels showing trend-cycle, seasonal and remainder components.
• As STL is an additive decomposition, a log transformation is first applied to account for the multiplicative seasonality.
• Although the window is quite large, the seasonal component is changing over time. It shows that the trough at the beginning of each year due to the Christmas period has disappeared over the years.
(c) A. Seasonal naive model. Arguably suitable, the trend may be deemed to flatten towards the end. We’ll take both yes and no answers with clear justification.
B. Drift method plus seasonal dummies. Not suitable as the seasonality is changing. Also would be better with a log or Box-Cox transformation.
C. Holt-Winters additive damped trend method. Not suitable, seasonality is multiplicative.
D. Holt-Winters multiplicative damped trend method. Perfectly suitable, both multiplicative seasonality and damped trend would be appropriate.
E. ETS(A,N,M). One of the combinations that is not allowed.
F. ETS(M,Ad,M). Suitable, accounting for both the changing seasonality and the dampening of the trend.
G. ARIMA(1,1,4). Not suitable – no seasonal component.
H. ARIMA(3,1,2)(1,1,0)12. Possibly suitable as taking seasonal and first order differences.
I. ARIMA(0,1,1)(2,0,0)12. Not suitable; you would need a seasonal difference.
J. Regression with time and Fourier terms. Not suitable, the trend is not linear (it seems to dampen towards the end) and the seasonality is changing.
— END OF SECTION B —
[Total: 20 marks]

SECTION C
(a)
• both models have multiplicative errors, additive trend and multiplicative seasonality
• in the second model (damped), the trend is damped.
(b)
• α for trend is lower than for damped. Hence the level for damped is changing more rapidly relative to trend. This is reflected in the estimated level component in Figure 5 being wigglier for damped.
• β for trend is 0, hence the slope is not changing at all. This is shown in the slope panel of Figure 5, where the slope is not changing up to the third decimal place (recall that the bar on the right shows the relative scale). So it is more of a global trend here.
• β for damped is quite low (but not exactly zero). The combination of this and the dampening parameter makes the slope change slightly (changes in the second decimal place).
• There is a big contrast in the estimated smoothing coefficients for the seasonal components, with γ for trend being quite high, and hence the seasonal component changing rapidly in Figure 5, while γ for damped is much lower and the seasonal component changes much more slowly in the bottom panel of Figure 5.
(c)
• Both sets of residuals are autocorrelated, as shown by the ACF; the null of white noise is also easily rejected by the Ljung-Box test.
• histograms look close to normal for both sets.
• Point forecasts will be ok but prediction intervals will not.
• From the two sets of residuals, probably the damped residuals are better, given the significant spike at lag 1 for trend.
(d)
• Based on the AICc and the overall quality of the residuals I would choose damped.
(e) The forecasts from the two models are very different
• trend forecasts include an estimated upward global trend, although the series seems to flatten out towards the end.
• In contrast damped generates flat forecasts due to the effect of the non-zero β and the dampening parameter.
• The seasonal components projected from the two models are also very different, with the trend forecasts reflecting more the seasonality observed towards the end of the sample, after the seasonal component has changed over the estimation sample.
• Another stark contrast is the width of the prediction intervals with damped being much wider reflecting more uncertainty. The forecasts, and particularly the wider prediction intervals, make me feel comfortable with the decision to go with the damped ETS(M,Ad,M) model.

(f)
yt = (lt−1 + φbt−1) st−m (1 + εt)
lt = (lt−1 + φbt−1)(1 + αεt)
bt = φbt−1 + β(lt−1 + φbt−1)εt
st = st−m (1 + γεt)
where α = 0.89, β = 0.02, γ = 0.07, φ = 0.98 and σˆ = 0.013.
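Setting εt = 0 in these equations gives the point-forecast recursion: the level moves by φb, φ²b, … and the slope is progressively damped. The sketch below uses the estimated parameters above, but the starting states (level, slope, seasonals) are made-up numbers, purely to show the mechanics.

```python
# Point-forecast recursion for ETS(M,Ad,M): with eps = 0 the states evolve as
#   l <- l + phi*b,  b <- phi*b,  s unchanged,
# and the h-step forecast is (l + phi*b) * s_{appropriate month}.
alpha, beta, gamma, phi = 0.89, 0.02, 0.07, 0.98   # estimates from part (f)

def ets_mam_damped_forecast(l, b, seasonals, h):
    """h-step point forecasts; `seasonals` holds the last 12 seasonal states."""
    fc = []
    for step in range(1, h + 1):
        s = seasonals[(step - 1) % 12]   # seasonal state for the forecast month
        fc.append((l + phi * b) * s)
        l, b = l + phi * b, phi * b      # slope damped towards zero each step
    return fc

# made-up starting states: level 100, slope 1, flat seasonal pattern
fc = ets_mam_damped_forecast(l=100.0, b=1.0, seasonals=[1.0] * 12, h=24)
```

With a flat seasonal pattern the forecasts increase by φ^h · b each step, so they approach the finite limit l + bφ/(1 − φ) rather than trending forever; this is the “flattening out” seen in the damped forecasts of Figure 7.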
— END OF SECTION C —
[Total: 20 marks]

SECTION D
(a)
• Row 1: The ACF is slowly decaying, reflecting the strong trend, with a slightly higher seasonal spike (although not too easy to see). The PACF spike at lag 1 is close to 1, also showing the strong trend.
• Row 2: taking a first order difference leaves a strong seasonal component, reflected by the slow decay in the seasonal spikes of the ACF. Hence stationarity has yet to be achieved; seasonal differencing is definitely necessary.
• Row 3: seasonal differencing seems to get closer to stationarity. The ACF decays at a faster
rate than before but still slow and PACF spike is still quite high, although not as close to 1. The time plot of the seasonally differenced series shows that it still jumps around a little. Probably close enough to stationary. A first order difference will ensure stationarity.
• Row 4: stationarity seems to have been achieved after both seasonal and first order differencing.
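The effect of the two differencing operations is easy to demonstrate on a deterministic trend-plus-seasonal signal (made-up numbers): the seasonal difference removes the seasonal pattern and turns the linear trend into a constant, and a further first difference removes that constant entirely.

```python
import numpy as np

# Deterministic trend + seasonality, monthly frequency (m = 12).
t = np.arange(120)
seasonal = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), 10)
y = 5.0 + 0.3 * t + 2.0 * seasonal

sdiff = y[12:] - y[:-12]     # (1 - B^12) y_t: seasonal pattern cancels, trend -> 0.3*12
both = np.diff(sdiff)        # (1 - B)(1 - B^12) y_t: exactly zero here
print(sdiff[:3], both[:3])
```

On real data the result is of course not exactly zero; what remains after both differences is the (hopefully stationary) stochastic part that the ARMA terms then model.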
(b)
• arima1 relates to the third set of ACF/PACF, where only a seasonal difference has been applied
• The AR(1) component captures the dynamics shown in the single spike in the PACF and the decaying spikes in the ACF.
• The seasonal components SAR(1) SMA(1) capture the seasonal spikes on both ACF and PACF.
• arima2 relates to the fourth set of ACF/PACF, where both a seasonal and a first order difference have been applied
• The MA(1) component captures the dynamics shown in the just significant spike at lag 1 in the ACF.
• Equivalently, the seasonal component SMA(1) captures the seasonal spike in the ACF.
(c)
(1 − B)(1 − B12)yt = (1 + θB)(1 + ΘB12)εt
(1 − B − B12 + B13)yt = (1 + θB + ΘB12 + θΘB13)εt
yt − yt−1 − yt−12 + yt−13 = εt + θεt−1 + Θεt−12 + θΘεt−13
yt = yt−1 + yt−12 − yt−13 + εt + θεt−1 + Θεt−12 + θΘεt−13
where yt = log(daycare) and εt is white noise with mean 0 and variance 0.0001222. The estimated parameters are θ = −0.142 and Θ = −0.729.
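The last line is directly usable as a forecast equation: set the future error to zero and plug in the most recent observations and residuals. The sketch below uses the estimated θ and Θ from above, but the history and residuals are made-up placeholders, not the daycare data.

```python
# One-step point forecast from the expanded ARIMA(0,1,1)(0,1,1)[12] equation.
theta, Theta = -0.142, -0.729     # estimates from part (c)

def one_step_forecast(y, e):
    """y: observations up to time T; e: residuals up to time T (both lists,
    most recent last). Future error eps_{T+1} is set to zero."""
    return (y[-1] + y[-12] - y[-13]
            + theta * e[-1] + Theta * e[-12] + theta * Theta * e[-13])

y = [3.50 + 0.01 * i for i in range(24)]   # placeholder log-scale history
e = [0.005, -0.003] * 12                   # placeholder residuals
fc = one_step_forecast(y, e)
```

Multi-step forecasts iterate the same equation, replacing unavailable future observations with their forecasts and future errors with zero.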
(d)
• Some significant spikes in the ACF are expected due to multiple testing at the 5% level of significance (we expect 1 in 20 spikes to incorrectly reject the null: a type I error).
• The Ljung-Box test clearly shows that the null of white noise (ρ1 = ⋯ = ρ24 = 0) cannot be rejected.
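For reference, the Ljung-Box statistic is Q = n(n + 2) Σk ρ̂k² / (n − k), compared against a chi-squared distribution. A minimal numpy sketch (the simulated series are made up; in practice one would use a library implementation such as statsmodels' `acorr_ljungbox`):

```python
import numpy as np

def ljung_box_q(x, lags):
    """Ljung-Box Q statistic: n(n+2) * sum_k rho_k^2 / (n - k)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    denom = np.sum(x * x)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k**2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(7)
q_wn = ljung_box_q(rng.normal(size=500), lags=24)          # white noise: Q modest
q_ac = ljung_box_q(np.sin(np.linspace(0, 20, 500)), 24)    # autocorrelated: Q huge
```

Under the null of white noise Q is approximately chi-squared with degrees of freedom equal to the number of lags minus the number of fitted parameters, so for white noise Q sits near that mean, while strongly autocorrelated residuals blow it up by orders of magnitude.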

(e)
• arima1 has only one order of differencing and therefore a constant/drift component is required to generate trending forecasts.
• arima2 has two lots of differencing, and therefore no constant/drift is allowed; nor is one required to generate trending forecasts.
• The drift component in arima1 projects a global/stronger trend, in contrast to arima2 which takes two lots of differences.
• As expected, the two lots of differencing in arima2 result in wider forecast intervals compared to arima1.
— END OF SECTION D —
[Total: 20 marks]

SECTION E
(a) Because we would then be including more Fourier terms than can be distinguished with monthly data: with m = 12 seasons, K can be at most m/2 = 6.
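The reason K cannot exceed m/2 is aliasing: sampled at integer t, a Fourier term with k > m/2 is identical (up to sign) to one with m − k, so the extra regressor columns duplicate existing ones and the design matrix becomes singular. A quick check:

```python
import numpy as np

# For monthly data (m = 12) and integer t, k = 7 aliases k = 12 - 7 = 5.
m = 12
t = np.arange(1, 101)

s7 = np.sin(2 * np.pi * 7 * t / m)
s5 = np.sin(2 * np.pi * 5 * t / m)
c7 = np.cos(2 * np.pi * 7 * t / m)
c5 = np.cos(2 * np.pi * 5 * t / m)
print(np.allclose(s7, -s5), np.allclose(c7, c5))   # identical columns up to sign
```

At k = m/2 = 6 the sine term is identically zero at integer t, so only the cosine term is usable there; beyond that, nothing new can be added.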
(b) You would select fit5 as it has the minimum AIC.
(c)
exp(log(35.343) ± 1.96 √0.000115) = (34.608, 36.094)
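This back-transformed interval is computed on the log scale and then exponentiated, which is a two-line calculation; the sketch below just reproduces the numbers given above.

```python
import math

# 95% interval built on the log scale, then back-transformed with exp().
point = 35.343          # back-transformed point forecast
var_log = 0.000115      # forecast variance on the log scale
half_width = 1.96 * math.sqrt(var_log)
lo = math.exp(math.log(point) - half_width)
hi = math.exp(math.log(point) + half_width)
print(round(lo, 3), round(hi, 3))   # 34.608 36.094
```

Note the back-transformed interval is not symmetric around the point forecast, which is expected for a log-scale model.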
(d)
yt = a + bt + Σ (k = 1 to 5) [ αk sin(2πkt/m) + βk cos(2πkt/m) ] + ηt
(1 − φB)(1 − ΦB12)ηt = (1 + θB)(1 + ΘB12)εt
where yt = log(daycare) and εt ∼ NID(0, σ2). (Give marks if they write out the model using variables S1-12 to C5-12.)
• a, b are the intercept and slope coefficients respectively, guiding the long-term linear trend of the forecasts.
• φ,θ,Φ,Θ are the ARIMA coefficients for the error term guiding the short-run dynamics around the long-term linear trend.
• αk , βk are the coefficients of the Fourier terms guiding the seasonal variation.
(e) Although the data is non-stationary (clearly trending), the inclusion of a linear trend has accounted for this, returning stationary residuals, which are in turn modelled by a stationary ARMA process. In fact the trend included in the model is exponential, given the log transformation on the left hand side (we will probably not expect them to comment on this).
(f)
• The plot and ACF show that the residuals are like white noise (with some significant spikes at very long lags). The LB test rejects the null of jointly zero autocorrelations for the first 24 lags of the residuals (probably due to the spikes at lags 18 and 21). However, the significant spikes are very small, so they will have almost no effect on the point forecasts or prediction intervals.
• The histogram shows that the residuals are close to normal, although the tails are a little longer. The prediction intervals may be affected slightly, but probably not much.
• The time plot shows that the residuals have mean zero and constant variance.
— END OF SECTION E —
[Total: 20 marks]