
ECON3350/7350 Univariate Time Series – II
Eric Eisenstat
The University of Queensland
Lecture 3

Estimation of Univariate Time Series Models
Recommended readings
Author     Title                                   Chapter     Call No
Enders     Applied Econometric Time Series, 4e     2           HB139 .E55 2015
Verbeek    A Guide to Modern Econometrics          8.7, 8.8    HB139 .V465 2012

Univariate Autoregressive Moving Average Models
ARMA(1, 0):  yt = a0 + a1yt−1 + εt
ARMA(0, 1):  yt = a0 + εt + β1εt−1
ARMA(1, 1):  yt = a0 + a1yt−1 + εt + β1εt−1
ARMA(2, 2):  yt = a0 + a1yt−1 + a2yt−2 + εt + β1εt−1 + β2εt−2
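As a quick illustration (not from the original slides), the sketch below simulates an ARMA(1, 1) process directly from its recursion; the parameter values a0 = 0.5, a1 = 0.7, β1 = 0.3 and the sample size are arbitrary choices for demonstration.

```python
import numpy as np

# Minimal sketch: simulate y_t = a0 + a1*y_{t-1} + eps_t + b1*eps_{t-1}
# with illustrative (assumed) parameter values.
rng = np.random.default_rng(0)
T, a0, a1, b1, sigma = 200, 0.5, 0.7, 0.3, 1.0

eps = rng.normal(0.0, sigma, size=T)
y = np.zeros(T)
y[0] = a0 / (1 - a1) + eps[0]          # start near the unconditional mean
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + eps[t] + b1 * eps[t - 1]
```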

ACF and PACF
The ACF and PACF can help us identify the model structure.
Model         ACF                  PACF
ARMA(1, 0)    Decays to zero       One non-zero peak
ARMA(0, 1)    One non-zero peak    Decays to zero
ARMA(1, 1)    Decays to zero       Decays to zero
ARMA(2, 0)    Decays to zero       Two non-zero peaks
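These identification patterns can be checked numerically. A minimal sketch, assuming statsmodels is available and reusing the simulated series y from the previous snippet (any observed series works the same way):

```python
from statsmodels.tsa.stattools import acf, pacf

# Sample ACF and PACF up to 20 lags (the lag length is an arbitrary choice).
sample_acf = acf(y, nlags=20)
sample_pacf = pacf(y, nlags=20)

# For an AR(1)-type series the ACF decays geometrically while the PACF
# has a single large spike at lag 1.
print(sample_acf[:5].round(3), sample_pacf[:5].round(3))
```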

Coles Myers Dividend Yield Series, 1983Q3–2003Q3
[Figure: time-series plot of DIVYIELD, quarterly, 1984–2003; values range roughly between 1 and 8.]
Does the plot of the series show any apparent trend?

Model Identification
Correlogram of DIVYIELD
Sample: 1983Q3 2003Q3   Included observations: 81

Lag     AC      PAC     Q-Stat   Prob
  1   0.818   0.818    56.284   0.000
  2   0.662  -0.023    93.580   0.000
  3   0.548   0.038    119.44   0.000
  4   0.433  -0.062    135.85   0.000
  5   0.345   0.010    146.35   0.000
  6   0.195  -0.245    149.76   0.000
  7   0.070  -0.043    150.21   0.000
  8  -0.029  -0.068    150.29   0.000
  9  -0.065   0.126    150.68   0.000
 10  -0.095  -0.046    151.54   0.000
 11  -0.204  -0.237    155.55   0.000
 12  -0.280  -0.056    163.21   0.000
 13  -0.324  -0.029    173.57   0.000
 14  -0.360  -0.103    186.59   0.000
 15  -0.323   0.155    197.20   0.000
 16  -0.297   0.032    206.31   0.000
 17  -0.246   0.100    212.64   0.000
 18  -0.210  -0.108    217.33   0.000
 19  -0.155   0.017    219.93   0.000
 20  -0.088  -0.031    220.77   0.000
 21  -0.023   0.124    220.83   0.000
 22   0.040  -0.018    221.02   0.000
 23   0.028  -0.131    221.11   0.000
 24   0.001  -0.114    221.11   0.000

The Q-Statistic
White noise has zero autocorrelation.
Test if a process is white noise using the Ljung-Box (1978) Q-statistic.
A joint test that the first K autocorrelations (ρk = γk/γ0) are all zero:
H0: ρ1 = ρ2 = · · · = ρK = 0 versus H1: ρk ≠ 0 for at least one k ≤ K.
The test statistic is computed as

QK = T(T + 2) Σ_{k=1}^{K} rk² / (T − k) ∼ χ²_K,

where rk is the sample autocorrelation at lag k, and K is the number of autocorrelation lags being tested.
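A hand-rolled version of the statistic, following the formula above (a sketch assuming numpy and scipy are available; packages such as statsmodels also implement this test):

```python
import numpy as np
from scipy.stats import chi2

def ljung_box_q(y, K):
    """Ljung-Box Q-statistic for the first K sample autocorrelations."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    yd = y - y.mean()
    gamma0 = np.sum(yd ** 2) / T
    # sample autocorrelations r_1, ..., r_K
    r = np.array([np.sum(yd[k:] * yd[:-k]) / T / gamma0 for k in range(1, K + 1)])
    q = T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, K + 1)))
    p_value = chi2.sf(q, df=K)       # H0: rho_1 = ... = rho_K = 0
    return q, p_value
```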

Model Identification for the Coles Myers Dividend Yield Data
The Q-statistic rejects the null at all values K = 1, . . . , 24 (and beyond).
The SACF is decaying.
The SPACF has a single non-zero peak.
What is the most suitable model suggested?
Good practice: fit a range of models around what appears to be “most suitable”.

Estimation and Diagnostics
Recall that a pure AR model (i.e. with no MA lags, or equivalently ARMA(p, 0)) can be estimated using OLS.
Including MA lags requires either Non-linear Least Squares (NLLS) or Maximum Likelihood (MLE).
NLLS chooses parameters a0, . . . , ap, β1, . . . , βq, σ²ε to minimize the non-linear sum of squares.
MLE chooses parameters a0, . . . , ap, β1, . . . , βq, σ²ε to maximize the likelihood.
A likelihood is the joint density of the observed data (e.g. y1, . . . , yT ) as a function of the model parameters.
Assuming a distribution for the errors implies a likelihood for the data.
See Appendix 2.1 of Enders for the form of the likelihood (and log-likelihood) of the ARMA(p, q).
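A sketch of the MLE route using statsmodels (assuming a recent version); it reuses the simulated series y from the earlier snippet, and treats ARIMA(p, 0, q) as an ARMA(p, q). Substituting the dividend yield series would produce output analogous to the EViews panels on the next slide.

```python
from statsmodels.tsa.arima.model import ARIMA

# Fit ARMA(1,0) and ARMA(0,1) by (Gaussian) maximum likelihood.
# `y` is the simulated series from the earlier sketch; replace it with
# the observed series of interest.
ar1_fit = ARIMA(y, order=(1, 0, 0)).fit()   # ARMA(1, 0)
ma1_fit = ARIMA(y, order=(0, 0, 1)).fit()   # ARMA(0, 1)

print(ar1_fit.summary())
print(ma1_fit.summary())
```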

Estimation and Diagnostics
AR(1)

Dependent Variable: DIVYIELD
Method: ARMA Maximum Likelihood (BFGS)
Sample: 1983Q3 2003Q3
Included observations: 81
Convergence achieved after 5 iterations
Coefficient covariance computed using outer product of gradients

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            3.916264      0.445945     8.781954      0.0000
AR(1)        0.850219      0.072178     11.77952      0.0000
SIGMASQ      0.434378      0.051893     8.370618      0.0000

R-squared             0.705297     Mean dependent var      3.927654
Adjusted R-squared    0.697741     S.D. dependent var      1.221629
S.E. of regression    0.671628     Akaike info criterion   2.093954
Sum squared resid     35.18461     Schwarz criterion       2.182637
Log likelihood       -81.80513     Hannan-Quinn criter.    2.129535
F-statistic           93.33682     Durbin-Watson stat      1.917667
Prob(F-statistic)     0.000000
Inverted AR Roots     .85

MA(1)

Dependent Variable: DIVYIELD
Method: ARMA Maximum Likelihood (BFGS)
Sample: 1983Q3 2003Q3
Included observations: 81
Convergence achieved after 7 iterations
Coefficient covariance computed using outer product of gradients

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            3.932779      0.166361     23.63996      0.0000
MA(1)        0.790423      0.077826     10.15624      0.0000
SIGMASQ      0.690675      0.102480     6.739628      0.0000

R-squared             0.531414     Mean dependent var      3.927654
Adjusted R-squared    0.519398     S.D. dependent var      1.221629
S.E. of regression    0.846900     Akaike info criterion   2.553966
Sum squared resid     55.94466     Schwarz criterion       2.642650
Log likelihood       -100.4356     Hannan-Quinn criter.    2.589547
F-statistic           44.22903     Durbin-Watson stat      1.324738
Prob(F-statistic)     0.000000
Inverted MA Roots     -.79

Criteria for Model Selection
We can use information criteria to choose amongst models in a more formal way.
Two commonly used information criteria are: Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC, or sometimes SC, SBC, SBIC).
All information criteria quantify a trade-off between goodness of fit and over-parameterization.
When comparing models using information criteria, lower is better. The trade-off is fit versus parsimony.
Both sum the log of the estimated residual variance (a measure of fit) and a penalty that increases with the number of free parameters.
Both AIC and BIC are likelihood based:

AIC = ln σ̂²ε + 2(p + q + 1)/T,        BIC = ln σ̂²ε + ln(T)·(p + q + 1)/T.
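These formulas can be evaluated directly from a model's residuals. A small sketch of the definitions above; note that packaged values (such as the EViews output on the previous slide) are computed from the full log-likelihood and so differ by constants, but model comparisons work the same way.

```python
import numpy as np

def info_criteria(residuals, n_params):
    """AIC and BIC as defined above, with n_params = p + q + 1."""
    resid = np.asarray(residuals, dtype=float)
    T = len(resid)
    sigma2_hat = np.sum(resid ** 2) / T
    aic = np.log(sigma2_hat) + 2 * n_params / T
    bic = np.log(sigma2_hat) + np.log(T) * n_params / T
    return aic, bic

# e.g. for the ARMA(1, 0) fitted earlier: info_criteria(ar1_fit.resid, n_params=1 + 0 + 1)
```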

Model Identification for the Coles Myers Dividend Yield Data
The AIC is 2.554 for MA(1) and 2.094 for AR(1). Which is better?
The BIC is 2.642 for MA(1) and 2.182 for AR(1). Which is better?
Do the AIC and BIC agree on the preferred specification? How does this compare to what was suggested by the SACF and SPACF?

Analyzing the Residuals
Correlogram of the AR(1) residuals (sample 1983Q3 2003Q3, 81 observations; Q-statistic probabilities adjusted for 1 ARMA term):

Lag     AC      PAC     Q-Stat   Prob
  1   0.024   0.024    0.0495
  2  -0.060  -0.061    0.3578   0.550
  3   0.037   0.040    0.4742   0.789
  4   0.038   0.033    0.6029   0.896
  5   0.179   0.183    3.4410   0.487
  6  -0.094  -0.104    4.2337   0.516
  7  -0.061  -0.037    4.5723   0.600
  8  -0.166  -0.203    7.1188   0.417
  9  -0.029  -0.028    7.1990   0.515
 10   0.163   0.131    9.7296   0.373
 11  -0.090  -0.042    10.511   0.397
 12  -0.122  -0.084    11.955   0.367
 13  -0.012   0.023    11.970   0.448
 14  -0.144  -0.209    14.046   0.371
 15  -0.010  -0.061    14.055   0.446
 16  -0.107  -0.113    15.233   0.435
 17   0.020   0.079    15.274   0.505
 18  -0.073  -0.055    15.836   0.535
 19   0.006   0.081    15.841   0.604
 20   0.046  -0.066    16.075   0.652
 21  -0.033   0.004    16.194   0.705
 22   0.224   0.169    21.917   0.404
 23   0.056   0.040    22.286   0.443
 24   0.133   0.175    24.361   0.384

Correlogram of the MA(1) residuals (sample 1983Q3 2003Q3, 81 observations; Q-statistic probabilities adjusted for 1 ARMA term):

Lag     AC      PAC     Q-Stat   Prob
  1   0.318   0.318    8.5070
  2   0.582   0.535    37.340   0.000
  3   0.289   0.050    44.524   0.000
  4   0.316  -0.065    53.262   0.000
  5   0.246   0.049    58.612   0.000
  6   0.101  -0.142    59.522   0.000
  7   0.075  -0.130    60.034   0.000
  8  -0.060  -0.105    60.367   0.000
  9  -0.026   0.010    60.432   0.000
 10  -0.063   0.078    60.812   0.000
 11  -0.109  -0.041    61.947   0.000
 12  -0.236  -0.259    67.389   0.000
 13  -0.139   0.032    69.301   0.000
 14  -0.327  -0.146    80.017   0.000
 15  -0.136   0.026    81.910   0.000
 16  -0.263   0.064    89.041   0.000
 17  -0.105   0.121    90.197   0.000
 18  -0.168   0.007    93.214   0.000
 19  -0.105  -0.062    94.411   0.000
 20  -0.013   0.035    94.431   0.000
 21  -0.080  -0.054    95.145   0.000
 22   0.126   0.145    96.949   0.000
 23  -0.054  -0.077    97.285   0.000
 24   0.077  -0.085    97.980   0.000

Note: when analyzing residuals of an ARMA(p, q), the Q-statistic for K autocorrelations has distribution χ²_{K−p−q}.
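statsmodels exposes this residual test; a sketch assuming ar1_fit from the earlier estimation snippet, where model_df = p + q produces the χ²(K − p − q) adjustment noted above (exact argument names may vary slightly across statsmodels versions):

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Ljung-Box Q-statistics for the AR(1) residuals at lags 1..24,
# with degrees of freedom adjusted by model_df = p + q = 1.
print(acorr_ljungbox(ar1_fit.resid, lags=24, model_df=1))
```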

Inference With Time-Series Data
What can we do with an estimated ARMA(p, q) for a sample y1, . . . , yT?
Forecast: predict future values yT+1, . . . , yT+h.
Impulse response functions: analyze the response of variables to unforeseeable shocks.
More possibilities arise for multivariate dynamic systems.

Lag Operator Notation
To discuss forecasting and impulse response functions, it helps to define some notation.
Definition
The lag operator L applied to a stochastic process {yt} transforms a realisation at time t into a realisation at time t − 1, i.e.
yt−1 = Lyt.
This helps us write polynomials in the lag operator: a(L) = 1 − a1L − · · · − apLp,
β(L) = 1 + β1L + · · · + βqLq.
Then, the ARMA(p, q) can be concisely expressed as a(L)yt = β(L)εt.

AR and MA representations of the ARMA(p, q)
If the ARMA(p, q) is stationary, then a(L) can be inverted to produce the pure MA representation yt = θ(L)εt, where θ(L) = β(L)/a(L).
If the ARMA(p, q) is invertible, then β(L) can be inverted to produce the pure AR representation φ(L)yt = εt, where φ(L) = a(L)/β(L).
If the ARMA(p, q) is stationary and invertible, then we will also refer to yt = θ(L)εt as the Wold representation.
Every stationary time series has a Wold representation!
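The inversion θ(L) = β(L)/a(L) can be carried out numerically. A sketch using statsmodels' arma2ma helper (an assumption about tooling; the convention is to pass the full lag polynomials, i.e. ar = [1, −a1, …] and ma = [1, β1, …]), here for an illustrative ARMA(1, 1) with a1 = 0.7 and β1 = 0.3:

```python
from statsmodels.tsa.arima_process import arma2ma

# First 10 MA(infinity) / Wold weights theta_0, theta_1, ... of an ARMA(1,1).
theta = arma2ma([1, -0.7], [1, 0.3], 10)
print(theta)   # theta_0 = 1, theta_1 = a1 + b1 = 1.0, then theta_j = a1 * theta_{j-1}
```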

Forecasting
Recall that given observations y1, . . . , yT, the optimal predictor of a future value yT+h is the conditional expectation ŷT+h ≡ E(yT+h | y1, . . . , yT).
Given parameter estimates of an (invertible) ARMA(p, q), we can estimate residuals {ε̂t} using ε̂t = φ(L)yt, and generate point forecasts recursively:

ŷT+1 = a0 + a1yT + · · · + apyT−p+1 + β1ε̂T + · · · + βqε̂T−q+1,
ŷT+2 = a0 + a1ŷT+1 + · · · + apyT−p+2 + β2ε̂T + · · · + βqε̂T−q+2,
⋮
ŷT+q+1 = a0 + a1ŷT+q + · · · + apyT−p+q+1,          if p > q,
⋮
ŷT+h = a0 + a1ŷT+h−1 + · · · + apŷT+h−p,            for h > max{p, q}.
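A sketch of this recursion for the ARMA(1, 1) case (function and variable names are illustrative; fitted-model objects, e.g. in statsmodels, provide the same thing through their forecast methods):

```python
import numpy as np

def arma11_forecasts(y, eps_hat, a0, a1, b1, h):
    """Point forecasts y_hat_{T+1}, ..., y_hat_{T+h} for an ARMA(1,1),
    given the observed series y and estimated residuals eps_hat."""
    forecasts = np.zeros(h)
    y_prev, e_prev = y[-1], eps_hat[-1]
    for j in range(h):
        forecasts[j] = a0 + a1 * y_prev + b1 * e_prev
        y_prev, e_prev = forecasts[j], 0.0   # future errors are forecast as zero
    return forecasts
```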

Forecast Uncertainty
But point forecasts on their own are difficult to interpret.
To make forecasts useful, we need to quantify uncertainty. Two sources of uncertainty:
Uncertainty due to the unknown future, i.e. future errors εT+1, . . . , εT+h.
Uncertainty due to parameter estimation, i.e. using â0, . . . , âp, β̂1, . . . , β̂q instead of the “true” a0, . . . , ap, β1, . . . , βq.

If all parameters were known, the forecast error variance would be:

Var(yT+h − ŷT+h) = σ²ε (1 + θ1² + · · · + θh−1²).

Example: for an AR(1),

Var(yT+h − ŷT+h) = σ²ε (1 − a1^(2h)) / (1 − a1²).

Question: what happens as h → ∞?

Impulse Response Functions
Policy is often concerned with how a variable responds to a shock.
If we have a positive, unanticipated shock to GDP, how will GDP respond in the future?
We consider the impulse response path: the difference between the expected path with the shock and the expected path without the shock.
The impulse response function (IRF) is the partial derivative ∂yt+h/∂εt for all h ≥ 0. In a linear model, this is equivalent to

E(yt+h | εt = 1, εt−1, εt−2, . . .) − E(yt+h | εt = 0, εt−1, εt−2, . . .) = θh.

Estimating Impulse Response Functions
Given an estimated (stationary) ARMA(p, q), the IRF can be obtained by either
iteratively computing ŷt+h for εt = 1 and iteratively computing ȳt+h for εt = 0, then setting IR(h) = ŷt+h − ȳt+h, or
by computing θ(L) = β(L)/a(L).
Note that for IRFs deterministic terms (such as a0) do not matter, so we can ignore them.
θ(L) can be derived from a(L) and β(L) using the method of
undetermined coefficients (see Enders pp. 33–42).
Uncertainty in IRFs is only due to uncertainty in parameter estimation.
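A sketch of the first approach (the difference of two expected paths), written for an ARMA(2, 1) so that it can be checked against the worked example on the following slides; all names are illustrative:

```python
import numpy as np

def irf_by_paths(a0, a1, a2, b1, H):
    """IRF as the expected path with eps_t = 1 minus the path with eps_t = 0
    (all other shocks set to zero); a0 and the common history cancel."""
    def expected_path(shock):
        y = np.zeros(H + 3)                     # y[0], y[1]: (zero) pre-shock history
        for h in range(2, H + 3):
            eps = shock if h == 2 else 0.0      # the shock hits at index 2 (time t)
            eps_lag = shock if h == 3 else 0.0  # and enters the MA term once, at t + 1
            y[h] = a0 + a1 * y[h - 1] + a2 * y[h - 2] + eps + b1 * eps_lag
        return y[2:]
    return expected_path(1.0) - expected_path(0.0)

# e.g. irf_by_paths(0.0, 1.2, -0.7, 0.3, 5) -> [1, 1.5, 1.1, 0.27, ...]
```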

Example: ARMA(2,1)
Suppose we estimate the model
yt = a1yt−1 + a2yt−2 + εt + β1εt−1,
yt = 1.2yt−1 − 0.7yt−2 + εt + 0.3εt−1.

In this case, a(L) = 1 − 1.2L + 0.7L² and β(L) = 1 + 0.3L. We wish to find

θ(L) = (1 + 0.3L)/(1 − 1.2L + 0.7L²) = 1 + θ1L + θ2L² + · · · .

Method of Undetermined Coefficients for the ARMA(2, 1)
(1 + 0.3L)/(1 − 1.2L + 0.7L²) = 1 + θ1L + θ2L² + · · · ,
1 + 0.3L = (1 − 1.2L + 0.7L²)(1 + θ1L + θ2L² + · · · ).

Expanding the right-hand side,

1 + 0.3L = 1 + θ1L + θ2L² + θ3L³ + · · ·
             − 1.2L − 1.2θ1L² − 1.2θ2L³ − · · ·
             + 0.7L² + 0.7θ1L³ + 0.7θ2L⁴ + · · · ,

which leads to (matching coefficients on each power of L):

L⁰:  1 = 1,
L¹:  0.3 = θ1 − 1.2              ⇒  θ1 = 1.5,
L²:  0 = θ2 − 1.2θ1 + 0.7        ⇒  θ2 = 1.1,
L³:  0 = θ3 − 1.2θ2 + 0.7θ1      ⇒  θ3 = 0.27,
⋮
Lᵏ:  0 = θk − 1.2θk−1 + 0.7θk−2  ⇒  θk = 1.2θk−1 − 0.7θk−2.
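The same sequence can be generated directly from the recursion (a small check, not part of the original slides); plotting theta over h = 0, . . . , 40 reproduces the IRF shown on the next slide.

```python
import numpy as np

# theta_0 = 1, theta_1 = a1 + beta1 = 1.5, then theta_k = 1.2*theta_{k-1} - 0.7*theta_{k-2}.
H = 40
theta = np.zeros(H + 1)
theta[0], theta[1] = 1.0, 1.2 + 0.3
for k in range(2, H + 1):
    theta[k] = 1.2 * theta[k - 1] - 0.7 * theta[k - 2]

print(theta[:4].round(2))   # [1.   1.5  1.1  0.27] -- matches the slide
```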

IRF to a shock εt = 1

[Figure: impulse responses θh for horizons h = 0 to 40 of the estimated ARMA(2,1); the response peaks at about 1.5, oscillates, and decays toward zero.]