
ECON 3350/7350: Applied Econometrics for Macroeconomics and Finance
Final Exam Review
1 Overview
The final exam is comprehensive: any topic covered in lectures, tutorials, or this review may be assessed. However, there will be no other material on the exam (e.g., anything that is covered in the textbook but not explicitly covered in class will not be assessed). The exam itself consists of two sections:
1. Short answers: 5 Questions, 25 Points Total;
2. Problem solving: 3 Questions, 75 Points Total.
As previously stated, you may expect output from an empirical application, but there will be no explicit reference to Stata or any other particular software.
2 Topics
The exam focuses on assessing your ability to use time-series econometric concepts in setting up appropriate models, estimating them with data, and applying/interpreting the estimation results. You are expected to:
• use ACFs and PACFs as a guide in building appropriate ARMA specifications for a univariate time-series;
• use the Q-test to formally analyse the autocorrelation in a time-series, particularly in testing whether residuals are white noise;
• use information criteria (e.g., AIC, BIC) as a tool for model selection;
• use time-series models such as the ARMA to generate forecasts;
• use time-series models such as the ARDL to estimate dynamic effects;
• understand the concept of stationarity and how it relates to
1. deterministic trends vs. stochastic trends,
2. de-trending trend-stationary data using simple regression techniques,
3. testing for the presence of stochastic and/or deterministic trends using the (Augmented) Dickey-Fuller tests,
4. order of integration,
5. co-integration,
6. heteroscedasticity;
• understand the concept of cointegration and how it relates to
1. common stochastic trends present in two or more time-series variables,
2. cointegrating vectors and applying them to remove the common stochastic trend by taking linear combinations of variables integrated of the same order,
3. the relationship between the cointegration rank and the number of common stochastic trends present,
4. the concept of a long-run equilibrium between two or more cointegrated variables,
5. spurious regressions vs. super-consistent ordinary least squares estimation,
6. testing for co-integration using the Engle-Granger test (an ADF test on the residuals with adjusted critical values),
7. analysing and testing for co-integration using the ECM representation of the ARDL,
8. modelling multivariate time-series using the VECM representation of the VAR;
• understand the concept of heteroscedasticity and how it relates to
1. modelling time-varying volatilities in time-series data using alternative approaches such as the (G)ARCH, T-GARCH, E-GARCH, GARCH-M and SV,
2. testing for heteroscedasticity using a general approach such as the LM test,
3. extending an ARMA or ARDL to allow for heteroscedastic errors,
4. testing for the presence of leverage effects using either a generic test or a specific model (E-GARCH, T-GARCH),
5. understanding the economic meaning of different forms of heteroscedasticity, including leverage effects, etc.,
6. using models of time-varying volatilities to generate volatility forecasts;
• use multivariate time-series models such as VARs and VECMs to construct and analyse systems of time-series variables;
• understand the relationship between reduced-form VARs and structural VARs and how it relates to
1. systems of ARDL equations,
2. imposing identifying restrictions using methods such as the Cholesky decomposition of the reduced-form covariance matrix,
3. generating forecasts vs. obtaining IRFs and FEVDs,
4. stability of the VAR and existence of (finite) IRFs/FEVDs,
5. Granger causality and the relevant hypothesis tests;
• use the VAR(1) companion form of a VAR(p) to
1. determine the stability of the system (see the sketch after this list),
2. compute IRFs and FEVDs;
• use the VECM representation of the VAR to
1. analyse systems of integrated variables,
2. analyse cointegrating relationships among the variables by applying the Granger Representation Theorem,
3. estimate IRFs and FEVDs for I(1) variables and understand the difference between incremental effects and cumulative effects.
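As referenced in the companion-form item above, the stability check can be illustrated with a minimal Python sketch (the coefficient matrices below are made-up numbers, not taken from any course dataset): stack the VAR(p) coefficient matrices into the VAR(1) companion matrix and verify that every eigenvalue lies strictly inside the unit circle.

import numpy as np

def var_is_stable(A_list):
    # Build the np x np companion matrix of a VAR(p): the first block row holds
    # A_1, ..., A_p; identity blocks sit on the block subdiagonal; zeros elsewhere.
    n = A_list[0].shape[0]
    p = len(A_list)
    companion = np.zeros((n * p, n * p))
    companion[:n, :] = np.hstack(A_list)
    if p > 1:
        companion[n:, :-n] = np.eye(n * (p - 1))
    # The VAR is stable if every eigenvalue of the companion matrix has modulus < 1.
    return np.all(np.abs(np.linalg.eigvals(companion)) < 1)

# Illustrative VAR(2) in two variables:
A1 = np.array([[0.5, 0.1], [0.2, 0.4]])
A2 = np.array([[0.1, 0.0], [0.0, 0.1]])
print(var_is_stable([A1, A2]))   # True: all eigenvalues lie inside the unit circle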
3 Examples
1. Suppose we determine that xt and yt are both I(1). What are the issues with running the regression:
yt = β0 + β1xt + εt?
How can this regression be used to test for co-integration between xt and yt?
If xt and yt are truly co-integrated, then εt is I(0). This has two implications:
(a) the OLS estimator of β1 is super-consistent (i.e., it converges to β1 at rate T rather than the usual √T);
(b) the residual does not contain a unit root, so we can use the ADF test (with minor adjustments to the critical values) on the estimated residual series, whereby rejecting the null confirms co-integration (the Engle-Granger test).
On the contrary, if xt and yt are not co-integrated, then
(a) the OLS estimator of β1 is not consistent and the regression yields spurious results;
(b) the residual is I(1), and failing to reject the null with the ADF test on the estimated residual series is evidence that xt and yt are potentially not cointegrated.
On the other hand, we can always estimate the ECM form of the regression, which does not suffer from the risk of producing spurious results. For example, suppose we obtain:

\Delta y_t = \underset{(1.04)}{3.45} - \underset{(-3.94)}{0.56}\left(y_{t-1} - \underset{(-14.23)}{0.752}\, x_{t-1}\right) - \underset{(-7.28)}{0.364}\,\Delta x_t + \hat{\varepsilon}_t,

with t-statistics reported in parentheses. These estimates are, in general, reliable. In addition, co-integration is captured directly by the ECM term: in this case, we reject the null of no co-integration since both the long-run multiplier and the speed-of-adjustment coefficient are statistically significant.
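To make the two-step procedure above concrete, here is a minimal Python sketch using statsmodels (the simulated data, seed, and trend specification are illustrative assumptions, not part of the course materials). It runs the levels regression by OLS and then applies the Engle-Granger residual-based test, which uses ADF-type critical values adjusted for the fact that the residuals are estimated.

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))        # x_t: a random walk, i.e. I(1)
y = 1.0 + 0.75 * x + rng.normal(size=T)  # y_t shares x_t's stochastic trend (cointegrated by construction)

# Step 1: levels regression y_t = b0 + b1 x_t + e_t; under cointegration the
# OLS estimate of b1 is super-consistent.
levels = sm.OLS(y, sm.add_constant(x)).fit()
residuals = levels.resid

# Step 2: Engle-Granger test, i.e. an ADF-type test on the estimated residuals
# with adjusted (MacKinnon) critical values; coint() bundles both steps.
stat, pvalue, crit_values = coint(y, x, trend='c')
print(levels.params, stat, pvalue)
# A small p-value (rejecting the null of no cointegration) supports cointegration.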
2. Consider the following model for a univariate time-series {y_t}:

\Delta y_t = \mu + \varepsilon_t,
\varepsilon_t = \nu_t \sqrt{h_t},
h_t = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \alpha_2 \varepsilon_{t-2}^2 + \beta_1 h_{t-1} + \beta_2 h_{t-2}.

What type of model is this? Is it stationary?
You should recognise this as the GARCH(2, 2) specification for conditional heteroscedasticity. It is not stationary, unless α1 = α2 = β1 = β2 = 0. If any one of the parameters is not zero, then the error variance changes over time and, therefore, the time-series is not stationary by definition (even though its conditional mean is just a fixed intercept).
How do we determine the lag orders to be (2, 2) rather than something else? Generally,
we follow similar principles to determining lag orders in ARMA models and compare
information criteria (e.g. AIC, BIC). Similarly, we can use techniques established for
ARMA models to obtain forecasts of volatility at one or more steps ahead. For example, suppose we have a sample of T observations, y1, . . . , yT, which we use to estimate the parameters \hat{\mu}, \hat{\alpha}_0, \hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}_1 and \hat{\beta}_2, along with the volatilities \hat{h}_1, . . . , \hat{h}_T and residuals \hat{\varepsilon}_1, . . . , \hat{\varepsilon}_T. Using this information, we obtain:

\hat{h}_{T+1} = \hat{\alpha}_0 + \hat{\alpha}_1 \hat{\varepsilon}_T^2 + \hat{\alpha}_2 \hat{\varepsilon}_{T-1}^2 + \hat{\beta}_1 \hat{h}_T + \hat{\beta}_2 \hat{h}_{T-1},
\hat{h}_{T+2} = \hat{\alpha}_0 + \hat{\alpha}_2 \hat{\varepsilon}_T^2 + (\hat{\alpha}_1 + \hat{\beta}_1) \hat{h}_{T+1} + \hat{\beta}_2 \hat{h}_T,
\hat{h}_{T+3} = \hat{\alpha}_0 + (\hat{\alpha}_1 + \hat{\beta}_1) \hat{h}_{T+2} + (\hat{\alpha}_2 + \hat{\beta}_2) \hat{h}_{T+1},

and so on.
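A minimal Python sketch of this forecasting recursion is given below (the parameter values and the last two residuals and fitted volatilities are made-up numbers standing in for the estimates). Beyond one step ahead, future squared errors are replaced by their conditional expectations, which is exactly how the T+2 and T+3 expressions above arise.

import numpy as np

def garch22_volatility_forecast(a0, a1, a2, b1, b2, eps_last2, h_last2, horizon):
    # eps_last2 = (eps_{T-1}, eps_T) and h_last2 = (h_{T-1}, h_T) from the estimation sample.
    eps2 = [eps_last2[0] ** 2, eps_last2[1] ** 2]
    h = list(h_last2)
    forecasts = []
    for _ in range(horizon):
        h_next = a0 + a1 * eps2[-1] + a2 * eps2[-2] + b1 * h[-1] + b2 * h[-2]
        forecasts.append(h_next)
        # Beyond the sample, E[eps_{T+j}^2 | information at T] = h_{T+j},
        # so the forecast itself feeds back into both the ARCH and GARCH terms.
        eps2.append(h_next)
        h.append(h_next)
    return np.array(forecasts)

# Illustrative values only:
print(garch22_volatility_forecast(a0=0.1, a1=0.08, a2=0.05, b1=0.5, b2=0.2,
                                  eps_last2=(-0.3, 0.6), h_last2=(0.9, 1.1),
                                  horizon=3))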
3. Consider the following n-dimensional SVAR(p):
Bxt =γ0 +Γ1xt−1 +···+Γpxt−p +εt, εt ∼N(0,Σ).
If we have obtained estimates of B, γ0 and Γ1, . . . , Γp, how do we compute the IRFs?
Recall that if p = 1, the computation is straightforward. Specifically, re-arrange the SVAR to:

xt = B−1γ0 + A1xt−1 + B−1εt, εt ∼ N(0, Σ),
where A1 = B−1Γ1 (note the connection to the reduced form, which should also suggest a connection between IRFs and forecasting). Then, the response of xi,t to an impulse in shock εj,t at horizon h is the (i, j) element of the n × n matrix Φh = A1^h B−1. Hence, to compute this impulse response, we raise the matrix A1 to the power h, multiply by B−1, and pick out the appropriate element of the resulting matrix.
When p > 1, the procedure is nearly identical; all we need to do is compute Aj = B−1Γj for j = 1, . . . , p, then construct the VAR(1) companion form with:

\tilde{A}_1 =
\begin{bmatrix}
A_1 & \cdots & A_{p-1} & A_p \\
I_n &        &         & 0 \\
    & \ddots &         & \vdots \\
    &        & I_n     & 0
\end{bmatrix}.
Note that \tilde{A}_1 is np × np. To obtain impulse responses at horizon h, we first raise \tilde{A}_1 to the power h, i.e. compute

\tilde{\Phi}_h = \tilde{A}_1^h =
\begin{bmatrix}
\tilde{\Phi}_{1,1,h} & \cdots & \tilde{\Phi}_{1,n,h} & \tilde{\Phi}_{1,n+1,h} & \cdots & \tilde{\Phi}_{1,np,h} \\
\vdots & & \vdots & \vdots & & \vdots \\
\tilde{\Phi}_{n,1,h} & \cdots & \tilde{\Phi}_{n,n,h} & \tilde{\Phi}_{n,n+1,h} & \cdots & \tilde{\Phi}_{n,np,h} \\
\tilde{\Phi}_{n+1,1,h} & \cdots & \tilde{\Phi}_{n+1,n,h} & \tilde{\Phi}_{n+1,n+1,h} & \cdots & \tilde{\Phi}_{n+1,np,h} \\
\vdots & & \vdots & \vdots & & \vdots \\
\tilde{\Phi}_{np,1,h} & \cdots & \tilde{\Phi}_{np,n,h} & \tilde{\Phi}_{np,n+1,h} & \cdots & \tilde{\Phi}_{np,np,h}
\end{bmatrix}.
Next, we extract the upper-left n × n block, \tilde{\Phi}_{1:n,1:n,h}. Finally, we multiply by B−1 to obtain:

\Phi_h = \tilde{\Phi}_{1:n,1:n,h} B^{-1}.
Note that if B−1 is lower-triangular, then the nth column of Φh is the same as the nth column of \tilde{\Phi}_{1:n,1:n,h} (the vector formed from column n and rows 1 to n of \tilde{\Phi}_h). Hence, the last step is not necessary if one is only interested in the responses to shock εn,t.
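The companion-form IRF computation described above can be written out in a few lines of Python (a sketch under stated assumptions: the B and Gamma matrices below are illustrative numbers, and shocks are normalised to one unit).

import numpy as np

def svar_irf(B, Gammas, h):
    # Reduced-form lag matrices A_j = B^{-1} Gamma_j, j = 1, ..., p.
    n = B.shape[0]
    p = len(Gammas)
    B_inv = np.linalg.inv(B)
    A = [B_inv @ G for G in Gammas]
    # VAR(1) companion matrix (np x np): A_1, ..., A_p in the first block row,
    # identities on the block subdiagonal, zeros elsewhere.
    companion = np.zeros((n * p, n * p))
    companion[:n, :] = np.hstack(A)
    if p > 1:
        companion[n:, :-n] = np.eye(n * (p - 1))
    # Raise to the power h, keep the upper-left n x n block, post-multiply by B^{-1}.
    Phi_tilde_h = np.linalg.matrix_power(companion, h)
    return Phi_tilde_h[:n, :n] @ B_inv   # (i, j) element: response of x_{i,t+h} to shock eps_{j,t}

# Illustrative SVAR(2) with n = 2:
B = np.array([[1.0, 0.0], [0.5, 1.0]])
Gammas = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.1, 0.0], [0.0, 0.1]])]
print(svar_irf(B, Gammas, h=4))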