
ECON7350: Applied Econometrics for Macroeconomics and Finance
Tutorial 11: Multivariate Processes – I/II
At the end of this tutorial you should be able to:
• Use R to construct an adequate set of VAR(p) models;

• Use R to forecast one of the variables generated by a VAR(p) process;
• Derive a structural VAR from a reduced form VAR using the Cholesky factorisation;
• Use R to obtain inference on dynamic relationships from an identified SVAR;
• Use R to obtain inference on Granger causality.
Problems with Solutions
The data file money_dem.csv contains quarterly observations for the following variables from 1959Q1 to 2001Q1:
• RGDP: real US GDP;
• GDP: nominal GDP;
• M2: money supply;
• Tb3mo: three-month rate on US treasury bills.
Using this sample, we will work with the following variables:
• Log Real GDP: lrgdpt = ln RGDPt;
• GDP Deflator: pricet = GDPt/RGDPt;
• Log Real Money Supply: lrm2t = ln(M2t/pricet);
• Short-term Interest Rate: rst = tb3mot.
1. Consider forecasting lrgdpt using a VAR(p) model of the multivariate process
{xt}, where xt = (lrgdpt, lrm2t, rst)′.
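For reference, the reduced-form VAR(p) considered here can be written as

$$x_t = c + A_1 x_{t-1} + A_2 x_{t-2} + \cdots + A_p x_{t-p} + e_t, \qquad \operatorname{E}(e_t e_t') = \Sigma,$$

where c is a 3 x 1 vector of intercepts, each A_j is a 3 x 3 matrix of slope coefficients, and Σ is the 3 x 3 covariance matrix of the reduced-form errors.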
(a) Construct an adequate set of VAR(p) models.

Solution For this tutorial we need the following packages.
The package vars provides the functionality we will use to work with VAR(p) models. The package pracma provides the function perms, which generates all possible permutations of the elements of a given vector. This is used in Question 2 to examine all possible orderings of the variables in the VAR.
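As a quick illustration of the latter, calling perms on the index vector 1:3 returns a matrix in which each row is one of the 3! = 6 possible orderings of the three variables (the row order shown by pracma may vary, but all six permutations appear exactly once):

library(pracma)

# Each row of the result is one permutation of (1, 2, 3); these rows are later
# used to reorder the columns of x before estimating the VAR.
perms(1:3)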
Next, we load the data and generate required variables.
library(zoo)
library(vars)
library(pracma)
mydata <- read.delim("money_dem.csv", header = TRUE, sep = ",")
date <- as.yearqtr(mydata$DATE)
lrgdp <- log(mydata$RGDP)
price <- mydata$GDP / mydata$RGDP
lrm2 <- log(mydata$M2) - log(price)
rs <- mydata$TB3mo
x <- cbind(lrgdp, lrm2, rs)

To construct an adequate set of VAR(p) models for forecasting, we will consider VARs with p = 1, ..., 20. Note that the VAR function provided by the vars package does not allow p = 0 as an input. We will not worry about this, as the VAR(0) is a very special case (and generally not of interest in time-series applications).

VAR_est <- list()
ic_var <- matrix(nrow = 20, ncol = 3)
colnames(ic_var) <- c("p", "aic", "bic")
for (p in 1:20)
{
  VAR_est[[p]] <- VAR(x, p)
  ic_var[p,] <- c(p, AIC(VAR_est[[p]]), BIC(VAR_est[[p]]))
}
ic_aic_var <- ic_var[order(ic_var[, 2]),]
ic_bic_var <- ic_var[order(ic_var[, 3]),]

The AIC and BIC clearly agree on p = 2, 3, 4 as the top three specifications. The remaining specifications appear to be quite a bit inferior, so we proceed with an adequate set consisting of these three specifications. The serial.test function provides several tests for VAR residual autocorrelation. We will focus on the LM test by setting type = "BG", but other tests are just as applicable.

adq_set_var <- as.matrix(ic_var[2:4,])
adq_idx_var <- c(2:4)
nmods <- length(adq_idx_var)
for (i in 1:nmods)
{
  p <- adq_idx_var[i]
  print(paste0("Checking VAR(", p, ")"))
  print(serial.test(VAR_est[[p]], type = "BG"))
}

## [1] "Checking VAR(2)"
##  Breusch-Godfrey LM test
## data:  Residuals of VAR object VAR_est[[p]]
## Chi-squared = 96.62, df = 45, p-value = 1.239e-05
## [1] "Checking VAR(3)"
##  Breusch-Godfrey LM test
## data:  Residuals of VAR object VAR_est[[p]]
## Chi-squared = 88.246, df = 45, p-value = 0.0001246
## [1] "Checking VAR(4)"
##  Breusch-Godfrey LM test
## data:  Residuals of VAR object VAR_est[[p]]
## Chi-squared = 83.131, df = 45, p-value = 0.0004688

Unfortunately, estimated residual autocorrelation appears to be quite high for all three models. This is a concern, so we proceed by checking all 20 VARs we have estimated so far.
## [1] "Checking VAR(1)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 189.99, df = 45, p-value < 2.2e-16 nmods <- length(adq_idx_var) for (i in 1:nmods) p < adq_idx_var[i] print(paste0("Checking VAR(", p, ")")) print(serial.test(VAR_est[[p]], type = "BG")) for (p in 1:20) { print(paste0("Checking VAR(", p, ")")) print(serial.test(VAR_est[[p]], type = "BG")) ## [1] "Checking VAR(2)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 96.62, df = 45, p-value = 1.239e-05 ## [1] "Checking VAR(3)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 88.246, df = 45, p-value = 0.0001246 ## [1] "Checking VAR(4)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 83.131, df = 45, p-value = 0.0004688 ## [1] "Checking VAR(5)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 83.481, df = 45, p-value = 0.0004291 ## [1] "Checking VAR(6)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 66.097, df = 45, p-value = 0.02188 ## [1] "Checking VAR(7)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 74.501, df = 45, p-value = 0.003709 ## [1] "Checking VAR(8)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 60.642, df = 45, p-value = 0.05967 ## [1] "Checking VAR(9)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 60.593, df = 45, p-value = 0.06017 ## [1] "Checking VAR(10)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 62.07, df = 45, p-value = 0.04646 ## [1] "Checking VAR(11)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 58.862, df = 45, p-value = 0.08044 ## [1] "Checking VAR(12)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 59.21, df = 45, p-value = 0.07596 ## [1] "Checking VAR(13)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 58.196, df = 45, p-value = 0.08961 ## [1] "Checking VAR(14)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 47.223, df = 45, p-value = 0.3819 ## [1] "Checking VAR(15)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 51.487, df = 45, p-value = 0.2348 ## [1] "Checking VAR(16)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 42.506, df = 45, p-value = 0.5782 ## [1] "Checking VAR(17)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 49, df = 45, p-value = 0.3157 ## [1] "Checking VAR(18)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 46.546, df = 45, p-value = 0.4085 ## [1] "Checking VAR(19)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 60.162, df = 45, p-value = 0.06476 ## [1] "Checking VAR(20)" ## Breusch- M test ## data: Residuals of VAR object VAR_est[[p]] ## Chi-squared = 55.601, df = 45, p-value = 0.1337 White noise residuals are rejected for all specifications with p ≤ 7, which indicates that we need to consider higher lag orders. For p ≥ 8, both AIC and BIC clearly agree that lower p values are preferred. We will proceed with p = 8, 9, 10 as the adequate set. (b) How many intercept and slope coefficients need to be estimated for each VAR(p) in the adequate set? 
Solution We can compute the total number of coefficients using the simple formula k = n(1 + np): each of the n equations contains one intercept and np slope coefficients (p lags of each of the n variables), and here n = 3.

adq_set_var <- as.matrix(ic_var[8:10,])
adq_idx_var <- c(8:10)
nmods <- length(adq_idx_var)
for (i in 1:nmods)
{
  p <- adq_idx_var[i]
  print(paste0("VAR(", p, ") has ", 3 * (1 + 3 * p), " coefficients."))
}

## [1] "VAR(8) has 75 coefficients."
## [1] "VAR(9) has 84 coefficients."
## [1] "VAR(10) has 93 coefficients."

(c) Check the stability of each estimated VAR(p) model. How does this affect forecasting?

Solution The vars package provides a handy function roots to ascertain the stability of an estimated VAR(p). In fact, it generates estimates of the absolute values of the eigenvalues of the companion form. Unfortunately, it does not provide confidence intervals for the estimated eigenvalues. Warning: the vars package also has a function called stability, but this is not related to the roots of the characteristic equation (instead, it is concerned with structural breaks in the process).

nmods <- length(adq_idx_var)
for (i in 1:nmods)
{
  p <- adq_idx_var[i]
  print(paste0("VAR(", p, "): Maximum absolute eigenvalue is ",
               max(vars::roots(VAR_est[[p]]))))
}

## [1] "VAR(8): Maximum absolute eigenvalue is 1.00159063636763"
## [1] "VAR(9): Maximum absolute eigenvalue is 1.00063351477022"
## [1] "VAR(10): Maximum absolute eigenvalue is 1.00056830228862"

We find that the largest eigenvalue is approximately 1 in absolute value; that is, the smallest root is close to a unit root. We should be aware that forecasts will be less reliable at longer horizons. Do we want to consider imposing an explicit restriction of an exact unit root? We will explore this in Tutorial 12!

(d) Forecast lrgdpt for 12 quarters past the end of the sample. Plot the point estimates along with 95% predictive intervals. Interpret the results.

Solution We can use the predict function to generate forecasts; the function plot will then automatically produce 95% predictive intervals. Unfortunately, these intervals only account for forecast uncertainty, but not estimation uncertainty (i.e., parameter estimates are treated as given in deriving the predictive intervals).

VAR_fcst <- list()
hrz <- 12   # forecast horizon: 12 quarters, as specified in the question
xlim <- c(length(date) - 3 * hrz, length(date) + hrz)
ylim <- c(lrgdp[xlim[1]], max(lrgdp) + 0.2)
for (i in 1:nmods)
{
  p <- adq_idx_var[i]
  VAR_fcst[[i]] <- predict(VAR_est[[p]], n.ahead = hrz)
  plot(VAR_fcst[[i]], names = "lrgdp", xlim = xlim, ylim = ylim,
       main = "Forecast for Log Real GDP", xlab = "Horizon", ylab = "RRP")
}

[Figures: "Forecast for Log Real GDP" for each of the three specifications (p = 8, 9, 10), showing the recent sample path together with point forecasts and 95% predictive intervals.]

Forecasts for the three VAR specifications look qualitatively similar. Note also that the near unit root estimated for each VAR does not seem to be of any consequence for forecasts up to 12 quarters ahead: they are still quite accurate in the sense of having relatively small predictive intervals (keeping in mind that these do not take into account estimation uncertainty).

2. Analyse the dynamic relationships between lrgdpt, lrm2t and rst. For all VAR specifications relevant to this question, please use p = 8.

(a) Using the Cholesky decomposition, compute the IRFs for all the possible orderings of the system and study the responses. Are the responses sensitive to ordering? Choose the most reasonable ordering and explain your answer.

Solution We compare IRFs for the six orderings obtained from all possible permutations of xt.
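Before looping over orderings, it may help to see the Cholesky step explicitly. The following is a minimal sketch for a single ordering; it assumes the reduced-form residual covariance can be taken from the covres element returned by summary() in the vars package (the irf function used below performs an equivalent factorisation internally).

# Reduced-form residual covariance of the VAR(8), with x = (lrgdp, lrm2, rs)'
Sigma_hat <- summary(VAR_est[[8]])$covres

# chol() returns the upper-triangular factor R with t(R) %*% R = Sigma_hat,
# so its transpose is the lower-triangular impact matrix implied by this
# ordering: the first variable responds only to its own shock on impact,
# the second to the first two shocks, and the third to all three.
B0_inv <- t(chol(Sigma_hat))
B0_inv

# Sanity check: the impact matrix reproduces the residual covariance.
B0_inv %*% t(B0_inv) - Sigma_hat

Reordering the columns of x before estimation changes which variables are allowed to respond on impact, which is exactly what the loop over all six permutations below explores.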
To compare all six orderings in R, we employ the function perms from the pracma package and then compute the IRFs using the irf function within nested for loops.

orders <- perms(1:3)
vnames <- c("lrgdp", "lrm2", "rs")
for (i in 1:3)
{
  for (j in 1:3)
  {
    for (k in 1:nrow(orders))
    {
      title_i_j_k <- paste0("Response of ", vnames[i],
                            " to a shock in ", vnames[j],
                            "; x = (",
                            paste0(vnames[orders[k,]], collapse = ", "), ")'")
      irf_i_j_k <- irf(VAR(x[, orders[k,]], 8), n.ahead = 40,
                       response = vnames[i], impulse = vnames[j],
                       boot = TRUE)
      plot(irf_i_j_k, main = title_i_j_k)
    }
  }
}

[Figures: 54 IRF plots, one for each combination of response variable (lrgdp, lrm2, rs), shock variable (lrgdp, lrm2, rs) and Cholesky ordering of x (all six permutations), each plotted over 40 horizons with 95% bootstrap confidence intervals based on 100 runs.]

The following are some observations that stand out.

i. (lrgdpt, lrm2t, rst)' vs. (lrgdpt, rst, lrm2t)': We observe similar patterns in the impulse responses, but note that the response of interest rates to a change in money supply is fairly different across the two orderings. When money supply is ordered prior to interest rates, there is a significant contemporaneous response of interest rates to a change in money supply; no significant response is observed in the alternative case (interest rates ordered prior to money supply).

ii. (lrgdpt, lrm2t, rst)' vs. (lrm2t, rst, lrgdpt)': These two orderings exhibit