
4 Tools from time series analysis
In this section, we briefly review some tools from time series analysis that will be used to model financial time series. The main idea of time series analysis is to regard the observed time series (xt)t as a realization of some stochastic process (Xt)t. Since we only observe one realization of the process, some kind of statistical stationarity is required to enable estimation of the parameters. In time series analysis, a useful framework is that of second-order stationarity. We also review the ARMA models that are commonly used to model second-order stationary processes.
4.1 Basic definitions
A (discrete time) time series model is a stochastic process $(X_t)$, where $t = 0, 1, \dots$ or $t \in \mathbb{Z}$. Each $X_t$ is a real or vector-valued random variable, defined on a common probability space. Here, we suppose that the time increment is constant (e.g., day 0, 1, …). More generally, we may consider a continuous time model such as the geometric Brownian motion (Section 3.4).

Definition 4.1 (White noise processes). A stochastic process $(\varepsilon_t)_{t\in\mathbb{Z}}$ is a Gaussian white noise process if the random variables $\varepsilon_t$ are i.i.d. $N(0, \sigma_\varepsilon^2)$ random variables, where $\sigma_\varepsilon^2 > 0$ is a constant.
More generally, we say that $(\varepsilon_t)_{t\in\mathbb{Z}}$ is a white noise process, denoted $(\varepsilon_t) \sim \mathrm{WN}(0, \sigma_\varepsilon^2)$, if the $\varepsilon_t$ are pairwise uncorrelated random variables with mean zero and variance $\sigma_\varepsilon^2$:
$$\mathbb{E}[\varepsilon_t] = 0, \qquad \operatorname{Cov}(\varepsilon_s, \varepsilon_t) = \begin{cases} \sigma_\varepsilon^2, & \text{for } s = t; \\ 0, & \text{for } s \neq t. \end{cases}$$
[Figure 4.1 about here: line plot titled "Gaussian white noise"; horizontal axis from 0 to 1000.]
Figure 4.1: A sample from the Gaussian white noise process (where $\sigma_\varepsilon^2 = 1$) with length 1000.
In Figure 4.1 we plot a sample of the Gaussian white noise process (compare with Figure 2.2). Clearly, a Gaussian white noise process is a white noise process but not vice versa. In particular, for a white noise process the variables at different time points may be dependent. White noise processes often serve as building blocks of more complicated time series models.
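A path like the one in Figure 4.1 is easy to simulate in base R. The following sketch (the seed and sample length are arbitrary choices of ours) reproduces such a plot.

    # Simulate a Gaussian white noise path with sigma_eps^2 = 1
    set.seed(1)                     # arbitrary seed, for reproducibility
    eps <- rnorm(1000)              # i.i.d. N(0, 1) draws
    plot(eps, type = "l", xlab = "t", ylab = expression(epsilon[t]),
         main = "Gaussian white noise")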
Stationarity is a fundamental concept in time series analysis. If the stochastic process does not possess (possibly after some transformations) some form of stationarity, statistical inference based on a single realization is not possible.
Definition 4.2 (Stationarity concepts). Consider a process $X = (X_t)_{t\in\mathbb{Z}}$ (or $t \in \mathbb{N}$).
(i) $X$ is strictly stationary if for any $t_1, \dots, t_k$, the distribution of
$$(X_{t_1+h}, X_{t_2+h}, \dots, X_{t_k+h})$$
is the same for all $h \ge 0$.
(ii) $X$ is weakly (or second-order) stationary if $\mathbb{E}[|X_t|^2] < \infty$ for all $t$, $\mathbb{E}[X_t] = \mu$ is independent of $t$, and for $h \ge 0$ the quantity $\operatorname{Cov}(X_t, X_{t+h})$ depends only on the lag $h$ but not on $t$. In this case we call $\gamma_h = \operatorname{Cov}(X_t, X_{t+h})$ the lag-$h$ autocovariance of $X$, and $\rho_h = \operatorname{Cor}(X_t, X_{t+h}) = \gamma_h/\gamma_0$ (provided $\gamma_0 > 0$) the lag-$h$ autocorrelation. As functions of $h$ they are called the autocovariance/autocorrelation functions (ACF).
Clearly, if $\varepsilon$ is a white noise process, then $\gamma_0 = \sigma_\varepsilon^2$ and $\gamma_h = 0$ for $h > 0$.

4.2 Sample ACF and related tests
Given a sample $(x_t)_{t=1}^{T}$, the sample lag-$h$ autocovariance and autocorrelation are defined by
$$\hat{\gamma}_h = \frac{1}{T}\sum_{t=h+1}^{T}(x_t - \bar{x})(x_{t-h} - \bar{x}), \qquad \bar{x} = \frac{1}{T}\sum_{t=1}^{T} x_t, \tag{4.1}$$
$$\hat{\rho}_h = \frac{\sum_{t=h+1}^{T}(x_t - \bar{x})(x_{t-h} - \bar{x})}{\left[\sum_{t=h+1}^{T}(x_t - \bar{x})^2 \sum_{t=h+1}^{T}(x_{t-h} - \bar{x})^2\right]^{1/2}}, \tag{4.2}$$
where $h = 0, 1, \dots, T - 1$. In R, the sample ACF is computed by the function stats::acf().
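For concreteness, (4.1) can also be computed directly and checked against stats::acf(). The sketch below is ours; it uses the convention $\hat{\rho}_h = \hat{\gamma}_h/\hat{\gamma}_0$ adopted by acf(), which is a close variant of (4.2).

    # Sample lag-h autocovariance (4.1) and autocorrelation gamma_h / gamma_0
    sample_acf <- function(x, h) {
      T <- length(x); xbar <- mean(x)
      gamma_h <- sum((x[(h + 1):T] - xbar) * (x[1:(T - h)] - xbar)) / T
      gamma_0 <- sum((x - xbar)^2) / T
      c(gamma_hat = gamma_h, rho_hat = gamma_h / gamma_0)
    }

    x <- rnorm(500)                           # simulated data for illustration
    sample_acf(x, h = 1)
    acf(x, lag.max = 1, plot = FALSE)$acf     # matches rho_hat above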
For an i.i.d. white noise, we have the following result:
Theorem 4.3. Suppose the $X_t$ are i.i.d. with finite variance.¹ Then for $m \ge 1$ fixed, as $T \to \infty$ we have
$$\sqrt{T}\,(\hat{\rho}_1, \dots, \hat{\rho}_m) \xrightarrow{d} N(0, I_m), \tag{4.3}$$
where $\xrightarrow{d}$ denotes convergence in distribution.
In particular, for $h \ge 1$ fixed, under the null hypothesis that the $X_t$ are i.i.d. with finite variance, $\hat{\rho}_h$ is approximately distributed as $N(0, 1/T)$. For each fixed lag, the interval $\pm 2/\sqrt{T}$ can be used to test the null hypothesis at approximately the 95% level. In stats::acf(), these intervals are automatically shown if we set plot = TRUE.
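As a quick illustration of these bands (all choices below are arbitrary), one can simulate i.i.d. noise, for which the null hypothesis holds, and inspect the correlogram:

    set.seed(42)
    x <- rnorm(1000)                    # i.i.d., so the null hypothesis holds
    acf(x, lag.max = 20, plot = TRUE)   # dashed lines: approx. +/- 1.96/sqrt(T)
    # Under the null, roughly 95% of the sample autocorrelations
    # should fall within the dashed bands.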
Example 4.4. The Hang Seng Index (^HSI) is a capitalization-weighted market index of the Hong Kong stock market; currently it has 50 constituent stocks. In Figure 4.2 we plot the sample ACF of the weekly log returns of the Hang Seng Index from January 2010 to December 2021. None of the sample autocorrelation coefficients fall outside the interval $\pm 2/\sqrt{T}$. Judging from the sample ACF (sometimes called a correlogram) alone, it appears that the weekly log returns of the Hang Seng Index behave like a white noise process.
¹ Additional technical conditions are not detailed here.
[Figure 4.2 about here: sample ACF plot titled "Hang Seng weekly log return"; lags 0 to 20.]
Figure 4.2: Sample ACF for the weekly log returns of the Hang Seng Index. The sample period is from January 2010 to December 2021.
4.2.1 Portmanteau tests
While the interval $\pm 2/\sqrt{T}$ (or more generally $\pm z_{\alpha/2}/\sqrt{T}$ for a given significance level) is a valid interval for a fixed lag, we cannot use it simultaneously over multiple lags, as this neglects the multiple testing issue. In particular, even for a white noise process, it is possible that some of the sample autocorrelation coefficients are "significantly nonzero". Here we provide methods to test the null hypothesis (that the $X_t$ are i.i.d. with finite variance) by using the sample ACF at multiple lags.
From Theorem 4.3, for $m \ge 1$ fixed, we have
$$Q^*(m) = T \sum_{h=1}^{m} \hat{\rho}_h^2 \xrightarrow{d} \chi_m^2, \tag{4.4}$$
where $\chi_m^2$ is the chi-squared distribution with $m$ degrees of freedom. Thus the null hypothesis is rejected if a large value of $Q^*(m)$ is observed. This is called the Box-Pierce test.
The Ljung-Box test is a modification of the Box-Pierce test that improves the performance in finite samples. It adopts the test statistic
$$Q(m) = T(T+2) \sum_{h=1}^{m} \frac{\hat{\rho}_h^2}{T - h}, \tag{4.5}$$
whose approximate asymptotic distribution is again $\chi_m^2$. A typical value of $m$ (suggested by simulation studies) is $m \approx \log(T)$.
Example 4.5. Consider the context of Example 4.4. Letting $m = 6$ (which is approximately $\log(T)$, where $T = 626$), the p-values of the Box-Pierce and Ljung-Box test statistics are both approximately 0.85. According to these tests, we fail to reject the null hypothesis that the weekly log returns of the Hang Seng Index are i.i.d. at, say, the 95% level.
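Both tests are available in base R through stats::Box.test(). A sketch of the computation, where hsi_w is a hypothetical vector holding the 626 weekly log returns (simulated here as a placeholder so the snippet runs):

    # Placeholder standing in for the Hang Seng weekly log returns
    hsi_w <- rnorm(626, sd = 0.02)

    m <- 6                                        # roughly log(626)
    Box.test(hsi_w, lag = m, type = "Box-Pierce")
    Box.test(hsi_w, lag = m, type = "Ljung-Box")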
Example 4.6. Consider the daily and weekly log returns of the S&P 500 over the same period (January 2010 to December 2021). Their sample ACFs are shown in Figure 4.3. At both frequencies we observe some significant values (more so for the daily log returns). At the daily frequency, the p-values of the Box-Pierce and Ljung-Box test statistics are less than $2.2 \times 10^{-16}$. At the weekly frequency, the p-value is about 0.03.
[Figure 4.3 about here: two sample ACF plots, "S&P 500 daily log return" (left, lags 0 to 35) and "S&P 500 weekly log return" (right, lags 0 to 25).]
Figure 4.3: Sample ACFs for the daily (left) and weekly (right) log returns of the S&P 500. The sample period is from January 2010 to December 2021.
This suggests that the log returns of the S&P 500 exhibit statistically significant serial correlation, especially at daily frequency. Note that although some correlation coefficients are statistically significant, their values are usually small.
4.3 Autoregressive moving average models
The autoregressive moving average (ARMA) model is a useful linear time series model for modelling second-order stationary processes.
Definition 4.7 (ARMA model). Let $p \ge 0$ and $q \ge 0$ be integers. An ARMA(p, q) model is a process $X = (X_t)$ of the form
$$X_t = \phi_0 + \sum_{i=1}^{p} \phi_i X_{t-i} + \varepsilon_t - \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}, \tag{4.6}$$
where $(\varepsilon_t)$ is a white noise process. If $q = 0$, it is called an AR(p) model. If $p = 0$, it is called an MA(q) model.
It can be shown that if the roots of the characteristic polynomial $A(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$ lie outside the unit circle (in the complex plane $\mathbb{C}$), then $X$ is a second-order stationary process.
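This root condition is straightforward to check numerically with base R's polyroot(). The sketch below, with illustrative AR coefficients of our choosing, tests whether all roots of $A(z)$ lie outside the unit circle.

    # Stationarity check for the AR part: all roots of
    # A(z) = 1 - phi_1 z - ... - phi_p z^p must satisfy |z| > 1.
    ar_stationary <- function(phi) {
      roots <- polyroot(c(1, -phi))    # coefficients of A(z), increasing order
      all(Mod(roots) > 1)
    }

    ar_stationary(c(0.5, 0.3))   # TRUE: second-order stationary
    ar_stationary(1.2)           # FALSE: a root lies inside the unit circle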
Maximum likelihood inference for ARMA models usually assumes that $\varepsilon$ is a Gaussian white noise process; it is also possible to model the noise by, e.g., GARCH (generalized autoregressive conditional heteroskedasticity) models. The order (p, q) of an ARMA model may be specified using information criteria such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC); the rule is to pick the model with the smallest value. The adequacy of a fitted model can be examined in terms of the residuals: for an adequate model, the residuals should behave like a white noise process. One can consider the Ljung-Box test statistic $Q(m)$ (see (4.5)), where under the null hypothesis that the model is correct, the approximate asymptotic distribution of the test statistic is $\chi_{m-p-q}^2$. A fitted model can then be used for, e.g., forecasting.
We illustrate the above procedure (which can be implemented using the function arima() in R) with the following example.
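A minimal sketch of the order-selection step follows. Here x_train is a hypothetical vector of training returns (simulated below so the code runs); in Example 4.8 it would hold the first T = 574 weekly log returns.

    x_train <- arima.sim(list(ar = 0.5), n = 574, sd = 0.02)  # placeholder data

    pmax <- 4; qmax <- 4
    aic_tab <- matrix(NA, pmax + 1, qmax + 1,
                      dimnames = list(p = 0:pmax, q = 0:qmax))
    for (p in 0:pmax) {
      for (q in 0:qmax) {
        fit <- try(arima(x_train, order = c(p, 0, q)), silent = TRUE)
        if (!inherits(fit, "try-error")) aic_tab[p + 1, q + 1] <- AIC(fit)
      }
    }
    aic_tab                                               # cf. Table 3
    which(aic_tab == min(aic_tab, na.rm = TRUE), arr.ind = TRUE)

For BIC, replace AIC(fit) with BIC(fit) (equivalently, AIC(fit, k = log(length(x_train)))).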
[Figure 4.4 about here: two panels, "ACF of residuals" (lags 0 to 25) and "Normal Q-Q Plot" (theoretical quantiles −3 to 3).]
Figure 4.4: Residuals from the fitted model (4.7).
Example 4.8. Consider the weekly log returns of the S&P 500 as in Example 4.6. There are 624 data points. We keep the last 50 and use the first $T = 574$ for fitting the model. Using $p_{\max} = 4$ and $q_{\max} = 4$, we compute the AIC and BIC for all ARMA(p, q) models with $p \le p_{\max}$ and $q \le q_{\max}$. In Table 3 we give the values of the AIC.
p\q         0           1           2           3           4
0     −2708.26    −2710.85    −2709.92    −2708.78    −2711.26
1     −2711.20    −2709.92    −2708.09    −2721.99    −2720.02
2     −2709.98    −2708.02    −2709.89    −2719.77    −2718.18
3     −2708.23    −2721.51    −2718.64    −2719.54    −2728.80
4     −2710.66    −2719.58    −2717.79    −2716.57    −2724.60
Table 3: AIC for various ARMA(p,q) models fitted to the log return series in Example 4.8.
The model ARMA(3,4) gives the lowest value of the AIC. On the other hand, using BIC gives the white noise model (ARMA(0, 0)), but the ARMA(0, 1) model is rather close. Here, we consider the ARMA(3, 4) model and leave it as an exercise to examine the model(s) picked by BIC.
The fitted ARMA(3, 4) model is
$$\begin{aligned} X_t = {} & 0.002 + 1.9179 X_{t-1} - 1.7604 X_{t-2} + 0.7984 X_{t-3} \\ & + \varepsilon_t + 2.0495 \varepsilon_{t-1} - 2.0523 \varepsilon_{t-2} + 1.1354 \varepsilon_{t-3} - 0.1327 \varepsilon_{t-4}, \end{aligned} \tag{4.7}$$
with $\hat{\sigma}_\varepsilon^2 = 0.0004863$. (Note that the sign convention of the MA coefficients in arima() is opposite to that in (4.6).)
Let us examine the residuals. The sample ACF and normal Q-Q plot of the residuals are given in Figure 4.4. All of the autocorrelation coefficients are quite close to zero. We also note that the residuals have fat tails relative to the normal distribution. As a diagnostic check, we consider the Ljung-Box test with lag $m = 20$. The p-value (using the chi-squared distribution with $\mathrm{df} = 20 - 7 = 13$) is 0.76. Thus it appears that this model provides a reasonable fit. However, weak stationarity of the series is a questionable assumption. For example, the market crash due to COVID would be extremely unlikely if the return series were weakly stationary.
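These diagnostics can be reproduced along the following lines, continuing from the earlier sketch (x_train is the hypothetical training series):

    fit <- arima(x_train, order = c(3, 0, 4))   # the ARMA(3, 4) specification
    res <- residuals(fit)

    acf(res, lag.max = 25)        # cf. Figure 4.4 (left)
    qqnorm(res); qqline(res)      # cf. Figure 4.4 (right)

    # Ljung-Box test on the residuals; fitdf = p + q adjusts the
    # degrees of freedom to 20 - 7 = 13.
    Box.test(res, lag = 20, type = "Ljung-Box", fitdf = 7)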
[Figure 4.5 about here: plot titled "Predictions (red) vs actual values (black)"; horizontal axis from 2020-01 to 2022-01.]
Figure 4.5: Predicted values (in red) versus the actual values (in black) for the model (4.7).
Finally, we examine the out-of-sample forecasts from this model. Using the fitted model, we compute the forecasts for the next 50 values (essentially the year 2021). This can be done either with the function predict(), or with forecast() from the package forecast. The result is shown in Figure 4.5. Note that the forecasts decay quickly to the mean; this is typical of any stationary ARMA model.
Let $\hat{x}_{T+h}$ be the predicted value at time $T+h$, and let $x_{T+h}$ be the actual value, for $h = 1, 2, \dots, 50$. To evaluate the empirical performance of the forecasts, we compute
$$\sum_{h=1}^{50} (x_{T+h} - \hat{x}_{T+h})^2 = 0.0138,$$
and compare it with
$$\sum_{h=1}^{50} (x_{T+h} - \bar{x}_{1:T})^2 = 0.0133,$$
where $\bar{x}_{1:T} = \frac{1}{T}\sum_{t=1}^{T} x_t$ is the empirical mean of the data (over the training period). Unfortunately, in this example the model performs worse than the constant forecast $\bar{x}_{1:T}$ under the squared loss. In the assignment we will consider other ways to test the out-of-sample performance.
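The forecasting and evaluation steps might look as follows, continuing with the hypothetical x_train and fit from the previous sketches, and with x_test a placeholder for the 50 held-out values:

    fc <- predict(fit, n.ahead = 50)$pred     # point forecasts (predict() also returns $se)

    x_test <- rnorm(50, sd = 0.02)            # placeholder for the held-out returns

    sum((x_test - as.numeric(fc))^2)          # forecast sum of squared errors
    sum((x_test - mean(x_train))^2)           # benchmark: training-sample mean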