Financial Econometrics and Data Science
Univariate Models & Volatility and Correlation Modelling
Dr Ran Tao
6. Univariate Time Series Modelling
6.1 Notation and Concepts
6.2 Moving Average Processes (MA)
6.3 Autoregressive Processes (AR)
6.4 Autoregressive Moving Average Processes (ARMA)
6.5 ARMA Specifications: Box-Jenkins Approach
6.6 Forecasting
6. Univariate Time Series Modelling
6.1 Notation and Concepts
6.1 Notation and Concepts
Why Univariate Time Series Models?
Other explanatory variables not available.
Other explanatory variables not directly observable.
Examples: inflation rate, unemployment rate, exchange rate, firm’s sales, gold prices, interest rate, etc.
Before considering systems of equations, we begin with simple single equation, univariate time series models.
Although very simple, these models are often able to forecast economic & financial variables reasonably well.
6.1 Notation and Concepts
Univariate Time Series Models
A time series model relates a variable to previous values of itself and random disturbances.
Time series models require that $y_t$ is related to $y_{t-1}$, for example, and that the process is stationary.
Two Types of Models
Autoregressive models (AR). Explain a variable in terms of its past values.
Moving average models (MA). Explain a variable in terms of a weighted average of current and past values of a random disturbance.
And hybrid ARMA models, which combine both.
6.1 Notation and Concepts
A Strictly Stationary Process
A strictly stationary process is one where, for any $t_1, \dots, t_n$ and any $m$,
$$P\{y_{t_1} \le b_1, \dots, y_{t_n} \le b_n\} = P\{y_{t_1+m} \le b_1, \dots, y_{t_n+m} \le b_n\}$$
A Weakly Stationary Process
If a series satisfies the following assumptions, it is said to be weakly or covariance stationary
(1) $E(y_t) = \mu, \quad t = 1, 2, \dots, \infty$
(2) $E[(y_t - \mu)(y_t - \mu)] = \sigma^2 < \infty$
(3) $E[(y_{t_1} - \mu)(y_{t_2} - \mu)] = \gamma_{t_2 - t_1} \quad \forall\, t_1, t_2$
So if the process is covariance stationary, all the variances are the same and all the covariances depend on the difference between $t_1$ and $t_2$. The moments
$$E[(y_t - E(y_t))(y_{t-s} - E(y_{t-s}))] = \gamma_s, \quad s = 0, 1, 2, \dots$$
are known as the covariance function.
6.1 Notation and Concepts
The covariances, γs, are known as autocovariances
However, the value of the autocovariances depends on the units of measurement of $y_t$.
It is thus more convenient to use the autocorrelations, which are the autocovariances normalised by dividing by the variance:
$$\tau_s = \frac{\gamma_s}{\gamma_0}, \quad s = 0, 1, 2, \dots$$
If we plot $\tau_s$ against $s = 0, 1, 2, \dots$ then we obtain the autocorrelation function (acf) or correlogram.
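In R, the sample correlogram of a series can be plotted with the acf() function from the stats package; a minimal sketch using a simulated series:

# Simulate an example series and plot its correlogram.
# acf() computes the sample autocorrelations and draws the
# +/- 1.96/sqrt(T) confidence bands automatically.
set.seed(123)
y <- arima.sim(model = list(ar = 0.5), n = 200)
acf(y, lag.max = 20, main = "Sample correlogram")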
6.1 Notation and Concepts
A White Noise Process
A white noise process is one with no discernible structure. A definition of a white noise process is:
$$E(y_t) = \mu, \quad \text{var}(y_t) = \sigma^2, \quad \gamma_{t-r} = \begin{cases} \sigma^2 & \text{if } t = r \\ 0 & \text{otherwise} \end{cases}$$
The acf will be 0 apart from a single peak of 1 at $s = 0$. Approximately, $\hat{\tau}_s \sim N(0, 1/T)$, where $T$ = sample size.
We can use this to conduct significance tests for the autocorrelation coefficients by constructing a confidence interval.
6.1 Notation and Concepts
For example, a 95% confidence interval (about the hypothesized value of 0) would be given by
$$\pm 1.96 \times \frac{1}{\sqrt{T}}$$
If the sample autocorrelation coefficient, $\hat{\tau}_s$, falls outside this region for any value of $s$, then we reject the null hypothesis that the autocorrelation at lag $s$ is zero.
6.1 Notation and Concepts
Joint Hypothesis Tests
We can also test the joint hypothesis that all $m$ of the $\tau_k$ correlation coefficients are simultaneously equal to zero using the Q-statistic developed by Box and Pierce:
$$Q = T \sum_{k=1}^{m} \hat{\tau}_k^2$$
The Q-statistic is asymptotically distributed as a $\chi^2_m$.
However, the Box-Pierce test has poor small sample properties, so a variant has been developed, called the Ljung-Box statistic:
$$Q^* = T(T+2) \sum_{k=1}^{m} \frac{\hat{\tau}_k^2}{T-k} \sim \chi^2_m$$
This statistic is very useful as a general test of linear dependence in time series.
6.1 Notation and Concepts
An ACF Example
Question:
Suppose that a researcher had estimated the first 5 autocorrelation coefficients using a series of length 100 observations, and found them to be (from 1 to 5): 0.207, -0.013, 0.086, 0.005, -0.022.
Test each of the individual coefficients for significance, and use both the Box-Pierce and Ljung-Box tests to establish whether they are jointly significant.
Solution:
A coefficient would be significant if it lies outside (-0.196,+0.196) at the 5% level, so only the first autocorrelation coefficient is significant.
Q=5.09 and Q*=5.26
Compared with a tabulated χ2(5)=11.1 at the 5% level, so the 5 coefficients are jointly insignificant.
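These numbers can be reproduced directly from the reported coefficients. A minimal R sketch (the autocorrelations and sample size are taken from the question above):

# Sample autocorrelations at lags 1 to 5, from T = 100 observations
tau <- c(0.207, -0.013, 0.086, 0.005, -0.022)
T <- 100
k <- seq_along(tau)
1.96 / sqrt(T)                                # individual 5% bound: 0.196
Q <- T * sum(tau^2)                           # Box-Pierce: approx. 5.09
Q.star <- T * (T + 2) * sum(tau^2 / (T - k))  # Ljung-Box: approx. 5.26
qchisq(0.95, df = 5)                          # 5% critical value: approx. 11.07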
6.1 Notation and Concepts
Why does it matter if a variable is WN or not?
If a series is WN, we cannot predict it from its own past values.
Knowing $y_t$ tells us nothing about $y_{t+1}$, for example, because $y_t$ and $y_{t+1}$ are not related/correlated.
If a series is not WN, we can use these covariance relationships to predict the series from its own lagged values. The two major models for doing this are MA and AR.
6.2 Moving Average Processes (MA)
6.2 Moving Average Processes (MA)
Moving Average Processes
Let $u_t$ ($t = 1, 2, 3, \dots$) be a sequence of independently and identically distributed (iid) random variables with $E(u_t) = 0$ and $\text{var}(u_t) = \sigma^2$. Then
$$y_t = \mu + u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2} + \cdots + \theta_q u_{t-q}$$
is a $q$th order moving average model, MA($q$).
Its properties are:
$$E(y_t) = \mu$$
$$\text{var}(y_t) = \gamma_0 = (1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2)\,\sigma^2$$
Covariances:
$$\gamma_s = \begin{cases} (\theta_s + \theta_{s+1}\theta_1 + \theta_{s+2}\theta_2 + \cdots + \theta_q\theta_{q-s})\,\sigma^2 & \text{for } s = 1, \dots, q \\ 0 & \text{for } s > q \end{cases}$$
6.2 Moving Average Processes (MA)
Example of an MA Problem
1. Consider the following MA(2) process:
$$y_t = u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2}$$
where $u_t$ is a zero mean white noise process with variance $\sigma^2$.
i. Calculate the mean and variance of $y_t$.
ii. Derive the autocorrelation function for this process (i.e. express the autocorrelations, $\tau_1$, $\tau_2$, ... as functions of the parameters $\theta_1$ and $\theta_2$).
iii. If $\theta_1 = -0.5$ and $\theta_2 = 0.25$, sketch the acf of $y_t$.
6.2 Moving Average Processes (MA)
i. If $E(u_t) = 0$, then $E(u_{t-i}) = 0 \ \forall\, i$. So the mean is
$$E(y_t) = E(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2}) = E(u_t) + \theta_1 E(u_{t-1}) + \theta_2 E(u_{t-2}) = 0$$
For the variance, $\text{var}(y_t) = E[y_t - E(y_t)][y_t - E(y_t)]$, but $E(y_t) = 0$, so
$$\text{var}(y_t) = E[(y_t)(y_t)] = E[(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2})(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2})]$$
$$= E\left[u_t^2 + \theta_1^2 u_{t-1}^2 + \theta_2^2 u_{t-2}^2 + \text{cross-products}\right]$$
But $E[\text{cross-products}] = 0$ since $\text{cov}(u_t, u_{t-s}) = 0$ for $s \neq 0$.
6.2 Moving Average Processes (MA) (Cont’d)
So
$$\text{var}(y_t) = \gamma_0 = E\left[u_t^2 + \theta_1^2 u_{t-1}^2 + \theta_2^2 u_{t-2}^2\right] = \sigma^2 + \theta_1^2\sigma^2 + \theta_2^2\sigma^2 = (1 + \theta_1^2 + \theta_2^2)\,\sigma^2$$
6.2 Moving Average Processes (MA)
ii. The acf of $y_t$:
$$\gamma_1 = E[y_t - E(y_t)][y_{t-1} - E(y_{t-1})] = E[y_t y_{t-1}]$$
$$= E[(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2})(u_{t-1} + \theta_1 u_{t-2} + \theta_2 u_{t-3})]$$
$$= E\left[\theta_1 u_{t-1}^2 + \theta_1\theta_2 u_{t-2}^2\right] = \theta_1\sigma^2 + \theta_1\theta_2\sigma^2 = (\theta_1 + \theta_1\theta_2)\,\sigma^2$$
6.2 Moving Average Processes (MA) (Cont’d)
$$\gamma_2 = E[y_t - E(y_t)][y_{t-2} - E(y_{t-2})] = E[y_t y_{t-2}]$$
$$= E[(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2})(u_{t-2} + \theta_1 u_{t-3} + \theta_2 u_{t-4})] = E\left[\theta_2 u_{t-2}^2\right] = \theta_2\sigma^2$$
$$\gamma_3 = E[y_t - E(y_t)][y_{t-3} - E(y_{t-3})] = E[y_t y_{t-3}]$$
$$= E[(u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2})(u_{t-3} + \theta_1 u_{t-4} + \theta_2 u_{t-5})] = 0$$
6.2 Moving Average Processes (MA) (Cont’d)
So $\gamma_s = 0$ for $s > 2$.
We have the autocovariances, now calculate the autocorrelations:
$$\tau_1 = \frac{\gamma_1}{\gamma_0} = \frac{(\theta_1 + \theta_1\theta_2)\,\sigma^2}{(1 + \theta_1^2 + \theta_2^2)\,\sigma^2} = \frac{\theta_1 + \theta_1\theta_2}{1 + \theta_1^2 + \theta_2^2}$$
$$\tau_2 = \frac{\gamma_2}{\gamma_0} = \frac{\theta_2\sigma^2}{(1 + \theta_1^2 + \theta_2^2)\,\sigma^2} = \frac{\theta_2}{1 + \theta_1^2 + \theta_2^2}$$
$$\tau_s = \frac{\gamma_s}{\gamma_0} = 0 \quad \forall\, s > 2$$
6.2 Moving Average Processes (MA) (Cont’d)
iii. For $\theta_1 = -0.5$ and $\theta_2 = 0.25$, substituting these into the formulae above gives $\tau_1 = -0.476$, $\tau_2 = 0.190$.
6.2 Moving Average Processes (MA)
Thus the acf plot will appear as follows:
[Figure: acf of the MA(2) process for lags 0 to 5, with a bar of 1 at lag 0, $-0.476$ at lag 1, $0.190$ at lag 2, and zero at higher lags.]
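The theoretical acf can be reproduced numerically with ARMAacf() from the stats package; a minimal sketch:

# Theoretical acf of the MA(2) with theta1 = -0.5, theta2 = 0.25;
# values should match tau1 = -0.476, tau2 = 0.190, and 0 beyond lag 2.
rho <- ARMAacf(ma = c(-0.5, 0.25), lag.max = 5)
print(round(rho, 3))
barplot(rho, main = "acf of the MA(2) process")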
6.3 Autoregressive Processes (AR)
6.3 Autoregressive Processes (AR)
Autoregressive Processes
An autoregressive model of order $p$, an AR($p$), can be expressed as
$$y_t = \mu + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + u_t$$
Or, using the lag operator notation $L y_t = y_{t-1}$, $L^i y_t = y_{t-i}$:
$$y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + u_t = \mu + \sum_{i=1}^{p} \phi_i L^i y_t + u_t$$
or $\phi(L) y_t = \mu + u_t$, where $\phi(L) = (1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p)$.
6.3 Autoregressive Processes (AR)
To fit an AR(1) in R, you use the command
ar1 <- arima(examplevariable, order = c(1,0,0)) from the package stats.
The arima() command will later be used to fit ARMA models.
The command uncondMoments() from the package uGMAR can be used to calculate the unconditional mean and variance of Gaussian autoregressive models.
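For instance, a minimal sketch with a simulated series standing in for examplevariable:

# Simulate 500 observations from an AR(1) with phi = 0.6,
# then fit an AR(1) with arima() from the stats package.
set.seed(42)
examplevariable <- arima.sim(model = list(ar = 0.6), n = 500)
ar1 <- arima(examplevariable, order = c(1, 0, 0))
print(ar1)  # the estimated ar1 coefficient should be close to 0.6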
6.3 Autoregressive Processes (AR)
The Stationary Condition for an AR Model
The condition for stationarity of a general AR($p$) model is that the roots of $1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0$ all lie outside the unit circle.
A stationary AR(p) model is required for it to have an MA(∞) representation.
Example 1: Is $y_t = y_{t-1} + u_t$ stationary?
The characteristic root is 1, so it is a unit root process (and hence non-stationary).
Example 2: Is $y_t = 3y_{t-1} - 2.75y_{t-2} + 0.75y_{t-3} + u_t$ stationary?
The characteristic roots are 1, 2/3, and 2. Since only one of these lies outside the unit circle, the process is non-stationary.
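The roots in Example 2 can be checked numerically with base R's polyroot(); a minimal sketch:

# Roots of the characteristic equation 1 - 3z + 2.75z^2 - 0.75z^3 = 0
# (coefficients supplied in increasing powers of z).
roots <- polyroot(c(1, -3, 2.75, -0.75))
print(roots)       # 1, 2/3 and 2, up to numerical error
print(Mod(roots))  # stationarity requires every modulus to exceed 1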
6.3 Autoregressive Processes (AR)
Wold's Decomposition Theorem
States that any stationary series can be decomposed into the sum of two unrelated processes, a purely deterministic part and a purely stochastic part, which will be an MA(∞).
For the AR($p$) model $\phi(L) y_t = u_t$ (ignoring the intercept), the Wold decomposition is
$$y_t = \psi(L) u_t$$
where $\psi(L) = \phi(L)^{-1} = (1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p)^{-1}$.
E.g., for an AR(1), $\phi(L)^{-1} = (1 - \phi L)^{-1} = 1 + \phi L + \phi^2 L^2 + \cdots$, so
$$y_t = u_t + \phi u_{t-1} + \phi^2 u_{t-2} + \cdots$$
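The MA(∞) weights implied by an AR model can be computed with ARMAtoMA() from the stats package; a minimal sketch for the AR(1) case:

# psi-weights of the MA(infinity) representation of an AR(1)
# with phi = 0.5: they should equal phi, phi^2, phi^3, ...
psi <- ARMAtoMA(ar = 0.5, ma = numeric(0), lag.max = 8)
print(psi)  # 0.5, 0.25, 0.125, ...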
6.3 Autoregressive Processes (AR)
The Moments of an Autoregressive Process
The moments of an autoregressive process are as follows. The mean is given by
$$E(y_t) = \frac{\mu}{1 - \phi_1 - \phi_2 - \cdots - \phi_p}$$
The autocovariances and autocorrelation functions can be obtained by solving what are known as the Yule-Walker equations:
$$\begin{aligned} \tau_1 &= \phi_1 + \tau_1\phi_2 + \cdots + \tau_{p-1}\phi_p \\ \tau_2 &= \tau_1\phi_1 + \phi_2 + \cdots + \tau_{p-2}\phi_p \\ &\ \vdots \\ \tau_p &= \tau_{p-1}\phi_1 + \tau_{p-2}\phi_2 + \cdots + \phi_p \end{aligned}$$
If the AR model is stationary, the autocorrelation function will decay exponentially to zero.
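R's ARMAacf() solves for the implied autocorrelations numerically; a minimal sketch for an illustrative stationary AR(2):

# Theoretical acf of a stationary AR(2) with phi1 = 0.5, phi2 = 0.2;
# the autocorrelations decay geometrically towards zero.
rho <- ARMAacf(ar = c(0.5, 0.2), lag.max = 10)
print(round(rho, 3))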
6.3 Autoregressive Processes (AR)
Sample AR Problem
Consider the following simple AR(1) model:
$$y_t = \mu + \phi_1 y_{t-1} + u_t$$
i. Calculate the (unconditional) mean of $y_t$.
For the remainder of the question, set $\mu = 0$ for simplicity.
ii. Calculate the (unconditional) variance of $y_t$.
iii. Derive the autocorrelation function for $y_t$.
6.3 Autoregressive Processes (AR)
i. Unconditional mean:
$$E(y_t) = E(\mu + \phi_1 y_{t-1}) = \mu + \phi_1 E(y_{t-1})$$
$$= \mu + \phi_1(\mu + \phi_1 E(y_{t-2})) = \mu + \phi_1\mu + \phi_1^2 E(y_{t-2})$$
$$= \mu + \phi_1\mu + \phi_1^2(\mu + \phi_1 E(y_{t-3})) = \mu + \phi_1\mu + \phi_1^2\mu + \phi_1^3 E(y_{t-3})$$
6.3 Autoregressive Processes (AR) (Cont’d)
An infinite number of such substitutions would give
$$E(y_t) = \mu(1 + \phi_1 + \phi_1^2 + \cdots) + \phi_1^\infty y_0$$
So long as the model is stationary, i.e. $|\phi_1| < 1$, then $\phi_1^\infty = 0$, and so
$$E(y_t) = \mu(1 + \phi_1 + \phi_1^2 + \cdots) = \frac{\mu}{1 - \phi_1}$$
ii. Calculating the variance of $y_t$ (with $\mu = 0$):
$$y_t = \phi_1 y_{t-1} + u_t$$
6.3 Autoregressive Processes (AR) (Cont’d)
From Wold's decomposition theorem:
$$y_t(1 - \phi_1 L) = u_t$$
$$y_t = (1 - \phi_1 L)^{-1} u_t = (1 + \phi_1 L + \phi_1^2 L^2 + \cdots)\,u_t$$
So long as $|\phi_1| < 1$, this will converge. Then
$$\text{var}(y_t) = E[y_t - E(y_t)][y_t - E(y_t)]$$
6.3 Autoregressive Processes (AR) (Cont’d)
but $E(y_t) = 0$, since $\mu$ is set to zero. So
$$\text{var}(y_t) = E[(y_t)(y_t)]$$
$$= E\left[(u_t + \phi_1 u_{t-1} + \phi_1^2 u_{t-2} + \cdots)(u_t + \phi_1 u_{t-1} + \phi_1^2 u_{t-2} + \cdots)\right]$$
$$= E\left[u_t^2 + \phi_1^2 u_{t-1}^2 + \phi_1^4 u_{t-2}^2 + \cdots + \text{cross-products}\right]$$
$$= E\left[u_t^2 + \phi_1^2 u_{t-1}^2 + \phi_1^4 u_{t-2}^2 + \cdots\right]$$
$$= \sigma_u^2 + \phi_1^2\sigma_u^2 + \phi_1^4\sigma_u^2 + \cdots = \sigma_u^2(1 + \phi_1^2 + \phi_1^4 + \cdots) = \frac{\sigma_u^2}{1 - \phi_1^2}$$
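Both unconditional moments can be checked by simulation; a minimal sketch with illustrative values $\mu = 1$, $\phi_1 = 0.5$ and $\sigma_u^2 = 1$:

# AR(1) with intercept: y_t = mu + phi*y_{t-1} + u_t.
# Theory: E(y) = mu/(1 - phi) = 2, var(y) = sigma^2/(1 - phi^2) = 4/3.
set.seed(1)
mu <- 1; phi <- 0.5
y <- mu / (1 - phi) + arima.sim(model = list(ar = phi), n = 1e5, sd = 1)
c(mean = mean(y), variance = var(y))  # approx. 2 and 1.333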
6.3 Autoregressive Processes (AR)
iii. Turning now to calculating the acf, first calculate the autocovariances:
$$\gamma_1 = \text{cov}(y_t, y_{t-1}) = E[y_t - E(y_t)][y_{t-1} - E(y_{t-1})]$$
Since $\mu$ has been set to zero, $E(y_t) = 0$ and $E(y_{t-1}) = 0$, so
$$\gamma_1 = E[y_t y_{t-1}]$$
$$= E\left[(u_t + \phi_1 u_{t-1} + \phi_1^2 u_{t-2} + \cdots)(u_{t-1} + \phi_1 u_{t-2} + \phi_1^2 u_{t-3} + \cdots)\right]$$
$$= E\left[\phi_1 u_{t-1}^2 + \phi_1^3 u_{t-2}^2 + \cdots + \text{cross-products}\right]$$
$$= \phi_1\sigma^2 + \phi_1^3\sigma^2 + \phi_1^5\sigma^2 + \cdots = \frac{\phi_1\sigma^2}{1 - \phi_1^2}$$
6.3 Autoregressive Processes (AR) (Cont’d)
For the second autocovariance,
$$\gamma_2 = \text{cov}(y_t, y_{t-2}) = E[y_t - E(y_t)][y_{t-2} - E(y_{t-2})]$$
Using the same rules as applied above for the lag 1 covariance,
$$\gamma_2 = E[y_t y_{t-2}]$$
$$= E\left[(u_t + \phi_1 u_{t-1} + \phi_1^2 u_{t-2} + \cdots)(u_{t-2} + \phi_1 u_{t-3} + \phi_1^2 u_{t-4} + \cdots)\right]$$
$$= E\left[\phi_1^2 u_{t-2}^2 + \phi_1^4 u_{t-3}^2 + \cdots + \text{cross-products}\right]$$
$$= \phi_1^2\sigma^2 + \phi_1^4\sigma^2 + \cdots = \phi_1^2\sigma^2(1 + \phi_1^2 + \phi_1^4 + \cdots) = \frac{\phi_1^2\sigma^2}{1 - \phi_1^2}$$
6.3 Autoregressive Processes (AR) (Cont’d)
If these steps were repeated for $\gamma_3$, the following expression would be obtained:
$$\gamma_3 = \frac{\phi_1^3\sigma^2}{1 - \phi_1^2}$$
and for any lag $s$, the autocovariance would be given by
$$\gamma_s = \frac{\phi_1^s\sigma^2}{1 - \phi_1^2}$$
6.3 Autoregressive Processes (AR) (Cont’d)
The acf can now be obtained by dividing the autocovariances by the variance:
$$\tau_0 = \frac{\gamma_0}{\gamma_0} = 1$$
$$\tau_1 = \frac{\gamma_1}{\gamma_0} = \frac{\phi_1\sigma^2/(1 - \phi_1^2)}{\sigma^2/(1 - \phi_1^2)} = \phi_1$$
$$\tau_2 = \frac{\gamma_2}{\gamma_0} = \frac{\phi_1^2\sigma^2/(1 - \phi_1^2)}{\sigma^2/(1 - \phi_1^2)} = \phi_1^2$$
and in general
$$\tau_s = \phi_1^s$$
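This geometric decay is easy to verify numerically; a minimal sketch for $\phi_1 = 0.5$:

# acf of an AR(1): tau_s = phi^s, so with phi = 0.5 the
# autocorrelations are 1, 0.5, 0.25, 0.125, ...
phi <- 0.5
rho <- ARMAacf(ar = phi, lag.max = 5)
all.equal(unname(rho), phi^(0:5))  # TRUE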
6.3 Autoregressive Processes (AR)
The Partial Autocorrelation Function (denoted $\tau_{kk}$)
Measures the correlation between an observation $k$ periods ago and the current observation, after controlling for observations at intermediate lags (i.e. all lags less than $k$).