Lecture 5 ARMA Models
. Lochstoer
UCLA Anderson School of Management
Winter 2022
. Lochstoer UCLA Anderson School of Management () Lecture 5 ARMA Models Winter 2022 1 / 91
1 Autoregressive Models
2 Application: Bond Pricing
3 Moving Average Models
4 ARMA Models
5 References
6 Appendix
Autoregressive Models
ARMA Models
parsimonious description of (univariate) time series (mimicking autocorrelation etc.)
very useful tools for forecasting (and commonly used in industry)
- forecasting sales, earnings, and revenue growth at the firm level or at the industry level
- forecasting GDP growth and inflation at the national level
Autoregressive process of order 1
lagged returns might be useful in predicting returns; we consider a model that allows for this:

r_{t+1} = φ_0 + φ_1 r_t + ε_{t+1},   ε_{t+1} ~ WN(0, σ²_ε)

- {ε_t} represents the 'news':

ε_t = r_t - E_{t-1}[r_t]

ε_t is what you learn about the process at t that you did not know at t-1
- Economists often call the ε_t 'shocks' or 'innovations'
this model is referred to as an AR(1)
Transition density
Definition
Given an information set F_t, the transition density of a random variable r_{t+1} is the conditional distribution of r_{t+1} given by:

r_{t+1} ~ p(r_{t+1} | F_t; θ)

The information set F_t is often (but not always) the history of the process:

r_t, r_{t-1}, r_{t-2}, ...

In this case, the transition density is written:

r_{t+1} ~ p(r_{t+1} | r_t, r_{t-1}, ...; θ)

A transition density is Markov if it depends only on a finite part of the past.
AR(1) transition density
Consider the AR(1) model with Gaussian shocks:

r_{t+1} = φ_0 + φ_1 r_t + ε_{t+1},   ε_{t+1} ~ N(0, σ²_ε)

The transition density is Markov of order 1:

r_{t+1} ~ p(r_{t+1} | r_t; θ)

the rest of the history r_{t-1}, r_{t-2}, ... is irrelevant. With Gaussian shocks ε_t, the transition density is:

r_{t+1} ~ N(φ_0 + φ_1 r_t, σ²_ε)

conditional mean and conditional variance:

E[r_{t+1} | r_t] = φ_0 + φ_1 r_t,   V[r_{t+1} | r_t] = V[ε_{t+1}] = σ²_ε
Unconditional mean of AR(1)
assume that the series is covariance stationary
compute the unconditional mean μ:
- take unconditional expectations:

E[r_{t+1}] = φ_0 + φ_1 E[r_t]

- use stationarity, E[r_{t+1}] = E[r_t] = μ:

μ = φ_0 + φ_1 μ

and solving for the unconditional mean:

μ = φ_0 / (1 - φ_1)

the mean exists if φ_1 ≠ 1 and is zero if φ_0 = 0
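The fixed-point calculation above is easy to check numerically; a minimal sketch in Python (the parameter values are illustrative, not from the lecture):

```python
import numpy as np

# Illustrative AR(1) parameters (not from the lecture)
phi0, phi1, sigma_eps = 0.5, 0.9, 1.0

# Closed-form unconditional mean: mu = phi0 / (1 - phi1)
mu = phi0 / (1 - phi1)

# Verify by averaging a long simulated path
rng = np.random.default_rng(0)
T = 200_000
r = np.empty(T)
r[0] = mu
for t in range(1, T):
    r[t] = phi0 + phi1 * r[t - 1] + sigma_eps * rng.standard_normal()

print(round(mu, 6))              # 5.0 (= 0.5 / 0.1)
print(abs(r.mean() - mu) < 0.1)  # the sample mean is close to mu
```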
Mean Reversion
if φ_1 ≠ 1, we can rewrite the AR(1) process as:

r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}

suppose 0 < φ_1 < 1
- when r_t > μ, the process is expected to get closer to the mean:

E_t[r_{t+1} - μ] = φ_1 (r_t - μ) < (r_t - μ)

- when r_t < μ, the process is expected to get closer to the mean:

E_t[r_{t+1} - μ] = φ_1 (r_t - μ) > (r_t - μ)

the smaller φ_1, the higher the speed of mean reversion
Mean Reversion
we can rewrite the AR(1) process as:

r_{t+2} - μ = φ_1² (r_t - μ) + φ_1 ε_{t+1} + ε_{t+2}

suppose 0 < φ_1 < 1
- when r_t > μ, the process is expected to get closer to the mean:

E_t[r_{t+2} - μ] = φ_1² (r_t - μ) < (r_t - μ)

- when r_t < μ, the process is expected to get closer to the mean:

E_t[r_{t+2} - μ] = φ_1² (r_t - μ) > (r_t - μ)
we can rewrite the AR(1) process as:

r_{t+h} - μ = φ_1^h (r_t - μ) + φ_1^{h-1} ε_{t+1} + ... + ε_{t+h}

suppose 0 < φ_1 < 1
- at the half-life, the process is expected to cover 1/2 of the distance to the mean:

E_t[r_{t+h} - μ] = φ_1^h (r_t - μ) = 0.5 (r_t - μ)

the half-life is defined by setting φ_1^h = 0.5 and solving:

h = log(0.5) / log(φ_1)
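The half-life formula is one line of code; a small sketch (the persistence values 0.5 and 0.9 are illustrative):

```python
import math

def ar1_half_life(phi1: float) -> float:
    """Periods h solving phi1**h = 0.5: the horizon at which the process
    is expected to have covered half the distance to the mean."""
    assert 0 < phi1 < 1, "formula applies to 0 < phi1 < 1"
    return math.log(0.5) / math.log(phi1)

print(round(ar1_half_life(0.5), 2))   # 1.0: half the gap closes each period
print(round(ar1_half_life(0.9), 2))   # 6.58: high persistence, slow reversion
```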
Variance of AR(1)
Compute the unconditional variance:
take the expectation of the square of:

r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}

we obtain the following expression for the unconditional variance:

V[r_{t+1}] = σ²_ε / (1 - φ_1²)

provided that φ_1² < 1, because the variance has to be positive and bounded; covariance stationarity requires this
in addition, if -1 < φ_1 < 1, we can show that the series is covariance stationary because the mean and variance are finite
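Squaring and taking expectations gives the fixed-point equation γ_0 = φ_1² γ_0 + σ²_ε, which the closed form solves; a quick numerical check with illustrative parameter values:

```python
# Illustrative AR(1) parameters
phi1, sigma_eps = 0.8, 2.0

# Closed form: Var[r] = sigma_eps^2 / (1 - phi1^2)
var_closed = sigma_eps**2 / (1 - phi1**2)

# Iterate the fixed-point equation gamma0 = phi1^2 * gamma0 + sigma_eps^2
g = 0.0
for _ in range(500):
    g = phi1**2 * g + sigma_eps**2

print(round(var_closed, 4))        # 11.1111
print(abs(g - var_closed) < 1e-8)  # the iteration converges to the closed form
```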
Continuous-Time Model
Definition
In a continuous-time model, the log of stock prices, p_t = log P_t, follows an Ornstein-Uhlenbeck process if:

dp_t = κ (μ_p - p_t) dt + σ_p dB_t    (1)

This is the continuous-time version of a discrete-time, Gaussian AR(1) process. Suppose we observe the process (1) at discrete intervals ∆t; then it is equivalent to:

p_t = μ + φ_1 (p_{t-1} - μ) + σ ε_t,   ε_t ~ N(0, 1)

with

φ_1 = exp(-κ∆t)
σ² = (1 - exp(-2κ∆t)) σ_p² / (2κ)
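The mapping from continuous-time to discrete-time parameters can be sketched as follows (the κ, σ_p, and ∆t values are illustrative; the σ² expression uses the stationary OU variance σ_p²/(2κ)):

```python
import math

# Illustrative OU parameters, observed monthly (dt = 1/12)
kappa, sigma_p, dt = 2.0, 0.3, 1.0 / 12.0

phi1 = math.exp(-kappa * dt)  # AR(1) slope of the sampled process
sigma2 = sigma_p**2 * (1.0 - math.exp(-2.0 * kappa * dt)) / (2.0 * kappa)

print(round(phi1, 4))   # 0.8465: persistence at the monthly frequency
# As dt -> 0, phi1 -> 1: finely sampled OU paths look near-unit-root.
```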
Dynamic Multipliers
use the demeaned form of the AR(1):

r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}

by repeated substitution, we get:

r_t - μ = Σ_{i=0}^{t} φ_1^i ε_{t-i} + φ_1^{t+1} (r_{-1} - μ)

the value of r_t is stated as a function of the history of shocks {ε_τ}_{τ=0}^{t} and the value of the process at time t = -1
the effect of shocks dies out over time provided that -1 < φ_1 < 1
Dynamic Multipliers
calculate the effect of a change in ε_0 on r_t:

∂[r_t - μ] / ∂ε_0 = φ_1^t

and, more generally:

∂[r_{t+j} - μ] / ∂ε_t = φ_1^j

in a covariance-stationary model, the dynamic multiplier depends only on j, not on t
again, note that we need |φ_1| < 1 for a stationary (non-explosive) system in which shocks die out: lim_{j→∞} φ_1^j = 0
MA(∞) representation
use the demeaned form of the AR(1):

r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}

by repeated substitution:

r_t - μ = Σ_{i=0}^{∞} φ_1^i ε_{t-i}

- a linear function of past innovations!
- fits into the class of linear time series
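The equivalence of the AR(1) recursion and the MA sum can be verified on a simulated path; a sketch with illustrative parameters (φ_0 = 0, so μ = 0, and the sum is truncated at the sample start):

```python
import numpy as np

phi1 = 0.7
rng = np.random.default_rng(1)
T = 500
eps = rng.standard_normal(T)

# Build r_t by the AR(1) recursion (started from r_0 = eps_0)
r = np.zeros(T)
r[0] = eps[0]
for t in range(1, T):
    r[t] = phi1 * r[t - 1] + eps[t]

# ...and by the truncated MA sum  r_t = sum_i phi1^i * eps_{t-i}
t = T - 1
r_ma = sum(phi1**i * eps[t - i] for i in range(t + 1))

print(abs(r[t] - r_ma) < 1e-10)   # the two representations agree
```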
Autocovariances of an AR(1)
take the unconditional expectation of:

(r_t - μ)(r_{t-j} - μ) = φ_1 (r_{t-1} - μ)(r_{t-j} - μ) + ε_t (r_{t-j} - μ)

this yields:

E[(r_t - μ)(r_{t-j} - μ)] = φ_1 E[(r_{t-1} - μ)(r_{t-j} - μ)] + E[ε_t (r_{t-j} - μ)]

or, using notation from Lecture 2:

γ_j = φ_1 γ_{j-1},   j > 0
γ_0 = φ_1 γ_1 + σ²_ε,   j = 0

note that γ_{-j} = γ_j
Autocorrelation Function
it immediately implies that the ACF satisfies:

ρ_j = φ_1 ρ_{j-1},   with ρ_0 = 1

combining these two equations implies that:

ρ_j = φ_1^j

- exponential decay at a rate φ_1
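The geometric ACF takes one line to compute; a sketch (the φ_1 values mirror the two panels plotted next):

```python
import numpy as np

def ar1_acf(phi1: float, nlags: int) -> np.ndarray:
    """Theoretical ACF of an AR(1): rho_j = phi1**j for j = 0..nlags."""
    return phi1 ** np.arange(nlags + 1)

# Positive phi1: monotone geometric decay
print(np.round(ar1_acf(0.8, 5), 3))    # 1, 0.8, 0.64, 0.512, 0.41, 0.328
# Negative phi1: same decay with alternating sign
print(np.round(ar1_acf(-0.8, 3), 3))   # 1, -0.8, 0.64, -0.512
```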
Autocorrelation Function of an AR(1)
[Figure: Autocorrelation Function for AR(1), lags 1-10. The left panel considers φ_1 = 0.8 (monotone decay); the right panel considers φ_1 = -0.8 (decay with alternating sign).]
AR(p)
Definition
The AR(p) model is defined as:

r_t = φ_0 + φ_1 r_{t-1} + ... + φ_p r_{t-p} + ε_t,   ε_t ~ WN(0, σ²_ε)

other lagged returns might be useful in predicting returns
similar to a multiple regression model with p lagged variables as explanatory variables
the AR(p) is Markov of order p
Conditional Moments
conditional mean and conditional variance:

E[r_{t+1} | r_t, ..., r_{t-p+1}] = φ_0 + φ_1 r_t + ... + φ_p r_{t-p+1}
V[r_{t+1} | r_t, ..., r_{t-p+1}] = V[ε_{t+1}] = σ²_ε

conditional on r_t, ..., r_{t-p+1}, the moments of r_{t+1} do not depend on r_{t-i} for i ≥ p
consider the model:

r_t = φ_0 + φ_1 r_{t-1} + φ_2 r_{t-2} + ε_t,   ε_t ~ WN(0, σ²_ε)

take unconditional expectations to compute the mean:

E[r_t] = φ_0 + φ_1 E[r_{t-1}] + φ_2 E[r_{t-2}]

assuming stationarity and solving for the mean:

E[r_t] = μ = φ_0 / (1 - φ_1 - φ_2)

provided that φ_1 + φ_2 ≠ 1
using this expression for μ, write the model in deviations from the mean:

r_t - μ = φ_1 (r_{t-1} - μ) + φ_2 (r_{t-2} - μ) + ε_t
Autocorrelations of an AR(2)
take the expectation of:

(r_t - μ)(r_{t-j} - μ) = φ_1 (r_{t-1} - μ)(r_{t-j} - μ) + φ_2 (r_{t-2} - μ)(r_{t-j} - μ) + ε_t (r_{t-j} - μ)

this yields:

E[(r_t - μ)(r_{t-j} - μ)] = φ_1 E[(r_{t-1} - μ)(r_{t-j} - μ)] + φ_2 E[(r_{t-2} - μ)(r_{t-j} - μ)] + E[ε_t (r_{t-j} - μ)]

or, in autocovariance notation:

γ_j = φ_1 γ_{j-1} + φ_2 γ_{j-2},   j > 0
γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ²_ε,   j = 0
Autocorrelations of an AR(2)
dividing by γ_0:

ρ_j = φ_1 ρ_{j-1} + φ_2 ρ_{j-2},   j ≥ 2
ρ_0 = φ_1 ρ_1 + φ_2 ρ_2 + σ²_ε / γ_0,   j = 0

which implies that the ACF of an AR(2) satisfies a second-order difference equation:

ρ_1 = φ_1 ρ_0 + φ_2 ρ_1,   so ρ_1 = φ_1 / (1 - φ_2)
ρ_j = φ_1 ρ_{j-1} + φ_2 ρ_{j-2},   j ≥ 2
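The difference equation gives a simple recursion for the ACF once ρ_0 = 1 and ρ_1 = φ_1/(1 - φ_2) are in hand; a sketch (φ_1 = 0.6, φ_2 = -0.4 are illustrative stationary values):

```python
def ar2_acf(phi1: float, phi2: float, nlags: int) -> list:
    """ACF of a stationary AR(2) via the second-order difference
    equation: rho_0 = 1, rho_1 = phi1/(1 - phi2),
    rho_j = phi1*rho_{j-1} + phi2*rho_{j-2} for j >= 2."""
    rho = [1.0, phi1 / (1.0 - phi2)]
    for _ in range(2, nlags + 1):
        rho.append(phi1 * rho[-1] + phi2 * rho[-2])
    return rho[: nlags + 1]

acf = ar2_acf(0.6, -0.4, 4)
print([round(x, 3) for x in acf])   # [1.0, 0.429, -0.143, -0.257, -0.097]
```

Note the damped, sign-switching pattern even for real coefficients with φ_2 < 0.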
Roots
Definition
The second-order difference equation for the ACF:

(1 - φ_1 B - φ_2 B²) ρ_j = 0,

where B is the back-shift operator: B ρ_j = ρ_{j-1}
Note that we can write the above as:

(1 - π_1 B)(1 - π_2 B) ρ_j = 0

Intuitively, the AR(2) is an "AR(1) on top of another AR(1)"
From the AR(1) math, we had that each AR(1) is stationary if its autocorrelation coefficient is less than one in absolute value.
The 'roots' π_j should satisfy a similar property for the AR(2) to be stationary
A useful factorization
Finding the roots
A simple case:
1 - φ_1 B - φ_2 B² = (1 - π_1 B)(1 - π_2 B)
                   = 1 - (π_1 + π_2) B + π_1 π_2 B²

and so we solve using the relations:

φ_1 = π_1 + π_2
φ_2 = -π_1 π_2

The solutions are the inverses of the solutions to the second-order polynomial in the scalar-valued x:

1 - φ_1 x - φ_2 x² = 0,

the solutions to this equation are given by:

x_1, x_2 = (φ_1 ± sqrt(φ_1² + 4φ_2)) / (-2φ_2)

the inverses are the characteristic roots: π_1 = x_1⁻¹ and π_2 = x_2⁻¹
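The factorization identities can be verified numerically; a sketch for the real-root case (the parameter values are illustrative, and the helper assumes φ_2 ≠ 0):

```python
import math

def ar2_char_roots(phi1: float, phi2: float):
    """Characteristic roots pi1, pi2 of 1 - phi1*B - phi2*B^2:
    inverses of the solutions of 1 - phi1*x - phi2*x^2 = 0.
    Real-discriminant case only, and phi2 != 0, for this sketch."""
    disc = phi1**2 + 4.0 * phi2
    assert disc >= 0, "complex-root case handled separately"
    x1 = (phi1 + math.sqrt(disc)) / (-2.0 * phi2)
    x2 = (phi1 - math.sqrt(disc)) / (-2.0 * phi2)
    return 1.0 / x1, 1.0 / x2

pi1, pi2 = ar2_char_roots(0.6, -0.05)
# The identities from the factorization above:
print(round(pi1 + pi2, 10))    # 0.6   ( = phi1 )
print(round(-pi1 * pi2, 10))   # -0.05 ( = phi2 )
```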
Roots (real, distinct case)
two characteristic roots: π_1 = x_1⁻¹ and π_2 = x_2⁻¹
both characteristic roots are real-valued if the discriminant is greater than zero: φ_1² + 4φ_2 > 0
- then we can factor the polynomial as:

(1 - φ_1 B - φ_2 B²) = (1 - π_1 B)(1 - π_2 B)

- two AR(1) models on top of each other
The ACF will decay like an AR(1) at long lags
- intuition: the effect of the smaller π dies out more quickly, and you are effectively left with just an AR(1)
Roots (complex-valued case)
two characteristic roots: π_1 = x_1⁻¹ and π_2 = x_2⁻¹
both characteristic roots are complex-valued if the discriminant is negative: φ_1² + 4φ_2 < 0
Then π_1 = x_1⁻¹ and π_2 = x_2⁻¹ are complex numbers.
The ACF will look like damped sine and cosine waves.
Autocorrelation for AR(2)
[Figure: Autocorrelation Functions for four AR(2) processes over lags 1-10, with parameters: φ_1 = 1.2, φ_2 = -0.35; φ_1 = 0.2, φ_2 = 0.35; φ_1 = -0.2, φ_2 = 0.35; and φ_1 = 0.6, φ_2 = -0.4.]
AR(2) Example: The Dividend Price Ratio
The stock market Dividend to Price ratio is:
- the sum of last year's dividends paid by firms in the market, divided by current market value
- a "Valuation Ratio"
- very slow-moving (persistent); quarterly post-WW2 data for the U.S.:

[Figure: time series of the stock market D/P ratio over roughly 250 quarterly observations, ranging between about 0.01 and 0.06.]
Estimate AR(2) on this variable
Stationarity test:
- the solutions of

1 - 1.09319 x + 0.13731 x² = 0

are greater than 1 in modulus, so the process is stationary despite φ_1 = 1.093 > 1, because φ_2 = -0.137
- unconditional mean:

μ = 0.00123254 / (1 - 1.09319 + 0.13731) = 0.0279
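These numbers can be reproduced directly from the estimated coefficients; a sketch (note that np.roots expects polynomial coefficients from the highest degree down):

```python
import numpy as np

# Estimated AR(2) coefficients for the D/P ratio (from the slide)
phi0, phi1, phi2 = 0.00123254, 1.09319, -0.13731

# Solutions of 1 - phi1*x - phi2*x^2 = 0
x = np.roots([-phi2, -phi1, 1.0])
print(np.all(np.abs(x) > 1.0))   # True: both exceed 1 in modulus, so stationary

# Unconditional mean
mu = phi0 / (1.0 - phi1 - phi2)
print(round(mu, 4))              # 0.0279
```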
AR(2) DP prediction
Pred_DP1 = uncond_mean + phi1*(DP(2:end)-uncond_mean) + phi2*(DP(1:end-1)-uncond_mean);
Pred_DP2 = uncond_mean + phi1*(Pred_DP1-uncond_mean) + phi2*(DP(2:end)-uncond_mean);
Pred_DP3 = uncond_mean + phi1*(Pred_DP2-uncond_mean) + phi2*(Pred_DP1-uncond_mean);
etc.
[Figure: D/P predicted values at horizons of 1 quarter, 8 quarters, and infinitely many quarters ahead, plotted over roughly 250 quarterly observations; at the infinite horizon the forecast is the unconditional mean.]
Stationarity
Recall: the modulus of z = a + bi is |z| = sqrt(a² + b²). Thus, for real numbers the modulus is simply the absolute value.
An AR(1) process is stationary if its characteristic root is less than one, i.e. if 1/x = φ_1 is less than one in modulus. This condition implies that ρ_j = φ_1^j converges to zero as j → ∞.
An AR(2) process is stationary if the two characteristic roots π_1 and π_2 (the inverses of the solutions x_1, x_2 above) are less than one in modulus.
Stationarity of AR(p)
An AR(p) process is stationary if all p characteristic roots (the inverses of the solutions x of the polynomial below) are less than one in modulus:

1 - φ_1 x - φ_2 x² - ... - φ_p x^p = 0
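This condition is easy to check numerically for any fitted AR(p); a sketch (`is_stationary_ar` is a hypothetical helper, not from the lecture):

```python
import numpy as np

def is_stationary_ar(phis) -> bool:
    """True if the AR(p) with coefficients phi1..phip is stationary:
    all solutions x of 1 - phi1*x - ... - phip*x^p = 0 exceed one in
    modulus (equivalently, all characteristic roots 1/x lie inside
    the unit circle)."""
    # np.roots wants coefficients from the highest degree down:
    # [-phip, ..., -phi1, 1]
    coeffs = [-p for p in reversed(list(phis))] + [1.0]
    x = np.roots(coeffs)
    return bool(np.all(np.abs(x) > 1.0))

print(is_stationary_ar([0.5]))                # True: AR(1) with |phi1| < 1
print(is_stationary_ar([1.09319, -0.13731]))  # True: the D/P AR(2) above
print(is_stationary_ar([1.0]))                # False: a random walk
```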