
Lecture 5 ARMA Models
. Lochstoer
UCLA Anderson School of Management
Winter 2022



1 Autoregressive Models
2 Application: Bond Pricing
3 Moving Average Models
4 ARMA Models
5 References
6 Appendix

Autoregressive Models

ARMA Models
parsimonious description of (univariate) time series (mimicking autocorrelation etc.)
very useful tools for forecasting (and commonly used in industry)
- forecasting sales, earnings, and revenue growth at the firm level or at the industry level
- forecasting GDP growth and inflation at the national level

Autoregressive process of order 1
lagged returns might be useful in predicting returns. we consider a model that allows for this:
r_{t+1} = φ_0 + φ_1 r_t + ε_{t+1},  ε_{t+1} ~ WN(0, σ_ε^2)
- {ε_t} represents the 'news':
ε_t = r_t - E_{t-1}[r_t]
ε_t is what you know about the process at t but not at t-1
- Economists often call ε_t the 'shocks' or 'innovations'. this model is referred to as an AR(1)
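A minimal simulation sketch in MATLAB may help fix ideas (the parameter values phi0, phi1, sigma_eps are illustrative, not from the lecture):

% Simulate T observations from an AR(1) with Gaussian white-noise shocks
rng(1);                              % for reproducibility
T = 1000;  phi0 = 0.02;  phi1 = 0.9;  sigma_eps = 0.1;
e = sigma_eps*randn(T,1);            % e_t ~ N(0, sigma_eps^2)
r = zeros(T,1);
r(1) = phi0/(1-phi1);                % start at the unconditional mean
for t = 2:T
    r(t) = phi0 + phi1*r(t-1) + e(t);
end
plot(r); title('Simulated AR(1) path');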

Transition density
Definition
Given an information set F_t, the transition density of a random variable r_{t+1} is the conditional distribution of r_{t+1} given by:
r_{t+1} ~ p(r_{t+1} | F_t; θ)
The information set F_t is often (but not always) the history of the process
r_t, r_{t-1}, r_{t-2}, ...
In this case, the transition density is written:
r_{t+1} ~ p(r_{t+1} | r_t, r_{t-1}, ...; θ)
A transition density is Markov if it depends only on its finite past.

AR(1) transition density
Consider the AR(1) model with Gaussian shocks:
r_{t+1} = φ_0 + φ_1 r_t + ε_{t+1},  ε_t ~ N(0, σ_ε^2)
The transition density is Markov of order 1:
r_{t+1} ~ p(r_{t+1} | r_t; θ)
the rest of the history r_{t-1}, r_{t-2}, ... is irrelevant. With Gaussian shocks ε_t, the transition density is:
r_{t+1} ~ N(φ_0 + φ_1 r_t, σ_ε^2)
conditional mean and conditional variance:
E[r_{t+1} | r_t] = φ_0 + φ_1 r_t,  V[r_{t+1} | r_t] = V[ε_{t+1}] = σ_ε^2.

Unconditional mean of AR(1)
assume that the series is covariance-stationary
compute the unconditional mean μ:
- take unconditional expectations:
E[r_{t+1}] = φ_0 + φ_1 E[r_t]
- use stationarity: E[r_{t+1}] = E[r_t] = μ:
μ = φ_0 + φ_1 μ
and solving for the unconditional mean:
μ = φ_0 / (1 - φ_1)
the mean exists if φ_1 ≠ 1 and is zero if φ_0 = 0

Mean Reversion
if φ_1 ≠ 1, we can rewrite the AR(1) process as:
r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}
suppose 0 < φ_1 < 1
- when r_t > μ, the process is expected to get closer to the mean:
E_t[r_{t+1} - μ] = φ_1 (r_t - μ) < (r_t - μ)
- when r_t < μ, the process is expected to get closer to the mean:
E_t[r_{t+1} - μ] = φ_1 (r_t - μ) > (r_t - μ)
the smaller φ_1, the higher the speed of mean reversion

Mean Reversion
we can rewrite the AR(1) process as:
r_{t+2} - μ = φ_1^2 (r_t - μ) + φ_1 ε_{t+1} + ε_{t+2}
suppose 0 < φ_1 < 1
- when r_t > μ, the process is expected to get closer to the mean:
E_t[r_{t+2} - μ] = φ_1^2 (r_t - μ) < (r_t - μ)
- when r_t < μ, the process is expected to get closer to the mean:
E_t[r_{t+2} - μ] = φ_1^2 (r_t - μ) > (r_t - μ)

we can rewrite the AR(1) process as:
r_{t+h} - μ = φ_1^h (r_t - μ) + φ_1^{h-1} ε_{t+1} + ... + ε_{t+h}
suppose 0 < φ_1 < 1
- at the half-life, the process is expected to cover 1/2 of the distance to the mean:
E_t[r_{t+h} - μ] = φ_1^h (r_t - μ) = 0.5 (r_t - μ)
the half-life is defined by setting φ_1^h = 0.5 and solving: h = log(0.5)/log(φ_1)

Variance of AR(1)
Compute the unconditional variance: take the expectation of the square of:
r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}
we obtain the following expression for the unconditional variance:
V[r_{t+1}] = σ_ε^2 / (1 - φ_1^2),
provided that φ_1^2 < 1, because the variance has to be positive and bounded; covariance stationarity requires this
in addition, if -1 < φ_1 < 1, we can show that the series is covariance stationary because the mean and variance are finite

Continuous-Time Model
Definition
In a continuous-time model, the log of stock prices, p_t = log P_t, follows an Ornstein-Uhlenbeck process if:
dp_t = κ(μ_p - p_t) dt + σ_p dB_t    (1)
Continuous-time version of a discrete-time, Gaussian AR(1) process. Suppose we observe the process (1) at discrete intervals Δt; then this is equivalent to:
p_t = μ + φ_1 (p_{t-1} - μ) + σ ε_t,  ε_t ~ N(0, 1)
φ_1 = exp(-κΔt),  σ^2 = (1 - exp(-2κΔt)) σ_p^2 / (2κ)

Dynamic Multipliers
use the expression for the mean of the AR(1) to obtain:
r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}
by repeated substitution, we get:
r_t - μ = Σ_{i=0}^{t} φ_1^i ε_{t-i} + φ_1^{t+1} (r_{-1} - μ)
value of r_t at t is stated as a function of the history of shocks {ε_τ}_{τ≤t} and its value at time t = -1
effect of shocks dies out over time provided that -1 < φ_1 < 1

Dynamic Multipliers
calculate the effect of a change in ε_0 on r_t:
∂[r_t - μ]/∂ε_0 = φ_1^t,  ∂[r_{t+j} - μ]/∂ε_t = φ_1^j
in a covariance stationary model, the dynamic multiplier only depends on j, not on t
Again, note that we need |φ_1| < 1 for a stationary (non-explosive) system where shocks die out: lim_{j→∞} φ_1^j = 0

MA(∞) representation
use the expression for the mean of the AR(1) to obtain:
r_{t+1} - μ = φ_1 (r_t - μ) + ε_{t+1}
by repeated substitution:
r_t - μ = Σ_{i=0}^{∞} φ_1^i ε_{t-i}
- a linear function of past innovations!
- fits into the class of linear time series

Autocovariances of an AR(1)
take the unconditional expectation of:
(r_t - μ)(r_{t-j} - μ) = φ_1 (r_{t-1} - μ)(r_{t-j} - μ) + ε_t (r_{t-j} - μ)
this yields:
E[(r_t - μ)(r_{t-j} - μ)] = φ_1 E[(r_{t-1} - μ)(r_{t-j} - μ)] + E[ε_t (r_{t-j} - μ)]
or, using notation from Lecture 2:
γ_j = φ_1 γ_{j-1},  j > 0
γ_0 = φ_1 γ_1 + σ_ε^2,  j = 0
note that γ_j = γ_{-j}
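As a quick check, a sketch comparing these formulas with sample moments of the simulated AR(1) path from earlier (same phi0, phi1, sigma_eps):

% Theoretical moments and half-life implied by phi0 = 0.02, phi1 = 0.9, sigma_eps = 0.1
mu        = phi0/(1-phi1);             % unconditional mean
sig2      = sigma_eps^2/(1-phi1^2);    % unconditional variance
half_life = log(0.5)/log(phi1);        % about 6.6 periods for phi1 = 0.9
fprintf('mean: %.3f (sample %.3f)\n', mu, mean(r));
fprintf('variance: %.4f (sample %.4f)\n', sig2, var(r));
fprintf('half-life: %.2f periods\n', half_life);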

Autocorrelation Function
it immediately implies that the ACF is:
ρ_j = φ_1 ρ_{j-1},
and ρ_0 = 1
combining these two equations implies that:
ρ_j = φ_1^j
- exponential decay at a rate φ_1
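A sketch comparing the sample ACF of the simulated series with this theoretical decay:

% Sample ACF versus the theoretical ACF rho_j = phi1^j
maxlag = 10;
rc  = r - mean(r);
acf = zeros(maxlag,1);
for j = 1:maxlag
    acf(j) = sum(rc(1+j:end).*rc(1:end-j))/sum(rc.^2);   % gamma_j / gamma_0
end
disp([(1:maxlag)'  acf  phi1.^(1:maxlag)']);             % lag, sample ACF, phi1^j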

Autocorrelation Function of an AR(1)
[Figure: Autocorrelation Function for AR(1). The left panel considers φ_1 = 0.8. The right panel considers φ_1 = -0.8.]

AR(p)
Definition
The AR(p) model is defined as:
r_t = φ_0 + φ_1 r_{t-1} + ... + φ_p r_{t-p} + ε_t,  ε_t ~ WN(0, σ_ε^2)
other lagged returns might be useful in predicting returns
similar to multiple regression model with p lagged variables as explanatory variables
the AR(p) is Markov of order p.

Conditional Moments
conditional mean and conditional variance:
E[r_{t+1} | r_t, ..., r_{t-p+1}] = φ_0 + φ_1 r_t + ... + φ_p r_{t-p+1}
V[r_{t+1} | r_t, ..., r_{t-p+1}] = V[ε_{t+1}] = σ_ε^2
moments conditional on r_t, ..., r_{t-p+1} are not correlated with r_{t-i}, i ≥ p

consider the model:
r_t = φ_0 + φ_1 r_{t-1} + φ_2 r_{t-2} + ε_t,  ε_t ~ WN(0, σ_ε^2)
take unconditional expectations to compute the mean:
E[r_t] = φ_0 + φ_1 E[r_{t-1}] + φ_2 E[r_{t-2}]
Assuming stationarity and solving for the mean:
E[r_t] = μ = φ_0 / (1 - φ_1 - φ_2)
provided that φ_1 + φ_2 ≠ 1.
using this expression for μ, write the model in deviations from the mean:
r_t - μ = φ_1 (r_{t-1} - μ) + φ_2 (r_{t-2} - μ) + ε_t

Autocorrelations of an AR(2)
take the expectation of:
(r_t - μ)(r_{t-j} - μ) = φ_1 (r_{t-1} - μ)(r_{t-j} - μ) + φ_2 (r_{t-2} - μ)(r_{t-j} - μ) + ε_t (r_{t-j} - μ)
this yields:
E[(r_t - μ)(r_{t-j} - μ)] = φ_1 E[(r_{t-1} - μ)(r_{t-j} - μ)] + φ_2 E[(r_{t-2} - μ)(r_{t-j} - μ)] + E[ε_t (r_{t-j} - μ)]
or, using different notation:
γ_j = φ_1 γ_{j-1} + φ_2 γ_{j-2},  j > 0
γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ_ε^2,  j = 0

Autocorrelations of an AR(2)
ρ_j = φ_1 ρ_{j-1} + φ_2 ρ_{j-2},  j ≥ 2
ρ_0 = φ_1 ρ_1 + φ_2 ρ_2 + σ_ε^2/γ_0,  j = 0
which implies that the ACF of an AR(2) satisfies a second-order difference equation:
ρ_1 = φ_1 ρ_0 + φ_2 ρ_1
ρ_j = φ_1 ρ_{j-1} + φ_2 ρ_{j-2},  j ≥ 2
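The recursion is easy to iterate directly; a sketch (the coefficient pair φ_1 = 1.2, φ_2 = -0.35 is taken from the AR(2) figure below):

% ACF of an AR(2) from the second-order difference equation
phi1 = 1.2;  phi2 = -0.35;
rho = zeros(10,1);  rho0 = 1;
rho(1) = phi1/(1-phi2);              % solve rho_1 = phi1*rho_0 + phi2*rho_1
rho(2) = phi1*rho(1) + phi2*rho0;
for j = 3:10
    rho(j) = phi1*rho(j-1) + phi2*rho(j-2);
end
disp(rho');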

Roots
Definition
The second-order difference equation for the ACF:
(1 - φ_1 B - φ_2 B^2) ρ_j = 0,
where B is the back-shift operator: B ρ_j = ρ_{j-1}
A useful factorization: note that we can write the above as:
(1 - π_1 B)(1 - π_2 B) ρ_j = 0
Intuitively, the AR(2) is an "AR(1) on top of another AR(1)"
From AR(1) math, we had that each AR(1) is stationary if its autocorrelation is less than one in absolute value.
The 'roots' π_j should satisfy a similar property for the AR(2) to be stationary

Finding the roots
A simple case:
1 - φ_1 B - φ_2 B^2 = (1 - π_1 B)(1 - π_2 B)
= 1 - (π_1 + π_2) B + π_1 π_2 B^2
and so we solve using the relations:
φ_1 = π_1 + π_2
φ_2 = -π_1 π_2
The solutions to this are the inverses of the solutions to the second-order
polynomial in the scalar-valued x:
1 - φ_1 x - φ_2 x^2 = 0,
the solutions to this equation are given by:
x_1, x_2 = (-φ_1 ± √(φ_1^2 + 4φ_2)) / (2φ_2)
the inverses are the characteristic roots: π_1 = 1/x_1 and π_2 = 1/x_2

Roots (real, distinct case)
two characteristic roots: π_1 = 1/x_1 and π_2 = 1/x_2
both characteristic roots are real-valued if the discriminant is greater than zero: φ_1^2 + 4φ_2 > 0
- then we can factor the polynomial as:
(1 - φ_1 B - φ_2 B^2) = (1 - π_1 B)(1 - π_2 B)
- two AR(1) models on top of each other
The ACF will decay like an AR(1) at long lags
- Intuition: the effect of the smaller π dies out more quickly and you are left effectively with just an AR(1)

Roots (complex-valued case)
two characteristic roots: π_1 = 1/x_1 and π_2 = 1/x_2
both characteristic roots are complex-valued if the discriminant is negative: φ_1^2 + 4φ_2 < 0
Then, π_1 = 1/x_1 and π_2 = 1/x_2 are complex numbers.
The ACF will look like damped sine and cosine waves.

Autocorrelation for AR(2)
[Figure: Autocorrelation Function for AR(2) processes, for φ_1 = 1.2, φ_2 = -0.35; φ_1 = 0.2, φ_2 = 0.35; φ_1 = -0.2, φ_2 = 0.35; and φ_1 = 0.6, φ_2 = -0.4.]

AR(2) Example: The Dividend Price Ratio
The stock market Dividend to Price ratio is:
- Sum of last year's dividends to firms in the market divided by current market value
- A "Valuation Ratio"
- Very slow-moving (persistent); quarterly post-WW2 data for U.S.:
[Figure: Stock market D/P ratio, quarterly observations.]

Estimate AR(2) on this variable
Stationarity test:
- Roots greater than 1 in modulus, so stationary despite φ_1 = 1.093 > 1, as φ_2 = -0.137:
1 - 1.09319x + 0.13731x^2 = 0
- Unconditional mean:
μ = 0.00123254 / (1 - 1.09319 + 0.13731) = 0.0279
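A sketch of this stationarity check in MATLAB, using the coefficient values reported on the slide:

% Stationarity check for the AR(2) estimated on the D/P ratio
phi1 = 1.09319;  phi2 = -0.13731;  phi0 = 0.00123254;
x = roots([-phi2, -phi1, 1]);   % solutions of 1 - phi1*x - phi2*x^2 = 0
disp(abs(x))                    % both greater than 1 in modulus -> stationary
disp(abs(1./x))                 % the characteristic roots pi_1, pi_2, both < 1
mu = phi0/(1 - phi1 - phi2)     % unconditional mean, approx. 0.0279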

AR(2) DP prediction
% AR(2) forecast recursion (DP, uncond_mean, phi1, phi2 as estimated above; horizon H illustrative)
H = 12;  Pred = zeros(length(DP)-1, H);
Pred(:,1) = uncond_mean + phi1*(DP(2:end)-uncond_mean) + phi2*(DP(1:end-1)-uncond_mean);
Pred(:,2) = uncond_mean + phi1*(Pred(:,1)-uncond_mean) + phi2*(DP(2:end)-uncond_mean);
for h = 3:H   % beyond h = 2, only earlier forecasts enter the recursion
    Pred(:,h) = uncond_mean + phi1*(Pred(:,h-1)-uncond_mean) + phi2*(Pred(:,h-2)-uncond_mean);
end
[Figure: DP predicted values: 1 quarter ahead, 8 quarters ahead, and infinitely many quarters ahead; x-axis: quarterly observation #.]

Stationarity
Recall: The modulus of z = a + bi is |z| = √(a^2 + b^2). Thus, for real numbers the modulus is simply the absolute value.
An AR(1) process is stationary if its characteristic root is less than one, i.e. if 1/x = φ_1 is less than one in modulus. This condition implies that ρ_j = φ_1^j converges to zero as j → ∞.
An AR(2) process is stationary if the two characteristic roots π_1 and π_2 (the inverses of the solutions x_1, x_2) are less than one in modulus.

Stationarity of AR(p)
An AR(p) process is stationary if all p characteristic roots of the polynomial below are less than one in modulus:
1 - φ_1 x - φ_2 x^2 - ... - φ_p x^p = 0
see chapter 2 in Hamilton (1994) for details.

Partial Autocorrelation Function
Definition
The PACF of a stationary series is defined as {φ_{j,j}}, j = 1, ..., n, from the sequence of autoregressions:
r_t = φ_{0,1} + φ_{1,1} r_{t-1} + v_{1t}
r_t = φ_{0,2} + φ_{1,2} r_{t-1} + φ_{2,2} r_{t-2} + v_{2t}
r_t = φ_{0,3} + φ_{1,3} r_{t-1} + φ_{2,3} r_{t-2} + φ_{3,3} r_{t-3} + v_{3t}
...
These are simple multiple regressions that can be estimated with least squares.
φ_{p,p} shows the incremental contribution of r_{t-p} to r_t over an AR(p-1) model
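A sketch of these regressions estimated by least squares (MATLAB; r is any observed series, e.g. the simulated AR(1) from earlier):

% Sample PACF from the sequence of autoregressions of increasing order
maxp = 10;  T = length(r);
pacf = zeros(maxp,1);
for p = 1:maxp
    Y = r(p+1:T);  X = ones(T-p,1);
    for j = 1:p
        X = [X, r(p+1-j:T-j)];         % append lag j as a regressor
    end
    b = X\Y;                           % OLS: (phi_{0,p}, phi_{1,p}, ..., phi_{p,p})
    pacf(p) = b(end);                  % phi-hat_{p,p}
end
disp(pacf');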

Definition
The sample partial autocorrelations (PACF) of a time series are defined as φ̂_{1,1}, φ̂_{2,2}, ..., φ̂_{p,p}, ...

Partial Autocorrelation Function
The PACF of an AR(p) satisfies:
1. φ̂_{p,p} → φ_p as the sample size increases
2. φ̂_{j,j} → 0 for j > p
for an AR(p) series, the sample PACF cuts off after lag p
⇒ look at the sample PACF to determine an appropriate value of p

PACF of Daily Log Returns
[Figure: Sample partial autocorrelation coefficients. PACF for Daily log Returns on VW-CRSP Index. Two standard error bands around zero. 1926-2007.]

Information Criteria
information criteria help determine the optimal lag length
the Akaike (1973) information criterion:
AIC = -2 ln(likelihood) + 2 (number of parameters)
the Bayesian information criterion of Schwarz (1978):
BIC = -2 ln(likelihood) + ln(T) (number of parameters)
- the BIC penalty depends on the sample size T
for different values of p, compute AIC(p) and/or BIC(p); pick the lag length with the minimum AIC/BIC
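A sketch of the selection loop (MATLAB, conditional least squares; with Gaussian shocks, -2 ln(likelihood) equals n ln(σ̂^2) up to a constant, which is what the code uses):

% Choose p by AIC/BIC; parameters counted as p lags plus an intercept
maxp = 8;  T = length(r);
aic = zeros(maxp,1);  bic = zeros(maxp,1);
for p = 1:maxp
    Y = r(p+1:T);  X = ones(T-p,1);
    for j = 1:p, X = [X, r(p+1-j:T-j)]; end
    v = Y - X*(X\Y);                   % conditional least-squares residuals
    n = length(Y);  sigma2 = mean(v.^2);
    aic(p) = n*log(sigma2) + 2*(p+1);
    bic(p) = n*log(sigma2) + log(n)*(p+1);
end
[~, p_aic] = min(aic);  [~, p_bic] = min(bic);
fprintf('AIC picks p = %d, BIC picks p = %d\n', p_aic, p_bic);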

Manufacturing White Noise
to check the performance of the AR model you've selected: check the residuals!!
residuals should look like white noise
- look at the ACF of the residuals
- perform a Ljung-Box test on the residuals
- Q(m) ~ χ^2(m - p), where p is the lag length of the AR(p) model
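A sketch of the statistic computed by hand (MATLAB; p and m are illustrative, and chi2cdf is from the Statistics Toolbox):

% Fit an AR(p) by least squares, then Ljung-Box on the residuals
p = 2;  T = length(r);
Y = r(p+1:T);  X = ones(T-p,1);
for j = 1:p, X = [X, r(p+1-j:T-j)]; end
v = Y - X*(X\Y);                       % residuals
m = 12;  n = length(v);  vc = v - mean(v);
Q = 0;
for j = 1:m
    rho_j = sum(vc(1+j:end).*vc(1:end-j))/sum(vc.^2);
    Q = Q + rho_j^2/(n-j);
end
Q = n*(n+2)*Q;
pval = 1 - chi2cdf(Q, m-p);            % under H0, Q(m) ~ chi2(m-p)
fprintf('Q(%d) = %.2f, p-value = %.3f\n', m, Q, pval);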

Forecasting
suppose we have an AR(p) model
we want to forecast r_{t+h} using all the information F_t available at t
assume we choose the forecast to minimize the mean squared error:
E[(y - y^prediction)^2]
The conditional mean minimizes the mean squared forecast error. We will come back to optimal forecasting later.

1-step ahead forecast error
the AR(p) model is given by:
r_{t+1} = φ_0 + φ_1 r_t + ... + φ_p r_{t-p+1} + ε_{t+1}
take the conditional expectation:
E_t[r_{t+1}] = φ_0 + φ_1 r_t + ... + φ_p r_{t-p+1}
the one-step ahead forecast error:
v_t(1) = r_{t+1} - φ_0 - Σ_{i=1}^p φ_i r_{t-i+1} = ε_{t+1}
the variance of the one-step ahead forecast error:
V[v_t(1)] = σ_ε^2
- if ε_t is normally distributed, then the 95% confidence interval is ±1.96 σ_ε

2-step ahead forecast error
the AR(p) model is given by:
r_{t+2} = φ_0 + φ_1 r_{t+1} + ... + φ_p r_{t-p+2} + ε_{t+2}
we just take the conditional expectation:
E_t[r_{t+2}] = φ_0 + φ_1 r̂_t(1) + ... + φ_p r_{t-p+2}
the two-step ahead forecast error:
v_t(2) = φ_1 v_t(1) + ε_{t+2} = φ_1 ε_{t+1} + ε_{t+2}
the variance of the two-step ahead forecast error:
V[v_t(2)] = σ_ε^2 (1 + φ_1^2)
- the variance of the two-step ahead forecast error is larger than the variance of the one-step ahead forecast error
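Specializing to an AR(1) makes the pattern across horizons easy to see (a sketch with illustrative parameter values):

% AR(1): V[v_t(h)] = sigma_eps^2*(1 + phi1^2 + ... + phi1^(2(h-1)))
phi1 = 0.9;  sigma_eps = 0.1;  H = 10;
Vh = sigma_eps^2*cumsum(phi1.^(2*(0:H-1)));
disp(Vh');   % increases with h, converging to the unconditional variance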

Multi-step ahead forecast error
The h-step ahead forecast is given by:
r̂_t(h) = φ_0 + Σ_{i=1}^p φ_i r̂_t(h-i)
where r̂_t(j) = r_{t+j} if j ≤ 0.
the h-step ahead forecast converges to the unconditional expectation E(r_t)
this is referred to as mean reversion

Estimation: conditional least squares
assume we observe or can condition on the first p observations. The AR(p) model is then a linear regression model:
r_t = φ_0 + φ_1 r_{t-1} + ... + φ_p r_{t-p} + ε_t,  t = p+1, ..., T
using least squares, the fitted model is:
r̂_t = φ̂_0 + φ̂_1 r_{t-1} + ... + φ̂_p r_{t-p}
and the residual is v_t = r_t - r̂_t
the estimated variance of the residuals is:
σ̂_ε^2 = Σ_{t=p+1}^T v_t^2 / (T - 2p - 1)

ML Estimation
alternatively, we could use maximum likelihood. the log-likelihood function is:
ln p(r_1, r_2, ..., r_T; θ) = Σ_{t=2}^T ln p(r_t | r_{t-1}, ..., r_1; θ) + ln p(r_1; θ)
for example, assume Gaussian shocks ε_t; then p(r_t | r_{t-1}, ..., r_{t-p}; θ) is normal
the difference between least squares and ML estimation of (φ_0, φ_1, ..., φ_p) are the initial distributions p(r_1; θ), p(r_2 | r_1; θ), .... Conditional least squares of an AR(p) drops the first p terms in the likelihood.

Example: ML Estimation of AR(1)
assume the initial value r_1 comes from the stationary distribution
unconditional moments:
E[r_1] = φ_0 / (1 - φ_1),  V[r_1] = σ_ε^2 / (1 - φ_1^2)
hence, the density p(r_1; θ) of the first observation r_1 is normal with the above (unconditional) mean and variance
for t > 1, the conditional moments:
E[r_t | r_{t-1}] = φ_0 + φ_1 r_{t-1},  V[r_t | r_{t-1}] = σ_ε^2
hence, the conditional density p(r_t | r_{t-1}; θ) is normal with this conditional mean and variance
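A sketch contrasting the two estimators for an AR(1) (MATLAB; normpdf requires the Statistics Toolbox, and the unconstrained fminsearch is a simple stand-in for a properly constrained optimizer):

% Conditional least squares: regress r_t on a constant and r_{t-1}
T = length(r);
X = [ones(T-1,1), r(1:T-1)];
b = X\r(2:T);                          % [phi0_hat; phi1_hat]
v = r(2:T) - X*b;
s2 = sum(v.^2)/(T-3);                  % T - 2p - 1 with p = 1

% Exact Gaussian log-likelihood: conditional terms plus ln p(r1; theta)
negloglik = @(th) ...                  % th = [phi0; phi1; sigma_eps]
    -sum(log(normpdf(r(2:T), th(1)+th(2)*r(1:T-1), th(3)))) ...
    -log(normpdf(r(1), th(1)/(1-th(2)), th(3)/sqrt(1-th(2)^2)));
theta_hat = fminsearch(negloglik, [b; sqrt(s2)]);   % start from the CLS estimates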
