1 On the Board – Lecture 2
We want to forecast using regressions (for now):
$$r_{t+1} = \alpha + \beta x_t + \varepsilon_{t+1},$$
where $r_{t+1}$ is our (excess) return in, say, month $t+1$, and $x_t$ is our, say, month-$t$
forecasting variable. Then our return forecast at time $t$ is:
$$E_t(r_{t+1}) = \alpha + \beta x_t.$$
As usual, $E_t(x_t \varepsilon_{t+1}) = 0$, so the residual $\varepsilon_{t+1}$ is the part we cannot forecast. Autocorrelations mean that we use lagged $r$'s as our $x$'s. E.g.:
$$r_{t+1} = \alpha + \beta r_t + \varepsilon_{t+1}.$$
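A minimal sketch of estimating such a predictive regression by OLS; the simulated data and parameter values are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: x_t is the month-t predictor,
# r_{t+1} next month's excess return.
T = 600
x = rng.normal(0.0, 1.0, T)
r_next = 0.004 + 0.002 * x + rng.normal(0.0, 0.1, T)

# OLS of r_{t+1} on a constant and x_t recovers (alpha, beta).
X = np.column_stack([np.ones(T), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, r_next, rcond=None)[0]

# The time-t return forecast: E_t(r_{t+1}) = alpha + beta * x_t.
forecast = alpha_hat + beta_hat * x
print(alpha_hat, beta_hat)
```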
What does the $R^2$ mean? Say we get an $R^2 = 0.01$. Is that small? Yes, but look at what it implies:
$$R^2 = \frac{\operatorname{Var}(\alpha + \beta x_t)}{\operatorname{Var}(r_{t+1})} = \frac{\operatorname{Var}(E_t(r_{t+1}))}{\operatorname{Var}(r_{t+1})}.$$
Say $\operatorname{Var}(r_{t+1}) = 0.1^2$. What $\operatorname{Var}(E_t(r_{t+1}))$ is implied by $R^2 = 0.01$?
$$\operatorname{Var}(E_t(r_{t+1})) = R^2 \operatorname{Var}(r_{t+1}).$$
I like to talk about standard deviations, as they are more intuitive numbers:
$$\sqrt{\operatorname{Var}(E_t(r_{t+1}))} = \sqrt{R^2}\,\sqrt{\operatorname{Var}(r_{t+1})} = \sqrt{0.01}\,\sqrt{0.1^2} = 0.01.$$
So, expected returns have a standard deviation of 1%. Is that a lot? If, on average, the expected return is 4%, then a standard deviation of variation in expected returns of 1% seems meaningful. Some years it is, say, 5%; some years it is 3%. Recall:
$$\omega_t \propto \frac{1}{\gamma}\,\frac{E_t\!\left(r^{e}_{t+1}\right)}{\operatorname{Var}_t(r_{t+1})}.$$
So, if variance is constant, then the portfolio weight is increased or decreased by 25% on average ($1\%/4\% = 25\%$), based on variation in $E_t(r_{t+1})$.
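A quick numerical check of this back-of-the-envelope calculation, using the numbers above ($R^2 = 0.01$, $\operatorname{Var}(r_{t+1}) = 0.1^2$, and a 4% average expected return):

```python
import numpy as np

R2 = 0.01         # predictive R^2 from the regression
var_r = 0.1**2    # Var(r_{t+1}): a 10% per-period return std
mean_er = 0.04    # average expected return of 4%

# Std of expected returns implied by the R^2 decomposition.
sd_er = np.sqrt(R2 * var_r)
print(sd_er)              # 0.01, i.e., 1%

# With constant variance the portfolio weight scales with E_t(r_{t+1}),
# so a 1% std around a 4% mean moves the weight ~25% on average.
print(sd_er / mean_er)    # 0.25
```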
2 AR (autoregression) models
Start with the AR(1) model (one lag only):
$$r_t = \phi_0 + \phi_1 r_{t-1} + \varepsilon_t.$$
Under stationarity, $E(r_t) = E(r_{t-1}) = \mu$. Then:
$$E(r_t) = \phi_0 + \phi_1 E(r_t) \quad\Longrightarrow\quad E(r_t) = \frac{\phi_0}{1 - \phi_1}.$$
Seems we need $\phi_1 \neq 1$... Under stationarity, $\operatorname{Var}(r_t) = \operatorname{Var}(r_{t-1})$:
$$\begin{aligned}
\operatorname{Var}(r_t) &= \operatorname{Var}(\phi_0 + \phi_1 r_{t-1}) + \operatorname{Var}(\varepsilon_t) \\
&= \phi_1^2 \operatorname{Var}(r_{t-1}) + \sigma^2,
\end{aligned}$$
so:
$$\operatorname{Var}(r_t) = \frac{\sigma^2}{1 - \phi_1^2}.$$
For stationarity, we need $|\phi_1| < 1$.
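A minimal simulation check of these two stationarity formulas; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
phi0, phi1, sigma = 0.02, 0.9, 0.05  # illustrative AR(1) parameters
T = 200_000

r = np.empty(T)
r[0] = phi0 / (1 - phi1)             # start at the stationary mean
for t in range(1, T):
    r[t] = phi0 + phi1 * r[t - 1] + sigma * rng.normal()

# Sample moments vs. the stationary formulas.
print(r.mean(), phi0 / (1 - phi1))          # E(r)   = phi0 / (1 - phi1)
print(r.var(), sigma**2 / (1 - phi1**2))    # Var(r) = sigma^2 / (1 - phi1^2)
```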
For convenience, define:
$$x_t \equiv r_t - E(r_t).$$
Thus, $E(x_t) = 0$ and:
$$x_t = \phi_1 x_{t-1} + \varepsilon_t.$$
Prove this yourself. Now, notice that:
$$\begin{aligned}
x_t &= \phi_1 x_{t-1} + \varepsilon_t \\
&= \phi_1 (\phi_1 x_{t-2} + \varepsilon_{t-1}) + \varepsilon_t \\
&= \phi_1^2 x_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t \\
&= \phi_1^2 (\phi_1 x_{t-3} + \varepsilon_{t-2}) + \phi_1 \varepsilon_{t-1} + \varepsilon_t \\
&= \phi_1^3 x_{t-3} + \phi_1^2 \varepsilon_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t \\
&\;\,\vdots \\
&= \sum_{j=0}^{\infty} \phi_1^j \varepsilon_{t-j},
\end{aligned}$$
as $\lim_{j \to \infty} \phi_1^j x_{t-j} = 0$. This is a Wold decomposition (using past shocks).

Dynamic multiplier:
$$DM_j = \frac{\partial x_t}{\partial \varepsilon_{t-j}} = \phi_1^j.$$
By stationarity:
$$DM_j = \frac{\partial x_{t+j}}{\partial \varepsilon_t} = \phi_1^j.$$
Forecast across horizons:
$$\begin{aligned}
E_t(x_{t+1}) &= \phi_1 x_t, \\
E_t(x_{t+2}) &= E_t(E_{t+1}(x_{t+2})) = \phi_1 E_t(x_{t+1}) = \phi_1^2 x_t, \\
E_t(x_{t+3}) &= \phi_1^3 x_t, \\
&\;\,\vdots \\
E_t(x_{t+j}) &= \phi_1^j x_t.
\end{aligned}$$
Hey professor, I wanted the prediction of $E_t(r_{t+j})$, not $E_t(x_{t+j})$. Well:
$$E_t(r_{t+j}) = E(r_t) + E_t(x_{t+j}).$$
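A short sketch of the resulting multi-horizon forecast $E_t(r_{t+j}) = E(r_t) + \phi_1^j x_t$; the parameter values and the current observation are hypothetical:

```python
phi0, phi1 = 0.02, 0.9      # illustrative AR(1) parameters
mu = phi0 / (1 - phi1)      # E(r_t) = phi0 / (1 - phi1)
r_t = 0.30                  # hypothetical current observation
x_t = r_t - mu              # demeaned state

# E_t(x_{t+j}) = phi1**j * x_t, so E_t(r_{t+j}) = E(r) + phi1**j * x_t;
# the forecast decays geometrically back toward the unconditional mean.
for j in range(1, 6):
    print(j, mu + phi1**j * x_t)
```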
2.1 Autocovariances of AR(1) process
Autocovariance at lag 0 is simply the variance:
$$\operatorname{Cov}(x_t, x_t) = \operatorname{Var}(x_t) = \gamma_0.$$
Note that since $E(x_t) = 0$, $\operatorname{Cov}(x_t, x_{t-j}) = E(x_t x_{t-j})$. Recall:
$$\operatorname{Cov}(X, Y) = E[(X - E(X))(Y - E(Y))].$$
If $E(X) = E(Y) = 0$, then
$$\operatorname{Cov}(X, Y) = E(XY).$$
Multiply the original process by $x_{t-1}$:
$$x_t x_{t-1} = \phi_1 x_{t-1} x_{t-1} + \varepsilon_t x_{t-1}.$$
Take expectations:
$$\gamma_1 = \phi_1 E(x_{t-1} x_{t-1}) + E(\varepsilon_t x_{t-1}) = \phi_1 \gamma_0,$$
since $E(\varepsilon_t x_{t-1}) = 0$.
What about the second-order autocovariance? Multiply by $x_{t-2}$:
$$x_t x_{t-2} = \phi_1 x_{t-1} x_{t-2} + \varepsilon_t x_{t-2}.$$
Taking expectations:
$$\gamma_2 = \phi_1 E(x_{t-1} x_{t-2}) + E(\varepsilon_t x_{t-2}) = \phi_1 \gamma_1 = \phi_1^2 \gamma_0.$$
In general:
$$\gamma_j = \phi_1^j \gamma_0.$$
What about autocorrelations? The definition of a correlation is covariance over variance. So:
$$\rho_j = \frac{\gamma_j}{\gamma_0} = \phi_1^j.$$
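A quick simulation check that sample autocorrelations of an AR(1) line up with $\phi_1^j$; the lag-$j$ sample autocovariance is computed directly from its definition:

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, T = 0.8, 500_000

# Simulate a zero-mean AR(1): x_t = phi1 * x_{t-1} + eps_t.
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = phi1 * x[t - 1] + rng.normal()

# rho_j = gamma_j / gamma_0 should be close to phi1**j.
xbar = x.mean()
gamma0 = np.mean((x - xbar) ** 2)
for j in range(1, 5):
    gamma_j = np.mean((x[j:] - xbar) * (x[:-j] - xbar))
    print(j, gamma_j / gamma0, phi1**j)
```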
An AR($p$) process:
$$r_t = \phi_0 + \phi_1 r_{t-1} + \phi_2 r_{t-2} + \cdots + \phi_p r_{t-p} + \varepsilon_t.$$
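An AR($p$) can be estimated by OLS on $p$ lags of $r_t$; a minimal sketch, with an illustrative simulated AR(2):

```python
import numpy as np

def fit_ar(r, p):
    """OLS estimate of r_t = phi0 + phi1 r_{t-1} + ... + phip r_{t-p} + eps_t."""
    T = len(r)
    # Each regressor column is one lag of r; rows are t = p, ..., T-1.
    X = np.column_stack([np.ones(T - p)] +
                        [r[p - j:T - j] for j in range(1, p + 1)])
    y = r[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]  # (phi0, phi1, ..., phip)

# Illustrative use on a simulated AR(2).
rng = np.random.default_rng(3)
r = np.zeros(2000)
for t in range(2, 2000):
    r[t] = 0.01 + 0.5 * r[t - 1] + 0.2 * r[t - 2] + 0.1 * rng.normal()
print(fit_ar(r, 2))  # should be close to (0.01, 0.5, 0.2)
```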