
1 On the Board – Lecture 2
We want to forecast using regressions (for now):
$$r_{t+1} = \alpha + \beta x_t + \varepsilon_{t+1},$$
where $r_{t+1}$ is our (excess) return in, say, month $t+1$, and $x_t$ is our month-$t$ forecasting variable. Then our return forecast at time $t$ is:
$$E_t(r_{t+1}) = \alpha + \beta x_t.$$
As usual, $E_t(x_t \varepsilon_{t+1}) = 0$, so the residual $\varepsilon_{t+1}$ is the part we cannot forecast. Autocorrelations mean that we use lagged $r$'s as our $x$'s. E.g.:
$$r_{t+1} = \alpha + \beta r_t + \varepsilon_{t+1}.$$
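As a concrete illustration, here is a minimal sketch of estimating this predictive regression by OLS in Python. The placeholder return series and variable names are mine, not from the lecture; substitute your own data for `r`:

    import numpy as np

    # Placeholder data: substitute your own series of monthly excess returns.
    rng = np.random.default_rng(0)
    r = 0.004 + 0.10 * rng.standard_normal(600)

    # Predictive regression: r_{t+1} = a + b * r_t + e_{t+1}
    y = r[1:]                                            # r_{t+1}
    X = np.column_stack([np.ones(len(r) - 1), r[:-1]])   # [1, r_t]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    R2 = 1.0 - resid.var() / y.var()
    print("a, b:", coef, " R^2:", R2)

The $R^2$ from this regression is exactly the object interpreted next.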
What does the $R^2$ mean? Say we get an $R^2 = 0.01$. Is that small? Yes, but consider what it implies:
$$R^2 = \frac{Var(\alpha + \beta x_t)}{Var(r_{t+1})} = \frac{Var(E_t(r_{t+1}))}{Var(r_{t+1})}.$$
Say $Var(r_{t+1}) = 0.1^2$. What $Var(E_t(r_{t+1}))$ is implied by $R^2 = 0.01$?
$$Var(E_t(r_{t+1})) = R^2 \, Var(r_{t+1}).$$
I like to talk about standard deviations, as they are more intuitive numbers:
$$\sqrt{Var(E_t(r_{t+1}))} = \sqrt{R^2}\,\sqrt{Var(r_{t+1})} = \sqrt{0.01}\,\sqrt{0.1^2} = 0.01.$$
So, expected returns have a standard deviation of 1%. Is that a lot? If, on average, the expected return is 4%, then a standard deviation of variation in expected returns of 1% seems meaningful: some years it is, say, 5%, some years it is 3%. Recall the mean-variance portfolio weight:
$$\omega_t \propto Var_t(r_{t+1})^{-1} E_t(r_{t+1}).$$
So, if variance is constant, then the portfolio weight is increased or decreased by 25% on average based on variation in $E_t(r_{t+1})$: a one-standard-deviation move of 1% relative to the 4% average expected return.
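A quick numerical check of this back-of-the-envelope logic (the 0.01, $0.1^2$, and 4% values are from the text above; the rest is arithmetic):

    import numpy as np

    R2 = 0.01              # predictive R^2
    var_r = 0.10 ** 2      # Var(r_{t+1}): a 10% return standard deviation
    mean_er = 0.04         # average expected return of 4%

    sd_er = np.sqrt(R2 * var_r)    # std. dev. of E_t(r_{t+1})
    print("sd of expected returns:", sd_er)   # 0.01, i.e. 1%

    # With constant variance, the weight is proportional to E_t(r_{t+1}),
    # so a one-standard-deviation move in expected returns scales the weight by:
    print("weight change:", sd_er / mean_er)  # 0.25, i.e. 25%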
2 AR (autoregression) models
Start with the AR(1) model (one lag only):
$$r_t = \phi_0 + \phi_1 r_{t-1} + \varepsilon_t.$$
Under stationarity, $E(r_t) = E(r_{t-1}) = \mu$. Then:
$$E(r_t) = \phi_0 + \phi_1 E(r_{t-1}) + E(\varepsilon_t) = \phi_0 + \phi_1 E(r_t) \quad \Rightarrow \quad E(r_t) = \frac{\phi_0}{1 - \phi_1}.$$

Seems we need $\phi_1 \neq 1$... Under stationarity, $Var(r_t) = Var(r_{t-1})$:
$$Var(r_t) = Var(\phi_0 + \phi_1 r_{t-1}) + Var(\varepsilon_t) = \phi_1^2 Var(r_{t-1}) + \sigma^2,$$
$$\Rightarrow \quad Var(r_t) = \frac{\sigma^2}{1 - \phi_1^2}.$$
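A simulation sketch checking these two stationarity results; the parameter values ($\phi_0 = 0.02$, $\phi_1 = 0.9$, $\sigma = 0.05$) are arbitrary illustrative choices:

    import numpy as np

    phi0, phi1, sigma = 0.02, 0.9, 0.05
    T = 200_000
    rng = np.random.default_rng(1)

    r = np.empty(T)
    r[0] = phi0 / (1 - phi1)     # start at the stationary mean
    for t in range(1, T):
        r[t] = phi0 + phi1 * r[t - 1] + sigma * rng.standard_normal()

    print("mean:", r.mean(), "theory:", phi0 / (1 - phi1))
    print("var :", r.var(),  "theory:", sigma**2 / (1 - phi1**2))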
For stationarity, we need $|\phi_1| < 1$. For convenience, define:
$$x_t \equiv r_t - E(r_t).$$
Thus, $E(x_t) = 0$ and:
$$x_t = \phi_1 x_{t-1} + \varepsilon_t.$$
Prove this yourself. Now, notice that:
$$x_t = \phi_1 x_{t-1} + \varepsilon_t = \phi_1 (\phi_1 x_{t-2} + \varepsilon_{t-1}) + \varepsilon_t = \phi_1^2 x_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t$$
$$= \phi_1^2 (\phi_1 x_{t-3} + \varepsilon_{t-2}) + \phi_1 \varepsilon_{t-1} + \varepsilon_t = \phi_1^3 x_{t-3} + \phi_1^2 \varepsilon_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t$$
$$= \sum_{j=0}^{\infty} \phi_1^j \varepsilon_{t-j},$$
as $\lim_{j \to \infty} \phi_1^j x_{t-j} = 0$. This is a Wold decomposition (using past shocks).

Dynamic multiplier:
$$DM_j = \frac{\partial x_t}{\partial \varepsilon_{t-j}} = \phi_1^j.$$
By stationarity:
$$DM_j = \frac{\partial x_{t+j}}{\partial \varepsilon_t} = \phi_1^j.$$

Forecast across horizons:
$$E_t(x_{t+1}) = \phi_1 x_t,$$
$$E_t(x_{t+2}) = E_t(E_{t+1}(x_{t+2})) = \phi_1 E_t(x_{t+1}) = \phi_1^2 x_t,$$
$$E_t(x_{t+3}) = \phi_1^3 x_t,$$
$$E_t(x_{t+j}) = \phi_1^j x_t.$$

Hey professor, I wanted the prediction of $E_t(r_{t+j})$, not $E_t(x_{t+j})$. Well:
$$E_t(r_{t+j}) = E(r_t) + E_t(x_{t+j}).$$

2.1 Autocovariances of AR(1) process

Autocovariance at lag 0 is simply the variance:
$$Cov(x_t, x_t) = Var(x_t) = \gamma_0.$$
Note that since $E(x_t) = 0$, $Cov(x_t, x_{t-j}) = E(x_t x_{t-j})$. Recall:
$$Cov(X, Y) = E[(X - E(X))(Y - E(Y))].$$
If $E(X) = E(Y) = 0$, then $Cov(X, Y) = E(XY)$.

Multiply the original process by $x_{t-1}$:
$$x_t x_{t-1} = \phi_1 x_{t-1} x_{t-1} + \varepsilon_t x_{t-1}.$$
Take expectations:
$$\gamma_1 = E(x_t x_{t-1}) = \phi_1 E(x_{t-1} x_{t-1}) + E(\varepsilon_t x_{t-1}) = \phi_1 \gamma_0.$$
What about the second-order autocovariance:
$$x_t x_{t-2} = \phi_1 x_{t-1} x_{t-2} + \varepsilon_t x_{t-2}.$$
Taking expectations:
$$\gamma_2 = E(x_t x_{t-2}) = \phi_1 E(x_{t-1} x_{t-2}) + E(\varepsilon_t x_{t-2}) = \phi_1 \gamma_1 = \phi_1^2 \gamma_0.$$
In general, $\gamma_j = \phi_1^j \gamma_0$. What about autocorrelations? The definition of a correlation is covariance over variance. So:
$$\rho_j = \frac{\gamma_j}{\gamma_0} = \phi_1^j.$$

An AR(p) process:
$$r_t = \phi_0 + \phi_1 r_{t-1} + \phi_2 r_{t-2} + \ldots + \phi_p r_{t-p} + \varepsilon_t.$$
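A sketch verifying the autocorrelation result $\rho_j = \phi_1^j$ on a simulated demeaned AR(1) process $x_t$; the value $\phi_1 = 0.8$ and the sample size are illustrative assumptions:

    import numpy as np

    phi1, T = 0.8, 500_000
    rng = np.random.default_rng(2)

    x = np.empty(T)
    x[0] = 0.0
    for t in range(1, T):
        x[t] = phi1 * x[t - 1] + rng.standard_normal()

    # Sample autocorrelations; since E(x_t) = 0, Cov(x_t, x_{t-j})
    # is approximated by the sample mean of x_t * x_{t-j}.
    for j in range(1, 5):
        rho_j = np.mean(x[j:] * x[:-j]) / np.var(x)
        print(f"lag {j}: sample {rho_j:.3f}  vs  phi1^{j} = {phi1**j:.3f}")

The same simulated series can be used to check the forecast rule $E_t(x_{t+j}) = \phi_1^j x_t$: regress $x_{t+j}$ on $x_t$ and the slope should be close to $\phi_1^j$.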