Notes on the Law of Iterated Expectations, LIE
August 26, 2021
In the first lecture, we used the LIE a couple of times, and I gave you some intuition for its logic. The derivations below, however, contain a proof. The proof uses only properties of conditional and joint probabilities, and hence only basic notions from statistics.
Please note: these derivations are just for fun, in case you are curious about the technical details. You will not be asked about them in any test.


1 Law of iterated expectations
If we have two random variables X and Y, we can write the expected value of Y as follows:

E(Y) = E(E[Y|X]).
In other words, first we compute a conditional expectation of Y given X and then take another expectation (with respect to X). This is equivalent to taking an expectation of Y directly. Note that this is what we wrote on slide 12 of Lecture 1.
Let us convince ourselves that this is true. For simplicity, we will assume that both random variables are discrete: X has only two possible outcomes x_1, x_2, and Y also has only two possible outcomes y_1, y_2. However, the result holds for more complex discrete random variables and for continuous random variables as well.
Consider the following function
g(X) = E(Y|X).
Notice that g(X) is a random variable because its value depends on X, which is random. In other words, conditional expectations are random variables. (Unconditional expectations are numbers!)
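To see this concretely before the algebra, here is a minimal numerical sketch (my addition, not from the lecture slides) using a made-up 2×2 joint pmf for (X, Y). It computes g(x) = E(Y|X = x) for each outcome and then checks that E(g(X)) equals E(Y):

# Numerical check of E(Y) = E(E[Y|X]) for two binary random variables.
# joint[(x, y)] = p(x, y); the pmf below is made up for illustration.
joint = {
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.40, (1, 1): 0.20,
}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

p_x = {x: sum(joint[(x, y)] for y in ys) for x in xs}   # marginal of X
p_y = {y: sum(joint[(x, y)] for x in xs) for y in ys}   # marginal of Y

# g(x) = E[Y | X = x]: one number for each outcome x, so g(X) is a
# random variable whose realized value depends on which x occurs.
g = {x: sum(y * joint[(x, y)] / p_x[x] for y in ys) for x in xs}

E_Y_direct = sum(y * p_y[y] for y in ys)        # E(Y)
E_Y_iterated = sum(g[x] * p_x[x] for x in xs)   # E(E[Y|X]) = E(g(X))

print(E_Y_direct, E_Y_iterated)                 # 0.5 and 0.5 here
assert abs(E_Y_direct - E_Y_iterated) < 1e-12

Note how g is just a table mapping each outcome x to a number; g(X) is random only because X is. The algebraic proof of the same fact follows.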
We can now compute the expected value of g(X) as follows:

E(g(X)) = Σ_{i=1}^{2} g(x_i) p(x_i)

(plug in the definition of g(X))

= Σ_{i=1}^{2} E(Y|X = x_i) p(x_i)

(g(X) is a conditional expectation)

= Σ_{i=1}^{2} Σ_{j=1}^{2} y_j p(y_j|x_i) p(x_i)

(conditional probs are joint probs divided by marginal probs)

= Σ_{i=1}^{2} Σ_{j=1}^{2} y_j [p(y_j, x_i)/p(x_i)] p(x_i)

(p(x_i) cancels out)

= Σ_{i=1}^{2} Σ_{j=1}^{2} y_j p(y_j, x_i)

= Σ_{j=1}^{2} y_j Σ_{i=1}^{2} p(y_j, x_i)

(now we can integrate X out)

= Σ_{j=1}^{2} y_j p(y_j)

= E(Y).

This proves our stated result.
In class, during the first lecture, we used the LIE to prove that the expected returns are not forecastable, according to the efficient market hypothesis.
Remember that in our notation I_t denoted a random variable (or more than one random variable) in the information set at time t. We said that the information available at time t is usually less than the information at t + 1, that is, I_t ⊂ I_{t+1}. For simplicity, we will assume that I_t contains only one source of information (one random variable) and that I_{t+1} contains an additional source of information (an additional random variable). Therefore, let I_t = {X} and let I_{t+1} = {X, Z}. Information at time t only contains one random variable X, while information at time t + 1 contains the same random variable X and an additional random variable Z.
In class, we used the following result:

E[E(V|I_{t+1})|I_t] = E[V|I_t].
This is what we wrote on slide 11 of Lecture 1.
Using our example with one and two random variables in the two information sets, we can write:
E[E(V|X,Z)|X] = E[V|X].
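As an illustration of this equality (a toy simulation of my own; the data-generating process and all parameters below are made up and not from the lecture), the following Python sketch estimates the inner expectation E(V|X,Z) by sample means within each (x, z) cell, averages it over Z given X, and compares the result with E(V|X) estimated directly:

import random
from collections import defaultdict

random.seed(0)
N = 200_000

cell_sum = defaultdict(float)    # running sum of v within each (x, z) cell
cell_n = defaultdict(int)        # draw count within each (x, z) cell
x_sum = defaultdict(float)       # running sum of v for each x
x_n = defaultdict(int)           # draw count for each x
draws = []

for _ in range(N):
    x = int(random.random() < 0.5)                  # X: fair coin
    z = int(random.random() < (0.7 if x else 0.3))  # Z: correlated with X
    v = random.gauss(x + 2.0 * z, 1.0)              # V: depends on both
    cell_sum[(x, z)] += v
    cell_n[(x, z)] += 1
    x_sum[x] += v
    x_n[x] += 1
    draws.append((x, z))

# Inner step: g(x, z) estimates E[V | X=x, Z=z] via cell means.
g = {xz: cell_sum[xz] / cell_n[xz] for xz in cell_n}

# Outer step: average g(X, Z) over the draws that share the same X.
outer = defaultdict(float)
for x, z in draws:
    outer[x] += g[(x, z)] / x_n[x]

for x in (0, 1):
    direct = x_sum[x] / x_n[x]   # E[V | X=x] estimated directly
    print(f"x={x}: iterated={outer[x]:.4f}  direct={direct:.4f}")

With sample means the two numbers agree up to floating-point error, not just approximately: weighting each cell mean by its cell count reproduces the overall mean within each x, which mirrors the cancellation of p(x_i, z_j) in the derivation of the next section.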
2 Another version of the LIE

For simplicity, we will assume that each random variable can only take on a finite number of outcomes (before it was 2, now it is potentially more than 2). So, let the possible outcomes of V be {v_1, ..., v_K}. The outcomes of the information variables are X = {x_1, ..., x_I} and Z = {z_1, ..., z_J}.
Let us follow the same steps used in the previous derivation and define a function
g(X,Z) = E(V|X,Z). Let us now compute the conditional expected value:
E[g(X,Z)|X = x_i] = Σ_{j=1}^{J} g(x_i, z_j) p(z_j|x_i)

(given the definition of conditional probability)

= Σ_{j=1}^{J} g(x_i, z_j) [p(x_i, z_j)/p(x_i)]

(write the expression for the function g)

= Σ_{j=1}^{J} E[V|X = x_i, Z = z_j] [p(x_i, z_j)/p(x_i)]

(explicitly write what E[V|X = x_i, Z = z_j] is)

= Σ_{j=1}^{J} [Σ_{k=1}^{K} v_k p(v_k|x_i, z_j)] [p(x_i, z_j)/p(x_i)]

(again, conditional probs are joint probs divided by marginal probs)

= Σ_{j=1}^{J} [Σ_{k=1}^{K} v_k p(v_k, x_i, z_j)/p(x_i, z_j)] [p(x_i, z_j)/p(x_i)]

(note that p(x_i, z_j) cancels out in this formula)

= Σ_{j=1}^{J} Σ_{k=1}^{K} v_k p(v_k, x_i, z_j) [1/p(x_i)]

(re-arrange terms)

= Σ_{k=1}^{K} v_k [Σ_{j=1}^{J} p(v_k, x_i, z_j)]/p(x_i)

(notice that we are integrating out Z)

= Σ_{k=1}^{K} v_k p(v_k, x_i)/p(x_i)

= Σ_{k=1}^{K} v_k p(v_k|x_i)

(realize that this is now a conditional expectation for V)

= E(V|X = x_i).
This proves the result E[E(V|X,Z)|X] = E[V|X] which, in our notation in class, was

E[E(V|I_{t+1})|I_t] = E[V|I_t].
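For completeness, the same result can be checked exactly, without simulation, by enumerating a small joint pmf. The pmf p(v, x, z) below is hypothetical (made up for illustration); the functions mirror the derivation above step by step:

# Exact check of E[E(V|X,Z)|X] = E[V|X] on a hypothetical joint pmf.
# joint[(v, x, z)] = p(v, x, z); the numbers are made up but sum to 1.
joint = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.10, (0, 1, 0): 0.15, (0, 1, 1): 0.10,
    (1, 0, 0): 0.20, (1, 0, 1): 0.05, (1, 1, 0): 0.15, (1, 1, 1): 0.20,
}
vals = [0, 1]

def p_x(x):                      # marginal p(x)
    return sum(joint[(v, x, z)] for v in vals for z in vals)

def p_xz(x, z):                  # marginal p(x, z)
    return sum(joint[(v, x, z)] for v in vals)

def g(x, z):                     # g(x, z) = E[V | X=x, Z=z]
    return sum(v * joint[(v, x, z)] / p_xz(x, z) for v in vals)

def lhs(x):                      # E[E(V|X,Z)|X=x] = sum_j g(x, z_j) p(z_j|x)
    return sum(g(x, z) * p_xz(x, z) / p_x(x) for z in vals)

def rhs(x):                      # E[V | X=x] computed directly
    return sum(v * joint[(v, x, z)] / p_x(x) for v in vals for z in vals)

for x in vals:
    assert abs(lhs(x) - rhs(x)) < 1e-12
    print(f"x={x}: E[E(V|X,Z)|X=x] = {lhs(x):.4f} = E[V|X=x] = {rhs(x):.4f}")

Enumerating the pmf this way is just the discrete derivation above executed numerically, one conditional probability at a time.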
