
Alastair Hall ECON61001: Semester 1 2020-21 Econometric Methods
Solutions to Problem Set for Tutorial 9
1.(a) Since $y_i$ only takes the two values $\{0,1\}$, it follows that, given $x_i$, $u_i = y_i - x_i'\beta_0$ can only take the two values $1 - x_i'\beta_0$ and $-x_i'\beta_0$, with
$$P(u_i = 1 - x_i'\beta_0 \mid x_i) = P(y_i = 1 \mid x_i) = x_i'\beta_0, \qquad P(u_i = -x_i'\beta_0 \mid x_i) = P(y_i = 0 \mid x_i) = 1 - x_i'\beta_0.$$
Therefore, we have
$$E[u_i \mid x_i] = (1 - x_i'\beta_0)\,x_i'\beta_0 + (-x_i'\beta_0)(1 - x_i'\beta_0) = 0.$$
1.(b) Using part (a), $E[u_i \mid x_i] = 0$, and so $Var[u_i \mid x_i] = E[u_i^2 \mid x_i]$, where
$$E[u_i^2 \mid x_i] = (1 - x_i'\beta_0)^2(x_i'\beta_0) + (-x_i'\beta_0)^2(1 - x_i'\beta_0) = (x_i'\beta_0)(1 - x_i'\beta_0).$$
1.(c) As shown in the answer to part (a), conditional on $x_i$, $u_i$ is a discrete random variable with only two support points, and so cannot have a normal distribution.
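All three results can be checked by simulation. The following is a minimal Python sketch with hypothetical coefficient values and a single fixed $x_i$ (both chosen purely for illustration), so that sample moments estimate the conditional moments:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0 = np.array([0.2, 0.5])      # hypothetical true coefficients
x = np.array([1.0, 0.6])          # one fixed x_i (intercept plus a regressor)
p = x @ beta0                     # P(y_i = 1 | x_i) = x_i'beta0 = 0.5 here

y = rng.binomial(1, p, size=1_000_000)
u = y - p                         # u_i = y_i - x_i'beta0

print(np.unique(u))               # two support points only, so u_i is not normal (part (c))
print(u.mean())                   # ~ 0, matching E[u_i | x_i] = 0 (part (a))
print(u.var())                    # ~ p*(1 - p) = 0.25, matching part (b)
```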
2.(a) Since this is a binary response model, there are only two outcomes for $y_i$, namely $\{0,1\}$. We are given that $P(y_i = 1 \mid x_i) = \Lambda(x_i'\beta_0)$ and so $P(y_i = 0 \mid x_i) = 1 - \Lambda(x_i'\beta_0)$. Recall that the conditional likelihood function is
$$CLF_N(\beta) = \prod_{i=1}^{N} p(y_i \mid x_i; \beta),$$
where $p(y_i \mid x_i; \beta)$ is the conditional probability function for $y_i$ given $x_i$ evaluated at the sample values. We have:
$$p(y_i \mid x_i; \beta) = \Lambda(x_i'\beta) \ \text{if } y_i = 1, \qquad p(y_i \mid x_i; \beta) = 1 - \Lambda(x_i'\beta) \ \text{if } y_i = 0.$$
Note that we can write this compactly as
$$p(y_i \mid x_i; \beta) = \left[\Lambda(x_i'\beta)\right]^{y_i}\left[1 - \Lambda(x_i'\beta)\right]^{1 - y_i},$$
and so the likelihood function is:
$$CLF_N(\beta) = \prod_{i=1}^{N} \left[\Lambda(x_i'\beta)\right]^{y_i}\left[1 - \Lambda(x_i'\beta)\right]^{1 - y_i}.$$
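For concreteness, $CLF_N(\beta)$ can be coded directly from this product form. A minimal Python sketch follows (the function and variable names are illustrative); note that the raw product underflows for large $N$, which is one reason to work with the log in part (b):

```python
import numpy as np

def logistic(z):
    # Lambda(z) = e^z / (1 + e^z), written in the numerically safer form
    return 1.0 / (1.0 + np.exp(-z))

def clf(beta, y, X):
    """CLF_N(beta) = prod_i Lambda(x_i'b)^y_i * (1 - Lambda(x_i'b))^(1 - y_i)."""
    p = logistic(X @ beta)
    return np.prod(p**y * (1.0 - p)**(1.0 - y))
```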

(b) The (conditional) log likelihood function is $CLLF_N(\beta) = \ln[CLF_N(\beta)]$ which, using part (a), is
$$CLLF_N(\beta) = \sum_{i=1}^{N} \left\{ y_i \ln[\Lambda(x_i'\beta)] + (1 - y_i)\ln[1 - \Lambda(x_i'\beta)] \right\}.$$
(c) Since $\Lambda(z) = e^z/(1 + e^z)$ and $\partial e^z/\partial z = e^z$, it follows from the quotient rule that:
$$\frac{\partial \Lambda(z)}{\partial z} = \frac{e^z}{(1 + e^z)^2} = \Lambda(z) - \{\Lambda(z)\}^2 = \Lambda(z)\{1 - \Lambda(z)\}.$$
If we now set $z = x_i'\beta$ and use the chain rule, then
$$\frac{\partial \Lambda(z)}{\partial x_{i,l}} = \left(\frac{\partial \Lambda(z)}{\partial z}\right)\frac{\partial z}{\partial x_{i,l}}.$$
Since $\partial z/\partial x_{i,l} = \beta_l$, we obtain:
$$\frac{\partial P(y_i = 1 \mid x_i; \beta)}{\partial x_{i,l}} = \Lambda(x_i'\beta)\{1 - \Lambda(x_i'\beta)\}\beta_l.$$
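These pieces fit together numerically. The sketch below simulates logit data under hypothetical $\beta$ values, maximizes the part-(b) CLLF with a generic optimizer (scipy's BFGS, one choice among many), and evaluates the part-(c) marginal effect at the sample mean of $x_i$:

```python
import numpy as np
from scipy.optimize import minimize

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_cllf(beta, y, X):
    """Minus the CLLF from part (b), using logaddexp for numerical stability."""
    z = X @ beta
    # y*ln[Lambda(z)] + (1-y)*ln[1-Lambda(z)], with ln[Lambda(z)] = -ln(1 + e^{-z})
    ll = -y * np.logaddexp(0.0, -z) - (1.0 - y) * np.logaddexp(0.0, z)
    return -ll.sum()

rng = np.random.default_rng(1)
N = 5000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta_true = np.array([-0.3, 0.8])                 # hypothetical values
y = rng.binomial(1, logistic(X @ beta_true))

beta_hat = minimize(neg_cllf, np.zeros(2), args=(y, X), method="BFGS").x

# Part (c): marginal effect of x_l, evaluated at the sample mean xbar
xbar = X.mean(axis=0)
p_bar = logistic(xbar @ beta_hat)
print(p_bar * (1.0 - p_bar) * beta_hat[1])        # Lambda*(1 - Lambda)*beta_l
```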
3.(a) Using $y \sim N(X\beta_0, \sigma_0^2 I_T)$ and Definition 2.3 in the Lecture Notes (Section 2.1), it follows that the pdf of $y$ is given by:
$$f_y(y; \theta) = (2\pi\sigma^2)^{-T/2}\exp\left\{-\frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}\right\}.$$
Taking logs, we obtain:
$$LLF_T(\theta) = -\frac{T}{2}\ln[2\pi] - \frac{T}{2}\ln[\sigma^2] - \frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}.$$
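A direct transcription of $LLF_T(\theta)$ into code, with $y$, $X$, and the parameter values left as placeholders for the caller to supply:

```python
import numpy as np

def llf(beta, sigma2, y, X):
    """Gaussian log-likelihood LLF_T(theta) from part (a), theta = (beta', sigma^2)'."""
    T = y.shape[0]
    e = y - X @ beta                      # y - X*beta
    return (-0.5 * T * np.log(2.0 * np.pi)
            - 0.5 * T * np.log(sigma2)
            - 0.5 * (e @ e) / sigma2)
```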
(b) Recall that the score equations are:
$$\left.\frac{\partial LLF_T(\theta)}{\partial \theta}\right|_{\theta = \hat{\theta}_T} = 0,$$
where $\hat{\theta}_T$ denotes the MLE of $\theta_0$. Differentiating the LLF (using the results in Lemma 2.2 in Section 2.2 of the Lecture Notes), we obtain:
$$\frac{\partial LLF_T(\theta)}{\partial \theta} = \begin{pmatrix} \dfrac{\partial LLF_T(\theta)}{\partial \beta} \\[1ex] \dfrac{\partial LLF_T(\theta)}{\partial \sigma^2} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\sigma^2}X'(y - X\beta) \\[1ex] -\dfrac{T}{2\sigma^2} + \dfrac{1}{2\sigma^4}(y - X\beta)'(y - X\beta) \end{pmatrix}.$$
Therefore the score equations imply the MLEs satisfy:
$$\begin{pmatrix} X'(y - X\hat{\beta}_T) \\ -T\hat{\sigma}_T^2 + (y - X\hat{\beta}_T)'(y - X\hat{\beta}_T) \end{pmatrix} = 0.$$
(c) From $X'(y - X\hat{\beta}_T) = 0$ it follows that $\hat{\beta}_T = (X'X)^{-1}X'y$. From $-T\hat{\sigma}_T^2 + (y - X\hat{\beta}_T)'(y - X\hat{\beta}_T) = 0$, it follows that $\hat{\sigma}_T^2 = (y - X\hat{\beta}_T)'(y - X\hat{\beta}_T)/T$.
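These closed forms can be verified on simulated data; both score equations from part (b) should hold at the MLEs. A minimal sketch, with illustrative names and parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 200, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta0, sigma0 = np.array([1.0, -0.5, 2.0]), 1.5   # hypothetical true values
y = X @ beta0 + rng.normal(scale=sigma0, size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # (X'X)^{-1} X'y
e = y - X @ beta_hat
sigma2_hat = (e @ e) / T                          # MLE divides by T

print(X.T @ e)                                    # first score equation: ~ 0
print(-T * sigma2_hat + e @ e)                    # second score equation: 0 exactly
```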
(d) The OLS and ML estimators of $\beta_0$ coincide, but the OLS and ML estimators of $\sigma_0^2$ differ: OLS divides the residual sum of squares by $T - k$ (where $k$ is the number of regressors), whereas the MLE divides by $T$. Since $\hat{\sigma}_T^2 = \frac{T - k}{T}s_T^2$, where $s_T^2$ is the unbiased OLS estimator of $\sigma_0^2$, it follows that $E[\hat{\sigma}_T^2] = \frac{T - k}{T}\sigma_0^2 < \sigma_0^2$, so the MLE of $\sigma_0^2$ is a biased estimator (although it is consistent).
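The bias is visible in a small Monte Carlo. A sketch under the assumption of fixed regressors (the residual vector $My$ does not depend on $\beta_0$, so $\beta_0$ is set to zero for simplicity; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T, k, sigma0_sq, reps = 25, 3, 2.0, 20_000
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)     # residual-maker matrix

mle, ols = np.empty(reps), np.empty(reps)
for r in range(reps):
    y = rng.normal(scale=np.sqrt(sigma0_sq), size=T)  # beta0 = 0 WLOG for residuals
    rss = y @ M @ y
    mle[r] = rss / T                                  # ML estimator of sigma_0^2
    ols[r] = rss / (T - k)                            # unbiased OLS estimator

print(mle.mean(), sigma0_sq * (T - k) / T)            # biased: mean ~ (T-k)/T * sigma_0^2
print(ols.mean(), sigma0_sq)                          # unbiased: mean ~ sigma_0^2
```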