Alastair Hall
ECON61001 Econometric Methods, Semester 1, 2020-21
Solutions to Problem Set for Tutorial 6
1.(a) If both equations (1) and (2) on the Problem Set hold then $v_g = \bar{u}_g = n_g^{-1}\sum_{i=N_{g-1}+1}^{N_g} u_i$.
Define $H_g = \{h_i;\, i = N_{g-1}+1, N_{g-1}+2, \ldots, N_g\}$. Using the LIE-II in the hint, we have:
$$E[v_g \,|\, \bar{h}_g] = E[\bar{u}_g \,|\, \bar{h}_g] = E\big[\, E[\bar{u}_g \,|\, H_g, \bar{h}_g] \,\big|\, \bar{h}_g \big]. \quad (1)$$
Using the hint, it follows that $E[\bar{u}_g \,|\, H_g, \bar{h}_g] = E[\bar{u}_g \,|\, H_g]$. Multiplying out, we have
$$E[\bar{u}_g \,|\, H_g] = E\Big[\sum_{i=N_{g-1}+1}^{N_g} u_i/n_g \,\Big|\, H_g\Big] = \sum_{i=N_{g-1}+1}^{N_g} E[u_i \,|\, H_g]/n_g. \quad (2)$$
From Assumption CS2, we have that $u_i$ and $h_j$ are independent for all $i \neq j$, and so $E[u_i|H_g] = E[u_i|h_i]$, which equals zero via Assumption CS4. Therefore, it follows from (2) that $E[v_g|H_g] = 0$, which in turn implies $E[v_g|\bar{h}_g] = 0$ via (1).
Now consider $Var[v_g|\bar{h}_g]$. Since $E[v_g|\bar{h}_g] = 0$, it follows that $Var[v_g|\bar{h}_g] = E[v_g^2|\bar{h}_g]$. Using LIE-II and the suggestion in the hint,
$$E[v_g^2 \,|\, \bar{h}_g] = E\big[\, E[v_g^2 \,|\, H_g] \,\big|\, \bar{h}_g \big]. \quad (3)$$
Multiplying out, we have
$$E[v_g^2 \,|\, H_g] = E\Big[\sum_{i=N_{g-1}+1}^{N_g}\sum_{j=N_{g-1}+1}^{N_g} u_i u_j/n_g^2 \,\Big|\, H_g\Big] = n_g^{-2}\sum_{i=N_{g-1}+1}^{N_g}\sum_{j=N_{g-1}+1}^{N_g} E[u_i u_j \,|\, H_g]. \quad (4)$$
Assumption CS2 states that $(u_i, h_i)$ and $(u_j, h_j)$ are independent for all $i \neq j$. Therefore, if $i \neq j$ then $E[u_i u_j|H_g] = E[u_i|h_i]E[u_j|h_j] = 0$ from Assumption CS4. From Assumptions CS2 and CS5, it follows that $E[u_i^2|H_g] = E[u_i^2|h_i] = \sigma_0^2$. Using these results in (4), we obtain:
$$E[v_g^2 \,|\, H_g] = n_g^{-2}\sum_{i=N_{g-1}+1}^{N_g} E[u_i^2 \,|\, H_g] = n_g^{-2}\sum_{i=N_{g-1}+1}^{N_g} E[u_i^2 \,|\, h_i] = \sigma_0^2/n_g.$$
Since this is constant, it follows from (3) that $Var[v_g|\bar{h}_g] = E[v_g^2|\bar{h}_g] = \sigma_0^2/n_g$.
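As an informal numerical check of this result (not part of the original solution; a Python sketch with illustrative values of $\sigma_0$ and $n_g$), one can simulate group means and compare their sample moments with $E[v_g|\bar{h}_g] = 0$ and $Var[v_g|\bar{h}_g] = \sigma_0^2/n_g$:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma0, n_g, reps = 1.5, 10, 200_000

    # Draw u_i i.i.d. with mean 0 and variance sigma0^2 within a group of
    # size n_g, and form the group mean v_g = u-bar_g in each replication.
    u = rng.normal(0.0, sigma0, size=(reps, n_g))
    v_g = u.mean(axis=1)

    print(v_g.mean())                   # close to E[v_g] = 0
    print(v_g.var(), sigma0**2 / n_g)   # both close to sigma0^2 / n_g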
1.(b) Assumption CS2 states that $(u_i, h_i)$ is independent of $(u_j, h_j)$ for all $i \neq j$. So $E[v_g|\bar{X}] = E[v_g|\bar{h}_g] = 0$, from part (a). Since this holds for all $g$, we obtain $E[v|\bar{X}] = 0$.
Similarly, $Var[v_g|\bar{X}] = Var[v_g|\bar{h}_g] = \sigma_0^2/n_g$. Assumption CS2 states that $(u_i, h_i)$ is independent of $(u_j, h_j)$, and so $(v_g, \bar{h}_g')'$ is independent of $(v_l, \bar{h}_l')'$, which implies $Cov[v_g, v_l|\bar{h}_g, \bar{h}_l] = 0$ for all $g \neq l$. Again using Assumption CS2, it follows that $Cov[v_g, v_l|\bar{X}] = 0$. Combining these results, we obtain:
$$Var[v|\bar{X}] = \sigma_0^2 \begin{pmatrix} n_1^{-1} & 0 & 0 & \cdots & 0 \\ 0 & n_2^{-1} & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 0 & n_G^{-1} \end{pmatrix}.$$
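For concreteness, this diagonal matrix can be assembled directly (a minimal Python sketch; $\sigma_0^2$ and the group sizes are illustrative values, not from the question):

    import numpy as np

    sigma0_sq = 2.0                 # illustrative value of sigma_0^2
    n = np.array([5, 8, 12, 20])    # illustrative group sizes n_1, ..., n_G

    # Var[v | X-bar] = sigma_0^2 * diag(1/n_1, ..., 1/n_G)
    var_v = sigma0_sq * np.diag(1.0 / n)
    print(var_v)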
1.(c) If the group sizes are not all equal, that is, $n_g \neq n_l$ for some $g \neq l$, then $\{v_g\}$ are heteroscedastic. This means that the OLS estimator based on equation (1) in the question is unbiased but inefficient.
1.(d) Let $\hat{\beta}_{GLS}$ be the GLS estimator of $\beta$. From (a), it follows that the regression model can be written as
$$\bar{y} = \bar{X}\beta + v,$$
where $\bar{y}$ is the $G \times 1$ vector with $g$th element $\bar{y}_g$, $\bar{X}$ is the $G \times 2$ matrix with $g$th row $[1, \bar{h}_g]$, and $v$ is the $G \times 1$ vector with $g$th element $v_g$. From part (b) it follows that $Var[v|\bar{X}] = \Sigma = \sigma_0^2 V$ where $V = diag[n_1^{-1}, n_2^{-1}, \ldots, n_G^{-1}]$. From Lecture 5, it can be recalled that the GLS estimator is given by
$$\hat{\beta}_{GLS} = (\bar{X}'\Sigma^{-1}\bar{X})^{-1}\bar{X}'\Sigma^{-1}\bar{y} = (\bar{X}'V^{-1}\bar{X})^{-1}\bar{X}'V^{-1}\bar{y},$$
as the multiple of $\sigma_0^2$ in $\Sigma$ cancels out. This estimator is feasible provided that $\{n_g;\, g = 1, 2, \ldots, G\}$ are known.
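A minimal numerical sketch of this estimator (Python; the group sizes, $\bar{h}_g$, and $\bar{y}_g$ are simulated for illustration rather than taken from the problem):

    import numpy as np

    rng = np.random.default_rng(1)
    G = 50
    n = rng.integers(5, 30, size=G)                 # known group sizes n_g
    beta0 = np.array([1.0, 2.0])                    # true coefficients
    h_bar = rng.normal(size=G)
    X_bar = np.column_stack([np.ones(G), h_bar])    # g-th row is [1, h-bar_g]

    sigma0 = 1.0
    v = rng.normal(0.0, sigma0 / np.sqrt(n))        # v_g with variance sigma0^2/n_g
    y_bar = X_bar @ beta0 + v

    # GLS: the sigma_0^2 factor cancels, so V^{-1} = diag(n_1, ..., n_G) suffices
    V_inv = np.diag(n.astype(float))
    beta_gls = np.linalg.solve(X_bar.T @ V_inv @ X_bar, X_bar.T @ V_inv @ y_bar)
    print(beta_gls)                                 # close to beta0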
2. If $\beta_i|x_i \sim N(\beta_0, \sigma_0^2 I_K)$ then we can write $\beta_i = \beta_0 + v_i$ where $v_i|x_i \sim N(0, \sigma_0^2 I_K)$. This means that the model for $y_i$ can be written as
$$y_i = x_i'(\beta_0 + v_i) = x_i'\beta_0 + u_i,$$
where $u_i = x_i'v_i$. Notice that $E[u_i|x_i] = E[x_i'v_i|x_i] = x_i'E[v_i|x_i] = 0$ and $Var[u_i|x_i] = Var[x_i'v_i|x_i] = x_i'Var[v_i|x_i]x_i = \sigma_0^2 x_i'x_i$. Therefore, one possible justification for the heteroscedastic linear model is parameter variation in the mean of a linear model.
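The following sketch (Python; simulated $v_i$ and one illustrative fixed value of $x_i$) checks that the random-coefficient specification delivers these two conditional moments:

    import numpy as np

    rng = np.random.default_rng(2)
    K, reps = 3, 200_000
    sigma0 = 0.7
    x_i = np.array([1.0, 2.0, -1.5])    # one fixed value of the regressor vector

    # beta_i = beta0 + v_i with v_i | x_i ~ N(0, sigma0^2 I_K), so u_i = x_i'v_i
    v = rng.normal(0.0, sigma0, size=(reps, K))
    u = v @ x_i

    print(u.mean())                          # close to E[u_i|x_i] = 0
    print(u.var(), sigma0**2 * (x_i @ x_i))  # both close to sigma0^2 * x_i'x_i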
3.(a) Substituting for $y$, we obtain:
$$\hat{\beta}_W = (X'W^2X)^{-1}X'W^2(X\beta_0 + u) = \beta_0 + (X'W^2X)^{-1}X'W^2u.$$
3.(b) Using part (a), we have
$$E[\hat{\beta}_W] = \beta_0 + E\big[(X'W^2X)^{-1}X'W^2u\big].$$
Since $X$ and $W^2$ are constants, it follows that:
$$E\big[(X'W^2X)^{-1}X'W^2u\big] = (X'W^2X)^{-1}X'W^2E[u] = 0, \text{ using CA4.}$$
3.(c) Using parts (a) and (b), we have:
$$Var[\hat{\beta}_W] = E\big[(\hat{\beta}_W - \beta_0)(\hat{\beta}_W - \beta_0)'\big] = E\big[(X'W^2X)^{-1}X'W^2uu'W^2X(X'W^2X)^{-1}\big]$$
$$= (X'W^2X)^{-1}X'W^2E[uu']W^2X(X'W^2X)^{-1}, \text{ as both } X \text{ and } W^2 \text{ are constants,}$$
$$= (X'W^2X)^{-1}X'W^2\Sigma W^2X(X'W^2X)^{-1}, \text{ as } Var[u] = \Sigma.$$
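As an illustration, this sandwich form can be evaluated directly (a Python sketch with simulated $X$, $W$, and $\Sigma$; none of these values are from the question):

    import numpy as np

    rng = np.random.default_rng(3)
    N = 100
    X = np.column_stack([np.ones(N), rng.normal(size=N)])   # fixed regressors
    W2 = np.diag(rng.uniform(0.5, 2.0, size=N) ** 2)        # W^2 for diagonal W
    Sigma = np.diag(rng.uniform(0.5, 3.0, size=N))          # Var[u] = Sigma

    # beta_W - beta0 = A u with A = (X'W^2X)^{-1}X'W^2, so Var[beta_W] = A Sigma A'
    A = np.linalg.solve(X.T @ W2 @ X, X.T @ W2)
    var_beta_W = A @ Sigma @ A.T
    print(var_beta_W)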
3.(d) Using part (a), CA6, and the fact that $X$ and $W^2$ are constant, $\hat{\beta}_W$ is a linear combination of $u \sim N(0, \Sigma)$. Therefore, it follows from Lemma 2.1 in the Lecture Notes that $\hat{\beta}_W \sim N\big(\beta_0, Var[\hat{\beta}_W]\big)$.
3.(e) Using the LIE, we have:
$$E[\hat{\beta}_W] = \beta_0 + E\big[\, E[(X'W^2X)^{-1}X'W^2u \,|\, X] \,\big].$$
Since $W^2$ is constant, we have
$$E\big[\, E[(X'W^2X)^{-1}X'W^2u \,|\, X] \,\big] = E\big[(X'W^2X)^{-1}X'W^2E[u|X]\big] = 0, \text{ using SR4.}$$
Therefore, we have $E[\hat{\beta}_W] = \beta_0$ and so $\hat{\beta}_W$ is an unbiased estimator of $\beta_0$.
4.(a) This follows directly because the WLS estimator is OLS applied to the regression model
$$(w_i y_i) = (w_i x_i)'\beta_0 + (w_i u_i).$$
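A quick numerical check (Python; data and weights simulated for illustration) that OLS on the transformed variables reproduces $\hat{\beta}_W = (X'W^2X)^{-1}X'W^2y$:

    import numpy as np

    rng = np.random.default_rng(4)
    N = 200
    X = np.column_stack([np.ones(N), rng.normal(size=N)])
    y = X @ np.array([1.0, -0.5]) + rng.normal(size=N)
    w = rng.uniform(0.5, 2.0, size=N)           # illustrative weights w_i

    # OLS on the transformed data (w_i*y_i, w_i*x_i) ...
    b_transformed = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]

    # ... agrees with the WLS formula (X'W^2X)^{-1} X'W^2 y
    W2 = np.diag(w**2)
    b_wls = np.linalg.solve(X.T @ W2 @ X, X.T @ W2 @ y)
    print(np.allclose(b_transformed, b_wls))    # True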
4.(b) Since $\{(x_i', u_i)\}_{i=1}^N$ is an i.i.d. sequence and $\{w_i\}_{i=1}^N$ are constants, it follows that $(\check{x}_i', \check{u}_i)$ and $(\check{x}_j', \check{u}_j)$ are independent for all $i \neq j$. However, $(\check{x}_i', \check{u}_i)$ and $(\check{x}_j', \check{u}_j)$ are not identically distributed: to see this, note that $E[\check{x}_i\check{x}_i'] = w_i^2E[x_ix_i'] = w_i^2Q$, which changes with $i$ (assuming the weights are not all equal).
4.(c)(i) Using Question 3(a), we have:
$$\hat{\beta}_W = \beta_0 + (N^{-1}X'W^2X)^{-1}N^{-1}X'W^2u. \quad (5)$$
We are given in the question that
$$N^{-1}X'W^2X \to_p Q_w, \text{ a positive definite matrix,}$$
and so via Slutsky's Theorem,
$$(N^{-1}X'W^2X)^{-1} \to_p Q_w^{-1}. \quad (6)$$
We are also given that
$$N^{-1/2}X'W^2u \to_d N(0, \Omega_w),$$
from which it follows that
$$N^{-1}X'W^2u = N^{-1/2}\big(N^{-1/2}X'W^2u\big) \to_p 0. \quad (7)$$
Combining (5)-(7) and using Slutsky's Theorem, we obtain:
$$\hat{\beta}_W \to_p \beta_0 + Q_w^{-1} \times 0 = \beta_0,$$
and so $\hat{\beta}_W$ is a consistent estimator for $\beta_0$.
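To see the consistency claim at work, a small simulation (Python; the heteroscedastic design and weights are illustrative assumptions, not taken from the question) shows $\hat{\beta}_W$ settling on $\beta_0$ as $N$ grows:

    import numpy as np

    rng = np.random.default_rng(5)
    beta0 = np.array([1.0, 2.0])

    for N in (100, 10_000, 1_000_000):
        X = np.column_stack([np.ones(N), rng.normal(size=N)])
        u = rng.normal(size=N) * (1 + 0.5 * np.abs(X[:, 1]))  # heteroscedastic errors
        y = X @ beta0 + u
        w2 = (1 + 0.5 * np.abs(X[:, 1])) ** -2                # diagonal of W^2
        XtW2 = X.T * w2                                       # X'W^2 without forming W^2
        beta_W = np.linalg.solve(XtW2 @ X, XtW2 @ y)
        print(N, beta_W)                                      # approaches beta0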
4.(c)(ii) Using Question 3(a), we have:
$$N^{1/2}(\hat{\beta}_W - \beta_0) = (N^{-1}X'W^2X)^{-1}N^{-1/2}X'W^2u. \quad (8)$$
Using the limit theorems given in the question, it then follows from Lemma 3.5 in the Lecture Notes (in Section 3.1) that:
$$N^{1/2}(\hat{\beta}_W - \beta_0) \to_d N(0, Q_w^{-1}\Omega_wQ_w^{-1}).$$
4.(c)(iii) A suitable test statistic is:
$$S_N = N(R\hat{\beta}_W - r)'[R\hat{V}_WR']^{-1}(R\hat{\beta}_W - r),$$
where $\hat{V}_W = \hat{Q}_w^{-1}\hat{\Omega}_w\hat{Q}_w^{-1}$, $\hat{Q}_w = N^{-1}X'W^2X$, $\hat{\Omega}_w = N^{-1}X'\hat{M}_wX$, $\hat{M}_w = diag(\hat{u}_1^2w_1^4, \hat{u}_2^2w_2^4, \ldots, \hat{u}_N^2w_N^4)$, and $\hat{u}_i = y_i - x_i'\hat{\beta}_W$. It can be shown under the conditions here that $\hat{V}_W \to_p V_W = Q_w^{-1}\Omega_wQ_w^{-1}$, and so it follows from Lemma 3.6 in the Lecture Notes (in Section 3.1) that under $H_0$, $S_N \to_d \chi^2_{n_r}$.
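A sketch of how $S_N$ would be computed in practice (Python; the data-generating process, weights, $R$, and $r$ are all illustrative, and under $H_0$ the printed value is a single draw from an approximate $\chi^2_1$ distribution):

    import numpy as np

    rng = np.random.default_rng(6)
    N = 5_000
    beta0 = np.array([1.0, 2.0])
    X = np.column_stack([np.ones(N), rng.normal(size=N)])
    u = rng.normal(size=N) * (1 + np.abs(X[:, 1]))
    y = X @ beta0 + u
    w = 1.0 / (1 + np.abs(X[:, 1]))        # illustrative weights w_i

    w2 = w**2
    XtW2 = X.T * w2
    beta_W = np.linalg.solve(XtW2 @ X, XtW2 @ y)

    # Q-hat, Omega-hat with M-hat = diag(u-hat_i^2 * w_i^4)
    u_hat = y - X @ beta_W
    Q_hat = (XtW2 @ X) / N
    Omega_hat = (X.T * (u_hat**2 * w**4)) @ X / N
    Q_inv = np.linalg.inv(Q_hat)
    V_hat = Q_inv @ Omega_hat @ Q_inv

    # Wald statistic for H0: R beta0 = r, here R = [0 1], r = 2 (true), n_r = 1
    R = np.array([[0.0, 1.0]])
    r = np.array([2.0])
    d = R @ beta_W - r
    S_N = float(N * d @ np.linalg.solve(R @ V_hat @ R.T, d))
    print(S_N)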