Alastair Hall
ECON61001 Econometric Methods, Semester 1, 2020-21
Problem Set for Tutorial 6
In some cases, our data are group averages. In this first question, you explore the use of OLS and GLS to estimate a simple linear regression model when the data take this form. As will emerge, this is a setting in which heteroscedasticity naturally arises even if the errors in the individual-level data are homoscedastic. (A small simulation sketch of the grouped-data setup follows the question.)
1. Suppose that
yi = β0,1 + β0,2 hi + ui = x′iβ0 + ui,  i = 1, 2, …, N,  (1)
where hi is a scalar, xi = (1, hi)′, and ui satisfies Assumptions CS1-CS5 in Lecture 4 (or equivalently in Section 3.2 of the Lecture Notes).
Suppose that the observations are collected into G groups as follows. Group 1 consists of observations i = 1, 2, …, N1, group 2 consists of observations i = N1 + 1, N1 + 2, …, N2, and so on, with group G consisting of observations i = NG−1 + 1, NG−1 + 2, …, N. This structure can be presented in generic notation as follows: group g consists of observations i = Ng−1 + 1, Ng−1 + 2, …, Ng for g = 1, 2, …, G, where we set N0 = 0 and NG = N.
Now consider the case where the researcher only observes the group-average data,

ȳg = ng⁻¹ Σi yi and h̄g = ng⁻¹ Σi hi,

where each sum runs over i = Ng−1 + 1, …, Ng, for ng = Ng − Ng−1 and g = 1, 2, …, G, and estimates the regression model

ȳg = β0,1 + β0,2 h̄g + vg,  g = 1, 2, …, G,  (2)

where vg denotes the error term.
(a) Derive E[vg|h̄g] and Var[vg|h̄g].
Hint: (i) if (1) and (2) hold then vg is a function of {ui} – what function? (ii) use the version of the LIE-II in Lemma 3.9 of the Lecture Notes with “G” = h̄g and “H” = {hi; i = Ng−1 + 1, Ng−1 + 2, …, Ng}, and note that the expectation conditional on both “G” and “H” is the same as the expectation conditional on “H” alone in this case.
(b) Derive E[v|X̄] and Var[v|X̄], where v = (v1, v2, …, vG)′ and X̄ is the G × 2 matrix with gth row (1, h̄g).
(c) What are the properties of the OLS estimators of β0 = (β0,1, β0,2)′ based on (2)?
(d) What is the GLS estimator of β0 in (2)? Is it a feasible estimator?
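The sketch below is a small numerical illustration of this grouped-data setup, not part of the required derivations. It simulates individual-level data from (1) under assumed values of β0, the error variance and the group sizes (all chosen arbitrarily), forms the group averages, and runs OLS on the group-average regression (2).

```python
# Simulation sketch of the grouped-data setting (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

beta0 = np.array([1.0, 2.0])                   # (beta_{0,1}, beta_{0,2}), assumed values
group_sizes = np.array([5, 20, 50, 100, 200])  # n_g, deliberately unequal
N = group_sizes.sum()
G = len(group_sizes)

h = rng.normal(size=N)                         # individual-level regressor h_i
u = rng.normal(size=N)                         # homoscedastic individual-level errors u_i
y = beta0[0] + beta0[1] * h + u                # individual-level model (1)

# Group averages (ybar_g, hbar_g): all the researcher observes.
groups = np.repeat(np.arange(G), group_sizes)
ybar = np.array([y[groups == g].mean() for g in range(G)])
hbar = np.array([h[groups == g].mean() for g in range(G)])

# OLS on the group-average regression (2).
Xbar = np.column_stack([np.ones(G), hbar])
beta_ols = np.linalg.lstsq(Xbar, ybar, rcond=None)[0]
print("OLS on group averages:", beta_ols)
```

Varying group_sizes and re-running gives a feel for how the precision of the group averages differs across groups, which is the source of the heteroscedasticity mentioned above.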
In this question, you explore the connection between linear models with parameter variation and linear regression models with heteroscedasticity.
2. Consider the model
yi = x′iβi (3)
in which βi|xi ∼ N(β0, σ0²IK). Rewrite (3) in the standard linear regression model framework: yi = x′iβ0 + ui. What are the mean and variance of the error term ui conditional on xi?
Hint: Substitute for βi in (3).
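The following is a minimal simulation sketch of the random-coefficient model (3), with β0, σ0 and K set to arbitrary illustrative values; it simply generates data from the model and records the implied error term from the substitution suggested in the hint.

```python
# Sketch of the random-coefficient model y_i = x_i' beta_i with beta_i | x_i ~ N(beta0, sigma0^2 I_K).
import numpy as np

rng = np.random.default_rng(1)
N, K = 500, 3
beta0 = np.array([1.0, -0.5, 2.0])   # assumed values
sigma0 = 0.8                         # assumed value

X = rng.normal(size=(N, K))
beta_i = beta0 + sigma0 * rng.normal(size=(N, K))  # one coefficient vector per observation
y = np.sum(X * beta_i, axis=1)                     # y_i = x_i' beta_i, model (3)

# Rewritten as y_i = x_i' beta0 + u_i, the implied error is u_i = x_i'(beta_i - beta0).
u = np.sum(X * (beta_i - beta0), axis=1)
print("OLS estimate of beta0:", np.linalg.lstsq(X, y, rcond=None)[0])
```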
In this question, you explore the finite-sample properties of the WLS estimator. (A short numerical sketch of the estimator's construction follows the question.)
3. Consider the linear regression model
y = Xβ0 + u (4)
in which Assumptions CA1-CA4, CA5-H and CA6 hold. Let β̂W = (X′W²X)⁻¹X′W²y be the Weighted Least Squares estimator of β0 based on (4), with W² = diag(w1², w2², …, wN²) for positive constants {wi; i = 1, 2, …, N}.
(a) Show that β̂W = β0 + (X′W²X)⁻¹X′W²u.
(b) Show that E[β̂W] = β0.
(c) Show that Var[β̂W] = (X′W²X)⁻¹X′W²ΣW²X(X′W²X)⁻¹, where Σ = diag(σ1², σ2², …, σN²).
(d) Show that β̂W ∼ N(β0, Var[β̂W]).
(e) Suppose now that the regressors are stochastic and conditions SR1-SR4 hold. Assuming E[β̂W] exists, is β̂W an unbiased estimator for β0? Explain briefly.
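As a numerical companion to this question, the sketch below computes β̂W = (X′W²X)⁻¹X′W²y on simulated data. The design, the form of heteroscedasticity and the choice of weights wi are all assumptions made purely for illustration.

```python
# Sketch of the WLS estimator betaW_hat = (X' W^2 X)^{-1} X' W^2 y (illustrative values only).
import numpy as np

rng = np.random.default_rng(2)
N = 200
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta0 = np.array([1.0, 2.0])                   # assumed true coefficients
sigma = np.exp(0.5 * np.abs(X[:, 1]))          # heteroscedastic error std devs (assumed form)
y = X @ beta0 + sigma * rng.normal(size=N)

w = 1.0 / sigma                                # one possible choice of positive constants w_i
W2 = w**2                                      # diagonal elements of W^2

XtW2X = X.T @ (W2[:, None] * X)                # X' W^2 X without forming the N x N matrix W^2
XtW2y = X.T @ (W2 * y)                         # X' W^2 y
beta_W = np.linalg.solve(XtW2X, XtW2y)
print("WLS estimate:", beta_W)
```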
In this question, you consider the large-sample properties of the WLS estimator and an associated test statistic. (A numerical sketch related to part (c) follows the question.)
4. Consider the model
yi = x′iβ0 + ui
where Assumptions CS1-CS4 and CS5-H hold. Let β̂W be the WLS estimator defined in Question 3.
(a) Show that

β̂W = β0 + (Σi x̌i x̌i′)⁻¹ Σi x̌i ǔi,

where both sums run over i = 1, 2, …, N, x̌i = wi xi and ǔi = wi ui.
(b) Show that {(x̌i′, ǔi); i = 1, 2, …, N} forms an independently but not identically distributed sequence.
(c) Assuming that

N⁻¹X′W²X →p Qw, a positive definite matrix of finite constants, and

N^(−1/2)X′W²u →d N(0, Ωw),

where Ωw = plimN→∞ N⁻¹X′W²ΣW²X, Σ = diag(σ1², σ2², …, σN²) and σi² = h(xi):

(i) Show that β̂W is consistent for β0.
(ii) Show that N^(1/2)(β̂W − β0) →d N(0, Qw⁻¹ΩwQw⁻¹).
(iii) Suppose it is desired to test H0: Rβ0 = r versus HA: Rβ0 ≠ r, where R is an nr × k matrix of constants with rank{R} = nr and r is an nr × 1 vector of constants. Propose a test statistic based on the WLS estimator and state its distribution under the null hypothesis.
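For orientation only, the sketch below shows one possible sandwich-style estimate of Qw⁻¹ΩwQw⁻¹ (with σi² replaced by squared WLS residuals) and a Wald-type quantity for H0: Rβ0 = r. It is a sketch under assumed data-generating values, weights and hypothesis, not necessarily the statistic you are expected to propose in part (iii).

```python
# Sketch of a sandwich variance estimate and a Wald-type statistic for the WLS estimator.
# All data-generating choices below are assumptions made purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 500
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta0 = np.array([1.0, 2.0])
sigma = np.exp(0.5 * np.abs(X[:, 1]))          # sigma_i^2 = h(x_i), assumed form
y = X @ beta0 + sigma * rng.normal(size=N)

w2 = 1.0 / sigma**2                            # w_i^2; any positive constants would do
XtW2X = X.T @ (w2[:, None] * X)
beta_W = np.linalg.solve(XtW2X, X.T @ (w2 * y))
uhat = y - X @ beta_W                          # WLS residuals

# Sample analogues of Q_w and Omega_w (sigma_i^2 replaced by squared residuals).
Qw_hat = XtW2X / N
Omega_hat = (X * (w2**2 * uhat**2)[:, None]).T @ X / N
V_hat = np.linalg.solve(Qw_hat, np.linalg.solve(Qw_hat, Omega_hat).T)   # Qw^{-1} Omega Qw^{-1}

# Wald-type quantity for an assumed hypothesis R beta0 = r (here, beta_{0,2} = 2).
R = np.array([[0.0, 1.0]])
r = np.array([2.0])
diff = R @ beta_W - r
W_stat = N * diff @ np.linalg.solve(R @ V_hat @ R.T, diff)
p_value = stats.chi2.sf(W_stat, df=R.shape[0])
print("Wald-type statistic:", W_stat, "p-value:", p_value)
```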