
Alastair Hall ECON61001: Autumn 2020 Econometric Methods
Problem Set for Tutorial 2
The first question considers the consequences for OLS of omitting relevant regressors from the model.
1. Consider the linear regression model
y = Xβ0 + u
and assume it satisfies Assumptions CA1-CA6 discussed in Lecture 1. Let X = [X1, X2] and β0 = [β0,1′, β0,2′]′, where Xl is T × kl and β0,l is kl × 1 for l = 1, 2. Suppose that a researcher estimates the following model by OLS,
y = X1γ∗ + error.
Let γˆT be the OLS estimator of γ∗.
(a) Show that E[γˆT] = β0,1 + (X1′X1)−1X1′X2β0,2. Under what condition is γˆT an unbiased
estimator of β0,1? Interpret this condition in terms of the regression model.
(b) Now consider the case where k2 = 1 with X2 = x2. Use part (a) to show that
E[γˆT] = β0,1 + δˆT β0,2
where δˆT is the OLS estimator of the regression coefficients in x2 = X1δ + “error”.
(Here, to match our assumptions about X we treat δˆT as a constant.)
(c) Consider the case where k1 = 2 with X1 = [ιT, x1] and the variables are defined as follows: y is log wage, ιT is a T × 1 vector of ones, x1 is the number of years of education, and x2 denotes innate (intellectual) ability. Use the result in part (b) to suggest the likely direction of bias, if any, in the estimator of the return to education.
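As an aside, the formula in parts (a)-(b) can be illustrated by simulation. The sketch below is not part of the problem set; all names and parameter values (a true return to education of 0.10, an ability effect of 0.05, the education-ability link) are invented purely for illustration:

```python
import numpy as np

# Hypothetical illustration of omitted-variable bias (Question 1).
# All parameter values below are invented for the sketch.
rng = np.random.default_rng(0)
T = 5000
ability = rng.normal(size=T)                     # x2: omitted regressor
educ = 12 + 0.8 * ability + rng.normal(size=T)   # x1: correlated with x2
X1 = np.column_stack([np.ones(T), educ])         # included regressors [iota, x1]
beta_01, beta_02 = np.array([1.0, 0.10]), 0.05   # true coefficients

# delta_hat: OLS coefficients from regressing x2 on X1, as in part (b);
# X1 and x2 are held fixed across replications, matching the assumptions.
delta_hat = np.linalg.lstsq(X1, ability, rcond=None)[0]

# Average the short-regression OLS estimator over many error draws
gammas = []
for _ in range(200):
    u = rng.normal(size=T)
    y = X1 @ beta_01 + beta_02 * ability + u
    gammas.append(np.linalg.lstsq(X1, y, rcond=None)[0])
gamma_bar = np.mean(gammas, axis=0)

# The Monte Carlo mean should be close to beta_01 + delta_hat * beta_02
print(gamma_bar, beta_01 + delta_hat * beta_02)
```

With these invented values, ability raises wages and is positively correlated with education, so δˆT β0,2 > 0 and the simulated mean of the education coefficient sits above its true value.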
Question 2 establishes a result used in the proof that σˆT2 is an unbiased estimator of σ02.
2. Show that, for two conformable matrices A and B, trace(AB) = trace(BA).
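A quick numerical sanity check of the trace identity (the dimensions below are arbitrary choices for illustration):

```python
import numpy as np

# Check trace(AB) = trace(BA) for conformable matrices:
# A is 3x5 and B is 5x3, so AB is 3x3 while BA is 5x5,
# yet the two traces coincide.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))
B = rng.normal(size=(5, 3))

t1 = np.trace(A @ B)
t2 = np.trace(B @ A)
print(t1, t2)
```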
Question 3 establishes a result quoted in the lectures as part of our discussion of prediction intervals.
3. Consider the linear model
yt = x′tβ0 + ut,

for t = 1, 2, …, T + 1. Assume CA1, CA2, CA4-CA6 are satisfied. Let X be the T × k matrix whose tth row is x′t, and assume rank(X) = k. Define βˆT to be the OLS estimator of β0 based on the sample {yt, xt; t = 1, 2, …, T}. Consider the prediction error
epT+1 = uT+1 − x′T+1(βˆT − β0), discussed in Lecture 2. Show that
epT+1 ∼ N(0, σ02(1 + x′T+1(X′X)−1xT+1)).
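This distributional claim can also be checked by simulation; the sketch below uses arbitrary illustrative values for T, k, σ0 and β0, none of which come from the problem set:

```python
import numpy as np

# Monte Carlo sketch of the prediction-error result in Question 3.
# All values (T, k, sigma0, beta0, x_next) are illustrative assumptions.
rng = np.random.default_rng(2)
T, k, sigma0 = 50, 3, 1.5
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])  # fixed regressors
x_next = np.array([1.0, 0.5, -0.3])                             # x_{T+1}
beta0 = np.array([2.0, 1.0, -1.0])

errors = []
for _ in range(20000):
    u = sigma0 * rng.normal(size=T + 1)
    y = X @ beta0 + u[:T]
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    # e^p_{T+1} = u_{T+1} - x'_{T+1}(beta_hat - beta0)
    errors.append(u[T] - x_next @ (beta_hat - beta0))
errors = np.array(errors)

# Theoretical variance: sigma0^2 * (1 + x'_{T+1} (X'X)^{-1} x_{T+1})
var_theory = sigma0**2 * (1 + x_next @ np.linalg.inv(X.T @ X) @ x_next)
print(errors.mean(), errors.var(), var_theory)
```

The simulated prediction errors should have mean near zero and variance near the theoretical value.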
In question 1, you establish that the omission of relevant regressors (most likely) leads to bias. In
this question you consider the consequences of the inclusion of irrelevant regressors.
4. Consider the case where the true regression model is
y = X1β0,1 + u (1)
where y is a (T × 1) vector of observations on the dependent variable, X1 is a (T × k1) matrix of observed explanatory variables that are fixed in repeated samples, rank(X1) = k1, u is a (T × 1) vector containing the unobservable error term, E[u] = 0 and Var[u] = σ02IT, where σ02 > 0. Let βˆ1 be the OLS estimator of β0,1 based on (1). Now consider β̃1, the OLS estimator of β1, based on estimation of the model
y = X1β1 + X2β2 + u = Xβ + u (2)
where X2 is a (T × k2) matrix of observed explanatory variables that are fixed in repeated samples and rank(X) = k, where X = (X1, X2) and k = k1 + k2. Thus, β̃1 is defined via β̃ = [β̃1′, β̃2′]′ = (X′X)−1X′y.
(a) Using the partitioned matrix inversion formula, show that Var[β̃1] = σ02(X1′M2X1)−1
where M2 = IT − X2(X2′X2)−1X2′.
(b) Show that βˆ1 is at least as efficient as β̃1; that is, Var[β̃1] − Var[βˆ1] is a positive semi-definite (psd) matrix. Hint: for two conformable nonsingular matrices A and B, if B−1 − A−1 is psd then A − B is psd.
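The ranking in part (b) can be verified numerically for any particular design; the matrices below are arbitrary illustrative draws, with σ02 normalised to 1:

```python
import numpy as np

# Numerical check that Var[beta_tilde_1] - Var[beta_hat_1] is psd.
# Design matrices are illustrative random draws; sigma0^2 = 1.
rng = np.random.default_rng(3)
T, k1, k2 = 40, 2, 3
X1 = np.column_stack([np.ones(T), rng.normal(size=(T, k1 - 1))])
X2 = rng.normal(size=(T, k2))

# Var[beta_hat_1] = (X1'X1)^{-1};  Var[beta_tilde_1] = (X1'M2X1)^{-1}
M2 = np.eye(T) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
V_hat = np.linalg.inv(X1.T @ X1)
V_tilde = np.linalg.inv(X1.T @ M2 @ X1)

# All eigenvalues of the (symmetric) difference should be >= 0
eigs = np.linalg.eigvalsh(V_tilde - V_hat)
print(eigs)
```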