
ECONOMETRICS I ECON GR5411
Lecture 19 – Instrumental Variables III: TSLS and GMM
by
Seyhan Erden, Columbia University, MA in Economics

TSLS estimator (case when $l > k$):
From the reduced form (1st stage regression)
$$X = Z\Gamma + U$$
the OLS estimator of $\Gamma$ is
$$\hat\Gamma = (Z'Z)^{-1}Z'X.$$
Then we can find the fitted values of $X$,
$$\hat X = Z\hat\Gamma = Z(Z'Z)^{-1}Z'X = P_Z X.$$
2nd stage regression (regress $y$ on $\hat X$):
$$\hat\beta_{2SLS} = (\hat X'\hat X)^{-1}\hat X'y = (X'P_Z X)^{-1}X'P_Z y$$
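As a concrete illustration, here is a minimal numerical sketch (Python/NumPy, simulated data; the data-generating values are arbitrary assumptions, not from the lecture) showing that running the two stages explicitly and applying the one-shot projection formula give the same coefficients:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # constant + 2 instruments (l = 3)
u = rng.normal(size=n)
x = Z[:, 1] + 0.5 * Z[:, 2] + u                               # endogenous regressor
eps = 0.8 * u + rng.normal(size=n)                            # error correlated with x
y = 1.0 + 2.0 * x + eps
X = np.column_stack([np.ones(n), x])                          # k = 2 < l

# 1st stage: regress X on Z and form fitted values Xhat = P_Z X
Gamma_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
Xhat = Z @ Gamma_hat

# 2nd stage: regress y on Xhat
b_two_stage = np.linalg.solve(Xhat.T @ Xhat, Xhat.T @ y)

# one-shot formula: (X' P_Z X)^{-1} X' P_Z y
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
b_formula = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)

print(b_two_stage, b_formula)   # identical up to rounding, and close to (1, 2)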

Now let's show that we can derive $\hat\beta_{IV}$ from $\hat\beta_{2SLS}$ when $l = k$.
Start with
$$\hat\beta_{2SLS} = (\hat X'\hat X)^{-1}\hat X'y = (X'P_Z X)^{-1}X'P_Z y
= \big(X'Z(Z'Z)^{-1}Z'X\big)^{-1}X'Z(Z'Z)^{-1}Z'y.$$
When $l = k$, $Z'X$ and $X'Z$ are square (and invertible under the rank condition), so we can write the above as
$$\hat\beta_{2SLS} = (Z'X)^{-1}(Z'Z)(X'Z)^{-1}\,X'Z(Z'Z)^{-1}Z'y = (Z'X)^{-1}Z'y = \hat\beta_{IV}.$$
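A quick numerical check of this algebra (a sketch with one instrument for one endogenous regressor plus a constant, so $l = k$; the simulated design is an illustrative assumption):

import numpy as np

rng = np.random.default_rng(1)
n = 1_000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + u                                   # endogenous regressor
eps = 0.8 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + eps

X = np.column_stack([np.ones(n), x])        # k = 2
Z = np.column_stack([np.ones(n), z])        # l = 2: exactly identified

PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
b_2sls = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # simple IV: (Z'X)^{-1} Z'y

print(np.allclose(b_2sls, b_iv))            # True: the two estimators coincide when l = k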

Endogeneity examples from Greene:
➢ Omitted variables (misspecification): $y$ is consumption of a good, $x_1$ is price and $x_2$ is income. If $x_2$ is omitted, the error term $\varepsilon + x_2\beta_2$ will be correlated with $x_1$, creating an omitted-variables bias.
➢ Errors in variables (measurement error): we observe $X = X^* + v$ instead of the true $X^*$. Proxy-variable example:
$$wage = \beta_1 + \beta_2\,exper + \beta_3\,ability + \varepsilon$$
Ability is unobserved; instead we observe a proxy,
$$performance = \alpha_1 + \alpha_2\,ability + \omega,$$
so we estimate
$$wage = \beta_1 + \beta_2\,exper + \beta_3\,performance + \upsilon.$$
The mismeasurement of a relevant variable causes inconsistency: attenuation bias in $\hat\beta_3$.
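For the classical errors-in-variables case, the direction of this bias can be worked out explicitly; a short sketch (assuming a single regressor, with measurement error $v$ independent of the true regressor $x^*$ and of $\varepsilon$):
$$x_i = x_i^* + v_i,\qquad y_i = \beta x_i^* + \varepsilon_i
\;\;\Longrightarrow\;\;
\operatorname{plim}\hat\beta_{OLS}
= \frac{\operatorname{cov}(x_i, y_i)}{\operatorname{var}(x_i)}
= \beta\,\frac{\sigma_{x^*}^2}{\sigma_{x^*}^2 + \sigma_v^2},$$
so $|\operatorname{plim}\hat\beta_{OLS}| < |\beta|$: the coefficient on the mismeasured regressor is biased toward zero (attenuation).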

Endogeneity examples:
➢ Unobserved heterogeneity (endogenous treatment effect): $y$ is earnings, $x_1$ is a set of observed covariates, and $x_2$ is a dummy that indicates attendance at an elite college. Unobserved heterogeneity such as ambition or intelligence in $\varepsilon$ could be correlated with $x_2$. This creates bias. Dale and Krueger examined this.
➢ Simultaneous equations: for example, a market model
$$Q_d = \alpha_0 + \alpha_1\,\text{Price} + \alpha_2\,\text{Income} + \varepsilon_d$$
$$Q_s = \beta_0 + \beta_1\,\text{Price} + \beta_2\,\text{Input Price} + \varepsilon_s$$
$$Q_d = Q_s$$

Endogeneity examples:
➢ Simultaneous equations (cont'): take the simplest case (similar to Greene 10.6)
$$q_i^d = \gamma_d\, p_i + \varepsilon_i^d, \qquad q_i^s = \gamma_s\, p_i + \varepsilon_i^s$$
With $q_i = q_i^d = q_i^s$, we can express $q_i$ and $p_i$ in terms of the 'exogenous' shocks:
$$p_i = \frac{\varepsilon_i^s - \varepsilon_i^d}{\gamma_d - \gamma_s}, \qquad
q_i = \frac{\gamma_d\,\varepsilon_i^s - \gamma_s\,\varepsilon_i^d}{\gamma_d - \gamma_s}.$$

Endogeneity examples:
➢ Simultaneous equations (cont'):
• Price and quantity are both functions of $\varepsilon_i^d$ and $\varepsilon_i^s$.
• When we regress the observed (equilibrium) $q$ on $p$, the error of the regression will be correlated with $p$.
• $p$ is an endogenous regressor that is determined simultaneously with $q$: 'simultaneity bias'.
• Intuitively, when $\varepsilon_i^d$ changes, it changes both $q$ and $p$.
• OLS is inconsistent when the regression error and the regressor are not orthogonal.
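To make the correlation explicit, a one-line calculation (assuming the demand and supply shocks are mutually uncorrelated with variances $\sigma_d^2$ and $\sigma_s^2$):
$$\operatorname{cov}(p_i,\varepsilon_i^d)
= \operatorname{cov}\!\left(\frac{\varepsilon_i^s-\varepsilon_i^d}{\gamma_d-\gamma_s},\ \varepsilon_i^d\right)
= \frac{-\sigma_d^2}{\gamma_d-\gamma_s} \neq 0,$$
so the OLS regression of $q$ on $p$ converges to a variance-weighted mix of $\gamma_d$ and $\gamma_s$, and in general to neither slope.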

Endogeneity examples:
➢ Simultaneous equations (cont'):
• A bigger problem: we can't distinguish the supply curve from the demand curve!
• How do we estimate $\gamma_d$ and $\gamma_s$?
• Suppose there is an observed variable $x_i^s$ that shifts the supply curve but not the demand curve:
$$q_i = \gamma_d\, p_i + \varepsilon_i^d$$
$$q_i = \gamma_s\, p_i + x_i^s\beta_s + \varepsilon_i^s$$

Endogeneity examples:
➢ Simultaneous equations (cont'): at the equilibrium,
$$\begin{pmatrix} q_i \\ p_i \end{pmatrix}
= \begin{pmatrix} 1 & -\gamma_d \\ 1 & -\gamma_s \end{pmatrix}^{-1}
\begin{pmatrix} \varepsilon_i^d \\ x_i^s\beta_s + \varepsilon_i^s \end{pmatrix}
= \begin{pmatrix}
\dfrac{\gamma_d\beta_s\, x_i^s}{\gamma_d-\gamma_s} + \dfrac{\gamma_d\varepsilon_i^s - \gamma_s\varepsilon_i^d}{\gamma_d-\gamma_s} \\[2ex]
\dfrac{\beta_s\, x_i^s}{\gamma_d-\gamma_s} + \dfrac{\varepsilon_i^s - \varepsilon_i^d}{\gamma_d-\gamma_s}
\end{pmatrix}$$

Endogeneity examples:
➢ Reduced form: expresses the endogenous variables in terms of the exogenous and predetermined variables,
$$q_i = \theta_q\, x_i^s + \omega_{iq}, \qquad p_i = \theta_p\, x_i^s + \omega_{ip}$$
where
$$\theta_q = \frac{\gamma_d\beta_s}{\gamma_d-\gamma_s}, \qquad
\theta_p = \frac{\beta_s}{\gamma_d-\gamma_s},$$
and
$$\omega_{iq} = \frac{\gamma_d\varepsilon_i^s - \gamma_s\varepsilon_i^d}{\gamma_d-\gamma_s}, \qquad
\omega_{ip} = \frac{\varepsilon_i^s - \varepsilon_i^d}{\gamma_d-\gamma_s}.$$
$\omega_{iq}$ and $\omega_{ip}$ are correlated through $\varepsilon_i^d$ and $\varepsilon_i^s$.

Endogeneity examples:
➢ A regression of $q$ on $p$ is still invalid. However,
$$\operatorname{cov}(x_i^s, q_i) = \theta_q\operatorname{var}(x_i^s)
\quad\text{and}\quad
\operatorname{cov}(x_i^s, p_i) = \theta_p\operatorname{var}(x_i^s),$$
so
$$\frac{\theta_q}{\theta_p} = \gamma_d.$$
Therefore, we can use the variation in $x_i^s$ to estimate $\gamma_d$.
➢ The endogenous variable in the regression of $q$ on $p$ is said to be "instrumented" by $x_i^s$.
➢ Essentially, $x_i^s$ shifts the supply curve to trace out the demand curve.
➢ We now formalize these ideas behind estimation by instrumental variables.
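A minimal simulation sketch of this identification argument (all slope and shock values below are illustrative assumptions, not from the lecture):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gamma_d, gamma_s, beta_s = -1.0, 0.5, 1.0        # assumed demand slope, supply slope, supply shifter

x_s = rng.normal(size=n)                          # observed supply shifter (the instrument)
eps_d = rng.normal(size=n)                        # demand shock
eps_s = rng.normal(size=n)                        # supply shock

# equilibrium price and quantity from the reduced form
p = (beta_s * x_s + eps_s - eps_d) / (gamma_d - gamma_s)
q = (gamma_d * beta_s * x_s + gamma_d * eps_s - gamma_s * eps_d) / (gamma_d - gamma_s)

C_pq = np.cov(p, q)
ols = C_pq[0, 1] / C_pq[0, 0]                             # OLS slope of q on p: simultaneity bias
iv = np.cov(x_s, q)[0, 1] / np.cov(x_s, p)[0, 1]          # theta_q / theta_p: recovers gamma_d
print(f"OLS slope: {ols:.3f},  IV slope: {iv:.3f},  true gamma_d: {gamma_d}")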

Two Stages with Partitioned X:
Recall that in the first stage we regress $X$ on $Z$; the OLS estimator is
$$\hat\Gamma = (Z'Z)^{-1}Z'X$$
and the predicted (fitted) values are
$$\hat X = Z\hat\Gamma = Z(Z'Z)^{-1}Z'X = P_Z X.$$
In the second stage we regress $y$ on $\hat X$; the OLS estimator from this regression is
$$\hat\beta_{2SLS} = (\hat X'\hat X)^{-1}\hat X'y.$$
Now let $X = (X_1, X_2)$ and $Z = (X_1, Z_2)$, so $Z_2$ contains the instruments for $X_2$, while $X_1$ contains the variables that are exogenous to begin with.

Partitioning X matrix:
Notice that
$$\hat X_1 = P_Z X_1 = X_1$$
since $X_1$ lies in the span of $Z$ (for a proof, see the next slide, taken from Lecture 6, slide 16). Then,
$$\hat X = (\hat X_1, \hat X_2) = (X_1, \hat X_2).$$
Thus, in the 2nd stage we regress $y$ on $X_1$ and $\hat X_2$, so only the endogenous variables $X_2$ are replaced by their fitted values $\hat X_2$ (cont' on the slide after the next one).

Recall from Lecture 6: $P X_1 = X_1$.
Projection matrix:
$$P = X(X'X)^{-1}X', \qquad PX = X(X'X)^{-1}X'X = X.$$
Observe that, for any $Z = X\Gamma$ with any matrix $\Gamma$ ($Z$ lies in the range space of $X$),
$$PZ = PX\Gamma = X(X'X)^{-1}X'X\Gamma = X\Gamma = Z.$$
As an important example, if we partition $X$ into two matrices $X_1$ and $X_2$, so that $X = (X_1, X_2)$, then
$$PX_1 = X_1.$$
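A quick numerical check of this projection fact (a sketch with arbitrary simulated matrices):

import numpy as np

rng = np.random.default_rng(1)
n, k1, k2 = 50, 2, 3
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])

# projection matrix onto the column space of X
P = X @ np.linalg.solve(X.T @ X, X.T)

# X1 lies in the span of X, so projecting it changes nothing
print(np.allclose(P @ X1, X1))   # True (up to floating-point error)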

Partitioning X matrix:
Fitted values of $X_2$ from the 1st stage:
$$\hat X_2 = X_1\hat\Gamma_{12} + Z_2\hat\Gamma_{22}$$
2nd stage:
$$y = X_1\hat\beta_1 + \hat X_2\hat\beta_2 + \hat\varepsilon$$
This will help us introduce another representation of 2SLS. Set
$$P_1 = X_1(X_1'X_1)^{-1}X_1'.$$
Then apply the FWL theorem (next slide).

Another representation of 2SLS:
Let $M_1$ be
$$M_1 = I - P_1 = I - X_1(X_1'X_1)^{-1}X_1'.$$
Applying the FWL theorem:
$$\hat\beta_2 = (\hat X_2'M_1\hat X_2)^{-1}\hat X_2'M_1 y
= \big(X_2'P_Z(I-P_1)P_Z X_2\big)^{-1}X_2'P_Z(I-P_1)y
= \big(X_2'(P_Z-P_1)X_2\big)^{-1}X_2'(P_Z-P_1)y$$
since $P_Z P_1 = P_1$.
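A small sketch verifying this FWL representation numerically (simulated data; here $X_1$ is just a constant plus one exogenous regressor, an illustrative assumption):

import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = np.column_stack([np.ones(n), rng.normal(size=n)])        # exogenous block X1 (incl. constant)
z2 = rng.normal(size=(n, 2))                                   # instruments for the endogenous block
u = rng.normal(size=n)
x2 = (0.5 * z2[:, 0] + 0.5 * z2[:, 1] + u).reshape(-1, 1)      # endogenous regressor X2
eps = 0.8 * u + rng.normal(size=n)                             # error correlated with X2
y = x1 @ np.array([1.0, 1.0]) + x2[:, 0] * 2.0 + eps

X = np.hstack([x1, x2])
Z = np.hstack([x1, z2])

def proj(A):
    return A @ np.linalg.solve(A.T @ A, A.T)

PZ, P1 = proj(Z), proj(x1)

# full 2SLS: (X' PZ X)^{-1} X' PZ y
b_2sls = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)

# FWL representation for the coefficient on X2: (X2'(PZ - P1)X2)^{-1} X2'(PZ - P1)y
b2_fwl = np.linalg.solve(x2.T @ (PZ - P1) @ x2, x2.T @ (PZ - P1) @ y)

print(b_2sls[-1], b2_fwl.item())   # the two agree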

Recall: Validity of Instrumental Variables
Assume that there is a set of additional variables, $Z$, an $n\times l$ matrix, with two essential properties:
1. Relevance: the $z$'s (IVs) are correlated with the endogenous regressors $X$.
2. Exogeneity: they are uncorrelated with the disturbance.
We have
$$\operatorname{plim}\tfrac{1}{n}Z'Z = Q_{ZZ},\ \text{a finite p.d. matrix (well-behaved data).}$$

Recall Assumptions:
➢ $(y_i, x_i, z_i)$ is an i.i.d. sequence of random variables.
➢ $E[y_i^2] < \infty$
➢ $E[\lVert x_i\rVert^2] < \infty$
➢ $E[\lVert z_i\rVert^2] < \infty$
➢ $E[z_i z_i'] = Q_{ZZ}$ is positive definite.
➢ $E[z_i x_i'] = Q_{ZX}$ has full rank.
➢ $E[z_i\varepsilon_i] = 0$
➢ $E[x_{ij}^2 x_{il}^2] = Q_{XX,jl} < \infty$, a finite constant.
➢ $E[z_{ij}^2 z_{il}^2] = Q_{ZZ,jl} < \infty$, a finite constant.
➢ $E[z_{ij}^2 x_{il}^2] = Q_{ZX,jl} < \infty$, a finite constant.
➢ $E[\varepsilon_i \mid z_i] = 0$

We have
$$\operatorname{plim}\tfrac{1}{n}Z'Z = Q_{ZZ},\ \text{a finite positive definite matrix (well-behaved data)}$$
(same as $\tfrac{1}{n}\sum z_i z_i' \xrightarrow{p} E[z_i z_i'] = Q_{ZZ}$),
$$\operatorname{plim}\tfrac{1}{n}Z'X = Q_{ZX},\ \text{a finite } l\times k \text{ matrix with rank } k \text{ (relevance)}$$
(same as $\tfrac{1}{n}\sum z_i x_i' \xrightarrow{p} E[z_i x_i'] = Q_{ZX}$),
$$\operatorname{plim}\tfrac{1}{n}Z'\varepsilon = 0 \text{ (exogeneity)}$$
(same as $E[z_i\varepsilon_i] = 0$).

Proof of Consistency of the 2SLS estimator:
$$\hat\beta_{2SLS} = (\hat X'\hat X)^{-1}\hat X'y = (X'P_Z X)^{-1}X'P_Z y
= \beta + (X'P_Z X)^{-1}X'P_Z\varepsilon
= \beta + \big(X'Z(Z'Z)^{-1}Z'X\big)^{-1}X'Z(Z'Z)^{-1}Z'\varepsilon$$
Applying the WLLN and Slutsky's theorem,
$$\hat\beta_{2SLS} - \beta
= \Big(\tfrac{1}{n}X'Z\big(\tfrac{1}{n}Z'Z\big)^{-1}\tfrac{1}{n}Z'X\Big)^{-1}\tfrac{1}{n}X'Z\big(\tfrac{1}{n}Z'Z\big)^{-1}\tfrac{1}{n}Z'\varepsilon
\xrightarrow{p} \big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}Q_{XZ}Q_{ZZ}^{-1}E[z_i\varepsilon_i] = 0,$$
so $\hat\beta_{2SLS} \xrightarrow{p} \beta$.

Asymptotic Distribution of the 2SLS estimator:
Additional assumptions: $E[y_i^4] < \infty$ and $E[\lVert z_i\rVert^4] < \infty$, and let $\Omega = E[z_i z_i'\varepsilon_i^2]$ be positive definite. We will prove that
$$\sqrt{n}\big(\hat\beta_{2SLS} - \beta\big) \xrightarrow{d} N(0, V_\beta)$$
where
$$V_\beta = \big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}\,Q_{XZ}Q_{ZZ}^{-1}\,\Omega\,Q_{ZZ}^{-1}Q_{ZX}\,\big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}.$$
Note that under homoskedasticity this will simplify further.

Proof:
$$\sqrt{n}\big(\hat\beta_{2SLS} - \beta\big)
= \Big(\tfrac{1}{n}X'Z\big(\tfrac{1}{n}Z'Z\big)^{-1}\tfrac{1}{n}Z'X\Big)^{-1}\tfrac{1}{n}X'Z\big(\tfrac{1}{n}Z'Z\big)^{-1}\tfrac{1}{\sqrt n}Z'\varepsilon$$
Apply the WLLN and Slutsky's theorem to the moment matrices involving $X$ and $Z$, namely $\tfrac{1}{n}X'Z$, $\tfrac{1}{n}Z'Z$, and $\tfrac{1}{n}Z'X$ (same as in the proof of consistency). By the CLT for i.i.d. observations,
$$\tfrac{1}{\sqrt n}Z'\varepsilon = \tfrac{1}{\sqrt n}\sum_{i=1}^{n} z_i\varepsilon_i \xrightarrow{d} N(0,\Omega)$$
because the vector $z_i\varepsilon_i$ is i.i.d., mean zero, and has a finite 2nd moment. Hence
$$\sqrt{n}\big(\hat\beta_{2SLS} - \beta\big) \xrightarrow{d} \big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}Q_{XZ}Q_{ZZ}^{-1}\,N(0,\Omega) = N(0, V_\beta).$$
For completeness we must demonstrate that $z_i\varepsilon_i$ has a finite 2nd moment (using Minkowski's inequality); see Hansen, August 2019, p. 409.

The Instrumental Variables Estimator:
For a model with a constant term, a single $x$, and a single instrument $z$, we have
$$\hat\beta_{IV} = \frac{\sum(z_i-\bar z)(y_i-\bar y)}{\sum(z_i-\bar z)(x_i-\bar x)}
= \frac{\widehat{cov}(z,y)}{\widehat{cov}(x,z)} \xrightarrow{p} \beta.$$

For the $l = k$ case, consistency of $\hat\beta_{IV}$ (in matrix form):
$$\hat\beta_{IV} = (Z'X)^{-1}Z'y = (Z'X)^{-1}Z'(X\beta+\varepsilon)
= \beta + \Big(\tfrac{1}{n}Z'X\Big)^{-1}\tfrac{1}{n}Z'\varepsilon$$
Note that, using the WLLN and Slutsky's theorem,
$$\tfrac{1}{n}Z'X = \tfrac{1}{n}\sum z_i x_i' \xrightarrow{p} E[z_i x_i'] = Q_{ZX},
\qquad
\tfrac{1}{n}Z'\varepsilon = \tfrac{1}{n}\sum z_i\varepsilon_i \xrightarrow{p} E[z_i\varepsilon_i] = 0,$$
so
$$\hat\beta_{IV} \xrightarrow{p} \beta + Q_{ZX}^{-1}\cdot 0 = \beta.$$

For the $l = k$ case, asymptotic distribution of $\hat\beta_{IV}$:
Since $\hat\beta_{IV} = (Z'X)^{-1}Z'y$, we can write
$$\hat\beta_{IV} = (Z'X)^{-1}Z'(X\beta+\varepsilon) = (Z'X)^{-1}Z'X\beta + (Z'X)^{-1}Z'\varepsilon
= \beta + (Z'X)^{-1}Z'\varepsilon.$$
Then we can express
$$\hat\beta_{IV} - \beta = (Z'X)^{-1}Z'\varepsilon
\qquad\text{and}\qquad
\sqrt{n}\big(\hat\beta_{IV} - \beta\big) = \Big(\tfrac{Z'X}{n}\Big)^{-1}\tfrac{1}{\sqrt n}Z'\varepsilon.$$
This has the same limiting distribution as $Q_{ZX}^{-1}\tfrac{1}{\sqrt n}Z'\varepsilon$. We know that (under homoskedasticity)
$$\tfrac{1}{\sqrt n}Z'\varepsilon \xrightarrow{d} N\big(0,\sigma^2 Q_{ZZ}\big)$$
and therefore
$$\Big(\tfrac{Z'X}{n}\Big)^{-1}\tfrac{1}{\sqrt n}Z'\varepsilon \xrightarrow{d} N\big(0,\ \sigma^2 Q_{ZX}^{-1}Q_{ZZ}Q_{XZ}^{-1}\big).$$
This step completes the derivation for the next theorem.

Theorem (Asymptotic Distribution of the Instrumental Variables Estimator):
If the assumptions listed above all hold for $(y_i, x_i, z_i, \varepsilon_i)$, where $z$ is a valid set of $l = k$ instrumental variables, then the instrumental variables estimator $\hat\beta_{IV} = (Z'X)^{-1}Z'y$ satisfies
$$\hat\beta_{IV} \;\stackrel{a}{\sim}\; N\Big(\beta,\ \tfrac{\sigma^2}{n}\,Q_{ZX}^{-1}Q_{ZZ}Q_{XZ}^{-1}\Big).$$

Covariance Matrix Estimation:
$$\hat V_\beta = \big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}\,\hat Q_{XZ}\hat Q_{ZZ}^{-1}\,\hat\Omega\,\hat Q_{ZZ}^{-1}\hat Q_{ZX}\,\big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}$$
where
$$\hat Q_{ZZ} = \frac{1}{n}\sum_{i=1}^n z_i z_i' = \frac{1}{n}Z'Z,\qquad
\hat Q_{XZ} = \frac{1}{n}\sum_{i=1}^n x_i z_i' = \frac{1}{n}X'Z = \hat Q_{ZX}',\qquad
\hat\Omega = \frac{1}{n}\sum_{i=1}^n z_i z_i'\hat\varepsilon_i^2.$$

Under Homoskedasticity:
Under homoskedasticity $E[\varepsilon\varepsilon'\mid Z] = \sigma^2 I$, so that $\Omega = \sigma^2 Q_{ZZ}$; hence
$$\hat V_\beta = \hat\sigma^2\big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}
\qquad\text{where}\qquad
\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n\hat\varepsilon_i^2.$$
Standard errors for the coefficients are obtained as the square roots of the diagonal elements of $n^{-1}\hat V_\beta$.

Consistency of the variance:
Under the assumptions listed for the asymptotic distribution of $\hat\beta_{2SLS}$, $\hat V_\beta$ is a consistent estimator of $V_\beta$, that is,
$$\hat V_\beta \xrightarrow{p} V_\beta.$$
To prove this we need to show that $\hat\Omega \xrightarrow{p} \Omega$, because the other convergence results were already established in the proof of consistency of $\hat\beta_{2SLS}$.
$$\hat\Omega = \frac{1}{n}\sum_{i=1}^n z_i z_i'\hat\varepsilon_i^2$$
By the WLLN,
$$\frac{1}{n}\sum_{i=1}^n z_i z_i'\varepsilon_i^2 \xrightarrow{p} E[z_i z_i'\varepsilon_i^2].$$
But we know that, due to the exogeneity of $Z$, $E[z_i\varepsilon_i] = 0$, hence
$$\operatorname{var}(z_i\varepsilon_i) = E[z_i z_i'\varepsilon_i^2] = \operatorname{Var}\Big(\tfrac{1}{\sqrt n}Z'\varepsilon\Big) \equiv \Omega.$$
Thus $\hat\Omega \xrightarrow{p} \Omega$ and therefore $\hat V_\beta \xrightarrow{p} V_\beta$.

Estimating the Asymptotic Covariance Matrix:
Let
$$\hat\sigma^2 = \frac{1}{n-k}\sum_{i=1}^n\big(y_i - x_i'\hat\beta_{IV}\big)^2
\quad\text{or}\quad
\frac{1}{n}\sum_{i=1}^n\hat\varepsilon_i^2.$$
The correction for degrees of freedom is unnecessary, as all results are asymptotic and $\hat\sigma^2$ would not be unbiased in any event. However, it is standard practice to make the df correction. We will estimate the asymptotic variance of $\hat\beta_{IV}$:
$$\widehat{\operatorname{Asy.Var}}\big[\hat\beta_{IV}\big]
= \frac{1}{n}\,\frac{\hat\varepsilon'\hat\varepsilon}{n}\Big(\frac{Z'X}{n}\Big)^{-1}\Big(\frac{Z'Z}{n}\Big)\Big(\frac{X'Z}{n}\Big)^{-1}
= \hat\sigma^2\,(Z'X)^{-1}(Z'Z)(X'Z)^{-1}.$$

IV/2SLS Estimation in the Wage Equation:

. ivreg lwage exper expersq (educ = motheduc fatheduc), r

Instrumental variables (2SLS) regression          Number of obs =     428
                                                  F(3, 424)     =    6.15
                                                  Prob > F      =  0.0004
                                                  R-squared     =  0.1357
                                                  Root MSE      =  .67471

------------------------------------------------------------------------------
             |               Robust
       lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        educ |   .0613966   .0333386     1.84   0.066    -.0041329    .1269261
       exper |   .0441704   .0155464     2.84   0.005     .0136128     .074728
     expersq |   -.000899   .0004301    -2.09   0.037    -.0017443   -.0000536
       _cons |   .0481003   .4297977     0.11   0.911    -.7966992    .8928998
------------------------------------------------------------------------------
Instrumented:  educ
Instruments:   exper expersq motheduc fatheduc
------------------------------------------------------------------------------

$$\widehat{\ln(wage)} = \underset{(0.429)}{0.048} + \underset{(0.033)}{0.061}\,Educ + \underset{(0.016)}{0.044}\,Exper - \underset{(0.0004)}{0.0009}\,ExperSq$$

First stage: $Educ = \gamma_0 + \gamma_1\,MotherEduc + \gamma_2\,FatherEduc + \gamma_3\,Exper + \gamma_4\,ExperSq + \upsilon$

Second stage: $\ln(wage) = \beta_0 + \beta_1\,Educ + \beta_2\,Exper + \beta_3\,ExperSq + u$

Points:
➢ The OLS estimator is based on the population orthogonality condition $E[x_i\varepsilon_i] = 0$: $k$ equations in $k$ unknowns — exact identification.
➢ The IV estimator is based on the population orthogonality condition $E[z_i\varepsilon_i] = 0$: $l$ equations in $k$ unknowns — over-identified unless $l = k$.
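In sample-moment form (a brief aside connecting these conditions to the GMM in the lecture title; the "weighted quadratic norm" phrasing is a standard GMM fact, not derived on these slides):
$$\frac{1}{n}\sum_{i=1}^n x_i\big(y_i - x_i'\hat\beta_{OLS}\big) = 0 \qquad (k \text{ equations, } k \text{ unknowns}),$$
$$\frac{1}{n}\sum_{i=1}^n z_i\big(y_i - x_i'\beta\big) = 0 \qquad (l \text{ equations, } k \text{ unknowns}).$$
When $l = k$, $\hat\beta_{IV}$ solves the second set of equations exactly; when $l > k$ they generally cannot all be satisfied, and GMM/2SLS instead makes the sample moments as close to zero as possible in a weighted quadratic norm.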

Standard errors in the 2nd Stage:
However, be careful! In the computation of the asymptotic covariance matrix, $\hat\sigma^2$ should not be based on $\hat X$:
$$S^2 = \frac{1}{n}\sum_{i=1}^n\big(y_i - \hat x_i'\hat\beta_{2SLS}\big)^2 \neq \hat\sigma^2_{2SLS}.$$
Recall, however, that with the original regressors
$$S^2 = \frac{1}{n}\sum_{i=1}^n\big(y_i - x_i'\hat\beta_{IV}\big)^2 \xrightarrow{p} \sigma^2.$$
The appropriate calculation is built into modern software (e.g., Stata's ivreg command calculates standard errors correctly under 2SLS).
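A small simulation sketch of this pitfall (simulated data with assumed parameter values; the point is only that residuals must be formed with the original $X$, not the fitted $\hat X$):

import numpy as np

rng = np.random.default_rng(3)
n = 5_000
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([1.0, 1.0]) + u                 # endogenous regressor
eps = 0.7 * u + rng.normal(size=n)               # error correlated with x; true variance 1.49
y = 1.0 + 2.0 * x + eps

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
Xhat = PZ @ X

b = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)  # 2SLS coefficients

sigma2_wrong = np.mean((y - Xhat @ b) ** 2)      # residuals from fitted Xhat: not consistent for sigma^2
sigma2_right = np.mean((y - X @ b) ** 2)         # residuals from original X: consistent for sigma^2
print(sigma2_wrong, sigma2_right)                # only the second is close to 1.49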

Asymptotic Distribution of Two-stage Least Squares (TSLS)
Rewrite the TSLS estimator:
$$\hat\beta_{TSLS} = (X'P_Z X)^{-1}X'P_Z Y$$
Substitute $Y = X\beta + \varepsilon$:
$$\hat\beta_{TSLS} = (X'P_Z X)^{-1}X'P_Z(X\beta + \varepsilon)$$
Rearranging,
$$\hat\beta_{TSLS} - \beta = (X'P_Z X)^{-1}X'P_Z\varepsilon.$$
Multiply and divide the right-hand side by $n$:
$$\hat\beta_{TSLS} - \beta = \Big(\frac{X'P_Z X}{n}\Big)^{-1}\frac{X'P_Z\varepsilon}{n}$$

Asymptotic Distribution of TSLS
Multiply both sides by $\sqrt n$:
$$\sqrt n\big(\hat\beta_{TSLS}-\beta\big)
= \Big(\frac{X'P_Z X}{n}\Big)^{-1}\frac{X'P_Z\varepsilon}{\sqrt n}
= \Big(\frac{X'Z(Z'Z)^{-1}Z'X}{n}\Big)^{-1}\frac{X'Z(Z'Z)^{-1}Z'\varepsilon}{\sqrt n}
= \Big(\frac{X'Z}{n}\Big(\frac{Z'Z}{n}\Big)^{-1}\frac{Z'X}{n}\Big)^{-1}\frac{X'Z}{n}\Big(\frac{Z'Z}{n}\Big)^{-1}\frac{Z'\varepsilon}{\sqrt n}$$

Asymptotic Distribution of TSLS
Under the IV assumptions,
$$\frac{X'Z}{n} \xrightarrow{p} Q_{XZ}
\qquad\text{and}\qquad
\frac{Z'Z}{n} \xrightarrow{p} Q_{ZZ}$$
where
$$Q_{XZ} = E[X_i Z_i'] \qquad\text{and}\qquad Q_{ZZ} = E[Z_i Z_i'].$$

Asymptotic Distribution of TSLS
In addition, under the IV assumptions, $Z_i\varepsilon_i$ is i.i.d. with mean zero (that is, under instrument exogeneity $E[Z_i\varepsilon_i] = 0$) and a positive definite covariance matrix, so its sum, divided by $\sqrt n$, satisfies the conditions of the multivariate central limit theorem and
$$\frac{Z'\varepsilon}{\sqrt n} \xrightarrow{d} Q_{Z\varepsilon}$$
where $Q_{Z\varepsilon} \sim N(0,\Omega)$, with $\Omega = E[Z_i Z_i'\varepsilon_i^2]$.

Asymptotic Distribution of TSLS
Thus, applying the limits for $X'Z/n$ and $Z'Z/n$ to $\sqrt n\big(\hat\beta_{TSLS} - \beta\big)$ yields the result that, under the IV regression assumptions, the TSLS estimator is asymptotically normally distributed:
$$\sqrt n\big(\hat\beta_{TSLS} - \beta\big) \xrightarrow{d} N\big(0, V_\beta^{TSLS}\big)$$
where
$$V_\beta^{TSLS} = \big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}\,Q_{XZ}Q_{ZZ}^{-1}\,\Omega\,Q_{ZZ}^{-1}Q_{ZX}\,\big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}$$
with $\Omega = E[Z_i Z_i'\varepsilon_i^2]$.

Standard Errors for TSLS
By substituting sample moments, we can estimate $V_\beta^{TSLS}$ with
$$\hat V_\beta^{TSLS} = \big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}\,\hat Q_{XZ}\hat Q_{ZZ}^{-1}\,\hat\Omega\,\hat Q_{ZZ}^{-1}\hat Q_{ZX}\,\big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}$$
where
$$\hat Q_{XZ} = \frac{X'Z}{n},\qquad
\hat Q_{ZZ} = \frac{Z'Z}{n},\qquad
\hat Q_{ZX} = \frac{Z'X}{n},\qquad
\hat\Omega = \frac{1}{n}\sum_{i=1}^n z_i z_i'\hat\varepsilon_i^2.$$
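A sketch of computing these standard errors directly from the formulas (simulated data with assumed parameter values; results should match a packaged IV routine's robust standard errors up to finite-sample degrees-of-freedom scaling):

import numpy as np

rng = np.random.default_rng(4)
n = 2_000
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + u
eps = 0.6 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + eps

X = np.column_stack([np.ones(n), x])          # k = 2
Z = np.column_stack([np.ones(n), z])          # l = 3

Qxz = X.T @ Z / n
Qzz = Z.T @ Z / n
Qzx = Z.T @ X / n

A = Qxz @ np.linalg.solve(Qzz, Qzx)           # Q_XZ Q_ZZ^{-1} Q_ZX
b = np.linalg.solve(X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ X),
                    X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ y))   # 2SLS/TSLS coefficients
e = y - X @ b                                 # residuals computed with the original X

Omega_hat = (Z * e[:, None] ** 2).T @ Z / n   # (1/n) sum z_i z_i' e_i^2
B = Qxz @ np.linalg.solve(Qzz, Omega_hat) @ np.linalg.solve(Qzz, Qzx)
V_robust = np.linalg.solve(A, B) @ np.linalg.inv(A)   # sandwich: A^{-1} B A^{-1}
V_homosk = np.mean(e ** 2) * np.linalg.inv(A)          # homoskedasticity-only version

se_robust = np.sqrt(np.diag(V_robust) / n)    # square roots of diagonal of n^{-1} V_hat
se_homosk = np.sqrt(np.diag(V_homosk) / n)
print(b, se_robust, se_homosk)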

Properties of TSLS when the Errors are Homoskedastic:
If the errors are homoskedastic, then the TSLS estimator is asymptotically efficient among the class of IV estimators in which the instruments are linear combinations of the rows of $Z$.
This result is the IV counterpart to the Gauss-Markov theorem and constitutes an important justification for using TSLS.

TSLS under homoskedastic errors:
If the errors are homoskedastic, that is, if
$$E[\varepsilon_i^2 \mid Z_i] = \sigma_\varepsilon^2,$$
then
$$\Omega = E[Z_i Z_i'\varepsilon_i^2]
= E\big[E[Z_i Z_i'\varepsilon_i^2 \mid Z_i]\big]
= E\big[Z_i Z_i' E[\varepsilon_i^2 \mid Z_i]\big]
= Q_{ZZ}\,\sigma_\varepsilon^2.$$

TSLS under homoskedastic errors:
In this case the variance of the asymptotic distribution of TSLS simplifies to the homoskedasticity-only variance matrix:
$$V_\beta^{TSLS} = \big(Q_{XZ}Q_{ZZ}^{-1}Q_{ZX}\big)^{-1}\sigma_\varepsilon^2.$$
The homoskedasticity-only estimator of the TSLS variance matrix is
$$\hat V_\beta^{TSLS} = \big(\hat Q_{XZ}\hat Q_{ZZ}^{-1}\hat Q_{ZX}\big)^{-1}\hat\sigma_\varepsilon^2
\qquad\text{where}\qquad
\hat\sigma_\varepsilon^2 = \frac{\hat\varepsilon'\hat\varepsilon}{n}.$$
The homoskedasticity-only TSLS standard errors are the square roots of the diagonal elements of $n^{-1}\hat V_\beta^{TSLS}$.

College Proximity: OLS results

. reg lwage educ exper expersq_scaled black south urban, r

Linear regression                               Number of obs =   3,010
                                                F(6, 3003)    =  217.74
                                                Prob > F      =  0.0000
                                                R-squared     =  0.2905
                                                Root MSE      =  .37419

--------------------------------------------------------------------------------
               |               Robust
         lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
          educ |    .074009     .003642    20.32   0.000     .0668679    .0811501
         exper |   .0835958    .0067326    12.42   0.000     .0703948    .0967969
expersq_scaled |  -.2240885    .0318114    -7.04   0.000    -.2864627   -.1617142
         black |  -.1896315    .0174324   -10.88   0.000    -.2238123   -.1554508
         south |  -.1248615    .0153508    -8.13   0.000    -.1549606   -.0947625
         urban |    .161423    .0151751    10.64   0.000     .1316683    .1911776
         _cons |   4.733664    .0701577    67.47   0.000     4.596102    4.871226
--------------------------------------------------------------------------------

Card's College Proximity Example: 2SLS results and endogeneity tests

. ivregress 2sls lwage (educ = nearc4 age) exper expersq_scaled black south urban, r
note: age dropped due to collinearity

Instrumental variables (2SLS) regression        Number of obs =   3,010
                                                Wald chi2(6)  =  792.07
                                                Prob > chi2   =  0.0000
                                                R-squared     =  0.2252
                                                Root MSE      =  .39058

--------------------------------------------------------------------------------
               |               Robust
         lwage |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
          educ |   .1322888    .0485213     2.73   0.006     .0371888    .2273889
         exper |    .107498    .0211129     5.09   0.000     .0661175    .1488785
expersq_scaled |  -.2284072    .0346338    -6.59   0.000    -.2962883   -.1605261
         black |  -.1308019    .0514513    -2.54   0.011    -.2316445   -.0299592
         south |  -.1049005    .0228997    -4.58   0.000    -.1497831   -.0600179
         urban |   .1313237    .0297684     4.41   0.000     .0729787    .1896686
         _cons |   3.752781    .8167498     4.59   0.000     2.151981    5.353582
--------------------------------------------------------------------------------
Instrumented:  educ
Instruments:   exper expersq_scaled black south urban nearc4

. estat endogenous

Tests of endogeneity
Ho: variables are exogenous

Robust score chi2(1)         =  1.60908  (p = 0.2046)
Robust regression F(1,3002)  =  1.60609  (p = 0.2051)

Acemoglu et al. Example: 2SLS results and endogeneity tests

. ivregress 2sls loggdp (risk = mortnaval1), r

Instrumental variables (2SLS) regression        Number of obs =      53
                                                Wald chi2(1)  =    6.35
                                                Prob > chi2   =  0.0118
                                                R-squared     =       .
                                                Root MSE      =   1.115

------------------------------------------------------------------------------
             |               Robust
      loggdp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        risk |   1.071826    .4254925     2.52   0.012      .237876    1.905776
       _cons |    .938585    2.823759     0.33   0.740    -4.595882    6.473052
------------------------------------------------------------------------------
Instrumented:  risk
Instruments:   mortnaval1

. estat endogenous

Tests of endogeneity
Ho: variables are exogenous

Robust score chi2(1)       =  5.3865   (p = 0.0203)
Robust regression F(1,50)  =  3.49527  (p = 0.0674)

Relevance test for Card's example

. reg educ nearc4 nearc2 exper expersq_scaled black south urban, r

Linear regression                               Number of obs =   3,010
                                                F(7, 3002)    =  522.11
                                                Prob > F      =  0.0000
                                                R-squared     =  0.4748
                                                Root MSE      =  1.9421

--------------------------------------------------------------------------------
               |               Robust
          educ |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
        nearc4 |   .3312388    .0805747     4.11   0.000     .1732516    .4892261
        nearc2 |   .1076585    .0731378     1.47   0.141    -.0357467    .2510637
         exper |   -.409533    .0319212   -12.83   0.000    -.4721226   -.3469435
expersq_scaled |     .06956    .1699638     0.41   0.682    -.2636974    .4028174
         black |   -1.01298    .0877548   -11.54   0.000    -1.185046   -.8409146
         south |  -.2786568    .0787816    -3.54   0.000    -.4331282   -.1241855
         urban |   .3886608    .0856653     4.54   0.000     .2206922    .5566295
         _cons |   16.62244    .1494693   111.21   0.000     16.32936    16.91551
--------------------------------------------------------------------------------

. test nearc4 nearc2

 (1)  nearc4 = 0
 (2)  nearc2 = 0

       F(  2,  3002) =    9.72
            Prob > F =  0.0001

Since F = 9.72 < 10, the instruments fail the rule-of-thumb threshold: nearc4 and nearc2 are jointly significant (p = 0.0001) but too weakly correlated with educ to be treated as relevant (strong) instruments.