
ECONOMETRICS I ECON GR5411
Lecture 13 – Restricted Least Squares
by Seyhan Erden
Columbia University
MA in Economics

A test of H₀ on the Wald criterion:
Conditional on X, we write the Wald test statistic as
W = m̂' [Var(m̂|X)]^{-1} m̂
= (R'β̂ − q)' [R' σ²(X'X)^{-1} R]^{-1} (R'β̂ − q)
where m̂ = R'β̂ − q. This uses Theorem B.11 (Greene): if X ~ N(μ, Σ), then (X − μ)' Σ^{-1} (X − μ) ~ χ²_n.
According to this theorem, the Wald criterion above must also be chi-square distributed:
(R'β̂ − q)' [R' σ²(X'X)^{-1} R]^{-1} (R'β̂ − q) ~ χ²_j
10/21/20 Lecture 13 GR5411 by Seyhan Erden 2

A test of H₀ on the Wald criterion:
Intuitively, as m̂ (the difference between R'β̂ and q) moves farther from zero, that is, the worse the failure of LS to satisfy the restrictions, the larger the chi-square statistic becomes.
Thus a large chi-square statistic weighs against H₀.
However, this chi-square statistic is not usable, because σ² is unknown in
(R'β̂ − q)' [R' σ²(X'X)^{-1} R]^{-1} (R'β̂ − q) ~ χ²_j

A usable test:
How can we use s² instead of σ²?
Recall from distribution theory: the ratio of two independent chi-squares, each divided by its degrees of freedom, follows an F distribution with the numerator's and denominator's degrees of freedom. If we have
W ~ χ²_j
in the numerator, and
(n − k)s²/σ² ~ χ²_{n−k}
in the denominator, then
(W/j) / (s²/σ²) ~ F_{j, n−k}

We must prove that (n − k)s²/σ² ~ χ²_{n−k}:
Fact: if Z ~ N(0, I) and M is idempotent with rank n − k, then Z'MZ ~ χ²_{n−k}.
Using s² = ε̂'ε̂/(n − k) and ε̂ = Mε,
(n − k)s²/σ² = ε'Mε/σ² = (ε/σ)' M (ε/σ) = Z'MZ
Since ε ~ N(0, σ²I), we have ε/σ ~ N(0, I).
M is idempotent and the rank of M is n − k, so
(n − k)s²/σ² ~ χ²_{n−k}
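As a numerical sanity check (not part of the original slides), the claim can be verified by simulation, assuming numpy: holding X fixed and drawing ε ~ N(0, I) repeatedly (so σ² = 1), the statistic (n − k)s²/σ² = ε'Mε should have mean n − k and variance 2(n − k), as a χ²_{n−k} random variable does.

```python
import numpy as np

# Simulation check: with sigma^2 = 1, (n - k) s^2 / sigma^2 = eps' M eps
# should be distributed chi-square with n - k degrees of freedom.
rng = np.random.default_rng(0)
n, k, reps = 50, 4, 20000
X = rng.normal(size=(n, k))                        # fixed design matrix
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual maker, rank n - k
E = rng.normal(size=(reps, n))                     # reps draws of eps ~ N(0, I)
stats = np.einsum("ri,ij,rj->r", E, M, E)          # eps' M eps for each draw
print(stats.mean())   # close to n - k = 46
```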

Back to the usable test:
(W/j) / (s²/σ²) ~ F_{j, n−k}
Then,
[(R'β̂ − q)' [R' σ²(X'X)^{-1} R]^{-1} (R'β̂ − q) / j] · (σ²/s²) ~ F_{j, n−k}
(R'β̂ − q)' [R' (X'X)^{-1} R]^{-1} (R'β̂ − q) · σ² / (j σ² s²) ~ F_{j, n−k}
(R'β̂ − q)' [R' (X'X)^{-1} R]^{-1} (R'β̂ − q) · 1 / (j s²) ~ F_{j, n−k}
(R'β̂ − q)' [R' s²(X'X)^{-1} R]^{-1} (R'β̂ − q) / j ~ F_{j, n−k}
The unknown σ² cancels, leaving a statistic that depends only on the data.

F test for j restrictions (summary):
H₀: R'β − q = 0 and H₁: R'β − q ≠ 0
• Infeasible test under A6: let m̂ = R'β̂ − q,
W = (R'β̂ − q)' [R' σ²(X'X)^{-1} R]^{-1} (R'β̂ − q) ~ χ²_j
• Feasible test:
F = (R'β̂ − q)' [R' s²(X'X)^{-1} R]^{-1} (R'β̂ − q) / j ~ F_{j, n−k}
This result follows by combining asymptotic normality of β̂ with consistency of the heteroskedasticity-robust estimator of the covariance matrix.
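The feasible F statistic can also be obtained from the restricted and unrestricted sums of squared residuals, and the two forms agree exactly for linear restrictions. A minimal sketch, assuming numpy and simulated data (the design, coefficients, and restrictions below are all hypothetical), tests j = 2 exclusion restrictions both ways:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, j = 200, 4, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

# H0: beta_3 = beta_4 = 0, written as R'beta = q with R (k x j), q (j x 1)
R = np.zeros((k, j)); R[2, 0] = R[3, 1] = 1.0
q = np.zeros(j)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                      # unrestricted OLS
e = y - X @ b
s2 = e @ e / (n - k)

# Wald form: (R'b - q)' [R' s^2 (X'X)^-1 R]^-1 (R'b - q) / j
m = R.T @ b - q
F_wald = m @ np.linalg.solve(R.T @ (s2 * XtX_inv) @ R, m) / j

# Sum-of-squares form from the restricted regression (drop the last 2 columns)
br = np.linalg.lstsq(X[:, :2], y, rcond=None)[0]
er = y - X[:, :2] @ br
F_ssr = ((er @ er - e @ e) / j) / (e @ e / (n - k))

print(np.isclose(F_wald, F_ssr))   # the two forms agree: True
```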

Hypothesis Testing Example 1:

. reg testscr str el_pct meal_pct

      Source |       SS           df       MS      Number of obs   =       420
-------------+----------------------------------   F(3, 416)       =    476.31
       Model |  117811.294         3  39270.4312   Prob > F        =    0.0000
    Residual |  34298.3001       416  82.4478368   R-squared       =    0.7745
-------------+----------------------------------   Adj R-squared   =    0.7729
       Total |  152109.594       419  363.030056   Root MSE        =    9.0801

------------------------------------------------------------------------------
     testscr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         str |  -.9983092   .2387543    -4.18   0.000    -1.467624    -.528994
      el_pct |  -.1215733   .0323173    -3.76   0.000    -.1850988   -.0580478
    meal_pct |  -.5473456   .0215988   -25.34   0.000     -.589802   -.5048891
       _cons |     700.15   4.685687   149.42   0.000     690.9394    709.3605
------------------------------------------------------------------------------

Hypothesis Testing Example 2:

. reg testscr str el_pct comp_stu

      Source |       SS           df       MS      Number of obs   =       420
-------------+----------------------------------   F(3, 416)       =    106.29
       Model |  66004.0238         3  22001.3413   Prob > F        =    0.0000
    Residual |  86105.5698       416  206.984543   R-squared       =    0.4339
-------------+----------------------------------   Adj R-squared   =    0.4298
       Total |  152109.594       419  363.030056   Root MSE        =    14.387

------------------------------------------------------------------------------
     testscr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         str |  -.8489998   .3932246    -2.16   0.031    -1.621955   -.0760449
      el_pct |  -.6303601    .039997   -15.76   0.000    -.7089814   -.5517387
    comp_stu |   27.26961   11.62113     2.35   0.019     4.426158    50.11307
       _cons |   677.0642   8.303396    81.54   0.000     660.7424    693.3861
------------------------------------------------------------------------------

Hypothesis Testing Example 2 with heteroskedasticity-robust errors:

. reg testscr str el_pct comp_stu, r

Linear regression                               Number of obs     =        420
                                                F(3, 416)         =     154.76
                                                Prob > F          =     0.0000
                                                R-squared         =     0.4339
                                                Root MSE          =     14.387

------------------------------------------------------------------------------
             |               Robust
     testscr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         str |  -.8489998   .4317359    -1.97   0.050    -1.697656   -.0003439
      el_pct |  -.6303601   .0313454   -20.11   0.000    -.6919753   -.5687449
    comp_stu |   27.26961   12.62941     2.16   0.031     2.444203    52.09503
       _cons |   677.0642   9.203911    73.56   0.000     658.9722    695.1562
------------------------------------------------------------------------------

Restricted Least Squares Regression:
More generally, we can study hypothesis testing under the topic of restricted least squares: suppose there is a set of a priori restrictions on the elements of the coefficient vector β that can be taken into account in the process of estimation.
A set of j linear restrictions on the vector β can be written as R'β = r, where R is a k×j matrix with linearly independent columns, so that rank(R) = j, and r is a vector of j elements.

Restricted Least Squares Regression:
To combine this a priori information with the sample information, we adopt the criterion of minimizing the sum of squared errors
(y − Xβ)'(y − Xβ)
subject to
R'β = r
The Lagrangian is
L = (y − Xβ)'(y − Xβ) + 2λ'(R'β − r)
= y'y − 2y'Xβ + β'X'Xβ + 2λ'R'β − 2λ'r

Restricted Least Squares Regression:
Differentiating L with respect to β and λ and setting the results to zero gives the first-order conditions ∂L/∂β = 0 and ∂L/∂λ = 0:
∂L/∂β = −2X'y + 2X'Xβ + 2Rλ = 0
∂L/∂λ = 2(R'β − r) = 0
or
X'Xβ = X'y − Rλ    (1)
R'β = r            (2)

Restricted Least Squares Regression:
Equations (1) and (2), i.e. (X'X)β + Rλ = X'y and R'β = r, can be stacked into the partitioned system

[ X'X  R ] [ β ]   [ X'y ]
[ R'   0 ] [ λ ] = [  r  ]

so that

[ β̂_R ]   [ X'X  R ]^{-1} [ X'y ]
[  λ̂  ] = [ R'   0 ]       [  r  ]    (3)
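The stacked system (3) can be solved directly with a linear solver. A minimal sketch, assuming numpy and simulated data, with the hypothetical restriction β₁ + β₂ = 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, j = 100, 3, 1
X = rng.normal(size=(n, k))
y = rng.normal(size=n)
R = np.array([[1.0], [1.0], [0.0]])   # R'beta = r encodes beta_1 + beta_2 = 1
r = np.array([1.0])

# Bordered system [[X'X, R], [R', 0]] [beta_R; lambda] = [X'y; r]
K = np.block([[X.T @ X, R], [R.T, np.zeros((j, j))]])
sol = np.linalg.solve(K, np.concatenate([X.T @ y, r]))
beta_r, lam = sol[:k], sol[k:]
print(R.T @ beta_r)   # the restriction holds, ~ [1.]
```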

Inverse of Partitioned Matrices (from Greene):

Restricted Least Squares Regression:
From (3) we can directly solve for the restricted estimator β̂_R:
β̂_R = (X'X)^{-1}X'y − (X'X)^{-1}Rλ
β̂_R = β̂ − (X'X)^{-1}Rλ    (4)
Since R'β̂_R = r,
R'(β̂ − (X'X)^{-1}Rλ) = r
or
R'β̂ − R'(X'X)^{-1}Rλ = r

Restricted Least Squares Regression:
We can find the same solution by solving for λ (we want an expression for λ):
λ = [R'(X'X)^{-1}R]^{-1}(R'β̂ − r)
Plugging this back into (4):
β̂_R = β̂ − (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}(R'β̂ − r)
or
β̂_R = β̂ + (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}(r − R'β̂)
The restricted LS estimator is the unrestricted one plus a correction term.
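The closed form above is easy to compute in practice. A sketch assuming numpy, simulated data, and the same kind of hypothetical restriction: the correction term moves the unrestricted β̂ onto the restriction, and the restricted fit can never have a smaller SSR than the unrestricted one (it is a constrained minimum).

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)
R = np.array([[1.0], [1.0], [0.0]])   # hypothetical restriction: beta_1 + beta_2 = 1
r = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                                   # unrestricted beta-hat
lam = np.linalg.solve(R.T @ XtX_inv @ R, R.T @ b - r)   # Lagrange multiplier
b_r = b - XtX_inv @ R @ lam                             # restricted = unrestricted - correction

ssr_u = np.sum((y - X @ b) ** 2)
ssr_r = np.sum((y - X @ b_r) ** 2)
print(np.allclose(R.T @ b_r, r), ssr_r >= ssr_u)   # True True
```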

Restricted Least Squares Regression:
It can be shown that the covariance matrix of β̂_R equals Var(β̂|X) minus a nonnegative definite matrix:
Var(β̂_R|X) = σ²(X'X)^{-1} − σ²(X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}R'(X'X)^{-1}
To see this, pre- and post-multiply the 2nd term by an arbitrary vector z:
σ² z'(X'X)^{-1}R [R'(X'X)^{-1}R]^{-1} R'(X'X)^{-1}z
and let z̃ = σR'(X'X)^{-1}z.

Then we can write the 2nd term of Var(β̂_R|X) as
z̃'[R'(X'X)^{-1}R]^{-1}z̃
Since
R'(X'X)^{-1}R ≥ 0
is positive semidefinite, we know that its inverse is also positive semidefinite:
[R'(X'X)^{-1}R]^{-1} ≥ 0
Then the 2nd term is a quadratic form with a positive semidefinite matrix in the middle, so it must be nonnegative:
z̃'[R'(X'X)^{-1}R]^{-1}z̃ ≥ 0

Then we know that
Var(β̂_R|X) = σ²(X'X)^{-1} − σ²(X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}R'(X'X)^{-1}
Var(β̂_R|X) = Var(β̂|X) − a positive semidefinite term
Thus, it must be true that
Var(β̂|X) ≥ Var(β̂_R|X)
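That the subtracted term is positive semidefinite can be checked numerically by computing its eigenvalues. A sketch assuming numpy, an arbitrary σ², and a hypothetical single restriction:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 100, 3
X = rng.normal(size=(n, k))
R = np.array([[1.0], [1.0], [0.0]])
sigma2 = 2.0                                      # any positive value works

XtX_inv = np.linalg.inv(X.T @ X)
V_u = sigma2 * XtX_inv                            # Var(beta-hat | X)
D = sigma2 * XtX_inv @ R @ np.linalg.solve(R.T @ XtX_inv @ R, R.T @ XtX_inv)
V_r = V_u - D                                     # Var(beta-hat_R | X)

# D is symmetric with nonnegative eigenvalues, i.e. positive semidefinite
eigs = np.linalg.eigvalsh(D)
print(eigs.min() >= -1e-10)   # True
```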

Finite Sample Properties of Restricted LS:
Under the linear model
y = Xβ + ε
with the usual assumptions:
E(ε|X) = 0
Var(ε|X) = E(εε'|X) = σ²I
(x_i, y_i) are iid with finite fourth moments, and E(x_i x_i') = Q_xx is positive definite.

First, some useful properties:
Let
A = (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}R'(X'X)^{-1}
and let P = X(X'X)^{-1}X' be the projection matrix. Then, under the restriction R'β = r:
1. R'β̂ − r = R'(X'X)^{-1}X'ε
2. β̂_R − β = ((X'X)^{-1}X' − AX')ε
3. ε̂_R = (I − P + XAX')ε
4. I − P + XAX' is symmetric and idempotent
5. tr(I − P + XAX') = n − k + j
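Properties 4 and 5 are easy to verify numerically. A sketch, assuming numpy, an arbitrary simulated X, and two hypothetical exclusion restrictions, checks symmetry, idempotency, and the trace:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, j = 60, 4, 2
X = rng.normal(size=(n, k))
R = np.zeros((k, j)); R[0, 0] = R[1, 1] = 1.0   # two exclusion restrictions

XtX_inv = np.linalg.inv(X.T @ X)
P = X @ XtX_inv @ X.T                           # projection matrix, tr(P) = k
A = XtX_inv @ R @ np.linalg.solve(R.T @ XtX_inv @ R, R.T @ XtX_inv)
M = np.eye(n) - P + X @ A @ X.T

print(np.allclose(M, M.T), np.allclose(M @ M, M))   # symmetric, idempotent: True True
print(round(np.trace(M)))                           # n - k + j = 58
```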

Proof of these useful properties:
1. R'β̂ − r = R'(X'X)^{-1}X'ε
R'β̂ − r = R'(X'X)^{-1}X'y − r
= R'(X'X)^{-1}X'(Xβ + ε) − r
= R'β + R'(X'X)^{-1}X'ε − r
= R'(X'X)^{-1}X'ε     (since R'β = r)
2. β̂_R − β = ((X'X)^{-1}X' − AX')ε
β̂_R = β̂ − (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}(R'β̂ − r)
= β + (X'X)^{-1}X'ε − (X'X)^{-1}R[R'(X'X)^{-1}R]^{-1}R'(X'X)^{-1}X'ε
= β + ((X'X)^{-1}X' − AX')ε

Proof of these useful properties:
3. ε̂_R = (I − P + XAX')ε
ε̂_R = y − Xβ̂_R
= y − X(β + ((X'X)^{-1}X' − AX')ε)
= Xβ + ε − Xβ − X(X'X)^{-1}X'ε + XAX'ε
= ε − Pε + XAX'ε
= (I − P + XAX')ε