Testing using asymptotic theory
February 22, 2022
This is a brief note on testing using asymptotic theory. What I will show is that testing linear (single or multiple) restrictions using asymptotic arguments is always conducted in the same way, irrespective of the estimator, provided the estimator is asymptotically normally distributed.
I will therefore assume that I have an estimator θ̂ which is, asymptotically, normally distributed with asymptotic variance V/T. In other words, I will assume that

√T(θ̂ − θ) →d N(0, V)

as T → ∞. Notice that this is the same, with some abuse of notation, as

θ̂ − θ →d N(0, V/T). (1)

Some examples: the estimator could be the least-squares estimator (i.e., θ̂_LS = (X⊤X)^(−1)X⊤Y), which is asymptotically normal under typical assumptions (recall your linear econometrics classes). It could also be the GMM estimator (θ̂_GMM) or the ML estimator (θ̂_MLE), whose asymptotic normality we proved in class. What changes from estimator to estimator is the asymptotic variance V.
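As a quick sanity check on Eq. (1), here is a small Monte Carlo sketch (my own illustration, not part of the note): taking the sample mean as the estimator θ̂, so that V = Var(x), the scaled deviation √T(θ̂ − θ) should look approximately N(0, V) for large T.

```python
import numpy as np

# Monte Carlo sketch: theta_hat is the sample mean, so V = Var(x).
# We simulate many samples and check that sqrt(T)*(theta_hat - theta)
# has mean close to 0 and variance close to V.
rng = np.random.default_rng(0)
theta, V, T, n_sims = 1.0, 4.0, 500, 5000
draws = np.array([
    np.sqrt(T) * (rng.normal(theta, np.sqrt(V), size=T).mean() - theta)
    for _ in range(n_sims)
])
print(draws.mean(), draws.var())  # close to 0 and to V = 4
```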
1 Single restriction
Test the hypothesis H0 : c⊤θ = γ against HA : c⊤θ ≠ γ.
• Example: H0 : θ1 = 2θ2, i.e., θ1 − 2θ2 = 0. Then,
c⊤ = [1, −2, 0, …, 0], γ = 0.
• Example: H0 : θ3 = 0. Then,
c⊤ = [0, 0, 1, 0, …, 0], γ = 0.
Construction of the test: Because θ̂ is asymptotically normally distributed, using Eq. (1), we can write

c⊤θ̂ − c⊤θ →d N(0, c⊤(V/T)c),

so that

(c⊤θ̂ − c⊤θ) / √(c⊤(V/T)c) →d N(0, 1)

and, under the null hypothesis H0 : c⊤θ = γ, we have

(c⊤θ̂ − γ) / √(c⊤(V/T)c) →d N(0, 1).
Implementation: For a 5% level test, we reject the null hypothesis if

|c⊤θ̂ − γ| / √(c⊤(V̂/T)c) > 1.96 (≈ 2),

where V̂ is an estimate of the asymptotic variance.
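The rejection rule above can be sketched in a few lines (a minimal sketch; the function name and the numbers in the example are my own):

```python
import numpy as np

def wald_z_test(theta_hat, V_hat, T, c, gamma):
    """Asymptotic 5% level test of H0: c'theta = gamma.

    V_hat estimates the asymptotic variance V, so the variance of
    c'theta_hat is approximated by c'(V_hat/T)c.
    """
    se = np.sqrt(c @ (V_hat / T) @ c)     # standard error of c'theta_hat
    z = (c @ theta_hat - gamma) / se
    return z, abs(z) > 1.96               # reject when |z| exceeds ~2

# Hypothetical numbers: test H0: theta_1 = 2*theta_2, i.e. c = [1, -2], gamma = 0.
z, reject = wald_z_test(np.array([1.0, 0.4]), 2 * np.eye(2), 100,
                        np.array([1.0, -2.0]), 0.0)
print(z, reject)
```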
2 Multiple restrictions
Test the hypothesis H0 : Rθ = r against HA : Rθ ≠ r, where R is (q × k), θ is (k × 1), and r is (q × 1); q is the number of restrictions.
Example: H0 : θ2 = θ3 = … = θk = 0. Then,

R = [0_{k−1,1} | I_{k−1}],

where 0_{k−1,1} is a (k − 1)-dimensional column vector of zeros and I_{k−1} is the identity matrix of dimension k − 1, so q = k − 1.
Construction of the test: Because θ̂ is asymptotically normally distributed, using Eq. (1) again, we can write

Rθ̂ − Rθ →d N(0, R(V/T)R⊤).

Standardizing,

Z_q = (R(V/T)R⊤)^(−1/2)(Rθ̂ − Rθ) →d N(0, I_q),

so that

Z_q⊤Z_q = (Rθ̂ − Rθ)⊤(R(V/T)R⊤)^(−1)(Rθ̂ − Rθ) →d χ²_q,

since the inner product of q independent standard normal random variables is chi-squared with q degrees of freedom. Under the null, H0 : Rθ = r, we have

(Rθ̂ − r)⊤(R(V/T)R⊤)^(−1)(Rθ̂ − r) →d χ²_q.
Implementation: For a 5% level test, we reject when

(Rθ̂ − r)⊤(R(V̂/T)R⊤)^(−1)(Rθ̂ − r) > χ²_{q,0.05},

where χ²_{q,0.05} is the value such that P(χ²_q < χ²_{q,0.05}) = 0.95. Like before, V̂ is an estimate of the asymptotic variance.
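The Wald statistic and rejection rule can be sketched as follows (a sketch under the note's assumptions; the function name and the numbers in the example are my own):

```python
import numpy as np
from scipy.stats import chi2

def wald_chi2_test(theta_hat, V_hat, T, R, r, level=0.05):
    """Asymptotic Wald test of H0: R theta = r (q = R.shape[0] restrictions)."""
    diff = R @ theta_hat - r
    cov = R @ (V_hat / T) @ R.T              # estimated variance of R theta_hat
    W = diff @ np.linalg.solve(cov, diff)    # quadratic form, avoids explicit inverse
    q = R.shape[0]
    return W, W > chi2.ppf(1 - level, df=q)  # compare with chi2 critical value

# Hypothetical numbers: H0: theta_2 = theta_3 = 0 in a 3-parameter model.
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
W, reject = wald_chi2_test(np.array([0.5, 0.1, -0.2]), np.eye(3), 100,
                           R, np.zeros(2))
print(W, reject)
```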
3 Some comments about linear regression
Consider a linear regression Y = Xθ + ε and assume that the matrix X is an n × k matrix of deterministic (non-random) regressors. We are working with cross-sectional observations (like in assignment 1) rather than with time-series observations (like in assignment 2). (In fact, it would not make sense to assume deterministic regressors in a time-series context.) The number of observations is, therefore, n (the number of units) not T (the number of time periods).
If the errors are normal, the least-squares estimator is also normal, and that is not an asymptotic approximation. It is an exact result for all sample sizes. Therefore, we can write:
θ̂ − θ =d N(0, V/n), where V/n = σ²(X⊤X)^(−1). Thus,

θ̂ − θ =d N(0, σ²(X⊤X)^(−1)).

Notice that I did not use the symbol →d. I used the symbol =d because the distribution is exactly normal, if the errors are normal and the regressors are deterministic.
Single linear restrictions: Going through the same steps as before, we can show that, under the null hypothesis H0 : c⊤θ = γ, we have

(c⊤θ̂ − γ) / √(σ²c⊤(X⊤X)^(−1)c) =d N(0, 1).

When we replace σ² with its estimate σ̂², we obtain

(c⊤θ̂ − γ) / √(σ̂²c⊤(X⊤X)^(−1)c) =d t_{n−k},

which is, again, an exact result. Thus, we test with the t distribution.
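The exact t-test can be sketched on simulated data (a minimal sketch; the data-generating process and all numbers are my own, chosen so the null is true):

```python
import numpy as np
from scipy.stats import t as t_dist

# Simulate Y = X theta + eps with deterministic-style regressors and normal
# errors, then run the exact t-test of H0: theta_3 = 0 (true here).
rng = np.random.default_rng(1)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
theta_true = np.array([1.0, 2.0, 0.0])
Y = X @ theta_true + rng.normal(size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # least-squares estimate
resid = Y - X @ theta_hat
sigma2_hat = resid @ resid / (n - k)            # unbiased estimate of sigma^2
XtX_inv = np.linalg.inv(X.T @ X)

c, gamma = np.array([0.0, 0.0, 1.0]), 0.0       # H0: theta_3 = 0
t_stat = (c @ theta_hat - gamma) / np.sqrt(sigma2_hat * (c @ XtX_inv @ c))
reject = abs(t_stat) > t_dist.ppf(0.975, df=n - k)  # exact t_{n-k} critical value
print(t_stat, reject)
```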
Multiple linear restrictions: Going through the same steps as before, we can show that, under the null H0 : Rθ = r, we have

(Rθ̂ − r)⊤(σ²R(X⊤X)^(−1)R⊤)^(−1)(Rθ̂ − r) =d χ²_q.

When we replace σ² with σ̂² and divide the statistic by q, we obtain

(Rθ̂ − r)⊤(σ̂²R(X⊤X)^(−1)R⊤)^(−1)(Rθ̂ − r) / q =d F_{q,n−k},

which is, once more, an exact result. Thus, we test with the F distribution.
Remarks:
• The t distribution and the F distribution are only used in cross-sectional linear regressions with non-random (predetermined) regressors and normal error terms. Without both assumptions, they would not be the right distributions. We used them in the first assignment only to review linear econometrics (and we made both assumptions, after taking logs).
• Now we are more sophisticated and understand that all we need is the asymptotic distribution of the estimator. Our tests are asymptotic, and we have a very general framework applicable to all estimators. For a single restriction (a two-sided test), we use the normal distribution (never the t); for multiple restrictions (a one-sided test on a quadratic form), we use the chi-squared distribution (never the F).