Econometrics (M.Sc.)
Exercises (with Solutions) 359134
1. Problem
Program the Newton-Raphson algorithm for the numerical computation of the ML estimate θˆ of the parameter θ = P(Coin = HEAD) in the coin toss example of Chapter 6 of our script. Replicate the results in Table 6.1 of our script.
Solution
## Below I use the same data that was used to produce the
## results in Table 6.1 of our script. However, you
## can produce new data by setting another seed-value
theta_true <- 0.2 # unknown true theta value
n <- 5 # sample size
## Use a common Random Number Generator:
RNGkind(sample.kind = "Rounding")
## Warning in RNGkind(sample.kind = "Rounding"): non-uniform 'Rounding' sampler used
set.seed(1)
# simulate data: n many (unfair) coin tosses
x <- sample(x       = c(0, 1),
            size    = n,
            replace = TRUE,
            prob    = c(1 - theta_true, theta_true))
## number of heads (i.e., the number of "1"s in x)
h <- sum(x)
## First derivative of the log-likelihood function
Lp_fct <- function(theta, h, n){
  (h / theta) - (n - h) / (1 - theta)
}
## Second derivative of the log-likelihood function
Lpp_fct <- function(theta, h, n){
  - (h / theta^2) - (n - h) / (1 - theta)^2
}
t     <- 1e-10   # convergence criterion
check <- TRUE    # for stopping the while-loop
i     <- 0       # count iterations
theta <- 0.4     # starting value
Lp    <- Lp_fct( theta, h = h, n = n)
Lpp   <- Lpp_fct(theta, h = h, n = n)
while(check){
  i <- i + 1
  ##
  theta_new <- theta[i] - (Lp_fct(theta[i], h=h, n=n) / Lpp_fct(theta[i], h=h, n=n))
  Lp_new    <- Lp_fct( theta_new, h = h, n = n)
  Lpp_new   <- Lpp_fct(theta_new, h = h, n = n)
  ##
  theta <- c(theta, theta_new)
  Lp    <- c(Lp,    Lp_new)
  Lpp   <- c(Lpp,   Lpp_new)
  ##
  if( abs(Lp_fct(theta_new, h=h, n=n)) < t ){check <- FALSE}
}
cbind(theta, Lp, Lp/Lpp) # iterates, score values, and Newton steps Lp/Lpp
## theta Lp
## [1,] 0.4000000 -4.166667e+00 2.400000e-01
## [2,] 0.1600000 1.488095e+00 -3.326733e-02
## [3,] 0.1932673 2.159084e-01 -6.558924e-03
## [4,] 0.1998263 5.433195e-03 -1.736356e-04
## [5,] 0.1999999 3.539786e-06 -1.132731e-07
## [6,] 0.2000000 1.504574e-12 -4.814638e-14
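As a quick cross-check (a sketch that is not part of the script's solution): for the Bernoulli coin-toss likelihood the ML estimate has the closed form θˆ = h/n, and maximizing the log-likelihood numerically with optimize() should give the same value as the last Newton-Raphson iterate above. The objects x, h, and n are assumed to be those defined in the code above; the names theta_hat_closed, loglik_coin, and theta_hat_opt are chosen here for illustration.

## Cross-check (sketch): closed-form ML estimate and numerical maximization
theta_hat_closed <- h / n                 # Bernoulli MLE: share of heads
loglik_coin <- function(theta, h, n){     # Bernoulli log-likelihood
  h * log(theta) + (n - h) * log(1 - theta)
}
theta_hat_opt <- optimize(loglik_coin, interval = c(0.001, 0.999),
                          maximum = TRUE, h = h, n = n)$maximum
c(closed_form = theta_hat_closed, optimize = theta_hat_opt)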
2. Problem
Assume an i.i.d. random sample $X_1, \dots, X_n$ from an exponential distribution, i.e., the underlying density of $X_i$ is given by $f(x\,|\,\theta) = \theta\exp(-\theta x)$. We then have $\mu := E(X_i) = \frac{1}{\theta}$ as well as $\mathrm{Var}(X_i) = \frac{1}{\theta^2}$.
(a) What is the log-likelihood function for the i.i.d. random sample $X_1, \dots, X_n$?
(b) Derive the maximum likelihood (ML) estimator $\hat\theta_n$ of $\theta$.
(c) From maximum likelihood theory we know that
\[
(\hat\theta_n - \theta) \to_d N\!\left(0, \frac{1}{n J(\theta)}\right).
\]
Derive the expression for the Fisher information $I(\theta) = nJ(\theta)$. Use the Fisher information to give the explicit formula for the asymptotic distribution of $\hat\theta_n$.
Solution
(a) The log-likelihood function is given by
\[
\ell(\theta) = \sum_{i=1}^{n} \ln\bigl(\theta \exp(-\theta X_i)\bigr)
             = \sum_{i=1}^{n} \bigl(\ln\theta - \theta X_i\bigr)
             = n\ln\theta - \theta\sum_{i=1}^{n} X_i.
\]
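The simplified expression can be checked numerically. Below is a small sketch (not part of the original solution) using simulated exponential data; the names theta0, x_exp, and loglik_exp are chosen here for illustration. The simplified form should coincide with the direct sum of log-densities, sum(dexp(..., log = TRUE)).

## Numerical check of the simplified log-likelihood (sketch)
set.seed(1)
theta0 <- 2                              # arbitrary rate used for the check
x_exp  <- rexp(n = 100, rate = theta0)   # simulated exponential sample
## simplified form: n*log(theta) - theta*sum(x)
loglik_exp <- function(theta, x){ length(x) * log(theta) - theta * sum(x) }
## both values should agree up to numerical error
c(simplified = loglik_exp(1.5, x = x_exp),
  direct     = sum(dexp(x_exp, rate = 1.5, log = TRUE)))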
(b) The ML estimator is defined as $\hat\theta_n = \arg\max_\theta \ell(\theta)$. Deriving the ML estimator $\hat\theta_n$:
\[
\ell'(\theta) = n\frac{1}{\theta} - \sum_{i=1}^{n} X_i
\]
\[
\ell'(\hat\theta_n) = 0
\;\Leftrightarrow\; 0 = n\frac{1}{\hat\theta_n} - \sum_{i=1}^{n} X_i
\;\Leftrightarrow\; n\frac{1}{\hat\theta_n} = \sum_{i=1}^{n} X_i
\;\Leftrightarrow\; \hat\theta_n = \frac{1}{\frac{1}{n}\sum_{i=1}^{n} X_i} = \frac{1}{\bar{X}}.
\]
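As a numerical cross-check (again only a sketch, reusing x_exp and loglik_exp from the illustration above), optimize() should return a maximizer close to 1/mean(x_exp):

## Numerical maximization vs. closed-form ML estimator (sketch)
theta_hat_num <- optimize(loglik_exp, interval = c(0.01, 10),
                          maximum = TRUE, x = x_exp)$maximum
c(closed_form = 1 / mean(x_exp), numerical = theta_hat_num)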
(c) The Fisher information is given by $I(\theta) = nJ(\theta)$, where $J(\theta) = -\frac{1}{n}E\bigl(\ell''(\theta)\bigr)$. The second derivative of $\ell(\theta)$ is given by
\[
\ell''(\theta) = \frac{\partial}{\partial\theta}\left(n\frac{1}{\theta} - \sum_{i=1}^{n} X_i\right) = -n\frac{1}{\theta^2}.
\]
So, the expression for $\ell''(\theta)$ is here deterministic as it doesn't depend on the random variables $X_i$. Hence,
\[
J(\theta) = -\frac{1}{n}E\bigl(\ell''(\theta)\bigr) = -\frac{1}{n}\left(-n\frac{1}{\theta^2}\right) = \frac{1}{\theta^2}.
\]
That is, the Fisher information is $I(\theta) = nJ(\theta) = n/\theta^2$. Therefore, the asymptotic distribution of $\hat\theta_n$ is
\[
(\hat\theta_n - \theta) \to_d N\!\left(0, \frac{\theta^2}{n}\right)
\;\Leftrightarrow\;
\sqrt{n}\,(\hat\theta_n - \theta) \to_d N\bigl(0, \theta^2\bigr).
\]
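A small Monte Carlo sketch (not part of the original solution; the rate, sample size, and number of replications below are chosen arbitrarily) can illustrate this result: the standard deviation of $\sqrt{n}\,(\hat\theta_n - \theta)$ across many simulated samples should be close to $\theta$.

## Monte Carlo illustration of the asymptotic distribution (sketch)
set.seed(1)
theta0 <- 2      # true rate parameter
n_obs  <- 500    # sample size per replication
R_mc   <- 5000   # number of Monte Carlo replications
theta_hat <- replicate(R_mc, 1 / mean(rexp(n_obs, rate = theta0)))
c(sd_simulated  = sd(sqrt(n_obs) * (theta_hat - theta0)),
  sd_asymptotic = theta0)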