The University of Sydney School of Mathematics and Statistics
Solutions to Tutorial Week 5
STAT3023/3923/4023: Statistical Inference Semester 2, 2021
Lecturers:
1. Assume that for the ordered pairs $(x_1, y_1), \ldots, (x_n, y_n)$ the $x_i$s are not all zero. Find the value of $\theta$ that minimises $S(\theta) = \sum_{i=1}^n (y_i - \theta x_i)^2$.
Solution: $S(\theta) = \sum_{i=1}^n (y_i - \theta x_i)^2 = \sum_i y_i^2 - 2\theta \sum_i x_i y_i + \theta^2 \sum_i x_i^2$ is a parabola (degree 2 polynomial) in $\theta$. So long as the $x_i$'s are not all zero, the coefficient of $\theta^2$ is strictly positive, so the parabola is "concave up". Thus the minimiser is obtained by differentiating with respect to $\theta$ and setting to zero. The derivative is $S'(\theta) = -2\sum_i x_i y_i + 2\theta \sum_i x_i^2$; setting this to zero gives
$$\hat\theta = \frac{\sum_i x_i y_i}{\sum_i x_i^2}\,.$$
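As an optional numerical sanity check (not part of the original solution), the sketch below compares the closed-form minimiser with a brute-force grid minimisation of $S(\theta)$ on simulated data; the data, seed and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 2.5 * x + rng.normal(size=20)          # illustrative data

def S(theta):
    return np.sum((y - theta * x) ** 2)

# Closed-form minimiser from the solution above
theta_hat = np.sum(x * y) / np.sum(x ** 2)

# Brute-force check on a fine grid around theta_hat
grid = np.linspace(theta_hat - 1, theta_hat + 1, 20001)
theta_grid = grid[np.argmin([S(t) for t in grid])]

print(theta_hat, theta_grid)               # the two values should agree closely
```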
2. If $Y = a + bX$, show that $\mathrm{Cov}(X, Y)^2 = \mathrm{Var}(X)\,\mathrm{Var}(Y)$.
Solution: Note firstly that $\mathrm{Var}(Y) = \mathrm{Var}(a + bX) = \mathrm{Var}(bX) = b^2\mathrm{Var}(X)$. So the product of the variances is
$$\mathrm{Var}(X)\mathrm{Var}(Y) = b^2\left[\mathrm{Var}(X)\right]^2. \qquad (1)$$
There are various ways to write covariance. The "definition" is
$$\mathrm{Cov}(X, Y) = E\{[X - E(X)][Y - E(Y)]\}.$$
A more convenient alternative formula is
$$\begin{aligned}
\mathrm{Cov}(X, Y) &= E(XY) - E(X)E(Y) \\
&= E[X(a + bX)] - E(X)E(a + bX) \\
&= aE(X) + bE(X^2) - E(X)[a + bE(X)] \\
&= aE(X) + bE(X^2) - aE(X) - b[E(X)]^2 \\
&= b\left\{E(X^2) - [E(X)]^2\right\} \\
&= b\,\mathrm{Var}(X).
\end{aligned}$$
So comparing this with equation (1) above we see that
$$\mathrm{Cov}(X, Y)^2 = b^2\left[\mathrm{Var}(X)\right]^2 = \mathrm{Var}(X)\mathrm{Var}(Y)\,.$$
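The identity also holds exactly for sample covariances and variances when $Y$ is an exact linear function of $X$; here is a minimal check with simulated values (not from the tutorial; the numbers and names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
a, b = 3.0, -2.0
y = a + b * x                                # Y is an exact linear function of X

cov_xy = np.cov(x, y)[0, 1]                  # sample covariance
lhs = cov_xy ** 2
rhs = np.var(x, ddof=1) * np.var(y, ddof=1)  # product of sample variances

print(lhs, rhs)                              # equal up to floating-point error
```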
3. Suppose $X_1, \ldots, X_n$ are iid random variables with
$$P(X_1 = x) = (1 - p)^{x-1} p$$
for some $0 < p < 1$ and $x = 1, 2, \ldots$
(a) Show that the moment-generating function of $X_1$ is
$$E\left[e^{tX_1}\right] = \frac{p e^t}{1 - (1 - p)e^t}\,.$$
For which values of $t$ is this finite?
Solution:
$$E\left[e^{tX_1}\right] = \sum_{x=1}^{\infty} e^{tx}(1 - p)^{x-1} p = \frac{p}{1 - p} \sum_{x=1}^{\infty} \left[e^t(1 - p)\right]^x.$$
The infinite sum is a geometric series and only converges to a finite value if the "common ratio" $e^t(1 - p)$ is less than 1 in absolute value. Since it is always non-negative, we need
$$e^t(1 - p) < 1 \iff e^t < \frac{1}{1 - p} \iff t < -\log(1 - p)\,.$$
For such $t$, the infinite sum is
$$\sum_{x=1}^{\infty} \left[e^t(1 - p)\right]^x = \frac{e^t(1 - p)}{1 - e^t(1 - p)}\,,$$
so
$$E\left[e^{tX_1}\right] = \frac{p}{1 - p} \times \frac{e^t(1 - p)}{1 - e^t(1 - p)} = \frac{p e^t}{1 - e^t(1 - p)}\,,$$
as required.
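A small simulation-based check of this formula (not part of the original solution; the parameter value, seed and sample size are illustrative), comparing the empirical mean of $e^{tX_1}$ with $p e^t / \{1 - (1-p)e^t\}$ for a $t$ inside the region of finiteness:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.3
t = 0.5 * (-np.log(1 - p))                 # a value of t with t < -log(1-p)

x = rng.geometric(p, size=200_000)         # support {1, 2, ...}, P(X=x) = (1-p)^(x-1) p
empirical_mgf = np.mean(np.exp(t * x))
formula = p * np.exp(t) / (1 - (1 - p) * np.exp(t))

print(empirical_mgf, formula)              # should agree to a couple of decimal places
```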
(b) Use the previous question to determine $E(X_1)$ and $\mathrm{Var}(X_1)$.
Solution: There are two ways we can do this. One is to differentiate the moment-generating function twice, set $t = 0$ to obtain $E(X_1)$ and $E(X_1^2)$, and from there get the variance. Another way is to first take the logarithm of the MGF to get the cumulant-generating function, differentiate this twice, set $t = 0$ and then get the first two cumulants, $E(X_1)$ and $\mathrm{Var}(X_1)$.
The form of the MGF means it will probably be easier to take logs and use the second method. The cumulant-generating function (and its first two derivatives) are
$$\psi(t) = \log E\left[e^{tX_1}\right] = \log p + t - \log\left\{1 - e^t(1 - p)\right\},$$
$$\psi'(t) = 1 - \frac{-e^t(1 - p)}{1 - e^t(1 - p)} = \frac{1 - e^t(1 - p) + e^t(1 - p)}{1 - e^t(1 - p)} = \frac{1}{1 - e^t(1 - p)}\,,$$
$$\psi''(t) = -\left\{1 - e^t(1 - p)\right\}^{-2} \left\{-e^t(1 - p)\right\} = \frac{e^t(1 - p)}{\left\{1 - e^t(1 - p)\right\}^2}\,.$$
Setting $t = 0$ gives
$$E(X_1) = \psi'(0) = \frac{1}{p}\,, \qquad \mathrm{Var}(X_1) = \psi''(0) = \frac{1 - p}{p^2}\,.$$
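As an optional check (not part of the original solution), one can numerically differentiate the cumulant-generating function $\psi(t)$ at $t = 0$ with central differences and compare with $1/p$ and $(1-p)/p^2$; the value of $p$ and the step size below are illustrative.

```python
import numpy as np

p = 0.3  # illustrative value of p

def psi(t):
    """Cumulant-generating function of the geometric(p) distribution on {1, 2, ...}."""
    return np.log(p) + t - np.log(1 - np.exp(t) * (1 - p))

h = 1e-5
mean_numeric = (psi(h) - psi(-h)) / (2 * h)              # central difference for psi'(0)
var_numeric = (psi(h) - 2 * psi(0.0) + psi(-h)) / h**2   # central difference for psi''(0)

print(mean_numeric, 1 / p)          # both approximately 1/p
print(var_numeric, (1 - p) / p**2)  # both approximately (1-p)/p^2
```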
(c) Determine the (i) likelihood, (ii) score function, and (iii) Cramér–Rao lower bound when the parameter of interest is $\theta = p$. Is the score function in the "nice" form required for an MVU estimator to exist? If yes, can you identify it and verify that it attains the lower bound?
Solution:
(i) The likelihood is
$$f_\theta(x) = \prod_{i=1}^{n} (1 - \theta)^{x_i - 1}\theta = (1 - \theta)^{\sum_i x_i - n}\theta^n = (1 - \theta)^{t - n}\theta^n,$$
where $t = \sum_{i=1}^n x_i$ is the sample total.
(ii) The log-likelihood is
$$\log f_\theta(x) = (t - n)\log(1 - \theta) + n\log\theta,$$
so the score function is
$$\frac{\partial \log f_\theta(x)}{\partial\theta} = \frac{n}{\theta} - \frac{t - n}{1 - \theta} = \frac{n(1 - \theta) - \theta(t - n)}{\theta(1 - \theta)} = \frac{n - \theta t}{\theta(1 - \theta)} = -\frac{n}{1 - \theta}\left(\frac{t}{n} - \frac{1}{\theta}\right)$$
(note, this is not quite in the "nice" form $C_\theta\left(\hat\theta(x) - \theta\right)$; rather it is $C_\theta\left(g(x) - \frac{1}{\theta}\right)$, see (2) above).
(iii) Viewed as a random variable,
$$\frac{\partial \log f_\theta(X)}{\partial\theta} = \frac{n - \theta T}{\theta(1 - \theta)} = -\frac{n}{1 - \theta}\left(\bar X - \frac{1}{\theta}\right),$$
where $T = \sum_{i=1}^n X_i$, so
$$\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right) = \left(\frac{1}{1 - \theta}\right)^2 \mathrm{Var}_\theta(T) = \left(\frac{1}{1 - \theta}\right)^2 n\,\mathrm{Var}_\theta(X_1) = \left(\frac{1}{1 - \theta}\right)^2 \frac{n(1 - \theta)}{\theta^2} = \frac{n}{\theta^2(1 - \theta)}\,.$$
So for any unbiased estimator $\hat\theta(X)$, we have
$$\mathrm{Var}_\theta\left(\hat\theta(X)\right) \ge \frac{1}{\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right)} = \frac{\theta^2(1 - \theta)}{n}\,.$$
Finally, the score function is not in the nice form, so we cannot "read off" what the optimal estimator is; indeed an MVU does not exist.
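A brief simulation (an optional illustration, not from the original solution; the parameter values, seed and sample sizes are arbitrary) checking that the score evaluated at the true $\theta = p$ has mean zero and variance equal to the Fisher information $n/\{\theta^2(1-\theta)\}$, the reciprocal of the lower bound above:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, reps = 0.3, 50, 20_000
theta = p                                   # parameter of interest here is theta = p

x = rng.geometric(p, size=(reps, n))        # reps independent samples of size n
t = x.sum(axis=1)                           # sample totals

score = (n - theta * t) / (theta * (1 - theta))      # score evaluated at the true theta
print(np.mean(score))                                # ~ 0 (the score has mean zero)
print(np.var(score), n / (theta**2 * (1 - theta)))   # ~ Fisher information n / (theta^2 (1 - theta))
```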
(d) Determine the (i) likelihood, (ii) score function, and (iii) Cramér–Rao lower bound when the parameter of interest is $\theta = 1/p = E(X_1)$. Is the score function in the "nice" form required for an MVU estimator to exist? If yes, can you identify it and verify that it attains the lower bound?
Solution:
(i) In this case the common PMF, as a function of $\theta = 1/p$, is
$$P(X_1 = x) = \left(1 - \frac{1}{\theta}\right)^{x-1}\frac{1}{\theta} = \frac{(\theta - 1)^{x-1}}{\theta^x}\,,$$
so the likelihood now looks like
$$f_\theta(x) = \prod_{i=1}^{n} \frac{(\theta - 1)^{x_i - 1}}{\theta^{x_i}} = \frac{(\theta - 1)^{t - n}}{\theta^t}\,,$$
where again $t = \sum_{i=1}^n x_i$ is the "sample total".
(ii) The log-likelihood is
$$\log f_\theta(x) = (t - n)\log(\theta - 1) - t\log\theta\,,$$
so the score function is
$$\frac{\partial \log f_\theta(x)}{\partial\theta} = \frac{t - n}{\theta - 1} - \frac{t}{\theta} = \frac{\theta(t - n) - t(\theta - 1)}{\theta(\theta - 1)} = \frac{t - \theta n}{\theta(\theta - 1)} = \frac{n}{\theta(\theta - 1)}\left(\frac{t}{n} - \theta\right) = \frac{n}{\theta(\theta - 1)}\left(\bar x - \theta\right),$$
where $\bar x = t/n$ is the sample average. Note that this is in the nice form $C_\theta\left(\hat\theta(x) - \theta\right)$!
(iii) Note firstly that the population variance is
$$\mathrm{Var}_\theta(X_1) = \frac{1 - p}{p^2} = \frac{1}{p^2} - \frac{1}{p} = \theta^2 - \theta = \theta(\theta - 1)\,.$$
Viewed as a random variable,
$$\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right) = \mathrm{Var}_\theta\left(\frac{n}{\theta(\theta - 1)}\left(\bar X - \theta\right)\right) = \left(\frac{n}{\theta(\theta - 1)}\right)^2 \mathrm{Var}_\theta\left(\bar X\right) = \left(\frac{n}{\theta(\theta - 1)}\right)^2 \frac{\mathrm{Var}_\theta(X_1)}{n} = \left(\frac{n}{\theta(\theta - 1)}\right)^2 \frac{\theta(\theta - 1)}{n} = \frac{n}{\theta(\theta - 1)}\,.$$
So for any unbiased estimator $\hat\theta(X)$ we have
$$\mathrm{Var}_\theta\left(\hat\theta(X)\right) \ge \frac{1}{\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right)} = \frac{\theta(\theta - 1)}{n}\,.$$
Finally: yes! The score function is in the "nice" form, and we can read off the MVU estimator as $\bar X$. As we showed above,
$$\mathrm{Var}_\theta\left(\bar X\right) = \frac{\mathrm{Var}_\theta(X_1)}{n} = \frac{\theta(\theta - 1)}{n}$$
attains the lower bound. So $\bar X$ is MVU (minimum variance unbiased) in this case.
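A brief simulation of this conclusion (an optional check, not part of the original solution; the parameter values, seed and sample sizes are illustrative), comparing the simulated variance of $\bar X$ with the lower bound $\theta(\theta-1)/n$ for $\theta = 1/p$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.3, 50, 20_000
theta = 1 / p

# Many samples of size n; each row gives one value of the sample mean
xbar = rng.geometric(p, size=(reps, n)).mean(axis=1)

print(np.mean(xbar), theta)                     # unbiased: mean of xbar ~ theta
print(np.var(xbar), theta * (theta - 1) / n)    # simulated variance vs the bound theta(theta-1)/n
```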
4. Suppose $X_1, \ldots, X_n$ are iid exponential random variables with mean $\theta$, so that the CDF is
$$F_\theta(x) = P_\theta(X_1 \le x) = \begin{cases} 1 - e^{-x/\theta} & \text{for } x > 0, \\ 0 & \text{otherwise.} \end{cases}$$
Show that the sample mean $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$ is a minimum-variance unbiased estimator of $\theta$. Hint: recall that the exponential distribution has the population SD equal to the population mean.
Solution: We may follow the same steps as the previous question. The common density is given by
$$f_\theta(x) = \begin{cases} \frac{1}{\theta}e^{-x/\theta} & \text{for } x > 0, \\ 0 & \text{otherwise,} \end{cases}$$
so the likelihood based on observations $x = (x_1, \ldots, x_n)^T$ is
$$f_\theta(x) = \prod_{i=1}^{n} \frac{1}{\theta}e^{-x_i/\theta} = \frac{1}{\theta^n}e^{-t/\theta}\,,$$
where $t = \sum_{i=1}^n x_i$ is the sample total. The log-likelihood is
$$\log f_\theta(x) = -\frac{t}{\theta} - n\log\theta$$
and the score function is
$$\frac{\partial \log f_\theta(x)}{\partial\theta} = \frac{t}{\theta^2} - \frac{n}{\theta} = \frac{n}{\theta^2}\left(\frac{t}{n} - \theta\right) = \frac{n}{\theta^2}\left(\bar x - \theta\right),$$
which is in the "nice form" $C_\theta\left(\hat\theta(x) - \theta\right)$!
Viewed as a random variable (note that $\mathrm{Var}_\theta(X_1) = \theta^2$),
$$\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right) = \mathrm{Var}_\theta\left(\frac{n}{\theta^2}\bar X\right) = \frac{n^2}{\theta^4}\mathrm{Var}_\theta\left(\bar X\right) = \frac{n^2}{\theta^4}\cdot\frac{\mathrm{Var}_\theta(X_1)}{n} = \frac{n}{\theta^2}\,.$$
So for any unbiased estimator $\hat\theta(X)$ we have
$$\mathrm{Var}_\theta\left(\hat\theta(X)\right) \ge \frac{1}{\mathrm{Var}_\theta\left(\frac{\partial \log f_\theta(X)}{\partial\theta}\right)} = \frac{\theta^2}{n} = \mathrm{Var}_\theta\left(\bar X\right),$$
so $\bar X$ is MVU. Note also that the (random) score function is of the form $C_\theta\left(\bar X - \theta\right)$, which also shows us directly that $\bar X$ is MVU.
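As a final optional check (not from the tutorial; the mean, seed and sample sizes below are arbitrary), a simulation of exponential samples shows $\bar X$ being unbiased with variance matching the bound $\theta^2/n$:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 40, 20_000

# rng.exponential is parameterised by its mean (scale), here theta
xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)

print(np.mean(xbar), theta)                 # unbiased: mean of xbar ~ theta
print(np.var(xbar), theta**2 / n)           # variance attains the lower bound theta^2 / n
```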