
The University of Sydney School of Mathematics and Statistics
Solutions to Tutorial Week 9
STAT3023: Statistical Inference
Semester 2, 2021


1. If Y has a gamma distribution with shape α and rate* λ, its PDF is

    f_Y(y) = y^(α−1) e^(−λy) λ^α / Γ(α),   for y > 0.

(a) Determine a formula for E(Y^(−k)) which is valid for all (positive) k such that 0 < k < α.

Solution: We have

    E(Y^(−k)) = ∫_0^∞ y^(−k) · y^(α−1) e^(−λy) λ^α / Γ(α) dy
              = [λ^α / Γ(α)] ∫_0^∞ y^(α−k−1) e^(−λy) dy
              = [λ^k / Γ(α)] ∫_0^∞ z^(α−k−1) e^(−z) dz     (changing variables to z = λy)
              = λ^k Γ(α−k) / Γ(α).

Now, for all α > 0, Γ(α+1) = αΓ(α). So we also have

    Γ(α) = (α−1)Γ(α−1) = (α−1)(α−2)Γ(α−2) = ··· = (α−1)(α−2)···(α−k)Γ(α−k)

so long as α − k > 0. Substituting this into the denominator above we get

    E(Y^(−k)) = λ^k / [(α−1)(α−2)···(α−k)].
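As a quick numerical sanity check (not part of the original solutions), the following Python sketch estimates E(Y^(−k)) by Monte Carlo and compares it with the formula above; the parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, lam, k = 9.0, 2.0, 3            # need k < alpha for the moment to exist

    # numpy's gamma sampler takes a shape and a SCALE = 1/rate
    y = rng.gamma(shape=alpha, scale=1.0 / lam, size=10**6)
    mc = np.mean(y ** (-k))

    exact = lam**k / np.prod([alpha - j for j in range(1, k + 1)])
    print(mc, exact)                        # the two values should agree closely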
(b) The random variable X = Y^(−1) is said to have an inverse gamma distribution. Use the previous part to determine the mean and variance of X (for α > 2).

Solution: By part (a) with k = 1 and k = 2,

    E(X) = E(Y^(−1)) = λ/(α−1);
    E(X²) = E(Y^(−2)) = λ² / [(α−1)(α−2)].

Therefore

    Var(X) = E(X²) − [E(X)]²
           = λ²/[(α−1)(α−2)] − λ²/(α−1)²
           = λ² {(α−1) − (α−2)} / [(α−1)²(α−2)]
           = λ² / [(α−1)²(α−2)].
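The same kind of Monte Carlo check (again a sketch, not part of the original solutions) confirms the inverse gamma mean and variance; α > 2 is required and the values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, lam = 6.0, 3.0
    x = 1.0 / rng.gamma(shape=alpha, scale=1.0 / lam, size=10**6)   # X = 1/Y

    print(x.mean(), lam / (alpha - 1))                              # mean check
    print(x.var(), lam**2 / ((alpha - 1) ** 2 * (alpha - 2)))       # variance check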
* Note that the gamma rate parameter is the reciprocal of the gamma scale parameter.
Copyright © 2021 The University of Sydney

(c) Use the CDF method to derive the PDF of X.
Solution: Writing F_Y(·) and f_Y(·) for the CDF and PDF (respectively) of Y, the CDF of X is, for x > 0,

    F_X(x) = P(X ≤ x) = P(Y^(−1) ≤ x) = P(Y ≥ x^(−1)) = 1 − F_Y(x^(−1)).

Differentiating with respect to x (using the chain rule) gives the PDF as

    f_X(x) = x^(−2) f_Y(x^(−1)) = x^(−2) (x^(−1))^(α−1) e^(−λ/x) λ^α / Γ(α)
           = λ^α x^(−(α+1)) e^(−λ/x) / Γ(α),   for x > 0,

which is the inverse gamma PDF with shape α and rate λ.
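The derived density can also be checked against scipy's invgamma distribution, which with shape α and scale λ has exactly this PDF. A sketch with arbitrary parameter values:

    import numpy as np
    from math import gamma as gamma_fn
    from scipy.stats import invgamma

    alpha, lam = 3.5, 2.0
    x = np.linspace(0.1, 5.0, 50)
    f_derived = lam**alpha * x ** (-(alpha + 1)) * np.exp(-lam / x) / gamma_fn(alpha)
    f_scipy = invgamma.pdf(x, alpha, scale=lam)
    print(np.max(np.abs(f_derived - f_scipy)))    # should be essentially 0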
2. Suppose X1, . . . , Xn are iid exponential with mean θ > 0, with common density

    f_θ(x) = (1/θ) e^(−x/θ),   for x > 0.
(a) Determine the maximum likelihood estimator θ̂_ML(X).

Solution: The likelihood is

    ∏_{i=1}^n f_θ(X_i) = ∏_{i=1}^n (1/θ) e^(−X_i/θ) = θ^(−n) e^(−(Σ_{i=1}^n X_i)/θ).    (1)

Taking logs, the log-likelihood is

    −n log θ − (Σ_{i=1}^n X_i)/θ;

differentiating with respect to θ and setting equal to zero gives the maximum likelihood estimator as

    θ̂_ML(X) = (Σ_{i=1}^n X_i)/n = X̄.
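As a sanity check (not part of the original solutions), one can verify numerically that the log-likelihood is maximised at X̄ by minimising its negative over θ; the data here are simulated with an arbitrary true θ.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x = rng.exponential(scale=1.7, size=200)       # true theta = 1.7

    negloglik = lambda t: len(x) * np.log(t) + x.sum() / t
    res = minimize_scalar(negloglik, bounds=(1e-6, 50.0), method="bounded")
    print(res.x, x.mean())                         # numerical maximiser vs X bar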
(b) Determine the Bayes estimator θ̂_flat(X) under squared-error loss using the weight function w(θ) ≡ 1 (the “flat prior”).

Solution: The likelihood at (1), when viewed as a function of θ, is a multiple of the inverse gamma density with shape n − 1 and rate T = Σ_{i=1}^n X_i, so that very density is the posterior. Since we have squared-error loss, the Bayes estimator is the posterior mean which, according to question 1(b), equals

    θ̂_flat(X) = T/(n − 2).
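A direct numerical check of this posterior mean (a sketch only; the values of n and T are arbitrary): integrate the unnormalised posterior θ^(−n) e^(−T/θ) to compute its mean.

    import numpy as np
    from scipy.integrate import quad

    n, T = 10, 8.0
    post = lambda t: t ** (-n) * np.exp(-T / t)        # unnormalised posterior
    num, _ = quad(lambda t: t * post(t), 0, np.inf)
    den, _ = quad(post, 0, np.inf)
    print(num / den, T / (n - 2))                      # should agree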
(c) Determine the Bayes estimator θ̂_conj(X) under squared-error loss using the conjugate prior

    w(θ) = λ0^(α0) e^(−λ0/θ) / [θ^(α0+1) Γ(α0)],   for θ > 0.

Solution: The product of the weight function and the likelihood is of the form

    const. × θ^(−(n+α0+1)) e^(−(T+λ0)/θ),

where (as usual) “const.” is a factor depending on everything except θ. This in turn is a multiple of the inverse gamma density with shape n + α0 and rate T + λ0; that density is the posterior. Again, the estimator is the posterior mean, which in this case is

    θ̂_conj(X) = (T + λ0)/(n + α0 − 1).
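The same integration check works for the conjugate-prior posterior; again a sketch with arbitrary values of n, T, α0 and λ0.

    import numpy as np
    from scipy.integrate import quad

    n, T, alpha0, lam0 = 10, 8.0, 3.0, 2.0
    post = lambda t: t ** (-(n + alpha0 + 1)) * np.exp(-(T + lam0) / t)
    num, _ = quad(lambda t: t * post(t), 0, np.inf)
    den, _ = quad(post, 0, np.inf)
    print(num / den, (T + lam0) / (n + alpha0 - 1))    # should agree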
(d) Determine the risk R(θ|d) of the estimator

    d(X) = (l + Σ_{i=1}^n X_i)/(n + k)

and hence also determine the limiting (rescaled) risk lim_{n→∞} nR(θ|d).
Solution: We know that, as the sum of n iid exponentials with mean θ, T = Σ_{i=1}^n X_i has a gamma distribution with shape n and scale parameter θ (the reciprocal of the rate!). Thus we have E(T) = nθ and Var(T) = nθ².
The variance of the estimator is given by

    Var_θ[d(X)] = [1/(n+k)]² Var_θ(T) = nθ²/(n+k)².

The expected value of the estimator is

    E_θ[d(X)] = [l + E(T)]/(n+k) = (l + nθ)/(n+k),

so the bias is given by

    Bias_θ[d(X)] = E_θ[d(X)] − θ = [l + nθ − (n+k)θ]/(n+k) = (l − kθ)/(n+k).

Therefore the risk is

    R(θ|d) = E_θ{[d(X) − θ]²}
           = Var_θ[d(X)] + {Bias_θ[d(X)]}²
           = [nθ² + (l − kθ)²]/(n+k)².

The limiting (rescaled) risk is

    lim_{n→∞} nR(θ|d) = lim_{n→∞} n[nθ² + (l − kθ)²] / [n²(1 + k/n)²]
                      = lim_{n→∞} [θ² + (l − kθ)²/n] / (1 + k/n)²
                      = θ²,

which is the same for all (fixed) l and k.
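A Monte Carlo sketch (not part of the original solutions) comparing the empirical risk of d(X) with the closed form above; the values of θ, l and k are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    n, theta, l, k = 20, 1.5, 0.7, 2.0
    T = rng.exponential(scale=theta, size=(10**5, n)).sum(axis=1)
    d = (l + T) / (n + k)

    print(np.mean((d - theta) ** 2))                             # empirical risk
    print((n * theta**2 + (l - k * theta) ** 2) / (n + k) ** 2)  # formula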
(e) Determine the risk R(θ|d) and the limiting (rescaled) risk lim_{n→∞} nR(θ|d) where d is replaced by each of the 3 estimators in the questions (a)–(c) above.
Solution: The three estimators θ̂_ML, θ̂_flat and θ̂_conj are all special cases of d(X), for certain choices of l and k:

• θ̂_ML corresponds to l = 0, k = 0;
• θ̂_flat corresponds to l = 0, k = −2;
• θ̂_conj corresponds to l = λ0, k = α0 − 1.

Therefore we have

    R(θ|θ̂_ML) = θ²/n;
    R(θ|θ̂_flat) = (n + 4)θ²/(n − 2)²;
    R(θ|θ̂_conj) = [nθ² + (λ0 − (α0 − 1)θ)²]/(n + α0 − 1)².

By part (d), in all three cases the limiting rescaled risks agree:

    lim_{n→∞} nR(θ|θ̂_ML) = lim_{n→∞} nR(θ|θ̂_flat) = lim_{n→∞} nR(θ|θ̂_conj) = θ².
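Tabulating n·R for increasing n (using the closed-form risks above, with arbitrary θ, α0 and λ0) shows all three rescaled risks approaching θ²:

    theta, alpha0, lam0 = 1.5, 3.0, 2.0
    for n in [10, 100, 1000, 10000]:
        r_ml = theta**2 / n
        r_flat = (n + 4) * theta**2 / (n - 2) ** 2
        r_conj = (n * theta**2 + (lam0 - (alpha0 - 1) * theta) ** 2) / (n + alpha0 - 1) ** 2
        print(n, n * r_ml, n * r_flat, n * r_conj)   # each column -> theta^2 = 2.25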
3. Suppose X1, . . . , Xn are iid U[0, θ] and it is desired to estimate θ using squared-error loss.
(a) Write down the CDF Fθ(x) = Pθ {X1 ≤ x}.
Solution:

    F_θ(x) = 0 for x < 0;  x/θ for 0 ≤ x ≤ θ;  1 for x > θ.
(b) For any n iid random variables Y1 , . . . , Yn the CDF of the maximum Y(n) = maxi=1,…,n Yi
is given by
P(Y(n) ≤ y) = P(Y1 ≤ y,…,Yn ≤ y) = P(Y1 ≤ y)···P(Yn ≤ y) (by independence).
Use this to derive the CDF of X(n) above (the U[0,θ] sample maximum). Solution: The CDF is
    G_n(x; θ) = P_θ(X_(n) ≤ x) = F_θ(x)^n = 0 for x < 0;  (x/θ)^n for 0 ≤ x ≤ θ;  1 for x > θ.

(c) Derive the PDF of X_(n) and hence a formula for E_θ[X_(n)^k], for k = 1, 2, . . ..

Solution: The PDF is

    g_n(x; θ) = G_n′(x; θ) = n x^(n−1)/θ^n for 0 ≤ x ≤ θ,  and 0 for x < 0 or x > θ.

Hence, for k = 1, 2, . . .,

    E_θ[X_(n)^k] = ∫_0^θ x^k g_n(x; θ) dx = (n/θ^n) ∫_0^θ x^(k+n−1) dx
                 = (n/θ^n) [x^(k+n)/(k+n)]_0^θ = nθ^k/(n + k).

(d) Determine the bias, variance and thus mean-squared error (risk) of the maximum likelihood estimator X_(n).

Solution: The expectation of the sample maximum is

    E_θ[X_(n)] = nθ/(n + 1),

so the bias is

    Bias_θ[X_(n)] = E_θ[X_(n)] − θ = nθ/(n + 1) − θ = [nθ − (n + 1)θ]/(n + 1) = −θ/(n + 1).

The mean-square of the sample maximum is

    E_θ[X_(n)²] = nθ²/(n + 2),

so the variance is

    Var_θ[X_(n)] = E_θ[X_(n)²] − (E_θ[X_(n)])²
                 = nθ²/(n + 2) − [nθ/(n + 1)]²
                 = θ² [n(n + 1)² − n²(n + 2)] / [(n + 2)(n + 1)²]
                 = θ² [n³ + 2n² + n − (n³ + 2n²)] / [(n + 2)(n + 1)²]
                 = nθ² / [(n + 2)(n + 1)²].

Therefore the mean-squared error is

    E_θ{[X_(n) − θ]²} = Var_θ[X_(n)] + (Bias_θ[X_(n)])²
                      = nθ²/[(n + 2)(n + 1)²] + θ²/(n + 1)²
                      = [nθ² + (n + 2)θ²] / [(n + 2)(n + 1)²]
                      = 2(n + 1)θ² / [(n + 2)(n + 1)²]
                      = 2θ² / [(n + 2)(n + 1)].
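A Monte Carlo sketch (not part of the original solutions) checking the bias, variance and MSE formulas for the uniform sample maximum; n and θ are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)
    n, theta = 10, 2.0
    xmax = rng.uniform(0, theta, size=(2 * 10**5, n)).max(axis=1)

    print(xmax.mean() - theta, -theta / (n + 1))                             # bias
    print(xmax.var(), n * theta**2 / ((n + 2) * (n + 1) ** 2))               # variance
    print(np.mean((xmax - theta) ** 2), 2 * theta**2 / ((n + 2) * (n + 1)))  # MSE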
(e) Determine the limiting (rescaled) risk lim_{n→∞} n² E_θ{[X_(n) − θ]²}.

Solution:

    n² E_θ{[X_(n) − θ]²} = 2θ² n² / [(n + 2)(n + 1)]
                         = 2θ² / [(1 + 2/n)(1 + 1/n)]
                         → 2θ²  as n → ∞.

(f) Defining the unbiased estimator θ̂_unb(X) = [(n + 1)/n] X_(n), determine

    lim_{n→∞} n² E_θ{[θ̂_unb(X) − θ]²}.

Solution: Firstly, since this estimator is unbiased, the mean-squared error is simply the variance, which is

    Var_θ[θ̂_unb] = Var_θ{[(n + 1)/n] X_(n)}
                 = [(n + 1)/n]² Var_θ[X_(n)]
                 = [(n + 1)/n]² · nθ² / [(n + 2)(n + 1)²]
                 = θ² / [n(n + 2)].

Therefore

    n² E_θ{[θ̂_unb(X) − θ]²} = n²θ² / [n(n + 2)] = θ² / (1 + 2/n) → θ²  as n → ∞.
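Tabulating the rescaled risks (closed forms, with θ = 1 for concreteness) illustrates the two limits, 2θ² for X_(n) and θ² for the unbiased version:

    theta = 1.0
    for n in [10, 100, 1000, 10000]:
        mse_max = 2 * theta**2 / ((n + 2) * (n + 1))
        mse_unb = theta**2 / (n * (n + 2))
        print(n, n**2 * mse_max, n**2 * mse_unb)   # -> 2*theta^2 and theta^2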
