Semester 2, 2019
Seat Number: …………………….. Last Name: ……………………….. Other Names: …………………….. SID: ……………………………….
The University of Sydney, Faculty of Science
STAT3023 and STAT3923: Statistical Inference and Statistical Inference (Advanced)
Lecturers: ……………………..  Time allowed: Two hours
This booklet contains 7 pages.
CONFIDENTIAL
There are 6 questions of equal value over pages 2, 3 and 4. Attempt all questions. Show all working. Pages 5, 6 and 7 have a list of useful formulae.
1. Let X = (X1, . . . , Xn) be a random sample from a distribution with density function f(x; α), which is the density of a Gamma (α, β) distribution with β = 1/α.
(a) Determine whether the family of distributions f(x; α) belongs to the one-parameter exponential family.
(b) Find a sufficient statistic T(X) for α.
2. (a) Let X be a Binomial (n, p) random variable.
(i) By writing X as the sum of independent Bernoulli (p) random variables, find the moment generating function M_X(t) = E(e^{tX}) for X.
(ii) Find the moment generating function of Y = (X − np)/√n.
(b) Let U1, U2, . . . , Un be iid Uniform (0, 1) random variables.
(i) Define V = − log U1. Find the density function of V .
(ii) Supposing n is sufficiently large, use (i) and the central limit theorem to find an approximation for the probability
P(U1 U2 · · · Un ≤ x), x ∈ (0, 1).
Write your answer in terms of the standard normal c.d.f. Φ(·), x and n.
(iii) Suppose W has the same distribution as V and is independent of V .
Find the density function of Z = V/W.
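The following R lines sketch a Monte Carlo check of the probability in (b)(ii); the values of n, x and the number of replications are arbitrary illustrative choices, not part of the question.

n <- 50; x <- exp(-50); B <- 1e5        # arbitrary illustrative values
u <- matrix(runif(n * B), nrow = B)     # B independent samples of (U1, ..., Un)
mean(rowSums(log(u)) <= log(x))         # estimates P(U1 U2 ... Un <= x), working on the log scale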
3. Suppose X = (X1, . . . , Xn) consists of iid exponential random variables with mean θ.
(a) Verify using integration by parts that Var_θ(X1) = θ².
(b) Write down the likelihood and derive the Cramér-Rao lower bound for the variance of an unbiased estimator of θ. Hence show that the sample mean X̄ = (1/n) Σ_{i=1}^n Xi is minimum variance unbiased.
4. Suppose X = (X1, . . . , Xn) consists of iid Poisson(θ) random variables. It is desired
to test H0 : θ = 1 against H1 : θ < 1.
(a) Write down the likelihood and verify that this family of distributions has the monotone likelihood ratio property in some statistic T(X). Determine the precise form of T(X).
(b) Determine the form of the uniformly most powerful (UMP) test at level α = 0.05 of the above hypotheses in the case n = 10. The R output below may be useful:
> cbind(x,ppois(x,1),ppois(x,10))
[1,] 1 0.7357589 0.0004993992
[2,] 2 0.9196986 0.0027693957
[3,] 3 0.9810118 0.0103360507
[4,] 4 0.9963402 0.0292526881
[5,] 5 0.9994058 0.0670859629
[6,] 6 0.9999168 0.1301414209
[7,] 7 0.9999898 0.2202206466
[8,] 8 0.9999989 0.3328196788
[9,] 9 0.9999999 0.4579297145
[10,] 10 1.0000000 0.5830397502
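For reference, a table like the one above can be produced with R code along the following lines (assuming x holds the values 1 to 10 shown in the first column):

x <- 1:10                              # candidate values shown in the first column
cbind(x, ppois(x, 1), ppois(x, 10))    # cumulative Poisson probabilities with means 1 and 10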
5. Suppose X = (X1, . . . , Xn) consists of iid random variables with the geometric distribution given by
Pθ(X1 = x) = (1 − θ)^{x−1} θ, for x = 1, 2, . . .
Consider the decision problem where the decision space is D = [0, 1] and the loss function is L(d|θ) = (d − θ)². It is desired to minimise the weighted integral of the risk (i.e., the Bayes risk) using the weight function w(θ) ≡ 1.
(a) Determine the posterior distribution.
(b) Determine the Bayes estimator (i.e., the procedure that minimises the Bayes risk).
6. Suppose X = (X1, . . . , Xn) consists of iid U[0, θ] random variables. Let X(n) denote the maximum of the Xi's. It can be shown that the CDF of X(n) is given by
Pθ(X(n) ≤ x) =  0          for x < 0,
                (x/θ)^n    for 0 ≤ x ≤ θ,
                1          for x > θ.
(a) Derive a formula for Eθ[X(n)^k] for positive integer k. Hence verify that the estimator θ̂*_n = ((n + 1)/n) X(n) is unbiased and write down its variance.
(b) Let θ̃n(c, d) denote the Bayes procedure for estimating θ under squared error loss and using the weight function w(θ) = 1{c ≤ θ ≤ d} (the U[c, d] density). It is known that for each 0 ≤ c < θ < d,
lim_{n→∞} n² Eθ[(θ̃n(c, d) − θ)²] = θ².
By appealing to the Asymptotic Minimax Theorem, show that θ̂*_n from part (a) above is asymptotically minimax over any interval 0 ≤ a < b < ∞; that is, for any other estimator θ̂n,
lim_{n→∞} max_{a≤θ≤b} n² Eθ[(θ̂*_n − θ)²] ≤ lim_{n→∞} max_{a≤θ≤b} n² Eθ[(θ̂n − θ)²].
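A small simulation sketch (not part of the question) can be used to check the unbiasedness claim in part (a) numerically; the values of θ, n and the number of replications below are arbitrary illustrative choices.

theta <- 3; n <- 20; B <- 1e5                    # arbitrary illustrative values
xmax <- replicate(B, max(runif(n, 0, theta)))    # B copies of the sample maximum X_(n)
mean((n + 1) / n * xmax)                         # Monte Carlo estimate of E[(n+1) X_(n) / n]; should be close to theta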
Useful Formulae

Probability distributions

Discrete distributions
• Bernoulli X has a Bernoulli (p) distribution if P(X = 1) = p, P(X = 0) = 1 − p. E(X) = p, Var(X) = p(1 − p).
• Binomial X has a Binomial (n, p) distribution if for x = 0, 1, . . . , n, P(X = x) = (n choose x) p^x (1 − p)^{n−x}. E(X) = np, Var(X) = np(1 − p).
• Poisson X has a Poisson (λ) distribution if for x = 0, 1, . . ., P(X = x) = e^{−λ} λ^x / x!. E(X) = Var(X) = λ.
Continuous distributions
• Uniform X ∼ U(a, b), b > a, then X has density fX(x) = 1/(b − a) for x ∈ (a, b), 0 otherwise.
E(X) = (a + b)/2, Var(X) = (b − a)²/12.
• Normal X ∼ N(0, 1), then X has density fX(x) = (2π)^{−1/2} e^{−x²/2}. E(X) = 0, Var(X) = 1.
If Y ∼ N(μ, σ²), then (Y − μ)/σ ∼ N(0, 1).
• Gamma X ∼ Gamma(α, β), then X has density
fX(x) = x^{α−1} e^{−x/β} / (β^α Γ(α)) for x > 0,
where Γ(·) is the Gamma function, with Γ(α) = (α − 1)! for positive integer α and Γ(1) = 1. E(X) = αβ, Var(X) = αβ². Here β is a scale parameter; 1/β is also called the rate parameter.
• Exponential X ∼ Exponential(β) is the same as X ∼ Gamma(1, β). Here the scale parameter β is also the mean.
• Inverse Gamma X has an Inverse Gamma(α, λ) distribution, then X has density
fX(x) = λ^α e^{−λ/x} / (x^{α+1} Γ(α)) for x > 0.
Note then that Y = X^{−1} has an ordinary gamma distribution with shape α and rate λ; E(X) = λ/(α − 1), Var(X) = λ² / ((α − 1)²(α − 2)).
• Beta X ∼ Beta(α, β), then X has density
fX(x) = x^{α−1} (1 − x)^{β−1} / B(α, β) for 0 < x < 1,
where B(α, β) = Γ(α)Γ(β)/Γ(α + β) is the beta function; E(X) = α/(α + β), Var(X) = αβ / ((α + β)²(α + β + 1)).
• Pareto X has a Pareto(α, m) distribution, then X has density
fX(x) = α m^α / x^{α+1} for x ≥ m.
E(X) = αm/(α − 1) for α > 1 (+∞ otherwise); Var(X) = αm² / ((α − 1)²(α − 2)) for α > 2 (+∞ for 1 < α ≤ 2, undefined otherwise).
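These entries can be checked numerically; for instance, the R lines below (with arbitrary illustrative values α = 3 and λ = 2) confirm that the Inverse Gamma density integrates to 1 and has mean λ/(α − 1).

a <- 3; lam <- 2                                                   # arbitrary illustrative shape and rate
f <- function(x) lam^a * exp(-lam / x) / (x^(a + 1) * gamma(a))    # Inverse Gamma(a, lam) density
integrate(f, 0, Inf)$value                                         # should be (numerically) 1
integrate(function(x) x * f(x), 0, Inf)$value                      # should be close to lam / (a - 1) = 1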
Convergence
• Convergence in distribution: A sequence of random variables X1, X2, . . . is said to converge in distribution to the continuous CDF F if, for real x and any sequence xn → x, as n → ∞
P(Xn ≤ xn) → F(x).
If this holds then it also holds with ≤ replaced by <. If F(·) is the N(0, σ²) CDF we also write Xn →d N(0, σ²).
• Central limit theorem: If X1, . . . , Xn are iid random variables with mean μ and variance σ², then as n → ∞,
(Σ_{i=1}^n Xi − nμ) / √(nσ²) →d N(0, 1).
• Asymptotically Normal: If √n(Xn − μ) →d N(0, σ²) then we write Xn ∼ AN(μ, σ²/n) and say the sequence {Xn} is asymptotically normal with asymptotic mean μ and asymptotic variance σ²/n.
• Delta Method: If Xn ∼ AN(μ, σ²/n) and the function g(·) has derivative g′(μ) at μ, then g(Xn) ∼ AN(g(μ), g′(μ)² σ²/n).

Transformation of random variables

• One variable: Suppose X has density f(x), and consider y = u(x) where u(·) is differentiable and either strictly increasing or strictly decreasing for all values within the range of X for which f(x) ≠ 0. Then we can find x = w(y), and the density of Y = u(X) is given by
g(y) = f(w(y)) · |w′(y)|
for all y with corresponding x such that f(x) ≠ 0, and 0 otherwise.
• Extension of one variable: Suppose (X1, X2) has joint density f(x1, x2), and consider Y = u(X1, X2). If, fixing x2, u(·, x2) satisfies the conditions in the one-variable case, then the joint density of (Y, X2) is given by
g(y, x2) = f(x1, x2) · |∂x1/∂y|,
where x1 needs to be expressed in terms of y and x2. Fixing x1 is similar.
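As a brief illustration of the one-variable formula: if X ∼ U(0, 1) and Y = X², then w(y) = √y and the formula gives g(y) = 1/(2√y) for 0 < y < 1. The R lines below (with an arbitrary number of simulated values) compare this density to a histogram of simulated values of Y.

y <- runif(1e5)^2                                          # simulate Y = X^2 with X ~ U(0,1)
hist(y, breaks = 50, freq = FALSE)                         # empirical density of Y
curve(1 / (2 * sqrt(x)), from = 0.01, to = 1, add = TRUE)  # g(y) = 1/(2*sqrt(y)) from the formula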
Exponential family
• A one-parameter exponential family is a set of probability distributions whose density function or probability mass function can be written in the form
f(x; θ) = e^{η(θ)T(x) − ψ(θ)} h(x) IA(x),
where IA is an indicator for the support A of the distribution, and A does not depend on θ.
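For example, the Poisson(λ) probability mass function can be put in this form as f(x; λ) = e^{x log λ − λ} · (1/x!) · I_{{0,1,2,...}}(x), so that η(λ) = log λ, T(x) = x, ψ(λ) = λ and h(x) = 1/x!.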
Sufficient statistic
• Factorisation theorem: For random variables X = (X1 , . . . , Xn ), if their joint density function
can be written as
f (x; θ) = g(T (x); θ)h(x), where x = (x1, . . . , xn), then T (X) is a sufficient statistic.
Cramér-Rao Lower Bound
• If l(θ; X) is a log-likelihood depending on a parameter θ, then under regularity conditions the variance of any unbiased estimator of θ is bounded below by
1 / Varθ(∂l(θ; X)/∂θ).
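For example, if X1, . . . , Xn are iid N(θ, 1), then ∂l(θ; X)/∂θ = Σ_{i=1}^n (Xi − θ), which has variance n, so any unbiased estimator of θ has variance at least 1/n; the sample mean attains this bound.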
Asymptotic Minimax Lower Bound

• Suppose that for a sequence {Ln(·|θ)} of loss functions and for any θ0 < θ1, the corresponding sequence of Bayes procedures {d̄n(·)} based on the U[θ0, θ1] prior is such that for each θ0 < θ < θ1,
lim_{n→∞} Eθ[Ln(d̄n(X)|θ)] = S(θ)
for some continuous function S(·). Then for any other sequence of procedures {dn(·)}, and any a < b,
lim_{n→∞} max_{a≤θ≤b} Eθ[Ln(dn(X)|θ)] ≥ max_{a≤θ≤b} S(θ).