STAT3023, STAT3923 and STAT4023: Statistical Inference and Statistical Inference (Advanced)

Semester 2, 2020
Seat Number: …………………….. Last Name: ……………………….. Other Names: …………………….. SID: ……………………………….
The University of Sydney
STAT3023, STAT3923 and STAT4023: Statistical Inference and Statistical Inference (Advanced)


Lecturers: ……………………….. and ………………………..
Time allowed: Two hours
This booklet contains 7 pages.
CONFIDENTIAL
There are 6 questions of equal value over pages 2, 3 and 4. Attempt all questions. Show all working. Pages 5, 6 and 7 have a list of useful formulae.

Semester 2, 2020 page 2 of 7

1. (a) Let X be a Binomial(n, p) random variable and let Xi (i = 1, . . . , n) be i.i.d. Bernoulli(p) random variables.
(i) Find the moment generating function of Xi by direct calculation.
(ii) By writing X = Σ_{i=1}^n Xi, show that the moment generating function of X is
M_X(t) = (1 − p + pe^t)^n.
(b) The moment generating function of a random variable X is given by (1/4 + (3/4)e^t)^{10} and that of a random variable Y by exp(2e^t − 2). X and Y are independent.
(i) Find the moment generating function of Z = 3X + 2Y .
(ii) Compute E(XY).
(iii) Compute P(XY = 0). Round your answer to 3 decimal places. [Hint: Use (a) and note the m.g.f. of a Poisson(λ) is exp(λ(e^t − 1)).]
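A quick R check of (iii), assuming (as the given m.g.f.s suggest) that X ∼ Binomial(10, 3/4) and Y ∼ Poisson(2):

p.x0 <- dbinom(0, size = 10, prob = 3/4)  # P(X = 0) = (1/4)^10
p.y0 <- dpois(0, lambda = 2)              # P(Y = 0) = exp(-2)
# XY = 0 iff X = 0 or Y = 0; inclusion-exclusion with independence
round(p.x0 + p.y0 - p.x0 * p.y0, 3)       # 0.135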
2. (a) Let X1, X2 be i.i.d. Exponential(λ) random variables (with p.d.f. f_X(x) = (1/λ)e^{−x/λ} for x > 0) and let Y be an Exponential(θ) random variable, where θ and λ are not necessarily equal.
(i) Find the p.d.f. of U = min(X1, Y ).
(ii) Find the p.d.f. of V = X1/X2 for λ = 1.
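A simulation sketch for checking answers to (i) and (ii), using the scale parametrisation above and illustrative values λ = 2, θ = 3:

set.seed(5)
lambda <- 2; theta <- 3
# (i) the minimum of independent exponentials is exponential with the
# rates (reciprocal scales) added, so E(U) = 1/(1/lambda + 1/theta)
u <- pmin(rexp(1e5, rate = 1/lambda), rexp(1e5, rate = 1/theta))
c(mean(u), 1/(1/lambda + 1/theta))        # both approximately 1.2
# (ii) with lambda = 1, V = X1/X2 has c.d.f. v/(1 + v), so P(V <= 1) = 1/2
v <- rexp(1e5)/rexp(1e5)
mean(v <= 1)                              # approximately 0.5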
(b) Suppose Z is a geometric variable with distribution given by
P(Z = z) = (1 − p)^{z−1} p    (∗)
for z = 1, 2, . . .. By changing parameters to θ = θ(p), write (∗) in the exponential family form P(Z = z) = e^{θz − K(θ)}, and by differentiating K(θ) show that E(Z) = 1/p and Var(Z) = (1 − p)/p².
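A simulation sketch of these two moments; note that R's rgeom counts failures before the first success, so Z in (∗) corresponds to rgeom(...) + 1:

set.seed(1)
p <- 0.3
z <- rgeom(1e6, prob = p) + 1
c(mean(z), 1/p)              # both approximately 3.33
c(var(z), (1 - p)/p^2)       # both approximately 7.78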
3. Suppose X1, . . . , Xn are iid Poisson(θ).
(a) Write down the likelihood function.
(b) Determine and write down the score function.
(c) Derive the variance of the score function and thus the Cramér-Rao lower bound for unbiased estimation of θ.
(d) Show that the sample mean X̄ = (1/n) Σ_{i=1}^n Xi is minimum variance unbiased.
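A simulation sketch of the claim in (d), with illustrative values θ = 4 and n = 25: the variance of X̄ should match the Cramér-Rao bound θ/n.

set.seed(2)
theta <- 4; n <- 25
xbar <- replicate(1e5, mean(rpois(n, lambda = theta)))
c(var(xbar), theta/n)        # both approximately 0.16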
turn to page 3

Semester 2, 2020 page 3 of 7

4. Suppose X1, X2, . . . , Xn are iid binomial(m, θ) for some known positive integer m and unknown 0 < θ < 1.
(a) Write down the likelihood function and verify that this family of distributions has the monotone likelihood ratio property in some statistic T(X). Determine the precise form of T(X).
(b) Suppose that n = 3 and m = 7 and we wish to test H0 : θ = 0.5 against H1 : θ < 0.5. Derive the form of the uniformly most powerful test at level 0.05. You may find the R output below useful.
> cbind(0:7,pbinom(0:7,7,0.5),pbinom(0:7,21,0.5))
[,1] [,2] [,3]
[1,] 0 0.0078125 4.768372e-07
[2,] 1 0.0625000 1.049042e-05
[3,] 2 0.2265625 1.106262e-04
[4,] 3 0.5000000 7.448196e-04
[5,] 4 0.7734375 3.598690e-03
[6,] 5 0.9375000 1.330185e-02
[7,] 6 0.9921875 3.917694e-02
[8,] 7 1.0000000 9.462357e-02
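A sketch of how this output can be used, assuming the MLR statistic works out to be T(X) = Σ_{i=1}^3 Xi, so that T ∼ Binomial(21, 0.5) under H0 (the third column above):

# UMP level-0.05 test rejects for small T; find the largest cutoff k
# with P(T <= k) <= 0.05 under H0
k <- max(which(pbinom(0:21, 21, 0.5) <= 0.05)) - 1  # k = 6
pbinom(k, 21, 0.5)           # 0.0392 <= 0.05, so reject H0 when T <= 6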
5. Suppose that Y1,…,Yn are iid U(0,θ) for some unknown θ > 0.
(a) Write down the likelihood function and explain why the maximum likelihood
estimator is the sample maximum Y(n) = max_{i=1,…,n} Yi.
(b) Determine the posterior distribution for any Bayes procedure that uses the “flat prior” w(θ) ≡ 1.
(c) Describe the Bayes procedure using the flat prior given above and the loss
L(d|θ) = 1 if |d − θ| > ε, 0 otherwise,
for a given ε > 0.
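A numerical sketch of (b)-(c) on illustrative data, assuming the flat-prior posterior works out to be Pareto(n − 1, Y(n)) (density proportional to θ^{−n} on (Y(n), ∞)):

set.seed(3)
y <- runif(10, 0, 2); m <- max(y); n <- length(y); eps <- 0.05
post.cdf <- function(t) ifelse(t < m, 0, 1 - (m/t)^(n - 1))
# the Bayes rule under this loss maximises the posterior probability
# of [d - eps, d + eps]; a grid search suggests the maximiser is m + eps
d.grid <- seq(m, m + 1, by = 0.001)
win <- post.cdf(d.grid + eps) - post.cdf(pmax(d.grid - eps, m))
d.grid[which.max(win)]       # approximately m + eps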
turn to page 4

Semester 2, 2020 page 4 of 7
6. Suppose X1, . . . , Xn are iid exponential random variables with mean θ. Consider the decision problem with decision space D = Θ = (0, ∞) and loss function L(d|θ) = (d − θ)². Let θ̂_B denote the Bayes procedure based on the “flat prior” w(θ) ≡ 1.
(a) Write down the likelihood function and explain why the posterior distribution associated with this prior is an inverse gamma distribution.
(b) Explain why the Bayes procedure is given by θ̂_B = (Σ_{i=1}^n Xi)/(n − 2).
(c) Write down the exact risk (expected loss)
E_θ[L(θ̂_B|θ)] = E_θ[(θ̂_B − θ)²]
as a function of θ and n.
(d) Show that for each θ > 0,
lim_{n→∞} n E_θ[L(θ̂_B|θ)] = θ².
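A simulation sketch of the limit in (d), taking the Bayes procedure from (b) and illustrative values θ = 2 and n = 500:

set.seed(4)
theta <- 2; n <- 500
est <- replicate(1e5, sum(rexp(n, rate = 1/theta))/(n - 2))
n * mean((est - theta)^2)    # approximately theta^2 = 4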
(e) For any 0 < a < b < ∞, let θ̂_(a,b) denote the Bayes procedure using the U(a, b) prior. It can be shown that for all a < θ < b,
lim_{n→∞} n E_θ[L(θ̂_(a,b)|θ)] = lim_{n→∞} n E_θ[L(θ̂_B|θ)].
By appealing to the Asymptotic Minimax Theorem, use this to explain why the original Bayes procedure θ̂_B is asymptotically minimax over any (closed) interval I ⊂ (0, ∞). That is to say, for any other procedure d(X),
lim_{n→∞} max_{θ∈I} n E_θ[L(θ̂_B|θ)] ≤ lim_{n→∞} max_{θ∈I} n E_θ[L(d(X)|θ)].

turn to page 5

Useful Formulae

Probability distributions

Discrete distributions
• Bernoulli: X has a Bernoulli(p) distribution if P(X = 1) = p, P(X = 0) = 1 − p. E(X) = p, Var(X) = p(1 − p).
• Binomial: X has a Binomial(n, p) distribution if, for x = 0, 1, . . . , n, P(X = x) = C(n, x) p^x (1 − p)^{n−x}, where C(n, x) = n!/(x!(n − x)!). E(X) = np, Var(X) = np(1 − p).
• Poisson: X has a Poisson(λ) distribution if, for x = 0, 1, . . ., P(X = x) = e^{−λ} λ^x / x!. E(X) = Var(X) = λ.

Continuous distributions
• Uniform: X ∼ U(a, b), b > a, has density f_X(x) = 1/(b − a) for x ∈ (a, b), 0 otherwise. E(X) = (a + b)/2, Var(X) = (b − a)²/12.
• Normal: X ∼ N(0, 1) has density f_X(x) = (2π)^{−1/2} e^{−x²/2}; E(X) = 0, Var(X) = 1. If Y ∼ N(μ, σ²), then (Y − μ)/σ ∼ N(0, 1).
• Gamma: X ∼ Gamma(α, β) has density
f_X(x) = x^{α−1} e^{−x/β} / (β^α Γ(α)) for x > 0,
where Γ(·) is the Gamma function, with Γ(α) = (α − 1)! for positive integer α and Γ(1) = 1. E(X) = αβ, Var(X) = αβ². Here β is a scale parameter; 1/β is also called the rate parameter.
• Exponential X ∼ Exponential(β) is the same as X ∼ Gamma(1, β). Here the scale parameter β is also the mean.
• Inverse Gamma: X has an Inverse Gamma(α, λ) distribution if X has density
f_X(x) = λ^α e^{−λ/x} / (x^{α+1} Γ(α)) for x > 0.
Note then that Y = X^{−1} has an ordinary gamma distribution with shape α and rate λ; E(X) = λ/(α − 1), Var(X) = λ² / ((α − 1)²(α − 2)).
• Beta: X ∼ Beta(α, β) has density
f_X(x) = x^{α−1} (1 − x)^{β−1} / B(α, β) for 0 < x < 1,
where B(α, β) = Γ(α)Γ(β)/Γ(α + β) is the beta function; E(X) = α/(α + β), Var(X) = αβ / ((α + β)²(α + β + 1)).
• Pareto: X has a Pareto(α, m) distribution if X has density
f_X(x) = α m^α / x^{α+1} for x ≥ m;
E(X) = αm/(α − 1) for α > 1 (+∞ otherwise), Var(X) = αm² / ((α − 1)²(α − 2)) for α > 2 (+∞ for 1 < α ≤ 2, undefined otherwise).

Convergence
• Convergence in distribution: A sequence of random variables X1, X2, . . . is said to converge in distribution to the continuous CDF F if, for any sequence xn → x and real x, P(Xn ≤ xn) → F(x) as n → ∞. If this holds then it also holds with ≤ replaced by <. If F(·) is the N(0, σ²) CDF we also write Xn →d N(0, σ²).
• Central limit theorem: If X1, . . . , Xn are iid random variables with mean μ and variance σ², then as n → ∞,
(Σ_{i=1}^n Xi − nμ) / √(nσ²) →d N(0, 1).
• Asymptotically normal: If √n(Xn − μ) →d N(0, σ²) then we write Xn ∼ AN(μ, σ²/n) and say the sequence {Xn} is asymptotically normal with asymptotic mean μ and asymptotic variance σ²/n.

Transformation of random variables
• One variable: Suppose X has density f(x) and consider y = u(x), where u(·) is a differentiable and either strictly increasing or strictly decreasing function for all values within the range of X for which f(x) ≠ 0. Then we can find x = w(y), and the density of Y = u(X) is given by
g(y) = f(w(y)) · |w′(y)|
for all y with corresponding x such that f(x) ≠ 0, and 0 otherwise.
• Extension of one variable: Suppose (X1, X2) has joint density f(x1, x2) and consider Y = u(X1, X2). If, fixing x2, u(·, x2) satisfies the conditions in the one-variable case, then the joint density of (Y, X2) is given by
g(y, x2) = f(x1, x2) · |∂x1/∂y|,
where x1 needs to be expressed in terms of y and x2. Fixing x1 is similar.
• Delta Method: If Xn ∼ AN(μ, σ²/n) and the function g(·) has derivative g′(μ) at μ, then
g(Xn) ∼ AN(g(μ), g′(μ)² σ²/n).

Exponential family
• A one-parameter exponential family is a set of probability distributions whose density function or probability mass function can be written in the form
f(x; θ) = e^{η(θ)T(x) − ψ(θ)} h(x) I_A(x),
where I_A is an indicator for the support of the distribution, and A does not depend on θ.

Sufficient statistic
• Factorisation theorem: For random variables X = (X1, . . . , Xn), if their joint density function can be written as f(x; θ) = g(T(x); θ) h(x), where x = (x1, . . . , xn), then T(X) is a sufficient statistic.

Cramér-Rao Bound
• If l(θ; X) is a log-likelihood depending on a parameter θ, then under regularity conditions the variance of any unbiased estimator of θ is bounded below by
1 / Var_θ(∂l(θ; X)/∂θ).

Asymptotic Minimax Lower Bound
• Suppose that for a sequence {Ln(·|θ)} of loss functions and for any a < b, the corresponding sequence of Bayes procedures {d̃n(·)} based on the U(a, b) prior is such that for each a < θ < b,
lim_{n→∞} E_θ[Ln(d̃n(X)|θ)] = S(θ)
for some continuous function S(·). Then for any other sequence of procedures {dn(·)}, and any (closed) interval I,
lim_{n→∞} max_{θ∈I} E_θ[Ln(dn(X)|θ)] ≥ max_{θ∈I} S(θ).
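An R illustration of the Delta Method entry above, with the illustrative choices g = log and Exponential(θ) data (so μ = θ, σ² = θ², and the asymptotic variance of log X̄n is 1/n regardless of θ):

set.seed(6)
theta <- 5; n <- 400
g.xbar <- replicate(1e5, log(mean(rexp(n, rate = 1/theta))))
c(var(g.xbar), 1/n)          # both approximately 0.0025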