
Statistics 100B
University of California, Los Angeles
Department of Statistics
Instructor: Nicolas Christou

Order statistics – derivations
Let $X_1, X_2, \ldots, X_n$ denote independent continuous random variables with cdf $F(x)$ and pdf $f(x)$. We will denote the ordered random variables by $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$, where $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$.
Probability density function of the jth order statistic.

$$g_{X_{(j)}}(x) = \frac{n!}{(n-j)!\,(j-1)!}\,[F_X(x)]^{j-1}\,[1 - F_X(x)]^{n-j}\, f_X(x).$$
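As a quick check before the proof, setting $j = 1$ and $j = n$ recovers the familiar densities of the minimum and the maximum,

$$g_{X_{(1)}}(x) = n\,[1 - F_X(x)]^{n-1} f_X(x), \qquad g_{X_{(n)}}(x) = n\,[F_X(x)]^{n-1} f_X(x),$$

which also follow directly by differentiating $P(X_{(n)} \le x) = [F_X(x)]^n$ and $P(X_{(1)} > x) = [1 - F_X(x)]^n$.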
Proof:
We will find the cdf of the jth order statistic and then obtain the pdf by taking the derivative of the cdf. The cdf is denoted by $F_{X_{(j)}}(x) = P(X_{(j)} \le x)$. Now let's introduce a discrete random variable $Y$ that counts the number of the $X_i$ less than or equal to $x$. The statement $P(X_{(j)} \le x)$ is the same as $P(Y \ge j)$. Why? The $j$th smallest observation is at or below $x$ exactly when at least $j$ of the $n$ observations fall at or below $x$. If we call "success" the event $X_i \le x$, then $Y \sim b(n, p)$ with $p = F_X(x)$, that is, $Y \sim b(n, F_X(x))$.
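For instance, with $n = 3$ and $j = 2$ the sample median is at or below $x$ exactly when two or all three of the observations are, so

$$P(X_{(2)} \le x) = P(Y \ge 2) = \binom{3}{2} F_X(x)^{2}\,[1 - F_X(x)] + \binom{3}{3} F_X(x)^{3} = 3F_X(x)^{2} - 2F_X(x)^{3}.$$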
$$F_{X_{(j)}}(x) = P(X_{(j)} \le x) = P(Y \ge j) = \sum_{k=j}^{n} \binom{n}{k} p^{k}(1-p)^{n-k} = \sum_{k=j}^{n} \binom{n}{k} F_X(x)^{k}\,[1 - F_X(x)]^{n-k}.$$

Now the pdf:

$$
\begin{aligned}
g_{X_{(j)}}(x) ={}& \frac{d}{dx} F_{X_{(j)}}(x) \\
={}& \sum_{k=j}^{n} \binom{n}{k} k\,F_X(x)^{k-1} f_X(x)\,[1 - F_X(x)]^{n-k}
 - \sum_{k=j}^{n} \binom{n}{k} (n-k)\,F_X(x)^{k}\,[1 - F_X(x)]^{n-k-1} f_X(x) \\
&\quad \text{(the last term of the second sum is zero when } k = n\text{)} \\
={}& \frac{n!}{(n-j)!\,(j-1)!}\, f_X(x)\, F_X(x)^{j-1}\,[1 - F_X(x)]^{n-j}
\quad \text{(the } k = j \text{ term of the first sum)} \\
&+ \sum_{k=j+1}^{n} \binom{n}{k} k\,F_X(x)^{k-1} f_X(x)\,[1 - F_X(x)]^{n-k}
 - \sum_{k=j}^{n-1} \binom{n}{k} (n-k)\,F_X(x)^{k}\,[1 - F_X(x)]^{n-k-1} f_X(x) \\
={}& \frac{n!}{(n-j)!\,(j-1)!}\, f_X(x)\, F_X(x)^{j-1}\,[1 - F_X(x)]^{n-j} \\
&+ \sum_{k=j}^{n-1} \binom{n}{k+1} (k+1)\,F_X(x)^{k} f_X(x)\,[1 - F_X(x)]^{n-k-1}
 - \sum_{k=j}^{n-1} \binom{n}{k} (n-k)\,F_X(x)^{k}\,[1 - F_X(x)]^{n-k-1} f_X(x) \\
={}& \frac{n!}{(n-j)!\,(j-1)!}\,[F_X(x)]^{j-1}\,[1 - F_X(x)]^{n-j}\, f_X(x).
\end{aligned}
$$

Note: $\binom{n}{k+1}(k+1) = \binom{n}{k}(n-k)$, so the two sums before the last line cancel!
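The closed form is easy to check by simulation. Below is a minimal Monte Carlo sketch (assuming NumPy is available; $n = 5$, $j = 2$, and the number of replications are illustrative choices): for iid Uniform(0,1) draws, $F_X(x) = x$ and $f_X(x) = 1$ on $(0,1)$, so the density above becomes $\frac{n!}{(n-j)!\,(j-1)!}\, x^{j-1}(1-x)^{n-j}$.

```python
# Monte Carlo sanity check of the order-statistic pdf (a sketch; n, j, and
# the number of replications are illustrative choices).
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, j, reps = 5, 2, 200_000

# j-th smallest of each row of n iid Uniform(0,1) draws
samples = np.sort(rng.uniform(size=(reps, n)), axis=1)[:, j - 1]

# Theoretical density: n!/((n-j)!(j-1)!) * x^(j-1) * (1-x)^(n-j)
x = np.linspace(0.01, 0.99, 50)
g = factorial(n) / (factorial(n - j) * factorial(j - 1)) * x**(j - 1) * (1 - x)**(n - j)

# Compare against a normalized histogram; the discrepancy should be small
hist, edges = np.histogram(samples, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.interp(centers, x, g))))
```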

An intuitive derivation of the density function of the jth order statistic. This intuitive derivation is based on the result that, for a continuous random variable $Y$ with density $f$, $P(y \le Y \le y + dy) \approx f(y)\,dy$.
Consider the jth order statistic $X_{(j)}$. If $X_{(j)}$ is in the neighborhood of $x$, then there are

- $j - 1$ random variables less than $x$, each one with probability $p_1 = P(X \le x) = F_X(x)$,
- $1$ random variable near $x$, with probability $p_2 = P(x \le X \le x + dx) \approx f_X(x)\,dx$, and
- $n - j$ random variables larger than $x$, each one with probability $p_3 = P(X > x) = 1 - P(X \le x) = 1 - F_X(x)$.
Therefore,

$$P(x \le X_{(j)} \le x + dx) \approx g_{X_{(j)}}(x)\,dx = \binom{n}{j-1,\,1,\,n-j} p_1^{j-1}\, p_2^{1}\, p_3^{n-j} \quad \text{(multinomial distribution)}$$

$$= \frac{n!}{(j-1)!\,1!\,(n-j)!}\, F_X(x)^{j-1}\,[f_X(x)\,dx]\,[1 - F_X(x)]^{n-j},$$

so that

$$g_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!}\, F_X(x)^{j-1} f_X(x)\,[1 - F_X(x)]^{n-j}.$$

Using this intuitive derivation we can now find the joint probability density function of $X_{(i)}, X_{(j)}$ for $i < j$. Using the same approximation as above, $P(u \le X_{(i)} \le u + du,\, v \le X_{(j)} \le v + dv) \approx g_{X_{(i)},X_{(j)}}(u,v)\,du\,dv$. For $u < v$ we need to have the following arrangement:

- $i - 1$ random variables less than $u$, each one with probability $p_1 = P(X \le u) = F_X(u)$,
- $1$ random variable near $u$, with probability $p_2 = P(u \le X \le u + du) \approx f_X(u)\,du$,
- $j - 1 - i$ random variables between $u$ and $v$, with probability $p_3 = P(u \le X \le v) = F_X(v) - F_X(u)$,
- $1$ random variable near $v$, with probability $p_4 = P(v \le X \le v + dv) \approx f_X(v)\,dv$,
- $n - j$ random variables larger than $v$, each one with probability $p_5 = P(X > v) = 1 - F_X(v)$.

Using the multinomial distribution we have:

$$P(u \le X_{(i)} \le u + du,\, v \le X_{(j)} \le v + dv) \approx g_{X_{(i)},X_{(j)}}(u,v)\,du\,dv.$$
Therefore,

$$g_{X_{(i)},X_{(j)}}(u,v)\,du\,dv = \binom{n}{i-1,\,1,\,j-1-i,\,1,\,n-j}\, p_1^{i-1}\, p_2\, p_3^{j-1-i}\, p_4\, p_5^{n-j}$$

$$= \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, F_X(u)^{i-1}\,[f_X(u)\,du]\,[F_X(v) - F_X(u)]^{j-1-i}\,[f_X(v)\,dv]\,[1 - F_X(v)]^{n-j},$$

or

$$g_{X_{(i)},X_{(j)}}(u,v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, F_X(u)^{i-1} f_X(u)\,[F_X(v) - F_X(u)]^{j-1-i} f_X(v)\,[1 - F_X(v)]^{n-j}, \qquad u < v.$$
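As a quick sanity check, take $i = 1$ and $j = n$: then $(i-1)! = (n-j)! = 0! = 1$ and $j - 1 - i = n - 2$, so the joint density of the minimum and the maximum is

$$g_{X_{(1)},X_{(n)}}(u,v) = n(n-1)\, f_X(u)\, f_X(v)\,[F_X(v) - F_X(u)]^{n-2}, \qquad u < v.$$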