Week 1: Preliminaries
Definition of stochastic process: a stochastic process is a collection of random variables
(Xt,t ∈ T) = (Xt(ω),t ∈ T,ω ∈ Ω).
The finite-dimensional distributions (fdds) of the stochastic process Xt determine many (but not all) important properties of a stochastic process, describing the relationships among the random variables Xt, t ∈ T.
Important concepts: Stationarity, stationary increments, independent increments, …
We have introduced: conditional expectation E(X | Y) (and conditional probability P(A | Y)); the probability generating function (pgf); random sums and Wald's identity.
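As a quick check of Wald's identity E(S_N) = E(N) E(X_1) for a random sum S_N = X_1 + ··· + X_N with N independent of the X_i, here is a minimal simulation sketch; the particular choices (Poisson N, exponential X_i) are illustrative assumptions, not from the notes.

```python
# Minimal sketch of Wald's identity for a random sum: E(S_N) = E(N) E(X_1).
# N ~ Poisson(lam) is drawn independently of the i.i.d. X_i ~ Exp(mean mu);
# both distributions are assumed here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 4.0, 2.0                 # E(N) = lam, E(X_1) = mu
reps = 50_000

N = rng.poisson(lam, size=reps)
S = np.array([rng.exponential(mu, size=n).sum() for n in N])

print("empirical E(S_N):", S.mean())   # should be close to lam * mu = 8
print("Wald prediction :", lam * mu)
```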
Weeks 2-3: Markov Chain: basic concepts
1. Markov property: the probability distribution of the future state of the process, conditional on both the past and present states, depends only on the present state. Note that (Adv tutorial 5) the property states
P(X_{n+1} = j | X_n = i, X_{n−1} ∈ A_{n−1}, …) = P(X_{n+1} = j | X_n = i),
rather than
P(X_{n+1} = j | X_n ∈ A, X_{n−1} ∈ A_{n−1}, …) = P(X_{n+1} = j | X_n ∈ A),
which fails in general when the present state is only known to lie in a set A.
2. Transition matrix and state transition diagram: it is vital to understand the one-step transition matrix and the state transition diagram; there is a one-to-one correspondence between them.
Example: a MC has a state transition diagram:
[state transition diagram omitted]
This MC must have the transition probability matrix:
P =
  [ 1/4  1/2  1/4 ]
  [ 1/3   0   2/3 ]
  [  ·    ·    ·  ]
The state transition diagram provides a useful technique for solving problems about Markov chains, particularly for finite-state MCs.
The following article, REPRESENTING MARKOV CHAINS WITH TRANSITION DIAGRAMS (https://thescipub.com/pdf/10.3844/jmssp.2013.149.154), provides useful ideas on how to use transition diagrams to calculate n-step transition probabilities, the probability of visiting a state for the first time, the mean recurrence time, etc.
3. Chapman-Kolmogorov equation:
p_ij^{(m+n)} = P(X_{m+n} = j | X_0 = i) = Σ_{k∈S} p_ik^{(m)} p_kj^{(n)}.
The n-step transition matrix is given by
P^{(n)} = P^n, for n = 1, 2, 3, ···.
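A minimal numpy sketch of this: n-step transition probabilities come from matrix powers, and the Chapman-Kolmogorov equation is just P^{(m+n)} = P^{(m)} P^{(n)}. The third row of the example matrix above is not fully shown, so the row used here is an assumed placeholder.

```python
# Sketch: n-step transition probabilities as matrix powers, plus a
# Chapman-Kolmogorov check. The third row of P is assumed for illustration.
import numpy as np

P = np.array([[1/4, 1/2, 1/4],
              [1/3, 0.0, 2/3],
              [1/2, 1/2, 0.0]])   # third row is a hypothetical placeholder

P5 = np.linalg.matrix_power(P, 5)          # the 5-step transition matrix P^(5)
print(P5[0, 2])                            # P(X_5 = 2 | X_0 = 0)

# Chapman-Kolmogorov: P^(3+4) = P^(3) P^(4)
lhs = np.linalg.matrix_power(P, 7)
rhs = np.linalg.matrix_power(P, 3) @ np.linalg.matrix_power(P, 4)
print(np.allclose(lhs, rhs))               # True
```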
4. Communicating classes: for any MC {X_k, k ≥ 0}, its state space S can be partitioned into disjoint communicating classes.
Two states i and j are said to communicate, written as i ←→ j, if they are accessible from each other, i.e., there exist m ≥ 0 and n ≥ 0 such that
p_ij^{(m)} = P(X_m = j | X_0 = i) > 0 and p_ji^{(n)} = P(X_n = i | X_0 = j) > 0.
Irreducible MC: the MC has only one class.
Closed class C: no state outside C can be reached from any state in C, i.e., p_ij = 0 whenever i ∈ C and j ∉ C.
Absorbing state j: {j} is a closed class, i.e., p_jj = 1.
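Since two states communicate exactly when each is reachable from the other, the communicating classes are the strongly connected components of the directed graph with an edge i → j whenever p_ij > 0. A sketch using scipy (the example matrix is hypothetical):

```python
# Sketch: communicating classes as strongly connected components, and a
# closedness check (p_ij = 0 for all i in C, j outside C). Example P assumed.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4]])

n_classes, labels = connected_components(csr_matrix(P > 0),
                                         directed=True, connection='strong')
print(n_classes, labels)        # states with equal labels communicate

for c in range(n_classes):
    C = labels == c
    closed = np.all(P[np.ix_(C, ~C)] == 0)   # no probability leaves C
    print("class", np.where(C)[0], "closed:", closed)
```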
Aperiodic (periodic) state, class and chain:
• a state j is called aperiodic (periodic) if d_j = 1 (d_j > 1), where d_j = gcd{n ≥ 1 : p_jj^{(n)} > 0} (computed in the sketch below);
• states in a class share the same period, i.e., if i ←→ j, then d_i = d_j;
• an irreducible MC is aperiodic if one of its states is aperiodic.
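A sketch of computing the period directly from its definition, d_j = gcd{n ≥ 1 : p_jj^{(n)} > 0}; for a finite chain it suffices to scan n up to (number of states)^2, since that horizon already determines the gcd.

```python
# Sketch: period of a state j as the gcd of the n with p_jj^(n) > 0.
# Reachability is re-thresholded to {0,1} at each step so float underflow
# cannot turn a positive probability into a zero.
import numpy as np
from math import gcd
from functools import reduce

def period(P, j):
    n_states = len(P)
    A = (P > 0).astype(float)     # adjacency matrix of the transition graph
    R = A.copy()                  # indicator of n-step reachability
    returns = []
    for n in range(1, n_states**2 + 1):
        if R[j, j] > 0:           # p_jj^(n) > 0
            returns.append(n)
        R = ((R @ A) > 0).astype(float)
    return reduce(gcd, returns) if returns else 0

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # deterministic 2-cycle
print(period(P, 0))               # -> 2
```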
5. Recurrence (positive and null) and transience
We make use of the following notation:
• p_ij^{(n)} = P(X_n = j | X_0 = i), with p_ij = p_ij^{(1)} and p_ii^{(0)} = 1; for i = j, p_ii^{(n)} is the probability of a return to i in n transitions (not necessarily the first return);
• f_ij^{(1)} = p_ij^{(1)} = p_ij, and, for n ≥ 2,
f_ij^{(n)} = P(X_n = j, X_{n−1} ≠ j, …, X_1 ≠ j | X_0 = i),
the probability of first arriving at j from i in n transitions; equivalently, f_ij^{(n)} = P(T_j = n | X_0 = i), where T_j = min{n ≥ 1 : X_n = j} is the first visit time to state j; in particular, f_ii^{(n)} is the probability of returning to i for the first time after n transitions;
• f_ij = Σ_{n=1}^∞ f_ij^{(n)}, the probability that the chain eventually arrives at state j from i;
• μ_i = E(T_i | X_0 = i) = Σ_{n=1}^∞ n f_ii^{(n)}, the mean recurrence (return) time;
• N_i, the number of returns to state i, with E(N_i | X_0 = i) = Σ_{n=1}^∞ p_ii^{(n)}, the average (total) number of returns to the state i.
A state i is recurrent if f_ii = 1 and transient if f_ii < 1; recurrent states split further as follows.
• Positive recurrent: μ_i = Σ_{n=1}^∞ n f_ii^{(n)} < ∞ and E(N_i | X_0 = i) = Σ_{n=1}^∞ p_ii^{(n)} = ∞;
• Null recurrent: μ_i = Σ_{n=1}^∞ n f_ii^{(n)} = ∞ and E(N_i | X_0 = i) = Σ_{n=1}^∞ p_ii^{(n)} = ∞;
• Transient: μ_i = Σ_{n=1}^∞ n f_ii^{(n)} = ∞ and E(N_i | X_0 = i) = Σ_{n=1}^∞ p_ii^{(n)} = f_ii/(1 − f_ii) < ∞.
• For a MC with finitely many states, at least one state is recurrent, all recurrent states are positive recurrent, and a class is recurrent iff it is closed.
• An infinite closed class can be either transient or recurrent.
• For an irreducible MC, it follows that
  – all states are transient; or
  – all states are positive recurrent; or
  – all states are null recurrent.
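To make the mean recurrence time concrete, here is a simulation sketch that estimates μ_i by averaging first-return times over many sample paths; the 3-state chain used is a hypothetical example (positive recurrent, so the estimate stabilises).

```python
# Simulation sketch: estimate mu_i = E(T_i | X_0 = i) by averaging observed
# first-return times. The 3-state chain below is assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.4, 0.5],
              [0.6, 0.0, 0.4]])

def first_return_time(P, i, rng, cap=10_000):
    """Run the chain from X_0 = i until it returns to i (give up after cap steps)."""
    x = i
    for n in range(1, cap + 1):
        x = rng.choice(len(P), p=P[x])
        if x == i:
            return n
    return None                    # no return observed within cap steps

times = [first_return_time(P, 0, rng) for _ in range(5000)]
obs = [t for t in times if t is not None]
print("estimated mu_0:", np.mean(obs))   # ~ 31/9 = 3.44 for this chain
```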
Week 4: Limit distribution, Stationary distribution, Absorption probability and Mean Hitting Time
Let {X_n, n ≥ 0} be a MC with state space S and transition matrix P.
1. Stationary distribution: π = {π_j, j ∈ S} is a probability distribution satisfying π = πP, i.e.,
π_j = Σ_{k∈S} π_k p_kj, for all j ∈ S.
2. Limiting distribution: π = {π_j, j ∈ S} is a probability distribution such that
π_j = lim_{n→∞} P(X_n = j | X_0 = i) for all i, j ∈ S.
3. MC with finite state space:
• has at least one stationary distribution; if the chain is irreducible (such a chain is necessarily positive recurrent), then the stationary distribution is unique;
• if the chain is irreducible and aperiodic, then a limit distribution π = {π_j, j ∈ S} exists so that
π_j = lim_{n→∞} P(X_n = j) = lim_{n→∞} p_ij^{(n)} = 1/μ_j > 0,
where μ_j = Σ_{n=1}^∞ n f_jj^{(n)} is the mean recurrence time of state j, and the limit distribution π is the unique solution to the equations:
π_j = Σ_{k∈S} π_k p_kj, j ∈ S, Σ_{j∈S} π_j = 1,
or π = πP and Σ_{j∈S} π_j = 1.
Note: for an irreducible and aperiodic MC with finite state space, the limit distribution and the stationary distribution are the same, given by π = {π_j, j ∈ S}, where π_j = 1/μ_j.
In this case, you can use the definition of the stationary distribution to find the mean recurrence times, the limit of P^n, etc., as in the sketch below.
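A sketch of this recipe: solve π = πP together with Σ_j π_j = 1, then read off the mean recurrence times μ_j = 1/π_j and compare with the rows of P^n for large n (the chain is the same hypothetical example as in the earlier sketch).

```python
# Sketch: stationary distribution of a finite irreducible aperiodic chain by
# solving (P^T - I) pi = 0 with the normalisation sum(pi) = 1 appended.
import numpy as np

P = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.4, 0.5],
              [0.6, 0.0, 0.4]])
n = len(P)

A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stack normalisation row
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares solve

print("stationary pi        :", pi)            # [9/31, 12/31, 10/31]
print("mean recurrence times:", 1 / pi)        # mu_j = 1 / pi_j
print("row of P^100         :", np.linalg.matrix_power(P, 100)[0])  # ~ pi
```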
4. Irreducible and aperiodic MC with infinite state space
• positive recurrent: a stationary dist π = {π_j, j ∈ S} exists and is unique; all mean recurrence times μ_i = 1/π_i are finite and lim_{n→∞} P(X_n = i) = π_i (i.e., the limit dist exists);
• null recurrent: all mean recurrence times are infinite; no stationary dist exists; lim_{n→∞} P(X_n = i) = 0;
• transient: any particular state is eventually never visited; no stationary dist exists; lim_{n→∞} P(X_n = i) = 0.
5. Absorption probability and Mean Hitting Time
If C_m = {m} is absorbing, the absorption probabilities a_i(m) = P(absorption in m | X_0 = i) satisfy a_i(m) = 1 if i = m and, for i ∈ T, where T is the set of transient states,
a_i(m) = p_im + Σ_{k∈T} p_ik a_k(m), i ∈ T.
A more general result exists; see Lect 12.
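In matrix form the equations above read a = r + Q a over the transient states, where Q is P restricted to T and r_i = p_im, so a = (I − Q)^{−1} r. A sketch on a small gambler's-ruin-style chain (states 0 and 3 absorbing; the chain is illustrative):

```python
# Sketch: absorption probabilities a_i(m) by solving (I - Q) a = r, where
# Q = P restricted to the transient states T and r_i = p_im. Example chain
# assumed: a fair gambler's ruin on {0, 1, 2, 3} with 0 and 3 absorbing.
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
T = [1, 2]                  # transient states
m = 3                       # target absorbing state

Q = P[np.ix_(T, T)]         # one-step transitions within T
r = P[T, m]                 # one-step absorption into m
a = np.linalg.solve(np.eye(len(T)) - Q, r)
print(dict(zip(T, a)))      # {1: 1/3, 2: 2/3}
```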
For all i ∈ S, define the mean hitting time t_i(A) = E[T_A | X_0 = i], where T_A = min{n ≥ 0 : X_n ∈ A}. Note that t_i(A) = 0 for all i ∈ A, and
t_i(A) = E[T_A | X_0 = i] = E[T'_A | X_0 = i] for all i ∉ A, where T'_A = min{n ≥ 1 : X_n ∈ A}.
The unknown values t_i(A) follow from the equations:
t_i(A) = 1 + Σ_{k∈S} t_k(A) p_ik = 1 + Σ_{k∈S−A} t_k(A) p_ik, for i ∈ S − A.
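The same linear-algebra pattern gives the mean hitting times: restrict P to S − A and solve (I − Q) t = 1. A sketch, reusing the hypothetical 3-state chain from above with A = {2}:

```python
# Sketch: mean hitting times t_i(A) from (I - Q) t = 1, with Q = P restricted
# to S - A; t_i(A) = 0 for i in A. Example chain and A assumed as before.
import numpy as np

P = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.4, 0.5],
              [0.6, 0.0, 0.4]])
A = [2]
rest = [i for i in range(len(P)) if i not in A]   # the states S - A

Q = P[np.ix_(rest, rest)]
t = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
print(dict(zip(rest, t)))   # t_i(A) for i in S - A (here t_0 = 3.5, t_1 = 2.25)
```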
Week 5: Random walks
Let X_k, k ≥ 1, be i.i.d. r.v.s taking only integer values. Let X_0 = 0 (or some state i) and S_n = Σ_{k=0}^n X_k. The process {S_n, n ≥ 0} is called a random walk (RW).
The unrestricted random walk provides an example of a null recurrent MC; we obtained the distribution of the first passage time T_{k0} from the origin 0 to a point k ≥ 0, where
T_{k0} = inf{n ≥ 1 : S_n = k}.
If k = 0, T_{00} is also called the first return time to 0; a simulation sketch of this return time follows.
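A simulation sketch for the symmetric simple RW: the walk returns to 0 with probability 1 (recurrence), yet T_{00} is so heavy-tailed that E T_{00} = ∞ (null recurrence), which shows up as a sample mean that keeps growing with the path-length cap. All parameters are illustrative.

```python
# Sketch: first return time to 0 of the symmetric simple random walk. The
# fraction of paths that return approaches 1, but the observed mean return
# time grows with the cap, consistent with null recurrence.
import numpy as np

rng = np.random.default_rng(2)

def return_time(cap, rng):
    """First n in 1..cap with S_n = 0, or None if no return within cap steps."""
    walk = rng.choice([-1, 1], size=cap).cumsum()
    hits = np.nonzero(walk == 0)[0]
    return int(hits[0]) + 1 if hits.size else None

for cap in (10**3, 10**5):
    times = [return_time(cap, rng) for _ in range(1000)]
    obs = [t for t in times if t is not None]
    print(f"cap={cap}: returned {len(obs)}/1000, mean observed T_00 = {np.mean(obs):.0f}")
```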
Random walk with barriers (gambler's ruin problem):
Let X_0 = m > 0 and S_n = Σ_{k=0}^n X_k, where X_k, k ≥ 1, are i.i.d. r.v.s with
P(X_1 = 1) = p, P(X_1 = −1) = 1 − p = q, where 0 < p < 1 and 0 < m < k.
{S_n, n ≥ 0} is a simple random walk with state space S = {0, 1, 2, …, k}. The absorbing barriers are {0, k}.
Define the time of ruin N from the original capital X_0 = m:
N = min{n ≥ 0 : S_n = 0 or S_n = k}.
Probability of the gambler's ruin: if p = q, we have
P(S_N = 0) = P(the gambler is ruined) = (k − m)/k;
P(S_N = k) = P(the gambler reaches $k) = m/k.
If p ≠ q, we have
P(S_N = 0) = 1 − (1 − θ^m)/(1 − θ^k);
P(S_N = k) = (1 − θ^m)/(1 − θ^k), where θ = q/p.
The expected duration of the game:
E N = m(k − m), if p = q;
E N = [m − k(1 − θ^m)/(1 − θ^k)]/(q − p), if p ≠ q, where θ = q/p.
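A sketch implementing these closed-form expressions and checking them by direct simulation; the parameters m, k, p below are illustrative choices, not from the notes.

```python
# Sketch: gambler's ruin probability and expected duration, closed form vs
# simulation. theta = q/p and q - p = 1 - 2p for steps +1 w.p. p, -1 w.p. q.
import numpy as np

def ruin_prob(m, k, p):
    if p == 0.5:
        return (k - m) / k
    th = (1 - p) / p
    return 1 - (1 - th**m) / (1 - th**k)

def expected_duration(m, k, p):
    if p == 0.5:
        return m * (k - m)
    th = (1 - p) / p
    return (m - k * (1 - th**m) / (1 - th**k)) / (1 - 2 * p)

rng = np.random.default_rng(3)
m, k, p = 3, 10, 0.45                  # assumed illustrative parameters
ruined, steps = 0, []
for _ in range(20_000):
    s, n = m, 0
    while 0 < s < k:
        s += 1 if rng.random() < p else -1
        n += 1
    ruined += (s == 0)
    steps.append(n)
print(ruined / 20_000, "vs", ruin_prob(m, k, p))
print(np.mean(steps), "vs", expected_duration(m, k, p))
```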
Week 6: Branching processes
Let ξ_{jk}, k ≥ 0, j ≥ 1, be i.i.d. random variables with distribution
P(ξ_{jk} = i) = f_i, i = 0, 1, 2, …, where Σ_{i=0}^∞ f_i = 1.
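The notes break off here; a standard next step applies the pgf from Week 1. With offspring pgf G(s) = Σ_i f_i s^i, the extinction probability of the branching process is the smallest root of s = G(s) in [0, 1], which fixed-point iteration from s_0 = 0 finds. A minimal sketch with an assumed offspring distribution:

```python
# Sketch: extinction probability of a branching process as the smallest fixed
# point of the offspring pgf G(s) = sum_i f_i s^i in [0, 1], found by iterating
# s <- G(s) from s = 0. The offspring distribution f is assumed for illustration.
import numpy as np

f = np.array([0.25, 0.25, 0.5])     # assumed f_0, f_1, f_2; offspring mean 1.25

def G(s):
    """Probability generating function sum_i f_i s^i."""
    return np.polyval(f[::-1], s)   # polyval wants highest-degree coefficient first

s = 0.0
for _ in range(200):                # monotone convergence to the smallest root
    s = G(s)
print("extinction probability:", s)   # -> 0.5 for this f
```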