MAST30001 Stochastic Modelling
Tutorial Sheet 4
1. Let p ∈ (0, 1) and suppose that a time-homogeneous Markov chain on Z = {. . . , −2, −1, 0, 1, 2, . . .} has transition probabilities given by

       p_{i,i+2} = p,   p_{i,i−1} = 1 − p,   and   p_{i,j} = 0 for all i ∈ Z, j ≠ i + 2, i − 1.
(a) Using Stirling's approximation (for n ≥ 1),

        1 ≤ n! e^n / (n^n √(2πn)) ≤ 2,

    determine for what values of p the chain is recurrent.

    The chain is irreducible. According to the theorem in the lectures, we need only compute the expected number of visits to state 0 (starting from state 0 with probability 1), which is given by

        Σ_{n=0}^∞ P(S_{3n} = 0).
    Now since the chain either increases by 2 or decreases by 1, the only way that it can be at 0 at time m is if it has taken twice as many −1 steps as +2 steps, so in particular m has to be a multiple of 3. In 3n steps there are C(3n, n) ways of choosing the n upward jumps (and 2n downward jumps), and each of these paths has probability p^n (1 − p)^{2n} = (p(1 − p)^2)^n. Therefore

        P(S_{3n} = 0) = C(3n, n) (p(1 − p)^2)^n.
    Using Stirling's approximation, we have that for n ≥ 1 this is bounded above and below by a positive constant times

        [(3n)^{3n} √n / (n^n (2n)^{2n} √n √n)] (p(1 − p)^2)^n = (27p(1 − p)^2/4)^n / √n.

    The term in brackets can be at most 1: otherwise the lower bound would force P(S_{3n} = 0) above 1 for large n, which is impossible for a probability. (Indeed p(1 − p)^2 attains its maximum 4/27 on (0, 1) exactly at p = 1/3.) Thus these probabilities are summable if and only if 27p(1 − p)^2/4 < 1, i.e. if and only if p ≠ 1/3; at p = 1/3 the series behaves like Σ 1/√n, which diverges.

    So the chain is transient if p ≠ 1/3 and is recurrent if p = 1/3.
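    This dichotomy can be illustrated numerically; a small sketch (the function name p_return is mine), computing the exact return probabilities:

    ```python
    from math import comb

    def p_return(n, p):
        # P(S_{3n} = 0) = C(3n, n) * (p(1-p)^2)^n: choose which n of
        # the 3n steps are the +2 jumps.
        return comb(3 * n, n) * (p * (1 - p) ** 2) ** n

    # At p = 1/3 the bracket 27p(1-p)^2/4 equals 1, so the terms decay
    # only like 1/sqrt(n) and the expected number of returns diverges;
    # for any other p the bracket is < 1 and the decay is geometric.
    print(p_return(100, 1/3))   # still of order 1/sqrt(100)
    print(p_return(100, 1/2))   # negligible
    ```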
(b) Compute the probability of reaching state −1, starting in state 0.

    We have that h_{0,−1} = 1 − p + p h_{2,−1}. But in order to get from state 2 to state −1, the chain must pass through states 1 and 0 in that order (since the only downward steps are of size 1). Thus

        h_{2,−1} = h_{2,1} h_{1,0} h_{0,−1}.

    But all three terms on the right hand side are equal since the transitions are identical from every state (i.e. the chain is "translation invariant"), so

        h_{2,−1} = h_{0,−1}^3.
    Thus, h_{0,−1} = 1 − p + p h_{0,−1}^3. This is a cubic equation of the form p x^3 − x + 1 − p = 0. Clearly 1 is a solution, so we can write

        p x^3 − x + 1 − p = (x − 1)(p x^2 + p x − (1 − p)) = 0.

    The other two solutions are therefore

        x = (−p ± √(p^2 + 4p(1 − p))) / (2p).

    Clearly one of these solutions is negative. The discriminant is larger than p^2 for all p ∈ (0, 1), so the other is positive. This positive root is less than 1 if

        √(p^2 + 4p(1 − p)) < 3p.

    Squaring both sides and rearranging, we get that this solution is less than 1 if 1 < 3p, i.e. p > 1/3.
    Thus, since the hitting probability is the minimal nonnegative solution,

        h_{0,−1} = 1                                     if p ≤ 1/3,
        h_{0,−1} = (−p + √(p^2 + 4p(1 − p))) / (2p)      if p > 1/3.
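    As a sanity check, the piecewise formula can be verified against the cubic it is supposed to solve; a minimal sketch (the function name h is mine):

    ```python
    from math import sqrt, isclose

    def h(p):
        # Minimal nonnegative solution of p*x^3 - x + (1 - p) = 0,
        # i.e. the hitting probability h_{0,-1}.
        if p <= 1/3:
            return 1.0
        return (-p + sqrt(p * p + 4 * p * (1 - p))) / (2 * p)

    # Every returned value solves the cubic, and for p > 1/3 it lies
    # strictly between 0 and 1.
    for p in (0.1, 1/3, 0.5, 0.9):
        x = h(p)
        assert isclose(p * x ** 3 - x + 1 - p, 0.0, abs_tol=1e-12)
    ```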
2. Consider a Markov chain with state space S = {1, 2, . . . , 6} and transition matrix given by

       P = [ 1/2  1/2   0    0    0    0
              1    0    0    0    0    0
              0   3/4   0   1/4   0    0
              0    0   1/2   0   1/2   0
              0    0    0    0    0    1
              0    0    0    0   2/3  1/3 ]
(a) Find all stationary distributions for this chain.

    The full balance equations include π_3 = π_4/2 and π_4 = π_3/4, so π_3 = 0 = π_4. The remaining equations then become

        π_2 = π_1/2,   π_1 = π_1/2 + π_2,   π_5 = 2π_6/3,   π_6 = π_6/3 + π_5.

    We also have that (π_1 + π_2) + (π_5 + π_6) = 1. Once we fix a = π_1 + π_2, this determines both π_1 and π_2. It also determines π_5 + π_6 = 1 − a and hence π_5 and π_6. Any vector of the form

        π = (2a/3, a/3, 0, 0, 2(1 − a)/5, 3(1 − a)/5)   with a ∈ [0, 1]

    is therefore an equilibrium distribution for this chain (and there are no others).
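    The whole family can be verified exactly with Python's fractions module (variable names are mine):

    ```python
    from fractions import Fraction as F

    # Transition matrix of the chain; states 1..6 sit at indices 0..5.
    P = [[F(1,2), F(1,2), 0,      0,      0,      0],
         [1,      0,      0,      0,      0,      0],
         [0,      F(3,4), 0,      F(1,4), 0,      0],
         [0,      0,      F(1,2), 0,      F(1,2), 0],
         [0,      0,      0,      0,      0,      1],
         [0,      0,      0,      0,      F(2,3), F(1,3)]]

    def pi(a):
        # The one-parameter family of stationary distributions, a in [0, 1].
        return [F(2,3)*a, F(1,3)*a, 0, 0, F(2,5)*(1-a), F(3,5)*(1-a)]

    def is_stationary(v):
        # Check that v sums to 1 and satisfies v P = v.
        return (sum(v) == 1 and
                all(sum(v[i] * P[i][j] for i in range(6)) == v[j]
                    for j in range(6)))
    ```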
(b) Find the probability that we ever reach state 1, starting from state 3.

    Note that h_{2,1} = 1 and h_{5,1} = 0 (it's obvious, but you can also get this from the complete set of equations). Then

        h_{3,1} = (3/4) h_{2,1} + (1/4) h_{4,1} = 3/4 + (1/4) h_{4,1},
        h_{4,1} = (1/2) h_{5,1} + (1/2) h_{3,1} = (1/2) h_{3,1}.

    Solving gives h_{3,1} = 6/7, h_{4,1} = 3/7.
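    The little 2×2 system can be solved by substitution in exact rational arithmetic (variable names are mine):

    ```python
    from fractions import Fraction as F

    # Substituting h_{4,1} = h_{3,1}/2 into h_{3,1} = 3/4 + h_{4,1}/4
    # gives h_{3,1} = 3/4 + h_{3,1}/8, so (7/8) h_{3,1} = 3/4.
    h31 = F(3, 4) / (1 - F(1, 8))
    h41 = h31 / 2
    print(h31, h41)  # 6/7 3/7
    ```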
(c) Find the expected time that we first reach the set of states A = {2, 5}, starting from state 3.

        m_{3,A} = 1 + (1/4) m_{4,A},
        m_{4,A} = 1 + (1/2) m_{3,A}.

    Solving gives m_{3,A} = 10/7.
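    The same substitution trick works for the mean hitting times:

    ```python
    from fractions import Fraction as F

    # Substituting m_{4,A} = 1 + m_{3,A}/2 into m_{3,A} = 1 + m_{4,A}/4
    # gives m_{3,A} = 5/4 + m_{3,A}/8, so (7/8) m_{3,A} = 5/4.
    m3 = F(5, 4) / (1 - F(1, 8))
    m4 = 1 + m3 / 2
    print(m3, m4)  # 10/7 12/7
    ```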
(d) Starting from state 3, find the limiting proportion of time spent in state 1.

    This is random. If the chain reaches state 5 (which has probability 1 − h_{3,1}), then the limiting proportion of time spent in state 1 is 0. If the chain reaches state 1 (which has probability h_{3,1}), then it moves between states 1 and 2 only, visiting (on average) state 1 twice as often as state 2, so the limiting proportion of time spent in state 1 is 2/3 in this case. Therefore the limit Y(1) is a random variable with

        P(Y(1) = 2/3) = 6/7 = 1 − P(Y(1) = 0).
(e) Starting from state 3, find the expected limiting proportion of time spent in state 1.

    From above this is E[Y(1)] = (2/3)(6/7) = 4/7.
(f) Repeat the above two questions, starting from state 4 instead of state 3.

    Again Y(1) is random; we get the same as above but with h_{3,1} replaced by h_{4,1} = 3/7, so P(Y(1) = 2/3) = 3/7 = 1 − P(Y(1) = 0). In this case E[Y(1)] = (2/3)(3/7) = 2/7.
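    Parts (d)–(f) can also be checked by simulation; a rough Monte Carlo sketch (names and run lengths are mine), whose average over many runs should come out close to E[Y(1)] = 4/7 from state 3:

    ```python
    import random

    # Transition lists: state -> [(target, probability), ...]
    MOVES = {1: [(1, 0.5), (2, 0.5)], 2: [(1, 1.0)],
             3: [(2, 0.75), (4, 0.25)], 4: [(3, 0.5), (5, 0.5)],
             5: [(6, 1.0)], 6: [(5, 2/3), (6, 1/3)]}

    def frac_time_in_1(start, steps, rng):
        # Empirical proportion of time spent in state 1 over `steps` steps.
        state, visits = start, 0
        for _ in range(steps):
            targets, weights = zip(*MOVES[state])
            state = rng.choices(targets, weights)[0]
            visits += (state == 1)
        return visits / steps

    rng = random.Random(0)
    est = sum(frac_time_in_1(3, 1000, rng) for _ in range(1000)) / 1000
    ```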
3. Assume that we have an infinite supply of a certain electrical component and that the lifetimes (measured from the beginning of use) of the components are i.i.d. N-valued random variables (so time is measured in discrete units) with

       P(T_i = k) = q_k > 0,   k = 1, 2, . . . ,

   and E[T_1] < ∞. At time n = 0, the first component begins use, and when it fails (at time T_1) it is immediately replaced by the second component; when that fails (at time T_1 + T_2), it is replaced, and so on. Let X_n be the age of the component in use at time n, and say X_n = 0 at times n where there is a failure.

   (a) Show that X_n is a Markov chain and write down its transition probabilities.

       Let F denote the cdf of T_1. If X_n = j, then either X_{n+1} = j + 1 or X_{n+1} = 0, since the component either survives another time unit or fails. We compute:

           P(X_{n+1} = j + 1 | X_n = j, X_{n−1} = x_{n−1}, . . . , X_0 = x_0) = P(T_1 > j + 1 | T_1 > j)
               = (1 − F(j + 1)) / (1 − F(j)) =: p_j,

       which only depends on j, so this quantity equals P(X_{n+1} = j + 1 | X_n = j) and the chain has the Markov property, with the transition probabilities given by the above plus

           P(X_{n+1} = 0 | X_n = j) = 1 − p_j = q_{j+1} / (1 − F(j)).
   (b) Explain why this chain is positive recurrent.

       The chain is irreducible (from state 0 the chain reaches any age i with positive probability, since q_k > 0 for all k, and every state returns to 0 at the next failure). Starting from 0, the return time to 0 is the lifetime of the component installed at that moment, so the mean return time to 0 is E[T_1] < ∞. By irreducibility the mean return time to every state is then finite, and hence the chain is positive recurrent.
   (c) Find all equilibrium distributions for this Markov chain.

       This chain is (irreducible and) positive recurrent, so there is a unique stationary distribution π. The mean return time to state 0 is E[T_1], so π_0 = 1/E[T_1]. For i ≥ 0 the full balance equations can be written as

           π_{i+1} = p_i π_i.

       So π_i = π_0 ∏_{k=0}^{i−1} p_k. The product telescopes: ∏_{k=0}^{i−1} p_k = (1 − F(i))/(1 − F(0)) = 1 − F(i) (since F(0) = 0), and hence π_i = P(T_1 > i)/E[T_1].
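       For a concrete check, take a hypothetical lifetime distribution, say uniform on {1, 2, 3}, and verify that π_i = P(T_1 > i)/E[T_1] satisfies the balance equations (all names are mine):

       ```python
       from fractions import Fraction as F

       q = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}   # hypothetical q_k
       ET = sum(k * qk for k, qk in q.items())     # E[T_1] = 2
       surv = {i: sum(qk for k, qk in q.items() if k > i)  # P(T_1 > i)
               for i in range(4)}

       # pi_i = pi_0 * prod_{k<i} p_k telescopes to P(T_1 > i) / E[T_1];
       # here the age can only be 0, 1, or 2.
       pi = [surv[i] / ET for i in range(3)]
       p = [surv[i + 1] / surv[i] for i in range(3)]  # p_2 = 0 here

       # Balance: pi_{i+1} = p_i pi_i, and pi_0 = sum_j pi_j (1 - p_j).
       assert all(pi[i + 1] == p[i] * pi[i] for i in range(2))
       assert sum(pi[j] * (1 - p[j]) for j in range(3)) == pi[0]
       ```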