MAST30001 Stochastic Modelling
Tutorial Sheet 5
1. Fix p ∈ (0, 1). Consider the “Gambler’s ruin” chain on S = {0, 1, . . . , k} with
p_{i,i+1} = p and p_{i,i−1} = 1 − p for 1 ≤ i ≤ k − 1. Let m_i := m_{i,{0,k}} denote the
expected hitting time of {0, k} starting from state i.
(a) Show that if p = 1/2 then m_i = i(k − i).
Let x_i = i(k − i). We show that these x_i solve the mean hitting time equations (when p = 1/2); since the chain hits {0, k} with probability 1, these equations have a unique solution, so it follows that m_i = x_i. Firstly x_0 = x_k = 0 is trivial. Next we want (for i ≠ 0, k)

x_i = 1 + p x_{i+1} + (1 − p) x_{i−1}.

Substituting in the values for x_i (and using p = 1/2), the right hand side is

1 + (1/2)(i + 1)(k − (i + 1)) + (1/2)(i − 1)(k − (i − 1)).

Since (i + 1)(k − i − 1) + (i − 1)(k − i + 1) = 2i(k − i) − 2, this simplifies to 1 + i(k − i) − 1 = i(k − i), which is equal to x_i as required.
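For a quick numerical sanity check (a minimal sketch, not part of the original solution; the choice k = 10 is arbitrary), the mean hitting time equations can be set up as a linear system and the solution compared with i(k − i):

```python
import numpy as np

def mean_hitting_times(k, p):
    """Solve m_i = 1 + p*m_{i+1} + (1 - p)*m_{i-1} for i = 1, ..., k - 1,
    with boundary values m_0 = m_k = 0, as a linear system."""
    n = k - 1                           # number of interior states
    A = np.eye(n)
    for row in range(n):                # row corresponds to state i = row + 1
        if row + 1 < n:
            A[row, row + 1] = -p        # coefficient of m_{i+1}
        if row - 1 >= 0:
            A[row, row - 1] = -(1 - p)  # coefficient of m_{i-1}
    m = np.zeros(k + 1)
    m[1:k] = np.linalg.solve(A, np.ones(n))
    return m

k = 10
i = np.arange(k + 1)
print(np.allclose(mean_hitting_times(k, 0.5), i * (k - i)))   # expect True
```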
(b) Show that if p ≠ 1/2 then

m_i = i/(1 − 2p) − (k/(1 − 2p)) · [(α^i − 1)/(α^k − 1)],

where α = (1 − p)/p.
Again we show that x_i = i/(1 − 2p) − (k/(1 − 2p)) · [(α^i − 1)/(α^k − 1)] solves the equations. Clearly x_0 = x_k = 0. For i ≠ 0, k, substituting in the values for x_i (and using p ≠ 1/2), the right hand side of the equation

x_i = 1 + p x_{i+1} + (1 − p) x_{i−1}

becomes

1 + p [ (i + 1)/(1 − 2p) − (k/(1 − 2p)) · (α^{i+1} − 1)/(α^k − 1) ] + (1 − p) [ (i − 1)/(1 − 2p) − (k/(1 − 2p)) · (α^{i−1} − 1)/(α^k − 1) ].

The terms not involving α combine as 1 + [p(i + 1) + (1 − p)(i − 1)]/(1 − 2p) = 1 + (i − 1 + 2p)/(1 − 2p) = i/(1 − 2p). For the remaining terms, using the facts that p α^{i+1} = (1 − p) α^i and (1 − p) α^{i−1} = p α^i, we can simplify to get

i/(1 − 2p) − (k/(1 − 2p)) · [ ((1 − p)α^i − p + p α^i − (1 − p))/(α^k − 1) ],

which is equal to

i/(1 − 2p) − (k/(1 − 2p)) · [(α^i − 1)/(α^k − 1)],

as required.
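The p ≠ 1/2 formula can be checked in the same spirit; the sketch below (illustrative values k = 10, p = 0.3 and starting state 4, none of which come from the question) estimates the mean hitting time by simulating the chain and compares the estimate with the closed form:

```python
import random

def simulate_hitting_time(start, k, p, rng):
    """Run the gambler's ruin chain from `start` until it hits {0, k};
    return the number of steps taken."""
    i, steps = start, 0
    while 0 < i < k:
        i += 1 if rng.random() < p else -1
        steps += 1
    return steps

k, p, start = 10, 0.3, 4
rng = random.Random(2024)
n_runs = 200_000
estimate = sum(simulate_hitting_time(start, k, p, rng) for _ in range(n_runs)) / n_runs

alpha = (1 - p) / p
formula = start / (1 - 2 * p) - (k / (1 - 2 * p)) * (alpha**start - 1) / (alpha**k - 1)
print(round(formula, 3), round(estimate, 3))   # the two values should be close
```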
2. In a tiny politically divided village there are 12 villagers living in a 3×4 grid as
shown below. Of the 10 villagers on the border, 5 always support party 1 and 5
always support party 0, as shown.
1 1 1 1
1 ? ? 0
0 0 0 0
The opinion of each of the two interior villagers is influenced by their neighbours
and fluctuates over time as follows: at each discrete time n ∈ N, one of the two
interior voters (chosen uniformly at random) adopts the political preference of one
of its 4 immediate neighbours (also chosen uniformly at random). Let Ln and Rn
be the opinions carried by the left and right interior villagers (as in the picture)
respectively.
(a) What is the state-space of the Markov chain Xn = (Ln, Rn)?
S = {0, 1}2 = {(0, 0), (1, 0), (0, 1), (1, 1)}.
(b) Draw the transition diagram for the chain Xn.
The transition diagram has an arrow for each nonzero one-step probability:

from (0, 0): to (1, 0) with probability 2/8, to (0, 1) with probability 1/8, stay with probability 5/8;
from (0, 1): to (0, 0) with probability 3/8, to (1, 1) with probability 3/8, stay with probability 2/8;
from (1, 0): to (0, 0) with probability 2/8, to (1, 1) with probability 2/8, stay with probability 4/8;
from (1, 1): to (1, 0) with probability 2/8, to (0, 1) with probability 1/8, stay with probability 5/8.
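These probabilities can be reproduced mechanically from the model description; the minimal sketch below builds each row of the transition matrix with exact fractions, conditioning on which interior villager is chosen and which neighbour is copied:

```python
from fractions import Fraction

states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (left villager, right villager)

def transition_row(state):
    """One step: pick an interior villager uniformly at random, then copy a
    uniformly chosen neighbour's opinion.  From the grid: the left villager's
    neighbours are 1 (above), 0 (below), 1 (left) and the right villager; the
    right villager's neighbours are 1 (above), 0 (below), 0 (right) and the
    left villager."""
    left, right = state
    row = {s: Fraction(0) for s in states}
    for new_left in [1, 0, 1, right]:              # left villager is updated
        row[(new_left, right)] += Fraction(1, 8)   # (1/2) * (1/4)
    for new_right in [1, 0, 0, left]:              # right villager is updated
        row[(left, new_right)] += Fraction(1, 8)
    return row

for s in states:
    print(s, {t: str(p) for t, p in transition_row(s).items() if p > 0})
```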
(c) Is the chain Xn:
i. Irreducible?
Yes, since all states communicate with one another.
ii. Aperiodic?
Yes, since every state has a self-loop.
iii. Reversible?
Yes. The probability of traversing any loop is the same in the clockwise
and anticlockwise directions (there is essentially only one loop you need to
check, namely a full cycle (0, 0) → (1, 0) → (1, 1) → (0, 1) → (0, 0), which has
probability (2/8)(2/8)(1/8)(3/8); the reverse cycle has probability
(1/8)(3/8)(2/8)(2/8), and these two products are equal).
Alternatively we can simply solve the detailed balance equations:

(1/8) π(1,1) = (3/8) π(0,1),
(2/8) π(1,1) = (2/8) π(1,0),
(2/8) π(0,0) = (2/8) π(1,0),
(1/8) π(0,0) = (3/8) π(0,1).

This gives π(1,1) = π(1,0) = π(0,0) and π(1,1) = 3π(0,1). Setting the sum equal
to 1 gives π(1,1) = π(1,0) = π(0,0) = 3/10 and π(0,1) = 1/10. So the equations are
solvable, hence the process is reversible.
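A mechanical verification of this answer (a minimal sketch, with states ordered (0, 0), (0, 1), (1, 0), (1, 1) and the transition probabilities from part (b)):

```python
from fractions import Fraction as F

# Rows of the transition matrix; states ordered (0,0), (0,1), (1,0), (1,1).
P = [
    [F(5, 8), F(1, 8), F(2, 8), F(0)],      # from (0, 0)
    [F(3, 8), F(2, 8), F(0),    F(3, 8)],   # from (0, 1)
    [F(2, 8), F(0),    F(4, 8), F(2, 8)],   # from (1, 0)
    [F(0),    F(1, 8), F(2, 8), F(5, 8)],   # from (1, 1)
]
pi = [F(3, 10), F(1, 10), F(3, 10), F(3, 10)]

# Stationarity: (pi P)_j = pi_j for every state j.
print(all(sum(pi[i] * P[i][j] for i in range(4)) == pi[j] for j in range(4)))

# Detailed balance: pi_i * P_ij = pi_j * P_ji for every ordered pair (i, j).
print(all(pi[i] * P[i][j] == pi[j] * P[j][i] for i in range(4) for j in range(4)))
```

Both checks print True.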
(d) What is the limiting proportion of time that the left interior villager supports
party 1?
This is given by π(1,0) + π(1,1), where π is the (unique) stationary distribution.
Thus, from the previous answer, π(1,0) + π(1,1) = 3/10 + 3/10 = 6/10.
(e) Suppose that at time n = 1000000 a vote is taken (and the whole village votes).
Estimate the probability that party 1 wins the majority of votes, explaining
your reasoning.
Party 1 wins a majority only if both interior villagers support it (with only one of
them it gets at most 6 of the 12 votes), so this is asking for the probability that the
state at time 1000000 is (1, 1). This is approximately equal to the limiting probability
of being in state (1, 1), which in this case (irreducible, aperiodic, finite-state Markov
chain) is the same as the equilibrium probability. Therefore an approximate answer is
π(1,1) = 3/10.
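How good is the approximation at time 1000000? A minimal numerical sketch (not part of the sheet) compares the n-step transition probabilities with the stationary distribution for the much smaller value n = 50; the error is already negligible, so at n = 1000000 it is certainly so:

```python
import numpy as np

# Transition matrix from part (b); states ordered (0, 0), (0, 1), (1, 0), (1, 1).
P = np.array([[5, 1, 2, 0],
              [3, 2, 0, 3],
              [2, 0, 4, 2],
              [0, 1, 2, 5]]) / 8
pi = np.array([3, 1, 3, 3]) / 10

Pn = np.linalg.matrix_power(P, 50)    # 50-step transition probabilities
print(np.max(np.abs(Pn - pi)))        # every row of P^50 is already very close to pi
```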
3. Each morning at 8am, it is raining with probability rm ∈ (0, 1), and each afternoon
at 4pm it is raining with probability ra ∈ (0, 1) (both are independent of all previous
weather conditions). Suppose that your MAST30001 lecturer has 2 umbrellas, and
that they depart home for work at 8am each day and depart from work to home at
4pm each day (with a very short commute). Whenever it is raining at departure
time, they take an umbrella on the trip if one is available at the departure point.
Find the long run proportion of trips for which it is raining at their departure
but they have no umbrella available.
Let Nn denote the number of umbrellas at the lecturer’s current location, and Ln
denote the current location (Ln = 0 means at home, and Ln = 1 means at work).
Then (Nn, Ln) is a discrete-time (time homogeneous) Markov chain with transition
diagram:
The states lie along the path

(0, 0) ↔ (2, 1) ↔ (1, 0) ↔ (1, 1) ↔ (2, 0) ↔ (0, 1),

with transition probabilities (reading along the path from left to right) 1, ra, 1 − rm, ra, 1 − rm for steps to the right, and 1 − ra, rm, 1 − ra, rm, 1 for steps to the left.
This is a birth and death chain, so it is reversible. Therefore the (unique) stationary
distribution satisfies the detailed balance equations:
π(0,0) = (1 − ra) π(2,1),
π(2,1) = (rm/ra) π(1,0),
π(1,0) = [(1 − ra)/(1 − rm)] π(1,1),
π(1,1) = (rm/ra) π(2,0),
π(2,0) = [1/(1 − rm)] π(0,1).
Let a1 = 1/(1 − rm), a2 = rm/ra, a3 = (1 − ra)/(1 − rm), a4 = a2, a5 = 1 − ra.
The detailed balance equations then give π(2,0) = a1 π(0,1), π(1,1) = a1 a2 π(0,1),
π(1,0) = a1 a2 a3 π(0,1), π(2,1) = a1 a2 a3 a4 π(0,1) and π(0,0) = a1 a2 a3 a4 a5 π(0,1).
Since the probabilities sum to 1, π(0,1)[1 + ∑_{i=1}^{5} ∏_{j=1}^{i} a_j] = 1, so
π(0,1) = 1/[1 + ∑_{i=1}^{5} ∏_{j=1}^{i} a_j].
We are looking for π(0,0) rm + π(0,1) ra, since these are, respectively, the long run
proportion of trips on which the lecturer departs home with no umbrella while it is
raining, and the long run proportion on which they depart work with no umbrella while
it is raining.
Since π(0,0) = (∏_{j=1}^{5} a_j) π(0,1), we get that

π(0,0) rm + π(0,1) ra = [ra + rm ∏_{j=1}^{5} a_j] / [1 + ∑_{i=1}^{5} ∏_{j=1}^{i} a_j].
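As a final cross-check (a minimal sketch; the values rm = 0.3 and ra = 0.5 are illustrative assumptions, not given in the question), the closed-form answer can be compared with the answer obtained by computing the stationary distribution of the six-state chain directly:

```python
import numpy as np

rm, ra = 0.3, 0.5   # illustrative rain probabilities (assumption, not from the sheet)

# States along the path, in the order used above.
states = [(0, 0), (2, 1), (1, 0), (1, 1), (2, 0), (0, 1)]
right = [1.0, ra, 1 - rm, ra, 1 - rm]   # probabilities of stepping right along the path
left  = [1 - ra, rm, 1 - ra, rm, 1.0]   # probabilities of stepping left along the path
P = np.zeros((6, 6))
for i in range(5):
    P[i, i + 1] = right[i]
    P[i + 1, i] = left[i]

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Closed-form answer from the worked solution above.
a = [1 / (1 - rm), rm / ra, (1 - ra) / (1 - rm), rm / ra, 1 - ra]
prods = np.cumprod(a)                       # a1, a1*a2, ..., a1*...*a5
closed_form = (ra + rm * prods[-1]) / (1 + prods.sum())

direct = pi[states.index((0, 0))] * rm + pi[states.index((0, 1))] * ra
print(round(closed_form, 6), round(direct, 6))   # the two numbers should agree
```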