
MAST30001 Stochastic Modelling

Tutorial Sheet 3

1. Let Y1, Y2, . . . be i.i.d. random variables with probability mass function given by the
following table.

k          0     1     2     3
P(Y = k)   0.2   0.2   0.3   0.3

Set X0 = 0 and let Xn = max{Y1, . . . , Yn} be the largest Yi observed up to time n.

(a) Explain why (Xn) is a Markov chain and determine its transition matrix P .
We can compute the conditional probabilities given the entire history of the
chain as

P(Xn+1 = k | Xn = j, Xn−1 = xn−1, . . . , X1 = x1, X0 = 0) =
    P(Yn+1 = k),   k > j,
    P(Yn+1 ≤ j),   k = j,
    0,             k < j,

which only depend on j (and not on the xi's), so the chain is Markov, with
transitions given by the formula above:

P = [ 0.2  0.2  0.3  0.3 ]
    [  0   0.4  0.3  0.3 ]
    [  0    0   0.7  0.3 ]
    [  0    0    0    1  ].

(b) Find all communicating classes for this chain and all absorbing states.
Each state is its own communicating class, and 3 is the only absorbing state.

(c) Describe the long run behavior of the chain. In particular, can you determine
the matrix lim_{n→∞} P^n?
The chain starts at 0 and increases, eventually landing at the absorbing state
3. Since P^n is the n-step transition matrix, we expect

lim_{n→∞} P^n = [ 0  0  0  1 ]
                [ 0  0  0  1 ]
                [ 0  0  0  1 ]
                [ 0  0  0  1 ].

(d) Find the expected time to reach state 3, starting from state 0.
The time until the first 3 appears in the sequence Y1, Y2, . . . is Geometric(0.3),
so the expected time is 1/0.3 = 10/3.
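
First-step analysis gives the same value: the vector t of expected times to
reach state 3 solves (I − Q)t = 1, where Q is P restricted to the transient
states {0, 1, 2}. A short numerical check (assuming numpy):

    import numpy as np

    P = np.array([[0.2, 0.2, 0.3, 0.3],
                  [0.0, 0.4, 0.3, 0.3],
                  [0.0, 0.0, 0.7, 0.3],
                  [0.0, 0.0, 0.0, 1.0]])
    Q = P[:3, :3]                     # transitions among the transient states
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print(t)                          # all three entries equal 10/3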

2. A game involves rolling a ball around a table, starting from a fixed position. Three
players, Alex, Bobby and Célia, take turns attempting to grab the ball as it passes
them. If Alex is unsuccessful then Bobby attempts to grab the ball and if he is also
unsuccessful then Célia attempts to grab the ball. If they are all unsuccessful, the
ball is rolled again from the starting position. The game stops as soon as any player
is successful (and that player wins!). Suppose that on each attempt, Alex, Bobby
and Célia have probabilities p1, p2 and p3 (all ∈ (0, 1), with p1 > 0) respectively, of
grabbing the ball, independent of previous attempts.

(a) Using Markov chain theory, find the probability that Alex wins the game.
Consider a 6-state Markov chain, with states S = {1, 2, 3, 4, 5, 6}, with state
i ∈ {1, 2, 3} representing which player’s turn it is and i+3 representing the state
where player i wins the game. Then we are looking for the hitting probability
h1,4. The equations are

h1,4 = p1 + (1− p1)h2,4
h2,4 = (1− p2)h3,4
h3,4 = (1− p3)h1,4.

Therefore, letting a = (1− p1)(1− p2)(1− p3) we see that

h1,4 = p1 + ah1,4,

and solving gives h1,4 =
p1
1−a .

Note that, as with most problems, there is more than one way to solve this one.
E.g. consider a different Markov chain with 4 states, which could roughly be
described as 1 = "Alex wins", 2 = "Bobby wins", 3 = "Célia wins" and 0 = "no one
wins" (representing what happens on each round). We start in state 0. Each round
Alex wins with probability p1 (transition 0 → 1), Bobby wins with probability
(1 − p1)p2 (transition 0 → 2), Célia wins with probability (1 − p1)(1 − p2)p3
(transition 0 → 3) and no one wins with probability (1 − p1)(1 − p2)(1 − p3)
(transition 0 → 0). Thus, p0,0 = (1 − p1)(1 − p2)(1 − p3), p0,1 = p1, etc.
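
Either chain gives the same answer, and both can be checked by simulation. A
short Monte Carlo sketch (illustrative parameter values, assuming numpy):

    import numpy as np

    rng = np.random.default_rng(0)
    p1, p2, p3 = 0.3, 0.4, 0.5            # illustrative values, all in (0, 1)

    def winner():
        # One play of the game: re-roll the ball until someone grabs it.
        while True:
            for player, p in enumerate((p1, p2, p3)):
                if rng.random() < p:
                    return player         # 0 = Alex, 1 = Bobby, 2 = Célia

    n = 100_000
    est = sum(winner() == 0 for _ in range(n)) / n
    a = (1 - p1) * (1 - p2) * (1 - p3)
    print(est, p1 / (1 - a))              # the estimate should be close to p1/(1 - a)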

(b) What condition(s) do p1, p2 and p3 have to satisfy so that each player is equally
likely to win the game? (Include the largest possible values of p1, p2, and p3 in
your answer.)
Proceeding as in the answer to the first part of this question (or using that
answer and a rotational symmetry), the probabilities of Alex, Bobby and Célia

winning are p1/(1 − a), (1 − p1)p2/(1 − a) and (1 − p1)(1 − p2)p3/(1 − a) respectively.

These probabilities are equal if and only if p1 = (1 − p1)p2 and p2 = (1 − p2)p3,
i.e. p2 = p1/(1 − p1) and p3 = p2/(1 − p2) = p1/(1 − 2p1). In particular this last
equation requires (for p3 to be a probability) that p1/(1 − 2p1) ≤ 1, which can be
rearranged to get p1 ≤ 1/3.

[This last inequality should be reduced to p1 < 1/3 if we want to keep p3 < 1, as
stated in the preamble of the question.]
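
For a concrete check of this condition, take the admissible value p1 = 1/4 (a
quick plain-Python sketch; the numbers are illustrative):

    # With p1 = 1/4 the conditions force p2 = 1/3 and p3 = 1/2.
    p1 = 0.25
    p2 = p1 / (1 - p1)                        # 1/3
    p3 = p2 / (1 - p2)                        # 1/2
    a = (1 - p1) * (1 - p2) * (1 - p3)        # 1/4
    print(p1 / (1 - a))                       # Alex:  1/3
    print((1 - p1) * p2 / (1 - a))            # Bobby: 1/3
    print((1 - p1) * (1 - p2) * p3 / (1 - a)) # Célia: 1/3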

3. Two dogs named Fluffy and Duffy have m fleas distributed between them. At
discrete time steps a flea (chosen uniformly at random) jumps to the other dog.
Let Xn be the number of fleas on Fluffy after the nth time step.

(a) Is Xn a Markov chain? What are the transition probabilities?
Yes, the one-step transitions only depend on the current state. The transition
probabilities are, for i = 0, . . . , m,

pi,i+1 = 1 − i/m,    pi,i−1 = i/m,

and zero otherwise.

(b) If X0 has a binomial distribution with parameters (m, 1/2) (meaning that each
flea tosses a fair coin to determine on which dog it starts), what is the distri-
bution of X10? [Hint: First compute the distribution of X1.]
A calculation (do it!) shows that if x is the vector of binomial probabilities
then xP = x. So inductively, xP^n = x for all n = 0, 1, 2, . . ., so the distribution
of Xn is the binomial distribution above.

(c) If m = 3, and P(X0 = 1) = 1, find the probability that Fluffy is free of fleas
before Duffy.
This is the hitting probability h1,0 for the chain with the following transition
diagram:

[Diagram: states 0, 1, 2, 3 with p0,1 = p3,2 = 1, p1,0 = p2,3 = 1/3 and
p1,2 = p2,1 = 2/3.]

Note that by symmetry, h2,3 = h1,0, so h2,0 = 1 − h2,3 = 1 − h1,0. Also,
h1,0 = 1/3 + (2/3) · h2,0. Solving gives h1,0 = 3/5.

4. Let (Xn)n≥1 be a Markov chain with state space S = {1, . . . , k} for some k ≥ 2.
Show that if i and j communicate, then the probability that the chain started in
state i reaches state j in fewer than k steps is greater than 0.

Since i and j communicate, p^(n)_{i,j} is positive for some n. Let n0 be the
smallest value of n for which p^(n)_{i,j} > 0. It is sufficient to prove that
n0 < k. Since p^(n0)_{i,j} > 0 there exists a path i = i0 → i1 → · · · → i_{n0−1} → i_{n0} = j
such that p_{i0,i1} p_{i1,i2} · · · p_{i_{n0−1},i_{n0}} > 0. Suppose that n0 ≥ k. Since the above path has
n0 + 1 > k states appearing in it, and there are only k distinct states in S, there
must be a repeated state in the sequence. That means the path has a loop in it, and by
removing this loop we obtain a shorter path whose product of transition probabilities
is also positive (it is at least as large as the original product). This contradicts
the definition of n0.
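
The loop-removal argument also says how to find a shortest positive-probability
path: breadth-first search over the edges with pu,v > 0 never revisits a state. A
small sketch (with a hypothetical 4-state chain, assuming numpy):

    from collections import deque
    import numpy as np

    def shortest_hit(P, i, j):
        # BFS over the directed graph with an edge u -> v whenever P[u, v] > 0;
        # returns the least n with p^(n)_{i,j} > 0, or None if j is unreachable.
        dist = {i: 0}
        queue = deque([i])
        while queue:
            u = queue.popleft()
            if u == j:
                return dist[u]
            for v in range(P.shape[0]):
                if P[u, v] > 0 and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return None

    # Hypothetical 4-state chain in which all states communicate; the shortest
    # path from state 0 to state 3 uses 3 = k - 1 steps, consistent with n0 < k.
    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.0, 0.5]])
    print(shortest_hit(P, 0, 3))   # 3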