Week 6: Queueing Theory
Advanced Network Technologies
School of Computer Science
Dr. Wei Bao | Lecturer
Outline
› Markov Chain
› Queueing System and Little’s Theorem
› M/M/1 Queue foundations
› M/M/1 Queue
Markov Chain
› A stochastic process
– X1, X2, X3, X4…
– {Xn, n = 1, 2, . . .}
– Xn takes on a finite or countable number of possible values.
– Xn ∈ {1,2, …, S}
– i: ith state
– Markov Property: The state of the system at time n+1
depends only on the state of the system at time n
Formally:
  Pr[Xn+1 = x | Xn = xn, Xn−1 = xn−1, …, X1 = x1] = Pr[Xn+1 = x | Xn = xn]
  X1 → X2 → X3 → X4 → X5
• Stationary Assumption: Transition probabilities are independent of time n:
  pab = Pr[Xn+1 = b | Xn = a]
Markov Chain
An example
Weather:
• raining today → 40% rain tomorrow
                  60% no rain tomorrow
• not raining today → 20% rain tomorrow
                      80% no rain tomorrow
Stochastic FSM:
  rain → rain: 0.4      rain → no rain: 0.6
  no rain → rain: 0.2   no rain → no rain: 0.8
An example
Weather, written as a transition matrix:
      rain  no rain
P = | 0.4   0.6 |
    | 0.2   0.8 |
• Stochastic matrix: rows sum up to 1
Transition matrix
For a chain with states 1, …, S:
P = | p11  p12  …  p1S |
    | p21  p22  …  p2S |
    |  ⋮    ⋮        ⋮  |
    | pS1  pS2  …  pSS |
Gambler's Example
– Gambler starts with $10
– At each play we have one of the following:
  • Gambler wins $1 with probability p
  • Gambler loses $1 with probability 1−p
– Game ends when the gambler goes broke, or gains a fortune of $100
  (Both 0 and 100 are absorbing states)
State diagram: 0 ⇄ 1 ⇄ 2 ⇄ … ⇄ 99 ⇄ 100, moving right with probability p and left with probability 1−p; start at state 10 ($10)
Gambler's Example
– transient state:
  given that we start in state i, there is a non-zero probability that we will never return to i
– recurrent state:
  non-transient
– absorbing state:
  impossible to leave this state
Coke vs. Pepsi Example
• Given that a person's last cola purchase was Coke, there is a 90% chance that his next cola purchase will also be Coke.
• If a person's last cola purchase was Pepsi, there is an 80% chance that his next cola purchase will also be Pepsi.
State transition diagram:
  coke → coke: 0.9     coke → pepsi: 0.1
  pepsi → pepsi: 0.8   pepsi → coke: 0.2
Transition matrix:
P = | 0.9  0.1 |
    | 0.2  0.8 |
Coke vs. Pepsi Example
Given that a person is currently a Pepsi purchaser, what is the probability that he will purchase Coke two purchases from now?
Pr[Pepsi → ? → Coke] = Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke]
                     = 0.2 × 0.9 + 0.8 × 0.2 = 0.34
This is the (Pepsi, Coke) entry of P²:
P² = | 0.9  0.1 |² = | 0.83  0.17 |
     | 0.2  0.8 |    | 0.34  0.66 |
Coke vs. Pepsi Example
Given that a person is currently a Coke purchaser, what is the probability that he will purchase Pepsi three purchases from now?
P³ = P · P² = | 0.9  0.1 | · | 0.83  0.17 | = | 0.781  0.219 |
              | 0.2  0.8 |   | 0.34  0.66 |   | 0.438  0.562 |
The answer is the (Coke, Pepsi) entry: 0.219.
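The two- and three-step probabilities above are just entries of P² and P³, so they can be checked with a short pure-Python sketch (no libraries; matrix values taken from the example):

```python
# Transition matrix from the Coke vs. Pepsi example,
# rows/columns ordered (Coke, Pepsi).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)    # two-step transition probabilities
P3 = matmul(P, P2)   # three-step transition probabilities

# Pr[Pepsi -> ? -> Coke] is the (Pepsi, Coke) entry of P^2.
print(round(P2[1][0], 4))  # 0.34
# Pr[Coke -> ? -> ? -> Pepsi] is the (Coke, Pepsi) entry of P^3.
print(round(P3[0][1], 4))  # 0.219
```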
Coke vs. Pepsi Example
• Assume each person makes one cola purchase per week
• Suppose 60% of all people now drink Coke, and 40% drink Pepsi
• What fraction of people will be drinking Coke three weeks from now?
P = | 0.9  0.1 |    P³ = | 0.781  0.219 |
    | 0.2  0.8 |         | 0.438  0.562 |
Pr[X3 = Coke] = 0.6 × 0.781 + 0.4 × 0.438 = 0.6438
Qi – the distribution in week i
Q0 = (0.6, 0.4) – initial distribution
Q3 = Q0 · P³ = (0.6438, 0.3562)
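The update Qi+1 = Qi · P can also be applied week by week instead of computing P³ first; a minimal sketch with the values from the example:

```python
# Transition matrix (Coke, Pepsi) and initial population split.
P = [[0.9, 0.1],
     [0.2, 0.8]]
Q = [0.6, 0.4]  # Q0: 60% Coke, 40% Pepsi

for _ in range(3):
    # Q_{i+1} = Q_i * P  (row vector times matrix)
    Q = [sum(Q[k] * P[k][j] for k in range(2)) for j in range(2)]

print([round(q, 4) for q in Q])  # [0.6438, 0.3562]
```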
Coke vs. Pepsi Example
Simulation of Pr[Xi = Coke] as a function of week i:
[figure: Pr[Xi = Coke] vs. week i, converging to 2/3]
The limit is a stationary distribution:
[2/3  1/3] · | 0.9  0.1 | = [2/3  1/3]
             | 0.2  0.8 |
Steady State and Stationary distribution
lim_{n→∞} Pⁿ = 1·π   (1 is the all-ones column vector: every row of Pⁿ converges to π)
π = π·P
lim_{n→∞} P(Xn = i) = πi
Steady State and Stationary distribution
P = | 0.9  0.1 |
    | 0.2  0.8 |
P¹⁰ = | 0.6761  0.3239 |
      | 0.6478  0.3522 |
P¹⁰⁰ = | 0.6667  0.3333 |
       | 0.6667  0.3333 |
[2/3  1/3] · P = [2/3  1/3],  so  π = [2/3  1/3]
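Both facts can be checked numerically: iterating Q → Q·P from any starting distribution converges to π, and π is a fixed point of the update. A pure-Python sketch for the Coke/Pepsi chain:

```python
P = [[0.9, 0.1],
     [0.2, 0.8]]
Q = [1.0, 0.0]  # start as a pure Coke drinker

# Repeatedly apply Q -> Q*P; the second eigenvalue is 0.7, so after
# 100 steps the distribution is stationary to machine precision.
for _ in range(100):
    Q = [sum(Q[k] * P[k][j] for k in range(2)) for j in range(2)]

print([round(q, 4) for q in Q])  # [0.6667, 0.3333]

# pi = (2/3, 1/3) is a fixed point: pi * P = pi
pi = [2 / 3, 1 / 3]
piP = [sum(pi[k] * P[k][j] for k in range(2)) for j in range(2)]
print([round(x, 4) for x in piP])  # [0.6667, 0.3333]
```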
Steady State and Stationary distribution
PageRank: A Web surfer browses pages in a five-page Web universe shown in the figure. The surfer selects the next page to view uniformly at random from the pages pointed to by the current page. If a page has no outgoing link (e.g., page 2), then the surfer selects any of the pages in the universe with equal probability. Find the probability that the surfer views page i.
Steady State and Stationary distribution
Transition matrix P
Steady State and Stationary distribution
Stationary Distribution:
Solve the following equations:
  π = π·P
  Σ_{i=1}^{5} π_i = 1
π = (0.12195, 0.18293, 0.25610, 0.12195, 0.31707)
Search-engine page rank (highest to lowest): 5, 3, 2, 1, 4
Queueing System and Little's Theorem
Queueing System
• Customers = Data packets
• Service Time = Packet Transmission Time (function of packet length and
transmission speed)
• Queueing delay = time spent in buffer before transmission
• Average number of customers in the system
– Typical number of customers either waiting in queue or undergoing service
• Average delay per customer
– Typical time a customer spends waiting in queue + service time
Queueing System
• W: average waiting time in queue
• X: average service time
• T: average time spent in system (T = W + X)
• NQ: average number of customers waiting in queue
• ρ: utilization = average number of customers in service
• N: average number of customers in the system (N = NQ + ρ)
• λ: average arrival rate
• Want to show later: N = λT (Little's theorem)
Number: N = NQ + ρ        Time: T = W + X
Little’s Theorem
α(t) = Number of customers who arrived in the interval [0, t]
β(t) = Number of customers who departed in the interval [0, t]
N(t) = Number of customers in the system at time t, N(t) = α(t) – β(t)
Ti = Time spent in the system by the i-th arriving customer
[figure: α(τ) (number of arrivals) and β(τ) (number of departures) plotted against time τ; the vertical gap between the curves is N(τ), and the horizontal gap at customer i is its delay Ti]
Little's Theorem
Nt = (1/t) ∫₀ᵗ N(τ) dτ              Average # of customers up to time t
N = lim_{t→∞} (1/t) ∫₀ᵗ N(τ) dτ     Average # of customers in the long term
Little's Theorem
λt = α(t)/t                  Average arrival rate up to time t
λ = lim_{t→∞} α(t)/t         Average arrival rate in the long term
Little's Theorem
Tt = Σ_{i=1}^{α(t)} Ti / α(t)               Average customer delay up to time t
T = lim_{t→∞} Σ_{i=1}^{α(t)} Ti / α(t)      Average customer delay in the long term
Little's Theorem
The shaded area under N(τ), taken up to an instant t when the system is empty, can be computed two ways:
∫₀ᵗ N(τ) dτ = Σ_{i=1}^{α(t)} Ti
Little's Theorem
Dividing both sides by t:
(1/t) ∫₀ᵗ N(τ) dτ = (1/t) Σ_{i=1}^{α(t)} Ti
Little's Theorem
Rewriting the right-hand side:
(1/t) ∫₀ᵗ N(τ) dτ = (α(t)/t) · (Σ_{i=1}^{α(t)} Ti / α(t))
i.e.,  Nt = λt · Tt
Little's Theorem
Taking t → ∞ on both sides of Nt = λt · Tt:
N = λT
Little’s Theorem
Note that Little's Theorem holds for any service discipline (e.g., first-in-first-out, last-in-first-out), any interarrival-time distribution, and any service-time distribution.
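The accounting identity behind the proof, Nt = λt · Tt at any instant when the system is empty, can be checked with a small simulation. This is a rough sketch, assuming a single FIFO server and exponential interarrival/service times with illustrative rates λ = 1, μ = 2:

```python
import random

random.seed(42)
lam, mu = 1.0, 2.0   # illustrative arrival and service rates
n = 10_000           # number of customers

# Arrival times from exponential interarrivals.
t, arrivals = 0.0, []
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

# FIFO single server: each customer starts when both it has arrived
# and the server is free.
departures, free_at = [], 0.0
for a in arrivals:
    start = max(a, free_at)
    free_at = start + random.expovariate(mu)
    departures.append(free_at)

T = [d - a for a, d in zip(arrivals, departures)]  # time in system
end = departures[-1]       # all n customers have left: system is empty

N_avg = sum(T) / end       # time-average number in system, (1/t)∫N(τ)dτ
lam_avg = n / end          # observed arrival rate, α(t)/t
T_avg = sum(T) / n         # average time in system

# Little's theorem as an exact identity at an empty instant:
print(abs(N_avg - lam_avg * T_avg) < 1e-9)  # True
```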
Little's Theorem
• N = λT
• NQ = λW
• ρ = proportion of time that the server is busy = λX
• T = W + X
• N = NQ + ρ
Rate × Time = Number
M/M/1 Queue foundations
Exponential Distribution
› The cumulative distribution function F(x) and probability density function f(x) are:
›   F(x) = 1 − e^(−λx),  f(x) = λe^(−λx),   x ≥ 0, λ > 0
› The mean equals the standard deviation: E[X] = σX = 1/λ
Memoryless Property
› P(X > s + t| X > t) = P(X > s) for all s, t ≥ 0
› The only continuous distribution with this property
› Practice Q2 in Tutorial Week 4
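The memoryless property can be checked empirically: conditioning on X > t should not change the distribution of the remaining time. A Monte Carlo sketch with illustrative values λ = 1.5, s = 0.5, t = 1.0:

```python
import random

random.seed(0)
lam = 1.5
samples = [random.expovariate(lam) for _ in range(200_000)]

s, t = 0.5, 1.0
# Empirical P(X > s + t | X > t): restrict to samples exceeding t.
tail_t = [x for x in samples if x > t]
cond = sum(x > s + t for x in tail_t) / len(tail_t)
# Empirical P(X > s), unconditional.
uncond = sum(x > s for x in samples) / len(samples)

# Both should be close to e^(-lam*s) = e^(-0.75) ~ 0.472.
print(round(cond, 3), round(uncond, 3))
```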
Other Properties of Exponential Distribution
› Let X1, …, Xn be i.i.d. exponential r.v.s with mean 1/λ; then X1 + X2 + … + Xn follows a gamma distribution with parameters n and λ. (Practice Q1 in Tutorial Week 4)
› Suppose X1 and X2 are independent exponential r.v.s with means 1/λ1 and 1/λ2, respectively; then
›   P(X1 < X2) = λ1 / (λ1 + λ2)
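The competing-exponentials formula is easy to sanity-check by Monte Carlo; a sketch with illustrative rates λ1 = 2, λ2 = 3, where the formula predicts P(X1 < X2) = 2/5:

```python
import random

random.seed(1)
lam1, lam2 = 2.0, 3.0
trials = 200_000

# Draw independent exponentials and count how often X1 wins.
hits = sum(random.expovariate(lam1) < random.expovariate(lam2)
           for _ in range(trials))
est = hits / trials

# Should be close to lam1 / (lam1 + lam2) = 0.4.
print(round(est, 3))
```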
Counting Process
› A stochastic process {N(t), t ≥ 0} is a counting process if N(t) represents the total
# of events that have occurred up to time t.
› 1. N(t) ≥ 0 and N(t) is integer valued.
› 2. If s < t, then N(s) ≤ N(t)
› 3. For s < t, N(t) – N(s) = # of events that have occurred in (s, t)
› • Examples:
› – # of people who have entered a particular store by time t
› – # of packets sent by a mobile phone
› • A counting process is said to have independent increments if the numbers of events occurring in disjoint time intervals are independent.
› • A counting process is said to have stationary increments if the distribution of the number of events occurring in any interval of time depends only on the length of the interval.
Poisson Process
› The counting process {N(t), t ≥ 0} is said to be a Poisson process with rate λ > 0 if
› 1. N(0) = 0
› 2. The process has independent increments (i.e., the numbers of events occurring in disjoint time intervals are independent)
›    – for 0 < t1 < t2 < t3 < t4 and all n, j ≥ 0:
›      P{N(t4) − N(t3) = n | N(t2) − N(t1) = j} = P{N(t4) − N(t3) = n}
› 3. The number of events in any interval of length t is Poisson distributed with mean λt. That is, for all s, t ≥ 0:
›      P(N(t+s) − N(s) = n) = e^(−λt) (λt)ⁿ / n!
›      E[N(t+s) − N(s)] = λt
Poisson Process
[figure: arrivals on a time axis starting from N(0) = 0; arrivals in [t1, t2] are independent of arrivals in [t3, t4], with mean counts λ(t2−t1) and λ(t4−t3)]
Poisson Process: Interarrival time distribution
P(T1 > t) = P(N(t) = 0) = e^(−λt)
P(T2 > t | T1 = t1) = e^(−λt)
Interarrival times follow an exponential distribution with parameter λ (mean 1/λ)
Number of arrivals in a short period of time
For a very short interval of length h:
P{N(t+h) − N(t) = 1} = λh + o(h); and
P{N(t+h) − N(t) ≥ 2} = o(h).
o(·): small-o notation. A function f(·) is said to be o(h) if
  lim_{h→0} f(h)/h = 0
Summary
Poisson process:
• Independent increments
• Number of arrivals in an interval: Poisson distributed
• In a small period of time h: 1 arrival with probability λh (up to o(h))
• Interarrival time distribution: exponential
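These properties fit together: summing exponential interarrivals until a horizon t yields a count with Poisson mean (and variance) λt. A simulation sketch with illustrative values λ = 2, t = 10, so E[N(t)] = Var[N(t)] = 20:

```python
import random

random.seed(7)
lam, horizon = 2.0, 10.0   # illustrative rate and observation window

def poisson_count(lam, horizon):
    """Count arrivals in [0, horizon] by summing exponential interarrivals."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > horizon:
            return n
        n += 1

counts = [poisson_count(lam, horizon) for _ in range(20_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)

# For a Poisson process both should be close to lam * horizon = 20.
print(round(mean, 1), round(var, 1))
```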
M/M/1 Queue
Queues: Kendall’s notation
› Notations Used in Queueing Systems
› X/Y/Z
› – X refers to the distribution of the interarrival times
› – Y refers to the distribution of service times
› – Z refers to the number of servers
› Common distributions:
› – M = Memoryless = exponential distribution
› – D = Deterministic arrivals or fixed-length service
› – G = General distribution of interarrival times or service times
› M/M/1 refers to a single-server queuing model with exponential interarrival times (i.e.,
Poisson arrivals) and exponential service times.
› In all cases, successive interarrival times and service times are assumed to be
statistically independent of each other.
M/M/1 Queue
›Arrival:
›Poisson arrival with rate λ
›Service:
›Service time: exponential distribution with mean 1/μ
›μ: service rate
›λ < μ: incoming rate < outgoing rate (stability condition)
Markov Chain Formulation
δ: a small value
Nk = Number of customers in the system at time kδ
N0 N1 N2… is a Markov Chain!
Q: How to compute the transition probability?
Markov Chain Formulation
P(0 customers arrive in δ) = 1 − λδ + o(δ)
P(1 customer arrives in δ) = λδ + o(δ)
P(≥ 2 customers arrive in δ) = o(δ)
Markov Chain Formulation
P(0 customers leave in δ) = 1 − μδ + o(δ)   (for i ≥ 1)
P(1 customer leaves in δ) = μδ + o(δ) if i ≥ 1,  and 0 if i = 0 (no one in the system)
P(≥ 2 customers leave in δ) = o(δ)
Markov Chain Formulation
Aim: compute Pij = P{Nk+1 = j | Nk = i}
For example, for P{Nk+1 = i | Nk = i} with i > 0:
  P(0 customer arrives)·P(0 customer departs)
  + P(1 customer arrives)·P(1 customer departs)
  + P(other)
= [1 − λδ + o(δ)][1 − μδ + o(δ)] + [λδ + o(δ)][μδ + o(δ)] + o(δ)
= 1 − λδ − μδ + o(δ)
Result: Pii = 1 − λδ − μδ + o(δ)
Markov Chain Formulation
Result:
π_i: stationary distribution of state i
  = the probability that there are i units in the system
Stationary Distribution Derivation
How to derive π_i? The balance equation must be satisfied at each state:
incoming probability flow = outgoing probability flow
At state 2:  λδπ_1 + μδπ_3 = λδπ_2 + μδπ_2
Stationary Distribution Derivation
How to derive π_i? Balance at each state:
State 0:  λδπ_0 = μδπ_1
State 1:  λδπ_1 + μδπ_1 = λδπ_0 + μδπ_2
Subtracting the state-0 equation:  λδπ_1 = μδπ_2
Stationary Distribution Derivation
How to derive π_i? Balancing every state gives the chain of equations:
λδπ_0 = μδπ_1
λδπ_1 = μδπ_2
λδπ_2 = μδπ_3
λδπ_i = μδπ_{i+1}   (for any i)
Stationary Distribution Derivation
How to derive π_i?
π_1 = (λ/μ) π_0
π_2 = (λ/μ)² π_0
…
π_i = (λ/μ)^i π_0
Normalization:  Σ_{i=0}^{∞} π_i = 1   (sum of a geometric sequence)
Stationary Distribution Derivation
With ρ = λ/μ < 1:
π_1 = ρπ_0,  π_2 = ρ²π_0,  …,  π_i = ρ^i π_0
Normalization:  Σ_{i=0}^{∞} π_i = 1   (sum of a geometric sequence)
Stationary Distribution Derivation
How to derive π_i?
Σ_{i=0}^{N} ρ^i π_0 = π_0 (1 − ρ^{N+1})/(1 − ρ)  →  π_0/(1 − ρ)   as N → ∞
Setting the total probability to 1:
π_0 = 1 − ρ
π_i = ρ^i (1 − ρ)
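The resulting distribution π_i = ρ^i(1 − ρ) can be verified numerically: it sums to 1 and satisfies the balance equations λπ_i = μπ_{i+1}. A sketch with illustrative rates λ = 2, μ = 5:

```python
lam, mu = 2.0, 5.0          # illustrative arrival and service rates
rho = lam / mu              # utilization, must be < 1

# M/M/1 stationary distribution: pi_i = rho^i * (1 - rho)
pi = [(rho ** i) * (1 - rho) for i in range(200)]

# The probabilities sum to 1 (geometric series)...
print(round(sum(pi), 6))    # 1.0

# ...and satisfy the balance equations lam*pi_i = mu*pi_{i+1}.
print(all(abs(lam * pi[i] - mu * pi[i + 1]) < 1e-12
          for i in range(100)))  # True
```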
Average number of users
Average number of users in the system:
E(N) = Σ_{n=0}^{∞} n π_n = Σ_{n=0}^{∞} n ρ^n (1 − ρ)
     = (1 − ρ) ρ Σ_{n=0}^{∞} n ρ^{n−1}
     = (1 − ρ) ρ · ∂/∂ρ [ Σ_{n=0}^{∞} ρ^n ]
     = (1 − ρ) ρ · ∂/∂ρ [ 1/(1 − ρ) ]
     = (1 − ρ) ρ / (1 − ρ)²
     = ρ/(1 − ρ)
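The closed form E(N) = ρ/(1 − ρ) can be checked against a direct (truncated) evaluation of Σ n ρ^n (1 − ρ); a sketch with illustrative rates λ = 2, μ = 5, i.e. ρ = 0.4:

```python
lam, mu = 2.0, 5.0
rho = lam / mu

# Direct sum of n * pi_n with pi_n = rho^n * (1 - rho), truncated at 500
# terms (the tail is negligible for rho = 0.4).
EN = sum(n * (rho ** n) * (1 - rho) for n in range(500))

print(round(EN, 6))                 # 0.666667
print(round(rho / (1 - rho), 6))    # 0.666667
```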
Average waiting time
Average time spent in the system, by Little's Theorem:
E(T) = E(N)/λ = 1/(μ − λ)
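The prediction E(T) = 1/(μ − λ) can be compared against a simulated M/M/1 queue. This is a rough sketch, assuming FIFO service and illustrative rates λ = 1, μ = 2, for which E(T) = 1:

```python
import random

random.seed(3)
lam, mu = 1.0, 2.0      # illustrative rates; predicted E(T) = 1/(mu-lam) = 1
n = 200_000

# FIFO single server: track each customer's arrival and departure.
t, free_at, total_T = 0.0, 0.0, 0.0
for _ in range(n):
    t += random.expovariate(lam)              # arrival time
    start = max(t, free_at)                   # wait if server is busy
    free_at = start + random.expovariate(mu)  # departure time
    total_T += free_at - t                    # time spent in system

# Sample mean should be close to 1/(mu - lam) = 1.0.
print(round(total_T / n, 2))
```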
Stationary Distribution Derivation
The δ's cancel, so the balance equations can be written directly in terms of rates:
λπ_0 = μπ_1
λπ_1 + μπ_1 = λπ_0 + μπ_2
...
Following the same steps, we derive the same result.
Stationary Distribution Derivation
Queueing delay goes to infinity as the arrival rate approaches the service rate: E(T) = 1/(μ − λ) → ∞ as λ → μ!
M/M/m Queue
›Arrival:
›Poisson arrival with rate λ
›Service:
›Service time for one server: exponential distribution with mean 1/μ
›Service rate is iμ when there are i customers in the system (up to m servers busy)
Balance equations:
λπ_{i−1} = iμ π_i    for i ≤ m
λπ_{i−1} = mμ π_i    for i > m
Solving the balance equations:
π_n = π_0 (mρ)^n / n!      for n ≤ m
π_n = π_0 m^m ρ^n / m!     for n > m
where ρ = λ/(mμ) < 1. Then π_0 can be solved from the normalization Σ_n π_n = 1.
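The M/M/m formulas can be evaluated numerically and checked against the balance equations λπ_{n−1} = min(n, m)·μ·π_n. A sketch with illustrative parameters m = 5 servers, λ = 3, μ = 1 (so ρ = 0.6):

```python
from math import factorial

lam, mu, m = 3.0, 1.0, 5      # illustrative: 5 servers, rho = 0.6
rho = lam / (m * mu)
assert rho < 1                # stability condition

N = 200  # truncation level for the infinite normalizing sum

def unnorm(n):
    """pi_n / pi_0 for the M/M/m queue."""
    if n <= m:
        return (m * rho) ** n / factorial(n)
    return (m ** m) * (rho ** n) / factorial(m)

pi0 = 1.0 / sum(unnorm(n) for n in range(N))
pi = [pi0 * unnorm(n) for n in range(N)]

print(round(sum(pi), 6))  # 1.0
# Balance: lam*pi_{n-1} = n*mu*pi_n for n <= m, and m*mu*pi_n for n > m.
ok = all(abs(lam * pi[n - 1] - min(n, m) * mu * pi[n]) < 1e-12
         for n in range(1, 50))
print(ok)  # True
```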
M/M/∞ Queue
M/M/m/m Queue
M/M/m/n Queue
Buffer size is n − m.
Arrivals will be dropped if there are n users in the system.
How do you derive its stationary distribution?
Practice
› Analyze the M/M/∞ and M/M/m/n queues:
- Draw the state transition diagrams
- Derive their stationary distributions
- For the M/M/m/n queue, calculate the probability that an incoming user is dropped, and the probability that the queue is empty (i.e., all users are being served or there are no users at all).