Advanced Network Technologies: Queueing Theory
Dr. Wei Bao | Lecturer, School of Computer Science
Outline
› Markov Chain
› Queueing System and Little’s Theorem
› M/M/1 Queue foundations
› M/M/1 Queue
Markov Chain
› Markov Chain
Markov Chain
› A stochastic process X1, X2, X3, X4, …
– {Xn, n = 1, 2, . . .}
– Xn ∈ {1, 2, …, S}
– Xn takes on a finite or countable number of possible values.
– i: the ith state
– Markov Property: the state of the system at time n+1 depends only on the state of the system at time n:
Pr[Xn+1 = xn+1 | Xn = xn, …, X2 = x2, X1 = x1] = Pr[Xn+1 = xn+1 | Xn = xn]
Markov Chain
[Figure: sequence of states X1 → X2 → X3 → X4 → X5]
• Stationary Assumption: transition probabilities are independent of time n: Pr[Xn+1 = b | Xn = a] = pab
An example
Weather:
• raining today
• not raining today
Stochastic FSM:
If raining today: 40% rain tomorrow, 60% no rain tomorrow.
If not raining today: 20% rain tomorrow, 80% no rain tomorrow.
[State diagram: rain → rain 0.4, rain → no rain 0.6, no rain → no rain 0.8, no rain → rain 0.2]
An example
Weather:
• raining today
• not raining today
Matrix:
If raining today: 40% rain tomorrow, 60% no rain tomorrow.
If not raining today: 20% rain tomorrow, 80% no rain tomorrow.
• Stochastic matrix: rows sum up to 1
P = [0.4 0.6; 0.2 0.8]
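The one-step evolution of a distribution under this matrix can be checked with a short sketch (plain Python; the `step` helper is ours, not from the slides):

```python
# One-step evolution of the weather Markov chain.
# Rows of P are today's state, columns are tomorrow's state.
P = [
    [0.4, 0.6],  # today rain:    40% rain, 60% no rain tomorrow
    [0.2, 0.8],  # today no rain: 20% rain, 80% no rain tomorrow
]

def step(dist, P):
    """Propagate a distribution over states one step: d' = d . P"""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# If it is certainly raining today, tomorrow's distribution is the first row.
print(step([1.0, 0.0], P))  # [0.4, 0.6]
```

Note that `step` implements the row-vector convention used throughout these slides (distributions multiply P from the left).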
Transition matrix
P = [p11 p12 … p1S; p21 p22 … p2S; … ; pS1 pS2 … pSS]
Gambler’s Example
– Gambler starts with $10
– At each play we have one of the following:
• Gambler wins $1 with probability p
• Gambler loses $1 with probability 1-p
– Game ends when gambler goes broke, or gains a fortune of $100 (Both 0 and 100 are absorbing states)
[State diagram: states 0, 1, 2, …, 99, 100; from each interior state i, move to i+1 with probability p and to i−1 with probability 1−p; start at state 10 ($10); states 0 and 100 are absorbing]
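The ruin probability of this chain can be computed numerically from the one-step relation u_i = p·u_{i+1} + (1−p)·u_{i−1} with absorbing boundaries u_0 = 1 and u_N = 0. The sketch below is ours and, for a quick illustration, uses a smaller fortune N = 10 instead of the slide's 100:

```python
# Ruin probability for the gambler's chain, solved by repeated sweeps of
# the one-step relation u_i = p*u_{i+1} + (1-p)*u_{i-1}, with absorbing
# boundaries u_0 = 1 (broke) and u_N = 0 (fortune reached).
# Illustration uses N = 10 (the slide's version uses N = 100).

def ruin_probability(p, start, N, sweeps=20000):
    u = [0.0] * (N + 1)
    u[0] = 1.0            # already broke
    u[N] = 0.0            # already rich
    for _ in range(sweeps):          # Gauss-Seidel-style sweeps to convergence
        for i in range(1, N):
            u[i] = p * u[i + 1] + (1 - p) * u[i - 1]
    return u[start]

# Fair game (p = 0.5): ruin probability from state i is 1 - i/N.
print(ruin_probability(0.5, 5, 10))  # ≈ 0.5
```

For a fair game the result matches the well-known closed form 1 − i/N; for p ≠ 0.5 the same solver applies unchanged.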
Gambler’s Example
– transient state: given that we start in state i, there is a non-zero probability that we will never return to i
– recurrent state: non-transient
– absorbing state: impossible to leave this state
Coke vs. Pepsi Example
• Given that a person’s last cola purchase was Coke, there is a 90% chance that his next cola purchase will also be Coke.
• If a person’s last cola purchase was Pepsi, there is an 80% chance that his next cola purchase will also be Pepsi.
transition matrix:
P = [0.9 0.1; 0.2 0.8]
state transition diagram:
[Diagram: coke → coke 0.9, coke → pepsi 0.1, pepsi → pepsi 0.8, pepsi → coke 0.2]
Coke vs. Pepsi Example
Given that a person is currently a Pepsi purchaser, what is the probability that he will purchase Coke two purchases from now?
Pr[Pepsi → ? → Coke] =
Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke] =
0.2 · 0.9 + 0.8 · 0.2 = 0.34

P² = [0.9 0.1; 0.2 0.8]² = [0.83 0.17; 0.34 0.66]
The (Pepsi, Coke) entry of P² is 0.34.
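The matrix-squaring computation can be reproduced with a short sketch (plain Python; the `matmul` helper is ours):

```python
# Two-purchases-ahead probabilities via matrix squaring: the (Pepsi, Coke)
# entry of P^2 is 0.2*0.9 + 0.8*0.2 = 0.34, matching the slide.
COKE, PEPSI = 0, 1
P = [[0.9, 0.1],
     [0.2, 0.8]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(P2)               # entries ≈ [[0.83, 0.17], [0.34, 0.66]]
print(P2[PEPSI][COKE])  # ≈ 0.34
```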
Coke vs. Pepsi Example
Given that a person is currently a Coke purchaser, what is the probability that he will purchase Pepsi three purchases from now?
P³ = P · P² = [0.9 0.1; 0.2 0.8] · [0.83 0.17; 0.34 0.66] = [0.781 0.219; 0.438 0.562]
The answer is the (Coke, Pepsi) entry of P³: 0.219.
Coke vs. Pepsi Example
•Assume each person makes one cola purchase per week
• Suppose 60% of all people now drink Coke, and 40% drink Pepsi
• What fraction of people will be drinking Coke three weeks from now?
P = [0.9 0.1; 0.2 0.8]
P³ = [0.781 0.219; 0.438 0.562]
Pr[X3 = Coke] = 0.6 · 0.781 + 0.4 · 0.438 = 0.6438
Qi – the distribution in week i
Q0 = (0.6, 0.4) – initial distribution
Q3 = Q0 · P³ = (0.6438, 0.3562)
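The evolution Q3 = Q0 · P³ can be verified by stepping the distribution three times (our sketch, reusing the same row-vector convention):

```python
# Evolving the population mix three weeks forward: Q_3 = Q_0 . P^3.
P = [[0.9, 0.1],
     [0.2, 0.8]]
Q0 = [0.6, 0.4]  # 60% Coke, 40% Pepsi now

def step(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

Q = Q0
for _ in range(3):
    Q = step(Q, P)
print(Q)  # ≈ [0.6438, 0.3562], matching the slide
```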
Coke vs. Pepsi Example
Simulation: [2/3 1/3] · [0.9 0.1; 0.2 0.8] = [2/3 1/3], the stationary distribution.
[Figure: Pr[Xi = Coke] plotted against week i for chains started from coke and from pepsi; both curves converge to the stationary value 2/3]
Steady State and Stationary distribution
lim(n→∞) P(Xn = i) = πi
lim(n→∞) Pⁿ = 1·π (every row of Pⁿ tends to the row vector π)
π = π · P
Steady State and Stationary distribution
P = [0.9 0.1; 0.2 0.8]
π = [2/3 1/3]
P¹⁰ = [0.6761 0.3239; 0.6478 0.3522]
P¹⁰⁰ = [0.6667 0.3333; 0.6667 0.3333]
[2/3 1/3] · [0.9 0.1; 0.2 0.8] = [2/3 1/3]
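The convergence of Pⁿ can be seen by power iteration: start from any distribution and multiply by P repeatedly (our sketch):

```python
# Finding the stationary distribution by repeated multiplication: from any
# starting distribution, Q_n = Q_0 . P^n converges to pi, with pi = pi . P.
P = [[0.9, 0.1],
     [0.2, 0.8]]

def step(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [1.0, 0.0]           # arbitrary starting distribution (all Coke)
for _ in range(200):
    pi = step(pi, P)
print(pi)  # ≈ [2/3, 1/3]
```

The second eigenvalue of this P is 0.7, so the error shrinks by a factor 0.7 each step, which is why the curves on the simulation slide flatten out within a few weeks.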
Steady State and Stationary distribution
PageRank: A Web surfer browses pages in a five-page Web universe shown in the figure. The surfer selects the next page to view by selecting with equal probability from the pages pointed to by the current page. If a page has no outgoing link (e.g., page 2), then the surfer selects any of the pages in the universe with equal probability. Find the probability that the surfer views page i.
Steady State and Stationary distribution
Transition matrix P
Steady State and Stationary distribution
Stationary Distribution: solve the following equations:
π = π · P
Σ(i=1..5) πi = 1
π = (0.12195, 0.18293, 0.25610, 0.12195, 0.31707)
Search-engine page rank (descending probability): 5, 3, 2, 1, 4
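The same computation can be sketched in code. The slide's five-page link graph lives in a figure not reproduced here, so the example below uses a hypothetical three-page universe; the dangling-page rule (a page with no outgoing links sends the surfer to any page with equal probability) is the one the slide states:

```python
# PageRank-style stationary distribution for a toy web universe.
links = {0: [1, 2], 1: [2], 2: []}   # hypothetical link structure; page 2 dangles
n = len(links)

# Build the row-stochastic transition matrix.
P = []
for page in range(n):
    out = links[page]
    if out:                            # follow a random outgoing link
        P.append([1 / len(out) if q in out else 0.0 for q in range(n)])
    else:                              # dangling page: jump anywhere uniformly
        P.append([1 / n] * n)

# Power iteration for pi = pi . P.
pi = [1 / n] * n
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
print(pi)  # ≈ [2/11, 3/11, 6/11]; ranking: page 2, page 1, page 0
```

Replacing `links` with the slide's actual five-page graph reproduces the π given above.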
Queueing System and Little’s Theorem
› Queueing System and Little’s Theorem
Queueing System
• Customers = Data packets
• Service Time = Packet Transmission Time (function of packet length and transmission speed)
• Queueing delay = time spent in buffer before transmission
• Average number of customers in system
– Typical number of customers either waiting in queue or undergoing service
• Average delay per customer
– Typical time a customer spends waiting in queue + service time
Queueing System
N_Q + ρ = N (number), W + X = T (time)
• W: average waiting time in queue
• X: average service time
• T: average time spent in system (T = W + X)
• N_Q: average number of customers in queue
• ρ: utilization = average number of customers in service
• N: average number of customers in system (N = N_Q + ρ)
• λ: average arrival rate
• Want to show later: N = λT (Little’s theorem)
Little’s Theorem
α(t) = Number of customers who arrived in the interval [0, t]
β(t) = Number of customers who departed in the interval [0, t]
N(t) = Number of customers in the system at time t, N(t) = α(t) − β(t)
Ti = Time spent in the system by the i-th arriving customer
Little’s Theorem
Average # of customers until t: Nt = (1/t) ∫₀ᵗ N(τ) dτ
Average # of customers in the long term: N = lim(t→∞) (1/t) ∫₀ᵗ N(τ) dτ
Little’s Theorem
Average arrival rate until t: λt = α(t)/t
Average arrival rate in the long term: λ = lim(t→∞) α(t)/t
Little’s Theorem
Average customer delay until t: Tt = (1/α(t)) Σ(i=1..α(t)) Ti
Average customer delay in the long term: T = lim(t→∞) (1/α(t)) Σ(i=1..α(t)) Ti
Little’s Theorem
[Figure: α(t) and β(t) staircase curves up to a time t′ at which the queue is empty; the shaded area lies between them]
Shaded area when the queue is empty, two ways to compute:
∫₀ᵗ N(τ) dτ = Σ(i=1..α(t)) Ti
Little’s Theorem
Dividing both sides by t:
(1/t) ∫₀ᵗ N(τ) dτ = (1/t) Σ(i=1..α(t)) Ti
Little’s Theorem
Rewriting the right-hand side:
Nt = (1/t) ∫₀ᵗ N(τ) dτ = (α(t)/t) · (1/α(t)) Σ(i=1..α(t)) Ti = λt · Tt
Little’s Theorem
Nt = λt · Tt
Letting t → ∞ (along times t′ at which the queue is empty): N = λT
Little’s Theorem
Note that Little’s Theorem above is valid for any service discipline (e.g., first-in-first-out, last-in-first-out), any interarrival time distribution, and any service time distribution.
Little’s Theorem
N_Q + ρ = N (number), W + X = T (time)
Rate × Time = Number, with rate λ throughout:
• N = λT
• N_Q = λW
• ρ = proportion of time that the server is busy = λX
• T = W + X
• N = N_Q + ρ
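The shaded-area argument can be replayed numerically. The sketch below (our code, with hypothetical rates λ = 3, μ = 5) simulates a single-server queue, then computes the area between the arrival and departure curves both ways, so N and λT come out identical, exactly as the derivation above predicts:

```python
import random

# Empirical check of Little's law N = lambda * T on a simulated M/M/1 trace.
# We measure over [0, t_end] where t_end is the last departure, so the system
# is empty at both ends and the identity holds exactly (up to rounding).
random.seed(1)
lam, mu, num_jobs = 3.0, 5.0, 10000

arrivals, t = [], 0.0
for _ in range(num_jobs):
    t += random.expovariate(lam)      # exponential inter-arrival times
    arrivals.append(t)

departures, free_at = [], 0.0
for a in arrivals:
    start = max(a, free_at)           # FIFO: wait if the server is busy
    free_at = start + random.expovariate(mu)
    departures.append(free_at)

t_end = departures[-1]
area = sum(d - a for a, d in zip(arrivals, departures))  # integral of N(t)
N = area / t_end                 # time-average number in system
lam_hat = num_jobs / t_end       # empirical arrival rate
T = area / num_jobs              # average time in system
print(N, lam_hat * T)            # equal, as Little's theorem requires
```

The simulated T also lands near the M/M/1 value 1/(μ − λ) = 0.5, which is derived later in these slides.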
M/M/1 Queue foundations
› M/M/1 Queue foundations
Exponential Distribution
› Exponential Distribution
› The cumulative distribution function F(x) and probability density function f(x) are:
› F(x) = 1 − e^(−λx), f(x) = λ e^(−λx), x ≥ 0, λ > 0
The mean is equal to its standard deviation: E[X] = σX = 1/λ
Memoryless Property
› P(X > s + t| X > t) = P(X > s) for all s, t ≥ 0
› The only continuous distribution with this property
› Practice Q2 in Tutorial Week 4
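The memoryless property can be checked empirically (our sketch, with hypothetical values λ = 2, s = 0.3, t = 0.5):

```python
import random

# Empirical check of memorylessness: P(X > s+t | X > t) = P(X > s)
# for an exponential random variable with rate lam.
random.seed(0)
lam, s, t = 2.0, 0.3, 0.5
samples = [random.expovariate(lam) for _ in range(200000)]

survived_t = [x for x in samples if x > t]
lhs = sum(x > s + t for x in survived_t) / len(survived_t)  # conditional
rhs = sum(x > s for x in samples) / len(samples)            # unconditional
print(lhs, rhs)  # both close to exp(-lam*s) ≈ 0.5488
```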
Other Properties of Exponential Distribution
› Let X1, …, Xn be i.i.d. exponential r.v.s with mean 1/λ; then X1 + X2 + … + Xn follows a gamma distribution with parameters n and λ (Practice Q2 in Tutorial Week 4).
› Suppose X1 and X2 are independent exponential r.v.s with means 1/λ1 and 1/λ2, respectively; then P(X1 < X2) = λ1 / (λ1 + λ2).
Poisson Process
› A counting process {N(t), t ≥ 0} is a Poisson process with rate λ if:
› 1. N(0) = 0
› 2. The process has independent increments (i.e., # of events which occur in disjoint time intervals are independent)
› 3. For 0 ≤ s < t, the number of events in (s, t] is Poisson distributed with mean λ(t − s)
[Figure: arrivals on a time axis 0…t with inter-arrival times T1, T2, T3]
Poisson Process: Inter arrival time distribution
Given that an event arrives now, what is the distribution of T, where T is the time duration between now and next arrival event?
Exponentially distributed with parameter λ ?
[Timeline: from “now” to the next arrival at some later time t]
Poisson Process: Inter arrival time distribution
Given that the last event arrived t0 time ago, what is the distribution of T, where T is the time duration between now and the next arrival event?
Exponentially distributed with parameter λ? Reason: memoryless!
[Timeline: last arrival at now − t0; next arrival after “now”]
Number of arrivals in a short period of time
Number of arrival events in a very short period h:
P{N(t+h) − N(t) = 1} = λh + o(h)
P{N(t+h) − N(t) ≥ 2} = o(h)
o(·): small-o notation. A function f(·) is said to be o(h) if lim(h→0) f(h)/h = 0
Summary
Poisson process:
– Independent increments
– # of arrivals: Poisson distributed
– # of arrivals in a small period of time h: 1 arrival with probability λh
– Inter-arrival time distribution: exponential distribution
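The summary's two views of the same process can be connected in code: build a Poisson process from exponential inter-arrival gaps and check that the count in [0, t] has mean λt (our sketch, with hypothetical λ = 4, t = 2):

```python
import random

# Building a Poisson process from exponential inter-arrival times and
# checking that the count in [0, t] has mean close to lam * t.
random.seed(0)
lam, t, runs = 4.0, 2.0, 20000

counts = []
for _ in range(runs):
    clock, n = 0.0, 0
    while True:
        clock += random.expovariate(lam)  # exponential gap to next arrival
        if clock > t:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / runs
print(mean)  # close to lam * t = 8
```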
M/M/1 Queue
› M/M/1 Queue
Queues: Kendall’s notation
› Notations Used in Queueing Systems
› X/Y/Z
› – X refers to the distribution of the interarrival times
› – Y refers to the distribution of service times
› – Z refers to the number of servers
› Common distributions:
› – M = Memoryless = exponential distribution
› – D = Deterministic arrivals or fixed-length service
› – G = General distribution of interarrival times or service times
› M/M/1 refers to a single-server queuing model with exponential interarrival times (i.e., Poisson arrivals) and exponential service times.
› In all cases, successive interarrival times and service times are assumed to be statistically independent of each other.
M/M/1 Queue
› Arrival:
›Poisson arrival with rate λ
› Service:
›Service time: exponential distribution with mean 1/μ
›μ: service rate
›λ < μ: incoming rate < outgoing rate
Markov Chain Formulation
δ: a small value
Nk = Number of customers in the system at time kδ
N0 N1 N2... is a Markov Chain!
Q: How to compute the transition probability?
Markov Chain Formulation
P(0 customers arrive) = 1 − λδ + o(δ)
P(1 customer arrives) = λδ + o(δ)
P(≥ 2 customers arrive) = o(δ)
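These expansions come from the Poisson distribution: P(0 arrivals in δ) = e^(−λδ), and the gap between that and 1 − λδ is o(δ). A quick numeric check (our sketch, hypothetical λ = 2):

```python
import math

# For Poisson arrivals, P(0 arrivals in delta) = exp(-lam*delta).
# For small delta this is 1 - lam*delta + o(delta): the error, divided
# by delta, shrinks to zero as delta shrinks.
lam = 2.0
for delta in [0.1, 0.01, 0.001]:
    exact = math.exp(-lam * delta)
    approx = 1 - lam * delta
    print(delta, exact, approx, (exact - approx) / delta)
# The last column shrinks toward 0, i.e. the error is o(delta).
```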
Markov Chain Formulation
P(0 customers leave) = 1 − μδ + o(δ)
P(1 customer leaves) = μδ + o(δ) if i ≥ 1; = 0 if i = 0 (no one in the system)
P(≥ 2 customers leave) = o(δ)
Markov Chain Formulation
Aim: compute Pij = P{Nk+1 = j | Nk = i}
For example, P{Nk+1 = i | Nk = i}, i > 0:
P(0 arrive)·P(0 depart) + P(1 arrives)·P(1 departs) + P(other)
= [1 − λδ + o(δ)][1 − μδ + o(δ)] + [λδ + o(δ)][μδ + o(δ)] + o(δ)
= [1 − λδ − μδ + o(δ)] + o(δ) + o(δ)
Result: 1 − λδ − μδ + o(δ)
Markov Chain Formulation
Result:
πi: stationary probability of state i, i.e., the probability that there are i units in the system
Stationary Distribution Derivation
How to derive πi: the balance equation (incoming = outgoing) must be satisfied. At state 2:
λδπ2 + μδπ2 = λδπ1 + μδπ3
Stationary Distribution Derivation
How to derive πi: the balance equation is written at each state.
State 0: λδπ0 = μδπ1
State 1: λδπ1 + μδπ1 = λδπ0 + μδπ2 ⇒ λδπ1 = μδπ2
Stationary Distribution Derivation
How to derive πi: balance at state 2:
λδπ2 + μδπ2 = λδπ1 + μδπ3 ⇒ λδπ2 = μδπ3
Stationary Distribution Derivation
How to derive πi: in general, for any i:
λδπi = μδπi+1
Stationary Distribution Derivation
How to derive πi: from the balance equations,
π1 = (λ/μ) π0
π2 = (λ/μ)² π0
…
πi = (λ/μ)ⁱ π0
with normalization Σ(i=0..∞) πi = 1 (sum of a geometric sequence)
Stationary Distribution Derivation
With ρ = λ/μ < 1:
π1 = ρ π0
π2 = ρ² π0
…
πi = ρⁱ π0
Σ(i=0..∞) πi = 1 (sum of a geometric sequence)
Stationary Distribution Derivation
Normalization: Σ(i=0..∞) π0 ρⁱ = 1. Sum of the geometric sequence:
lim(N→∞) π0 (1 − ρᴺ)/(1 − ρ) = π0/(1 − ρ) = 1
⇒ π0 = 1 − ρ
⇒ πi = (1 − ρ) ρⁱ
Average number of user
Average number of users in the system:
E(N) = Σ(n=0..∞) n (1 − ρ) ρⁿ
= ρ(1 − ρ) Σ(n=0..∞) n ρⁿ⁻¹
= ρ(1 − ρ) ∂/∂ρ [Σ(n=0..∞) ρⁿ]
= ρ(1 − ρ) ∂/∂ρ [1/(1 − ρ)]
= ρ(1 − ρ)/(1 − ρ)²
= ρ/(1 − ρ)
Average waiting time
Average time in system, by Little’s Theorem:
E(T) = E(N)/λ = 1/(μ − λ)
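The closed forms above can be cross-checked numerically (our sketch, with hypothetical λ = 3, μ = 5):

```python
# Numerical cross-check of the M/M/1 results: with rho = lam/mu,
# pi_i = (1 - rho) * rho**i, E(N) = rho/(1 - rho), and by Little's
# theorem E(T) = E(N)/lam = 1/(mu - lam).
lam, mu = 3.0, 5.0
rho = lam / mu

pi = [(1 - rho) * rho**i for i in range(2000)]   # truncate the geometric tail
EN = sum(i * p for i, p in enumerate(pi))

print(sum(pi))                     # ≈ 1: the pi_i form a distribution
print(EN, rho / (1 - rho))         # both 1.5
print(EN / lam, 1 / (mu - lam))    # both 0.5
```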
Stationary Distribution Derivation
balance equation is performed at each state
λπ0 =μπ1
λπ1 +μπ1 =λπ0 +μπ2
...
Stationary Distribution Derivation
Balance equation is performed at each state. Following the same steps, we derive the same result: the δ’s cancel, so one can work directly with the rates λ and μ.
Stationary Distribution Derivation
Queueing delay goes to infinity as the arrival rate approaches the service rate!
M/M/m Queue
M/M/m Queue
› Arrival:
›Poisson arrival with rate λ
› Service:
›Service time for one server: exponential distribution with mean 1/μ
›service rate is iμ if there are i ≤ m users in the system, and mμ if there are more than m
M/M/m Queue
Balance equations:
λπi−1 = iμπi, for i ≤ m
λπi−1 = mμπi, for i > m
Then, π0 can be solved from Σ πn = 1, and
πn = π0 (mρ)ⁿ / n!, for n ≤ m
πn = π0 mᵐ ρⁿ / m!, for n > m
where ρ = λ/(mμ) < 1
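These formulas can be evaluated and sanity-checked in a few lines (our sketch, with hypothetical λ = 4, μ = 1.5, m = 4):

```python
import math

# Stationary distribution of an M/M/m queue from the formulas above:
# pi_n = pi_0 * (m*rho)**n / n!     for n <= m
# pi_n = pi_0 * m**m * rho**n / m!  for n >  m,   rho = lam/(m*mu) < 1
lam, mu, m = 4.0, 1.5, 4
rho = lam / (m * mu)

def unnormalised(n):
    if n <= m:
        return (m * rho) ** n / math.factorial(n)
    return m ** m * rho ** n / math.factorial(m)

# Solve for pi_0 from sum(pi_n) = 1; the n > m tail is geometric in rho.
total = sum(unnormalised(n) for n in range(1000))  # rho < 1, so tail vanishes
pi0 = 1 / total
pi = [pi0 * unnormalised(n) for n in range(1000)]
print(pi0, sum(pi))  # sum ≈ 1
```

Note the two formulas agree at n = m, since (mρ)ᵐ/m! = mᵐρᵐ/m!, so the piecewise definition is consistent.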
M/M/∞ Queue
M/M/m/m Queue
M/M/m/n Queue
Arrivals are dropped if there are n users in the system.
Buffer size is n − m.
How do you derive its stationary distribution?
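All the queues in this deck are birth-death chains: the only transitions are i → i+1 (arrival) and i → i−1 (departure), so the balance equations always reduce to πi = πi−1 · (arrival rate out of i−1)/(departure rate out of i). A generic numeric solver (our sketch; the λ = 2, μ = 1, m = 2, n = 5 numbers are hypothetical) can cross-check any hand derivation:

```python
# Generic birth-death stationary-distribution solver. M/M/1, M/M/m,
# M/M/inf and M/M/m/n are all special cases of this pattern.
def birth_death_pi(arrival, service, num_states):
    """arrival(i): upward rate out of state i; service(i): downward rate out of i."""
    w = [1.0]                                  # unnormalised weights
    for i in range(1, num_states):
        w.append(w[-1] * arrival(i - 1) / service(i))
    total = sum(w)
    return [x / total for x in w]

# Example: M/M/m/n with lam = 2, mu = 1, m = 2 servers, room for n = 5 users.
lam, mu, m, n = 2.0, 1.0, 2, 5
pi = birth_death_pi(lambda i: lam,            # arrivals (blocked once at state n)
                    lambda i: min(i, m) * mu, # i busy servers, capped at m
                    n + 1)                    # finite state space 0..n
print(pi)   # pi[n] is the probability an arriving user is dropped (PASTA)
```

With these numbers the weights come out as [1, 2, 2, 2, 2, 2], so π = [1/11, 2/11, 2/11, 2/11, 2/11, 2/11] and the drop probability is 2/11; your hand derivation for the practice questions should match this pattern.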
Practice
› Analyze M/M/∞ and M/M/m/n queues
- Draw the state transition diagrams
- Derive their stationary distributions
- For M/M/m/n queue, calculate the probability that an incoming user is dropped. Calculate the probability that the queue is empty (i.e., all users are served in the servers or there are no users at all.)