Computer Systems
Process Scheduling
Dr. Mian M. Hamayun
m.m. .uk
Credits to:
Dr.
Slide #2 of 39
Lecture Outline
Basic Concepts of CPU Scheduling
Scheduling Criteria
Scheduling Algorithms
FCFS – First Come First Serve
SJF – Shortest Job First
SRTF – Shortest Remaining Time First
Priority and Round-Robin Scheduling
Slide #3 of 39
Basic Concepts
In a single-processor system, only one process can run
at a time
Others must wait until the CPU is free and can be
rescheduled
Objective of multi-programming is to have some
process running at all times
Maximize CPU utilization
This is a fundamental OS function
Slide #4 of 39
Scheduling
CPU Scheduling
CPU scheduling is the activity of selecting the next process
from the Ready queue for execution on a processor – it decides
which process to execute next
Slide #5 of 39
When does Scheduling happen?
A scheduling decision takes place when an event occurs
that interrupts the execution of a process
Possible events
Clock Interrupts (i.e. process running → ready state)
I/O Interrupts (i.e. process waiting → ready state)
Operating System Calls (e.g. read, write; process running → waiting state)
Signals (e.g. Semaphores)
Slide #6 of 39
Characteristics of Process Execution
Batch Processing: CPU Bound
Interactive Systems: I/O Bound
Slide #7 of 39
CPU-I/O Burst Cycle
Process execution consists of a
cycle of CPU execution and I/O wait
Processes alternate between these
two states
Process execution begins with a CPU
burst, which is followed by an I/O
burst. This pattern repeats!
Scheduler can schedule another
process during I/O burst.
Eventually final CPU burst ends with
a system request to terminate
execution
Slide #8 of 39
Histogram of CPU Bursts
The histogram shows that most processes have many very short
CPU bursts and only a few long ones, e.g. interactive (I/O-bound) systems
Slide #9 of 39
CPU Scheduler
Recall: whenever the CPU becomes idle, the OS must
select one of the processes in the ready queue to be
executed next
This is carried out by the short-term scheduler from the
processes in the ready queue
The queue is not necessarily first-in, first-out (FIFO)
It may be a priority queue, tree or an unordered linked list
Conceptually, all processes are waiting for a chance to
run on the CPU
Slide #10 of 39
Dispatcher
The dispatcher is the module that gives control of the
CPU to the process selected by the short-term
scheduler. This involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that
program
Dispatcher should be as fast as possible
It is invoked during every process switch
Time taken by dispatcher to stop one process and start another
one is known as dispatch latency.
Slide #11 of 39
Levels of Scheduling
Slide #12 of 39
Non-preemptive Scheduling
Once a process is scheduled, it continues to execute on
the CPU, until:
it is finished (terminates)
it releases the CPU voluntarily (cooperative scheduling)
it blocks due to an event, such as an I/O request or waiting for
another process, etc.
Slide #13 of 39
Preemptive Scheduling
The operating system interrupts (intervenes) processes
A scheduled process executes, until its time slice is used up
Clock interrupt returns control of CPU to scheduler (time slice expired)
► Current process is suspended and placed in the Ready queue
► New process is selected from Ready queue and executed on CPU
Preemption also occurs when a process with higher priority becomes ready
Slide #14 of 39
Scheduling Criteria
Different CPU-scheduling algorithms have different properties. One
may favour one class of processes over another.
Many criteria suggested, including
CPU Utilization – keep the CPU as busy as possible; conceptually 0-100%, but
realistically 40-90% in a real system
Throughput – Number of processes that complete their execution per time unit
Turnaround time – Total amount of time to execute one process to its
completion.
Waiting time – Total amount of time a process has been waiting in the Ready
queue
Response time – The time it takes from when a request was submitted until the
first response is produced by the operating system (latency).
Others include meeting deadlines, predictability, fairness, balance and enforcing
priorities. A small sketch relating the timing criteria follows.
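The timing criteria above are related. A minimal Python sketch, assuming a process with a single CPU burst and no I/O (the numbers are illustrative, not taken from a specific slide):

    # Minimal sketch: how the timing criteria relate for one process with a
    # single CPU burst and no I/O (illustrative values, not from the slides).
    arrival, burst, completion = 0, 24, 24   # times in milliseconds
    turnaround = completion - arrival        # submission until completion
    waiting = turnaround - burst             # time spent only in the Ready queue
    print(turnaround, waiting)               # 24 0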
Slide #15 of 39
Optimization Criteria
Maximize (operating system concerns)
CPU Utilization
Throughput
Minimize (user concerns)
Turnaround time
Waiting time
Response time
Slide #16 of 39
First Come, First Served Scheduling (FCFS)
Simplest CPU-scheduling algorithm
The process that requests the CPU first is allocated the
CPU first
Managed with a FIFO queue: when a process enters the ready
queue, its PCB is linked onto the tail of the queue
When CPU free, it is allocated to the process at the
head of the queue
Running process then removed from queue
Slide #17 of 39
FCFS Example
Process    Burst Time
P1         24
P2         3
P3         3
Suppose the processes arrive in the order: P1, P2, P3
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17ms
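A minimal Python sketch of this calculation, assuming all three processes arrive at time 0; the function name is illustrative:

    # Minimal FCFS sketch: processes are served strictly in arrival order.
    bursts = {"P1": 24, "P2": 3, "P3": 3}          # burst times from the example (ms)

    def fcfs_waiting(order, bursts):
        """Waiting time of each process = sum of the bursts served before it."""
        waits, elapsed = {}, 0
        for name in order:
            waits[name] = elapsed
            elapsed += bursts[name]
        return waits

    w = fcfs_waiting(["P1", "P2", "P3"], bursts)
    print(w, sum(w.values()) / len(w))             # {'P1': 0, 'P2': 24, 'P3': 27} 17.0

Calling fcfs_waiting(["P2", "P3", "P1"], bursts) reproduces the reordered example on the next slide (average 3 ms).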
Slide #18 of 39
FCFS Example – 2
Process    Burst Time
P1         24
P2         3
P3         3
Suppose now that the processes arrive in the order: P2, P3, P1
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3ms
Better, but this shows that waiting time under an FCFS
policy may vary substantially
Slide #19 of 39
Shortest-Job First (SJF) Scheduling
Associate with each process the length of the process’s
next CPU burst
When the CPU is available it is assigned to the process
that has the smallest next CPU burst
If next CPU bursts of two processes are the same, FCFS is used
to break the tie
A better name is the shortest-next-CPU-burst algorithm –
scheduling depends on the length of the next CPU burst
of a process, rather than its total length
Slide #20 of 39
SJF Example
Process    Burst Time
P1         6
P2         8
P3         7
P4         3
SJF scheduling chart: P4, P1, P3, P2
Average waiting time: (3 + 16 + 9 + 0)/4 = 7ms
In comparison, FCFS would be 10.25ms
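A minimal Python sketch of this schedule, assuming all four processes arrive at time 0 (non-preemptive SJF then simply serves them in order of burst length):

    # Minimal non-preemptive SJF sketch: with all arrivals at time 0, the
    # schedule is just the processes sorted by burst time.
    bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}  # burst times from the example (ms)

    waits, elapsed = {}, 0
    for name in sorted(bursts, key=bursts.get):    # P4, P1, P3, P2
        waits[name] = elapsed
        elapsed += bursts[name]

    print(waits)                                   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
    print(sum(waits.values()) / len(waits))        # 7.0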
Slide #21 of 39
Optimality of SJF
Provably optimal – it gives the minimum average waiting
time for a given set of processes
Moving a short process before a long one decreases
the waiting time of the short process more than it
increases the waiting time of the long process – hence
average time decreases
Slide #22 of 39
Difficulty of SJF
Determining the length of the next CPU request is
difficult!
Could ask the user (if they know)?
For this reason, SJF is difficult to implement at the level
of short-term CPU scheduling
There is no way to know the length of the next CPU
burst
Approximation is a possible solution
Slide #23 of 39
Determining the Length of the next CPU burst
Exponential averaging of the measured lengths of
previous CPU bursts:
τ(n+1) = α · t(n) + (1 − α) · τ(n)
where t(n) is the measured length of the nth CPU burst, τ(n+1) is the
predicted value for the next burst, and 0 ≤ α ≤ 1
Commonly, α is set to 0.5
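A minimal Python sketch of the predictor with α = 0.5; the initial guess τ0 and the burst history are assumptions for illustration:

    # Minimal exponential-averaging sketch: tau_next = alpha*t + (1 - alpha)*tau.
    def predict_next(measured_bursts, tau0=10.0, alpha=0.5):
        """Fold each measured CPU burst t into the running prediction tau."""
        tau = tau0
        for t in measured_bursts:
            tau = alpha * t + (1 - alpha) * tau
        return tau

    # Illustrative burst history (ms); tau0 = 10 is an assumed initial guess.
    print(predict_next([6, 4, 6, 4, 13, 13, 13]))  # 12.0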
Slide #24 of 39
Prediction of the Length of the next CPU Burst
Slide #25 of 39
Shortest Remaining Time First Scheduling
SJF can be either preemptive or non-preemptive
The choice arises when a new process arrives at the ready
queue while a previous process is still executing
Next CPU burst of the newly arrived process may be shorter
than what is left of the currently executing process
Preemptive SJF will preempt the currently executing process
Nonpreemptive will allow the currently running process to
finish its CPU burst
Preemptive SJF scheduling called Shortest-Remaining-
Time-First (SRTF) scheduling
Slide #26 of 39
SRTF Scheduling – Example
Preemptive SJF:
Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
Resulting schedule: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
Average waiting time = ((10-1) + (1-1) + (17-2) + (5-3))/4
= 26/4 = 6.5ms
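A minimal Python sketch of SRTF for this example, stepping the clock 1 ms at a time and always running the arrived process with the least remaining time:

    # Minimal SRTF (preemptive SJF) sketch: step the clock 1 ms at a time and
    # always run the arrived process with the least remaining time.
    procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)
    remaining = {p: burst for p, (arrival, burst) in procs.items()}
    finish, t = {}, 0

    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        p = min(ready, key=lambda q: remaining[q])   # shortest remaining time first
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]

    waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
    print(waits, sum(waits.values()) / len(waits))   # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5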
Slide #27 of 39
Priority Scheduling
SJF is a special case of the general priority-scheduling
algorithm
Priority associated with each process
CPU allocated to process with highest priority
Equal priority scheduled using FCFS
Slide #28 of 39
Priority Scheduling
Low vs. high priority
Priorities indicated by fixed range of numbers
Low numbers = high priority
Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2
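A minimal Python sketch for this table, assuming all processes arrive at time 0, non-preemptive scheduling, and low number = high priority; under these assumptions the sketch yields an average waiting time of 8.2 ms:

    # Minimal non-preemptive priority sketch: all arrive at time 0, so the
    # schedule is the processes sorted by priority (low number = high priority).
    table = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}  # name: (burst, priority)

    waits, elapsed = {}, 0
    for name in sorted(table, key=lambda p: table[p][1]):   # P2, P5, P1, P3, P4
        waits[name] = elapsed
        elapsed += table[name][0]

    print(waits)                                 # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
    print(sum(waits.values()) / len(waits))      # 8.2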
Slide #29 of 39
Priority Scheduling
Priorities can be defined internally or externally
Internal criteria = measurable quantity such as time
limits, memory requirements, number of open files
External criteria = Outside OS such as process
importance, type and amount of funds paid for
computation, department sponsoring the work, political
factors
Slide #30 of 39
Priority Scheduling – Preemption
Preemptive vs. Non-preemptive
Preemptive will preempt the CPU if the priority of the
newly arrived process in the ready queue is higher than
the priority of the currently running process
Non-preemptive priority scheduling algorithm will put the
new process at the head of the ready queue
Slide #31 of 39
Priority Scheduling – Problems
Indefinite blocking or starvation
A process that is ready to run but waiting for the CPU can be considered blocked
Low priority processes can be blocked indefinitely
A heavily loaded computer system with a steady stream of
high priority processes can prevent a low-priority process
from ever getting the CPU
Two things will probably happen:
Process will eventually run (at 2 A.M Sunday)
Computer will eventually crash and lose these unfinished, low priority
processes
Solution: importance increases with age (aging)
Increase importance by a point every 15 minutes
Slide #32 of 39
Round-Robin (RR) Scheduling
Specially designed for time-sharing systems
Similar to FCFS, but with preemption to enable the
switching of processes
Small unit of time used: time quantum or time slice
10 to 100 milliseconds in length
Ready queue is circular
CPU scheduler goes around the ready queue, allocating the CPU
to each process for a time interval of up to 1 time
quantum
Slide #33 of 39
Round-Robin (RR) Scheduling
Implementation treats ready queue as a FIFO queue of
processes
CPU scheduler picks first process from ready queue,
sets a timer to interrupt after 1 time quantum, and
dispatches the process
Two things could happen:
Process may have CPU burst of less than 1 quantum, so process
will release CPU voluntarily
If the process's CPU burst is longer than 1 time quantum, the timer goes off
and causes an interrupt to the OS, resulting in a context switch;
the process then goes to the tail of the ready queue
Slide #34 of 39
Round-Robin (RR) Scheduling – Example
Time quantum = 4 milliseconds
Process    Burst Time
P1         24
P2         3
P3         3
Resulting RR schedule: P1 (0-4), P2 (4-7), P3 (7-10), then P1 runs
alone for the remaining quanta (10-30)
Average waiting time = ((10 – 4) + 4 + 7)/3
= 17/3 = 5.66ms
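A minimal Python sketch of this Round-Robin schedule, assuming all three processes arrive at time 0 and using a deque as the FIFO ready queue:

    # Minimal Round-Robin sketch: quantum q = 4 ms, all processes arrive at time 0.
    from collections import deque

    q = 4
    bursts = {"P1": 24, "P2": 3, "P3": 3}              # burst times from the example (ms)
    remaining = dict(bursts)
    queue = deque(bursts)                              # FIFO ready queue: P1, P2, P3
    finish, t = {}, 0

    while queue:
        p = queue.popleft()
        run = min(q, remaining[p])                     # one quantum, or less if the burst ends
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t
        else:
            queue.append(p)                            # preempted: rejoin at the tail

    waits = {p: finish[p] - bursts[p] for p in bursts} # waiting = completion - burst (arrival = 0)
    print(waits, sum(waits.values()) / len(waits))     # {'P1': 6, 'P2': 4, 'P3': 7} ≈ 5.67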
Slide #35 of 39
Round-Robin (RR) Scheduling
If there are n processes in the ready queue and the time
quantum is q, each process gets 1/n of the CPU time in
chunks of at most q time units
Each process must wait no longer than (n – 1) x q time
units until its next time quantum
Example:
5 processes
q = 20 milliseconds
Each process gets up to 20 milliseconds every 100 milliseconds
Slide #36 of 39
RR Scheduling Performance
Depends heavily on size of time quantum
If extremely large, RR is same as FCFS policy
If extremely small, RR can result in a large number of context
switches
Slide #37 of 39
RR – Time Quantum vs. Context Switch Time
Ideally, time quantum should be large with respect to the
context-switch time
If context-switch is approximately 10% of time quantum,
then ~10% of CPU time will be spent context-switching
In reality, most modern systems have time quanta
ranging from 10 – 100 milliseconds
In comparison, context-switch is typically less than 10
microseconds
Slide #38 of 39
Summary
Basics of CPU Scheduling
CPU-I/O Burst Cycle
Preemption
Dispatcher
Scheduling Criteria
Scheduling Algorithms
FCFS, SJF, SRTF, Priority, Round-Robin
Slide #39 of 39
References / Links
Chapter 6: CPU Scheduling, Operating System Concepts
(9th edition) by Silberschatz, Galvin & Gagne