
ECE391 - Computer Systems Engineering

Lecture 17


Scheduling

University of Illinois at Urbana-Champaign

Announcements

– Checkpoint 2: 5:59PM on Tuesday 10/26
– CP2 Demo: 10/26 Tuesday, 6:30pm: all teams

– 10/28/2021, 7-9 PM, ECEB 1002
– Conflicts by 10/22/2021
– Review Session in class 10/26/2021
– No Class 10/28/2021

SCHEDULING
Basic Philosophy

What is scheduling?

Role of Dispatcher vs. Scheduler

• Dispatching (Last Time)
– Low-level mechanism
– Responsibility: context switch

• Save previous process state in PCB
• Load next process state from PCB to registers
• Change scheduling state of process (running, ready, or blocked)
• Migrate processes between different scheduling queues
• Switch from kernel to user mode

• Scheduler (Today)
– High-level policy
– Responsibility: Deciding which process to run

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

What kinds of things must be scheduled?

Scheduling Philosophy

• Interactive
–examples: editors, GUIs
–driven by human interaction (e.g., keystrokes, mouse clicks)
– little or no work to do after each event
–but important to respond quickly

Scheduling Philosophy

• Batch
–examples: compilation, simulation
–usually only time to completion matters
–want a fair share of CPU computation

Scheduling Philosophy (cont.)

• Real-time
–examples: music, video, teleconferencing
–periodic deadlines (e.g., 30 frames per second of video)
–work often only useful if finished on time

• alternative taxonomy: I/O-bound vs. compute-bound
• Goal of scheduling:
efficient, fair, and responsive (all at once!)

How do we evaluate scheduling algorithms?
Goal of Scheduling

• Efficient
• Fair
• Responsive
• (And a bag of chips[1])

[1] Adj. something or someone which possesses all desired qualities PLUS unimagined or unforeseen bonuses.

SCHEDULING
Algorithms and Designs

6.4 Scheduling Basics

• Schedulers come in two basic flavors
– Preemptive
– Non-preemptive

• Basic scheduler steps
1. Grab the attention of the processor.
2. Save the state of the currently running process.
3. Select a new process to run.
4. Dispatch the newly selected process to run on the processor.

https://www.cc.gatech.edu/~rama/CS2200-External/ppt-slides/Chapter06.pptx

6.6 Non-preemptive Scheduling Algorithms

• Non-preemptive means that once a process is running, it will continue to do so until it relinquishes control of the CPU: because it terminates, voluntarily yields the CPU to some other process (waits), or requests some service from the operating system.

https://www.cc.gatech.edu/~rama/CS2200-External/ppt-slides/Chapter06.pptx

6.7 Preemptive Scheduling Algorithms

• Two simultaneous implications:
– Scheduler is able to assume control of the processor at any time, unbeknownst to the currently running process.
– Scheduler is able to save the state of the currently running process for proper resumption from the point of preemption.

• Any of the Non-preemptive algorithms can be
made Preemptive

https://www.cc.gatech.edu/~rama/CS2200-External/ppt-slides/Chapter06.pptx

First-Come, First-Served (FCFS)

• Simplest CPU scheduling algorithm
– First job that requests the CPU gets the CPU
– Nonpreemptive

• Advantage: simple implementation with FIFO

• Disadvantage: waiting time depends on arrival order
– short jobs that arrive later have to wait a long time

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Example of FCFS

Process  Burst Time
P1       24
P2       3
P3       3

• Suppose that the processes arrive in the order: P1 , P2 , P3
• The Gantt chart for the schedule is:

P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
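The waiting times above follow directly from the arrival order. Below is a minimal C sketch (not from the slides) that reproduces the calculation, assuming all three processes arrive at time 0 with the burst times from the example:

#include <stdio.h>

/* Hedged sketch: FCFS waiting times for the example above (P1=24, P2=3, P3=3). */
int main(void)
{
    int burst[] = {24, 3, 3};     /* bursts in arrival order P1, P2, P3 */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;       /* each job waits for all earlier bursts */
        wait += burst[i];
    }
    printf("average waiting time = %.1f\n", (double)total_wait / n);  /* prints 17.0 */
    return 0;
}

Swapping the array to {3, 3, 24} reproduces the average of 3 from the next slide, which is the convoy effect in miniature.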

Example of FCFS (Cont.)
Suppose that the processes arrive in the order

P2 , P3 , P1
• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect: short process stuck waiting for long process

– Also called head of the line blocking

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Shortest Job First (SJF)
• Schedule the process with the shortest burst time
– FCFS order if burst times are equal

• Advantage: minimize average wait time
– provably optimal
– Moving a shorter job before a longer job decreases the waiting time of the short job more than it increases the waiting time of the long job
– Reduces average wait time

• Disadvantage
– Not practical: difficult to predict burst time

• Possible: past predicts future (e.g., exponential averaging; see the sketch below)
– May starve long jobs

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
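One standard way to make "past predicts future" concrete is exponential averaging of measured CPU bursts (a textbook technique, not something stated on this slide): the next estimate is tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). A minimal C sketch, with an illustrative alpha = 0.5:

/* Hedged sketch: exponential-averaging burst predictor (illustrative names). */
double predict_next_burst(double prev_estimate, double measured_burst)
{
    const double alpha = 0.5;  /* weight on the most recent measured burst */
    return alpha * measured_burst + (1.0 - alpha) * prev_estimate;
}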

Shortest Remaining Time First (SRTF)

• SRTF == SJF with preemption
– New process arrives with a shorter CPU burst than the time remaining for the current process
– SJF without preemption: let current process run
– SRTF: preempt current process
– Reduces average wait time

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Example of Nonpreemptive SJF

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

• SJF (nonpreemptive)

P1 P3 P2 P4
0 7 8 12 16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Example of Preemptive SJF (SRTF)
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

• SJF (preemptive)

P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
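A hedged C sketch (not from the slides) that simulates SRTF one time unit at a time, using the arrival and burst times shown above, and reproduces the average waiting time of 3:

#include <stdio.h>
#include <limits.h>

#define N 4

int main(void)
{
    int arrival[N] = {0, 2, 4, 5};     /* values from the example table above */
    int burst[N]   = {7, 4, 1, 4};
    int left[N], finish[N];
    int done = 0, t = 0, total_wait = 0;

    for (int i = 0; i < N; i++) left[i] = burst[i];
    while (done < N) {
        int p = -1, best = INT_MAX;
        for (int i = 0; i < N; i++)    /* pick the arrived job with least remaining time */
            if (arrival[i] <= t && left[i] > 0 && left[i] < best) {
                best = left[i];
                p = i;
            }
        if (p < 0) { t++; continue; }  /* nothing runnable yet */
        left[p]--;                     /* run it for one time unit */
        t++;
        if (left[p] == 0) { finish[p] = t; done++; }
    }
    for (int i = 0; i < N; i++)
        total_wait += finish[i] - arrival[i] - burst[i];
    printf("average waiting time = %.2f\n", (double)total_wait / N);  /* prints 3.00 */
    return 0;
}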

Round-Robin (RR)

• Practical approach to support time-sharing
– Run process for a time slice, then move it to the back of the FIFO queue
– Preempted if still running at the end of the time slice

• Advantages
– Fair allocation of CPU across processes
– Low average waiting time when job lengths vary

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Example of RR with Time Slice = 20

Process Arrival Time Burst Time
P1 0 400
P2 20 60
P3 20 60

• The Gantt chart is:

P1 P2 P3 P1 P2 P3 P1 P2 P3 P1 ...
0 20 40 60 80 100 120 140 160 180 200

(P1 continues alone after t = 200 and finishes at t = 520)

• Average waiting time: (120 + 80 + 100)/3 = 100
• Compare to FCFS and SJF:

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
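A small C sketch (not from the slides) that simulates the round-robin schedule above and reproduces the per-process waiting times (120, 80, 100) and the average of 100; the queueing convention — arrivals are enqueued before the preempted process — is an assumption chosen to match the Gantt chart:

#include <stdio.h>

#define N 3

int main(void)
{
    int arrival[N] = {0, 20, 20};          /* example from the slide */
    int burst[N]   = {400, 60, 60};
    int slice      = 20;

    int left[N], finish[N];
    int queue[64], head = 0, tail = 0;     /* simple FIFO of process indices */
    int t = 0, arrived = 0, total_wait = 0;

    for (int i = 0; i < N; i++) left[i] = burst[i];
    while (arrived < N && arrival[arrived] <= t)
        queue[tail++] = arrived++;

    while (head < tail) {
        int p = queue[head++];
        int run = left[p] < slice ? left[p] : slice;
        t += run;
        left[p] -= run;
        while (arrived < N && arrival[arrived] <= t)   /* arrivals go ahead of p */
            queue[tail++] = arrived++;
        if (left[p] > 0) queue[tail++] = p;            /* back of the queue */
        else finish[p] = t;
    }
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        total_wait += wait;
        printf("P%d waiting time = %d\n", i + 1, wait);
    }
    printf("average = %.1f\n", (double)total_wait / N);   /* prints 100.0 */
    return 0;
}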

Disadvantage of Round-Robin

• Poor average waiting time when jobs have similar lengths
– Imagine N jobs each requiring T time slices
– RR: all complete roughly around time N*T
– Average waiting time is even worse than FCFS!

• Performance depends on length of time slice
– Too high → degenerates to FCFS
– Too low → too many context switches, costly
– How to set the time-slice length?

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Time slice and Context Switch Time

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
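A rough worked example (the numbers are illustrative, not from the slides): if a context switch costs about 0.1 ms, then with a 10 ms time slice roughly 0.1 / (10 + 0.1) ≈ 1% of CPU time goes to switching, while with a 1 ms slice the overhead grows to about 0.1 / (1 + 0.1) ≈ 9%. The usual compromise is a slice that is large relative to the switch cost but small enough that interactive response stays acceptable.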

Priorities
• A priority is associated with each process

– Run highest priority ready job (some may be blocked)
– Round-robin among processes of equal priority
– Can be preemptive or nonpreemptive

• Representing priorities
– Typically an integer
– Does a larger number mean higher or lower priority?

• Solaris: 0-59, 59 is the highest
• Linux: 0-139, 0 is the highest

• Question: what data structure should be used for priority scheduling? One queue for all processes? (see the sketch below)

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
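One common answer to the question above — and, as the later Linux slides show, roughly the approach the 2.6 scheduler takes — is one FIFO list per priority level plus a bitmap of non-empty levels, rather than a single queue. A hedged C sketch with illustrative names (not the kernel's actual types):

#include <stdint.h>
#include <stddef.h>

#define NPRIO 140                              /* e.g., 100 real-time + 40 regular levels */

struct task {
    int prio;                                  /* 0 = highest priority */
    struct task *next;                         /* link within its priority's FIFO list */
};

struct ready_queues {
    uint64_t bitmap[(NPRIO + 63) / 64];        /* bit i set <=> list i is non-empty */
    struct task *head[NPRIO], *tail[NPRIO];    /* one FIFO list per priority */
};

/* Append a runnable task to the list for its priority and mark the level non-empty. */
static void enqueue_task(struct ready_queues *rq, struct task *t)
{
    t->next = NULL;
    if (rq->tail[t->prio])
        rq->tail[t->prio]->next = t;
    else
        rq->head[t->prio] = t;
    rq->tail[t->prio] = t;
    rq->bitmap[t->prio / 64] |= (uint64_t)1 << (t->prio % 64);
}

Selecting the next task is then a scan of the bitmap for the first set bit (the highest-priority non-empty list) followed by dequeuing the head of that list; a sketch of that step appears with the schedule() discussion later.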

Setting Priorities

• Priority can be statically assigned
– Some processes always have higher priority than others
– Problem: starvation

• Priority can be dynamically changed by the OS
– Aging: increase the priority of processes that wait in the ready queue for a long time

http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html

Priority Inversion

• High priority process depends on low priority process (e.g., to release a lock)
– What if another process with in-between priority preempts the low-priority process? The high-priority process is then effectively blocked behind the medium-priority one.

• Solution: priority inheritance
– inherit the priority of the highest-priority process waiting for the resource
– Must be able to chain multiple inheritances
– Must ensure that priority reverts to its original value
http://www.cs.columbia.edu/~junfeng/09sp-w4118/syllabus.html
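A hedged C sketch of the priority-inheritance idea (illustrative names, not the Linux API): when a higher-priority task is about to block on a held lock, the lock's owner temporarily runs at the waiter's priority, and the priority reverts when the lock is released.

struct task {
    int prio;         /* current (effective) priority; 0 = highest */
    int base_prio;    /* original priority to revert to */
};

struct lock {
    struct task *owner;    /* NULL if the lock is free */
};

/* Called when `waiter` finds `lk` held and is about to block on it. */
static void inherit_priority(struct lock *lk, struct task *waiter)
{
    if (lk->owner && waiter->prio < lk->owner->prio)
        lk->owner->prio = waiter->prio;   /* owner inherits the waiter's priority */
}

/* Called by the owner when it releases `lk`. */
static void release_priority(struct task *owner)
{
    owner->prio = owner->base_prio;       /* revert to the original value; a real
                                             implementation must recompute this from
                                             remaining waiters and chained inheritances */
}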

SCHEDULING
In Linux

General strategy

• break time into slices
• allow interactive jobs to preempt the current job based on interrupts
– typically, a job becomes runnable when an interrupt occurs (e.g., a key is pressed)
– Linux checks for rescheduling after each interrupt, system call, and exception
• Linux 2.4 does NOT preempt code running in the kernel (code may yield)
• Linux 2.6 DOES preempt code running in the kernel to support real-time

• each task is given a quantum of time (in ticks of 10 milliseconds)
• run until no runnable task has time left
• then start a new epoch

Time broken into epochs

• Real-time jobs
– always given priority over non-real-time jobs
– prioritized amongst themselves

• Static and dynamic priorities used for non-real-time jobs

General Scheduling Algorithm
Used by Linux

• heuristically estimate job interactiveness
• an interactive job
– can continue to run after running out of time
– takes turns with other interactive jobs

• philosophy is that interactive jobs don't usually use up their time slice
• the heuristic ensures that a job can't use lots of CPU and still be "interactive"

Interactive jobs handled with
heuristics

• State field can be one of
– TASK_RUNNING

• task is executing currently or waiting to execute
• task is in a run queue on some processor

– TASK_INTERRUPTIBLE
• task is sleeping on a semaphore/condition/signal
• task is in a wait queue
• can be made runnable by delivery of signal

– TASK_UNINTERRUPTIBLE
• task is busy with something that can’t be stopped
• e.g., a device that will stay in an unrecoverable state without further task interaction
• cannot be made runnable by delivery of signal

Task State (fields of task_struct)

– TASK_STOPPED
• task is stopped
• task is not in a queue; must be woken by signal

– TASK_ZOMBIE
• task has terminated
• task state retained until parent collects exit status information
• task is not in a queue

• The swapper process is always runnable (it's an idle task)
Task State (fields of task_t) (cont.)

Scheduling Data Structures

• Each processor has a run queue (struct runqueue in sched.c)
– each run queue has two priority arrays
– the arrays hold lists of tasks at each priority
– they are double-buffered to implement epochs

(figure: struct runqueue fields, including the number of runnable tasks, the number of context switches, two pointers used to implement the double-buffering of the priority arrays, and a field used for interactive jobs)

• 100 real-time priorities
• 40 regular priorities
• One list per priority, and a bitmap to make finding a non-empty list fast

Priority Array Structure
(struct prio_array in sched.c)

140 doubly-linked lists of tasks
(using task’s run_list field)
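For reference, a simplified sketch of the structure this slide describes; the shape follows the 2.6 O(1) scheduler's struct prio_array in kernel/sched.c, but exact macro names and sizes are approximate, and struct list_head / BITS_PER_LONG are assumed from kernel headers:

#define MAX_PRIO     140                       /* 100 real-time + 40 regular priorities */
#define BITMAP_WORDS ((MAX_PRIO + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct prio_array {
    unsigned int     nr_active;                /* runnable tasks currently in this array */
    unsigned long    bitmap[BITMAP_WORDS];     /* bit set <=> queue[prio] is non-empty */
    struct list_head queue[MAX_PRIO];          /* one doubly-linked list per priority;
                                                  tasks link in through their run_list field */
};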

• Task can change (called a context switch) when
– current task yields by calling schedule (sched_yield at user level)
– current task runs out of time

Rescheduling and Yielding

• Other places where a task may yield implicitly
– semaphores
– wake_up_process
– copy_to/from_user

Rescheduling and Yielding

• At every timer tick (interrupt, generated every 10 milliseconds)
– run a function (scheduler_tick) to reduce the current task's remaining time
– executes with IF = 0
– replace it with another task if appropriate
– the interrupt is IRQ0
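A heavily simplified, hedged sketch of what the tick-time work amounts to (illustrative names; the real logic is in scheduler_tick/task_running_tick, described on the next slides): decrement the running task's remaining slice and flag a reschedule when it reaches zero.

struct task {
    int time_slice;        /* remaining ticks in the current quantum */
    int need_resched;      /* set => schedule() runs on the way back to the task */
};

/* Runs from the timer interrupt (IRQ0) with interrupts off (IF = 0). */
static void on_timer_tick(struct task *current_task)
{
    if (--current_task->time_slice <= 0)
        current_task->need_resched = 1;   /* the real handler also refills the slice
                                             and moves the task to the expired array */
}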

Rescheduling and Scheduler
scheduler_tick and task_running_tick

(kernel/sched.c)
(figure: annotated listing of scheduler_tick and task_running_tick; the annotations — update_cpu_clock tracking nanoseconds since the last tick/context switch, calling task_running_tick for all but the idle task, idle_cpu returning 1 when the CPU is running the idle task, handling already-expired tasks, handling real-time tasks, the run-queue critical section, dequeuing and rescheduling a task whose time slice expires, and breaking long time slices into pieces — are explained on the following slides)

• idle_cpu returns 1 if the CPU is running the idle task

• update_cpu_clock tracks nanoseconds since the last tick/context switch

• For all but the idle task, call task_running_tick

Comments on scheduler_tick function

• If the task has already expired
– make sure that it gets rescheduled on return from the interrupt
– by setting TIF_NEED_RESCHED

• The remainder is a critical section for the run queue

• Next, handle real-time tasks
– real-time round-robin tasks take turns by placing themselves at the end of the list for their priority
– otherwise, real-time tasks just keep rescheduling themselves (until they yield or are preempted by a higher priority task)

Comments on task_running_tick

• If a task's time slice expires
– take it out of the run queue
– mark it as needing to be removed from the processor
– give it a new time slice
– if it's an interactive job, and expired tasks are not being starved by interactive ones, put it back into the run queue (at the end)
– normal tasks go into the expired queue

• Last block
– breaks long time slices into pieces
– round-robin between tasks at the same priority

Comments on task_running_tick

Linux Scheduler

schedule function (kernel/sched.c)

(figure: annotated listing of the first half of schedule — the run-queue critical section begins; a sleep timestamp records the time at which the task leaves the CPU; if the task is trying to go to sleep, check for receipt of a signal; the default case removes the task from the run queue; handle the cases where no tasks are runnable or nothing is left in the active run queue; then find the next task to run by locating the first non-zero bit in the bitmask of the active set and taking the first task on the list indicated by that bit — details on the following slides)

• The schedule() function is called when the processor needs to change to a new task
– time has expired for the current task, or
– the task has voluntarily yielded (by calling this function)

• A sleep timestamp records the time at which the task leaves the CPU

• The remainder is the run queue critical section
• If the task is trying to go to sleep
– check for receipt of a signal
– handles a race condition with sleeping in a wait queue

• The default case removes the task from the run queue

Comments on schedule function

• If no tasks are runnable
– run the idle task
– reset forced expiration of interactive tasks

• If nothing is left in the active run queue
– the epoch is over
– swap the priority arrays
– and reset forced expiration of interactive tasks

• Find the next task to run
– find first bit selects the priority
– then take the first task at that priority (linked list)

Comments on schedule function
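A hedged C sketch of the selection step just described, reusing the prio_array shape from earlier (names and helpers are illustrative, not the kernel's exact code): swap the active and expired arrays when the active one is empty, find the first set bit in the bitmap, and take the first task on that priority's list.

/* Find the lowest-numbered (highest-priority) set bit, or -1 if none. */
static int find_first_set(const unsigned long *bitmap, int nbits)
{
    for (int i = 0; i < nbits; i++)
        if (bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
            return i;
    return -1;
}

static struct task_struct *pick_next(struct runqueue *rq)
{
    if (rq->active->nr_active == 0) {          /* active array empty: epoch is over */
        struct prio_array *tmp = rq->active;   /* swap the double-buffered arrays   */
        rq->active = rq->expired;
        rq->expired = tmp;
    }
    int prio = find_first_set(rq->active->bitmap, MAX_PRIO);
    if (prio < 0)
        return NULL;                           /* nothing runnable: run the idle task */
    /* first task on the doubly-linked list for that priority */
    return list_first_entry(&rq->active->queue[prio], struct task_struct, run_list);
}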

Linux Scheduler (Part2)
schedule function (kernel/sched.c)

(figure: annotated listing of the second half of schedule — the critical section ends; the need_resched flag is cleared for the next time the current task is scheduled; the switch happens only if the newly selected task is not the current one; an architecture-dependent switch function is called)

• This portion of the code implements the actual switch
• First, it clears the need_resched flag for the next time the current task is scheduled

• The switch only occurs if the newly selected task is not the current one

• The switch does two things
– does some accounting
– calls the architecture-dependent switch function (context_switch)

Comments on schedule function
