Recall Two Uses of Semaphores
Mutual Exclusion (initial value = 1)
• Also called “Binary Semaphore” or “mutex”.
• Can be used for mutual exclusion, just like a lock:
semaP(&mysem);
// Critical section goes here
semaV(&mysem);
Scheduling Constraints (initial value = 0)
• Allow thread 1 to wait for a signal from thread 2
– thread 2 schedules thread 1 when a given event occurs
• Example: suppose you had to implement ThreadJoin which must wait for thread to terminate:
Initial value of semaphore = 0

ThreadJoin {
    semaP(&mysem);
}

ThreadFinish {
    semaV(&mysem);
}
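• A minimal sketch of the same idea with POSIX semaphores, where semaP/semaV map onto sem_wait/sem_post; the names done and worker are illustrative, not from the lecture:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t done;                        // initial value 0: scheduling constraint

    static void *worker(void *arg) {
        printf("worker: finishing\n");
        sem_post(&done);                      // ThreadFinish: semaV(&mysem)
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&done, 0, 0);                // initial value = 0
        pthread_create(&t, NULL, worker, NULL);
        sem_wait(&done);                      // ThreadJoin: semaP(&mysem) blocks until the post
        printf("main: worker has signaled completion\n");
        pthread_join(t, NULL);
        return 0;
    }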
Recall Full Solution to Bounded Buffer (coke machine)
Semaphore fullSlots = 0;        // Initially, no coke
Semaphore emptySlots = bufSize; // Initially, num empty slots
Semaphore mutex = 1;            // No one using machine

Producer(item) {
    semaP(&emptySlots);         // Wait until space
    semaP(&mutex);              // Wait until machine free
    Enqueue(item);
    semaV(&mutex);
    semaV(&fullSlots);          // Tell consumers there is more coke
}

Consumer() {
    semaP(&fullSlots);          // Check if there’s a coke
    semaP(&mutex);              // Wait until machine free
    item = Dequeue();
    semaV(&mutex);
    semaV(&emptySlots);         // Tell producer need more
    return item;
}

• fullSlots signals coke; emptySlots signals space
• Critical sections using mutex protect integrity of the queue
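• A minimal sketch of the same pattern with POSIX primitives; the buffer size, ring-buffer layout, and names below are illustrative assumptions rather than the lecture's code:

    #include <pthread.h>
    #include <semaphore.h>

    #define BUF_SIZE 8

    static int buffer[BUF_SIZE];
    static int head = 0, tail = 0;

    static sem_t fullSlots;               // counts items available (init to 0)
    static sem_t emptySlots;              // counts free slots      (init to BUF_SIZE)
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    void producer(int item) {
        sem_wait(&emptySlots);            // wait until space
        pthread_mutex_lock(&mutex);       // wait until buffer free
        buffer[tail] = item;
        tail = (tail + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&fullSlots);             // tell consumers there is more
    }

    int consumer(void) {
        sem_wait(&fullSlots);             // wait for an item
        pthread_mutex_lock(&mutex);
        int item = buffer[head];
        head = (head + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&emptySlots);            // tell producer there is space
        return item;
    }

    // at startup: sem_init(&fullSlots, 0, 0); sem_init(&emptySlots, 0, BUF_SIZE);

  Here a pthread mutex plays the role of the binary semaphore “mutex”; a semaphore initialized to 1 would work the same way.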
Recall: Monitors and Condition Variables
• Monitor: a lock and zero or more condition variables for managing concurrent access to shared data
– Use of Monitors is a programming paradigm
– Some languages like Java provide monitors in the language
• Condition Variable: a queue of threads waiting for something inside a critical section
– Key idea: allow sleeping inside critical section by atomically releasing lock at time we go to sleep
– Contrast to semaphores: Can’t wait inside critical section
• Operations:
– Wait(&lock): Atomically release lock and go to sleep. Re-acquire lock later, before returning.
– Signal(): Wake up one waiter, if any
– Broadcast(): Wake up all waiters
• Rule: Must hold lock when doing condition variable ops!
Recall: Structure of Mesa Monitor Program
• Monitors represent the synchronization logic of the program
– Wait if necessary
– Signal when change something so any waiting threads can proceed
• Basic structure of mesa monitor-based program:
lock.acquire();                     // Check and/or update state variables
while (need to wait) {              // Wait if necessary
    condvar.wait();
}
lock.release();

lock.acquire();
do something so no need to wait     // Check and/or update state variables
condvar.signal();
lock.release();
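• A concrete sketch of this structure using pthreads with a small bounded buffer; the type and names (bbuf, not_empty, not_full) are illustrative, not from the slides:

    #include <pthread.h>

    #define MAX_ITEMS 16

    typedef struct {
        int items[MAX_ITEMS];             // stored LIFO for brevity
        int count;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty;
        pthread_cond_t  not_full;
    } bbuf;

    static bbuf q = {
        .count     = 0,
        .lock      = PTHREAD_MUTEX_INITIALIZER,
        .not_empty = PTHREAD_COND_INITIALIZER,
        .not_full  = PTHREAD_COND_INITIALIZER,
    };

    void bbuf_put(bbuf *b, int item) {
        pthread_mutex_lock(&b->lock);                  // check/update state under the lock
        while (b->count == MAX_ITEMS)                  // wait if necessary
            pthread_cond_wait(&b->not_full, &b->lock);
        b->items[b->count++] = item;                   // change state so waiters need not wait
        pthread_cond_signal(&b->not_empty);            // waiters re-check their condition (Mesa)
        pthread_mutex_unlock(&b->lock);
    }

    int bbuf_get(bbuf *b) {
        pthread_mutex_lock(&b->lock);
        while (b->count == 0)                          // Mesa: always re-check after wakeup
            pthread_cond_wait(&b->not_empty, &b->lock);
        int item = b->items[--b->count];
        pthread_cond_signal(&b->not_full);
        pthread_mutex_unlock(&b->lock);
        return item;
    }

  Signaling while still holding the lock is fine under Mesa semantics: the awakened thread simply re-checks its condition once it re-acquires the lock.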
Recall: Readers/Writers Problem
• Motivation: Consider a shared database
– Two classes of users:
» Readers – never modify database
» Writers – read and modify database
– Is using a single lock on the whole database sufficient?
  » Like to have many readers at the same time
» Only one writer at a time
Recall: Code for a Reader
Reader() {
    // First check self into system
    acquire(&lock);

    while ((AW + WW) > 0) {          // Is it safe to read?
        WR++;                        // No. Writers exist
        cond_wait(&okToRead, &lock); // Sleep on cond var
        WR--;                        // No longer waiting
    }
    AR++;                            // Now we are active!
    release(&lock);

    // Perform actual read-only access
    AccessDatabase(ReadOnly);

    // Now, check out of system
    acquire(&lock);
    AR--;                            // No longer active
    if (AR == 0 && WW > 0)           // No other active readers
        cond_signal(&okToWrite);     // Wake up one writer
    release(&lock);
}
Recall: Code for a Writer
Writer() {
    // First check self into system
    acquire(&lock);

    while ((AW + AR) > 0) {           // Is it safe to write?
        WW++;                         // No. Active users exist
        cond_wait(&okToWrite, &lock); // Sleep on cond var
        WW--;                         // No longer waiting
    }
    AW++;                             // Now we are active!
    release(&lock);

    // Perform actual read/write access
    AccessDatabase(ReadWrite);

    // Now, check out of system
    acquire(&lock);
    AW--;                             // No longer active
    if (WW > 0) {                     // Give priority to writers
        cond_signal(&okToWrite);      // Wake up one writer
    } else if (WR > 0) {              // Otherwise, wake reader
        cond_broadcast(&okToRead);    // Wake all readers
    }
    release(&lock);
}
• Can readers starve? Consider Reader() entry code:
    while ((AW + WW) > 0) {          // Is it safe to read?
        WR++;                        // No. Writers exist
        cond_wait(&okToRead, &lock); // Sleep on cond var
        WR--;                        // No longer waiting
    }
    AR++;                            // Now we are active!
• What if we erase the condition check in Reader exit?
    AR--;                            // No longer active
    if (AR == 0 && WW > 0)           // No other active readers
        cond_signal(&okToWrite);     // Wake up one writer
• Further, what if we turn the signal() into broadcast()?
    AR--;                            // No longer active
    cond_broadcast(&okToWrite);      // Wake up sleepers
• Finally, what if we use only one condition variable (call it “okContinue”) instead of two separate ones?
  – Both readers and writers sleep on this variable
  – Must use broadcast() instead of signal()
Use of Single CV: okContinue
• What if we turn okToWrite and okToRead into okContinue
  (i.e. use only one condition variable instead of two)?

Reader() {
    // check into system
    acquire(&lock);
    while ((AW + WW) > 0) {
        cond_wait(&okContinue, &lock);
    }
    release(&lock);

    // read-only access
    AccessDbase(ReadOnly);

    // check out of system
    acquire(&lock);
    if (AR == 0 && WW > 0)
        cond_signal(&okContinue);
    release(&lock);
}

Writer() {
    // check into system
    acquire(&lock);
    while ((AW + AR) > 0) {
        cond_wait(&okContinue, &lock);
    }
    release(&lock);

    // read/write access
    AccessDbase(ReadWrite);

    // check out of system
    acquire(&lock);
    if (WW > 0) {
        cond_signal(&okContinue);
    } else if (WR > 0) {
        cond_broadcast(&okContinue);
    }
    release(&lock);
}
Use of Single CV: okContinue
Consider this scenario:
• R1 arrives
• W1, R2 arrive while R1 still reading 🡪 W1 and R2 wait for R1 to finish
• Assume R1’s signal is delivered to R2 (not W1)
Use of Single CV: okContinue
Reader() {
    // check into system
    acquire(&lock);
    while ((AW + WW) > 0) {
        cond_wait(&okContinue, &lock);
    }
    release(&lock);

    // read-only access
    AccessDbase(ReadOnly);

    // check out of system
    acquire(&lock);
    if (AR == 0 && WW > 0)
        cond_broadcast(&okContinue);    // Need to change to broadcast()!
    release(&lock);
}

Writer() {
    // check into system
    acquire(&lock);
    while ((AW + AR) > 0) {
        cond_wait(&okContinue, &lock);
    }
    release(&lock);

    // read/write access
    AccessDbase(ReadWrite);

    // check out of system
    acquire(&lock);
    if (WW > 0 || WR > 0) {
        cond_broadcast(&okContinue);    // Must broadcast() to sort things out!
    }
    release(&lock);
}
Can we construct Monitors from Semaphores?
• Locking aspect is easy: Just use a mutex
• Can we implement condition variables this way?
Wait(Semaphore *thesema) { semaP(thesema); }
Signal(Semaphore *thesema) { semaV(thesema); }

– Doesn’t work: Wait() may sleep with lock held
• Does this work better?

Wait(Lock *thelock, Semaphore *thesema) {
    release(thelock);
    semaP(thesema);
    acquire(thelock);
}
Signal(Semaphore *thesema) {
    semaV(thesema);
}

– No: Condition vars have no history, semaphores have history:
  » What if thread signals and no one is waiting? NO-OP
  » What if thread later waits? Thread Waits
  » What if thread V’s and no one is waiting? Increment
  » What if thread later does P? Decrement and continue
Construction of Monitors from Semaphores (con’t)
• Problem with previous try:
– P and V are commutative – result is the same no matter what order they occur
– Condition variables are NOT commutative
• Does this fix the problem?
Wait(Lock *thelock, Semaphore *thesema) {
    release(thelock);
    semaP(thesema);
    acquire(thelock);
}
Signal(Semaphore *thesema) {
    if semaphore queue is not empty
        semaV(thesema);
}
– Not legal to look at contents of semaphore queue
– There is a race condition – signaler can slip in after lock release and before waiter executes semaphore.P()
• It is actually possible to do this correctly
– Complex solution for Hoare scheduling in book
– Can you come up with simpler Mesa-scheduled solution?
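• One possible Mesa-style sketch (our own assumption, not the book's Hoare construction): pair the semaphore with a waiter count that is only touched while the monitor lock is held, so Signal() can tell whether anyone is waiting without peeking at the semaphore's queue:

    #include <pthread.h>
    #include <semaphore.h>

    typedef struct {
        sem_t sem;                            // initialized to 0: sem_init(&cv->sem, 0, 0)
        int   waiters;                        // only touched while holding the monitor lock
    } mesa_cv;

    void mesa_wait(mesa_cv *cv, pthread_mutex_t *monitor_lock) {
        cv->waiters++;                        // record intent while the lock is still held
        pthread_mutex_unlock(monitor_lock);
        sem_wait(&cv->sem);                   // semaphore "history" absorbs the race: a post
                                              // issued before we block here still counts
        pthread_mutex_lock(monitor_lock);     // Mesa: caller re-checks its condition in a while loop
    }

    void mesa_signal(mesa_cv *cv) {           // call with the monitor lock held
        if (cv->waiters > 0) {
            cv->waiters--;
            sem_post(&cv->sem);
        }
    }

    void mesa_broadcast(mesa_cv *cv) {        // call with the monitor lock held
        while (cv->waiters > 0) {
            cv->waiters--;
            sem_post(&cv->sem);
        }
    }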
C-Language Support for Synchronization
• C language: Pretty straightforward synchronization
– Just make sure you know all the code paths out of a critical section
int Rtn() {
    acquire(&lock);
    …
    if (exception) {
        release(&lock);
        return errReturnCode;
    }
    …
    release(&lock);
    return OK;
}
– Watch out for setjmp/longjmp!
  » Can cause a non-local jump out of procedure
  » In example, procedure E calls longjmp, popping stack back to procedure B
  » If Procedure C had lock.acquire, problem!
[Figure: call stack (stack grows downward): Proc B calls setjmp → Proc C calls acquire(&lock) → … → Proc E calls longjmp]
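• A minimal sketch of the hazard (procedure names mirror the figure; the code itself is ours, with Proc C calling Proc E directly for brevity):

    #include <setjmp.h>
    #include <pthread.h>
    #include <stdio.h>

    static jmp_buf env;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void ProcE(void) { longjmp(env, 1); }   // non-local exit

    static void ProcC(void) {
        pthread_mutex_lock(&lock);
        ProcE();                                   // never returns here...
        pthread_mutex_unlock(&lock);               // ...so this release is skipped!
    }

    int main(void) {                               // plays the role of Proc B
        if (setjmp(env) == 0) {
            ProcC();
        } else {
            printf("back past ProcC -- but the lock is still held\n");
        }
        return 0;
    }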
Concurrency and Synchronization in C
• Harder with more locks
• Is goto a solution???
// Without goto: every error path must release the locks in the right order
void Rtn() {
    lock1.acquire();
    …
    if (error) {
        lock1.release();
        return;
    }
    …
    lock2.acquire();
    …
    if (error) {
        lock2.release();
        lock1.release();
        return;
    }
    …
    lock2.release();
    lock1.release();
}

// With goto: a single cleanup path at the end of the routine
void Rtn() {
    lock1.acquire();
    …
    if (error) {
        goto release_lock1_and_return;
    }
    …
    lock2.acquire();
    …
    if (error) {
        goto release_both_and_return;
    }
    …
release_both_and_return:
    lock2.release();
release_lock1_and_return:
    lock1.release();
}
C++ Language Support for Synchronization
• Languages with exceptions like C++
– Languages that support exceptions are problematic (easy to make a non-local exit without releasing lock)
– Consider:
void Rtn() {
    lock.acquire();
    …
    DoFoo();
    …
    lock.release();
}
void DoFoo() {
    if (exception) throw errException;
}
– Notice that an exception in DoFoo() will exit without releasing the lock!
C++ Language Support for Synchronization (con’t)
• Must catch all exceptions in critical sections
– Catch exceptions, release lock, and re-throw exception:
void Rtn() {
    lock.acquire();
    try {
        …
        DoFoo();
        …
    } catch (...) {      // catch exception
        lock.release();  // release lock
        throw;           // re-throw the exception
    }
    lock.release();
}
void DoFoo() {
    if (exception) throw errException;
}
Much better: C++ Lock Guards
#include <mutex>

int global_i = 0;
std::mutex global_mutex;

void safe_increment() {
    std::lock_guard<std::mutex> lock(global_mutex);
    global_i++;
    // Mutex released when ‘lock’ goes out of scope
}
Python with Keyword
• More versatile than we show here (can be used to close files, database connections, etc.)
import threading

lock = threading.Lock()
with lock:          # Automatically calls acquire()
    some_var += 1
# release() called however we leave block
Java synchronized Keyword
• Every Java object has an associated lock:
– Lock is acquired on entry and released on exit from a synchronized method
– Lock is properly released if exception occurs inside a synchronized method
– Mutual exclusion of synchronized methods (beware deadlock)
class Account {
    private int balance;

    // object constructor
    public Account(int initialBalance) {
        balance = initialBalance;
    }
    public synchronized int getBalance() {
        return balance;
    }
    public synchronized void deposit(int amount) {
        balance += amount;
    }
}
Java Support for Monitors
• Along with a lock, every object has a single condition variable associated with it
• To wait inside a synchronized method:
– void wait();
– void wait(long timeout);
• To signal while in a synchronized method:
– void notify();
– void notifyAll();
Goal for Today
if ( readyThreads(TCBs) ) {
    nextTCB = selectThread(TCBs);
    run( nextTCB );
} else {
    run_idle_thread();
}
• Discussion of Scheduling:
– Which thread should run on the CPU next?
• Scheduling goals, policies
• Look at a number of different schedulers
Recall: Scheduling
• Question: How is the OS to decide which of several tasks to take off a queue?
• Scheduling: deciding which threads are given access to resources from moment to moment
– Often, we think in terms of CPU time, but could also think about access to resources like network BW or disk access
Scheduling: All About Queues
Scheduling Assumptions
• CPU scheduling was a big area of research in the early 70’s
• Many implicit assumptions for CPU scheduling:
  – One program per user
  – One thread per program
  – Programs are independent
• Clearly, these are unrealistic but they simplify the problem so it can be solved
  – For instance: is “fair” about fairness among users or programs?
    » If I run one compilation job and you run five, you get five times as much CPU on many operating systems
• The high-level goal: dole out CPU time to optimize some desired parameters of the system
Assumption: CPU Bursts
[Figure: distribution of CPU-burst durations, weighted toward small bursts]
• Execution model: programs alternate between bursts of CPU and I/O
– Program typically uses the CPU for some period of time, then does I/O, then uses CPU again
– Each scheduling decision is about which job to give to the CPU for use by its next CPU burst
– With timeslicing, thread may be forced to give up CPU before finishing current CPU burst
Scheduling Policy Goals/Criteria
• Minimize Response Time
– Minimize elapsed time to do an operation (or job)
– Response time is what the user sees:
» Time to echo a keystroke in editor
» Time to compile a program
» Real-time Tasks: Must meet deadlines imposed by World
• Maximize Throughput
– Maximize operations (or jobs) per second
– Throughput related to response time, but not identical:
» Minimizing response time will lead to more context switching than if you only maximized throughput
– Two parts to maximizing throughput
  » Minimize overhead (for example, context-switching)
  » Efficient use of resources (CPU, disk, memory, etc.)
• Fairness
– Share CPU among users in some equitable way
– Fairness is not minimizing average response time:
» Better average response time by making system less fair
First-Come, First-Served (FCFS) Scheduling
• First-Come, First-Served (FCFS)
– Also “First In, First Out” (FIFO) or “Run until done”
» In early systems, FCFS meant one program scheduled until done (including I/O)
» Now, means keep CPU until thread blocks
• Example:
      Process   Burst Time
      P1        24
      P2        3
      P3        3
– Suppose processes arrive in the order: P1 , P2 , P3
  The Gantt Chart for the schedule is:
      | P1 (0–24) | P2 (24–27) | P3 (27–30) |
– Waiting time for P1 = 0; P2 = 24; P3 = 27
– Average waiting time: (0 + 24 + 27)/3 = 17
– Average Completion time: (24 + 27 + 30)/3 = 27
• Convoy effect: short process stuck behind long process
Convoy effect
• With FCFS non-preemptive scheduling, convoys of small tasks tend to build up when a large one is running.

[Figure: scheduling queue of tasks (processes, threads), with short tasks queued up behind a long one that holds the CPU]
FCFS Scheduling (Cont.)
• Example continued:
– Suppose that processes arrive in order: P2 , P3 , P1
  Now, the Gantt chart for the schedule is:
      | P2 (0–3) | P3 (3–6) | P1 (6–30) |
– Waiting time for P1 = 6; P2 = 0; P3 = 3
– Average waiting time: (6 + 0 + 3)/3 = 3
– Average Completion time: (3 + 6 + 30)/3 = 13
• In second case:
– Average waiting time is much better (before it was 17)
– Average completion time is better (before it was 27)
• FIFO Pros and Cons:
  – Simple (+)
– Short jobs get stuck behind long ones (-)
» Safeway: Getting milk, always stuck behind cart full of items! Upside: get to read about Space Aliens!
Round Robin (RR) Scheduling
• FCFS Scheme: Potentially bad for short jobs!
– Depends on submit order
– If you are first in line at supermarket with milk, you don’t care who is behind you, on the other hand…
• Round Robin Scheme: Preemption!
– Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds
– After quantum expires, the process is preempted and added to the end of the ready queue.
– n processes in ready queue and time quantum is q ⇒
  » Each process gets 1/n of the CPU time
» In chunks of at most q time units
» No process waits more than (n-1)q time units
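» For example, with n = 4 and q = 20 ms, no process waits more than (4-1)×20 = 60 ms before getting its next quantum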
RR Scheduling (Cont.)
• Performance
– q large ⇒ FCFS
– q small ⇒ Interleaved (really small ⇒ hyperthreading?)
– q must be large with respect to context switch, otherwise overhead is too high (all overhead)
Example of RR with Time Quantum = 20
• Example:
      Process   Burst Time
      P1        53
      P2        8
      P3        68
      P4        24
• The Gantt chart is:
      | P1 (0–20) | P2 (20–28) | P3 (28–48) | P4 (48–68) | P1 (68–88) |
      | P3 (88–108) | P4 (108–112) | P1 (112–125) | P3 (125–145) | P3 (145–153) |
• Waiting time:
      P1 = (68-20)+(112-88) = 72
      P2 = (20-0) = 20
      P3 = (28-0)+(88-48)+(125-108) = 85
      P4 = (48-0)+(108-68) = 88
• Average waiting time = (72+20+85+88)/4 = 66¼
• Average completion time = (125+28+153+112)/4 = 104½
• Thus, Round-Robin Pros and Cons:
  – Better for short jobs, Fair (+)
  – Context-switching time adds up for long jobs (-)
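• A tiny simulator (our own sketch, all jobs arriving at time 0) that reproduces these numbers with q = 20, and degenerates to FCFS when q is made very large:

    #include <stdio.h>

    #define N 4

    int main(void) {
        int burst[N] = {53, 8, 68, 24};         // P1..P4 from the example above
        int remain[N], completion[N];
        int q = 20, time = 0, done = 0;

        for (int i = 0; i < N; i++) remain[i] = burst[i];

        while (done < N) {                      // cycle through the ready queue
            for (int i = 0; i < N; i++) {
                if (remain[i] == 0) continue;
                int slice = remain[i] < q ? remain[i] : q;
                time      += slice;             // run this job for one quantum (or less)
                remain[i] -= slice;
                if (remain[i] == 0) {
                    completion[i] = time;
                    done++;
                }
            }
        }

        for (int i = 0; i < N; i++)             // waiting = completion - burst (arrival = 0)
            printf("P%d: completion=%3d waiting=%3d\n",
                   i + 1, completion[i], completion[i] - burst[i]);
        return 0;
    }

  The fixed cycling order happens to match the slide's ready-queue order for this workload; a real scheduler would maintain an explicit FIFO ready queue.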
Decrease Response Time
• T1: Burst Length 10
• T2: Burst Length 1
– Average Response Time = (10 + 11)/2 = 10.5
– Average Response Time = (6