CSCI 340
Lecturer: Dr. Simina Fluture
Topics: Threads, Thread Concept, Java Threads, Thread State Diagram, Operations on Threads
Readings: Web and Class notes; Textbook: related topics
A traditional process (also called a heavyweight process) is a single task with a single thread of control. A thread, also called a lightweight process (LWP), consists of a program counter, a register set, and a stack space. All threads in a process share the same address space.
A task consists of a collection of resources such as main memory (code section, data section), I/O devices, and files.
Multithreading refers to the ability of an operating system to support multiple threads within a single process.
Some operating systems support threads inside the kernel (kernel-level threads, or in Unix terminology bound threads) through system calls, while others support them above the kernel (user-level threads, or in Unix terminology unbound threads), using library calls.
Java Threads
Java provides support for threads at the language level. Java provides a set of APIs to manage threads.
State Diagram, Operations on threads
Creation of a thread brings the thread into the New state.
New: an object for the thread is created; no system resources have been allocated yet.
Start a thread (make it runnable): resources are allocated to the thread, and the thread goes into the Runnable state.
Two ways of providing the run( ) method for a thread:
Subclassing the thread class and overriding the run( ) method.
class A extends Thread {
    public void run() {
        // code
    }
}
Implementing the Runnable interface.
class B implements Runnable {
    public void run() {
        // code
    }
}
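The two approaches above can be sketched as follows (class names Worker, Task, and CreateThreads are illustrative, not from the lecture). Note that start( ) is what creates the new thread of control; calling run( ) directly would execute the method in the current thread.

```java
// Two ways to provide run(): subclass Thread, or implement Runnable.
class Worker extends Thread {
    public void run() {
        System.out.println("Worker (Thread subclass) running");
    }
}

class Task implements Runnable {
    public void run() {
        System.out.println("Task (Runnable) running");
    }
}

public class CreateThreads {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Worker();            // subclass approach
        Thread t2 = new Thread(new Task());  // Runnable approach
        t1.start();                          // allocates resources; thread becomes Runnable
        t2.start();
        t1.join();                           // wait for both threads to die
        t2.join();
    }
}
```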
Blocked state (not runnable). Reasons:
– waits for an event (for a specific condition to become true); for example, calls the join method on another thread object whose thread has not yet terminated.
– waits for the completion of an I/O operation.
– waits for the lock on a synchronized method.
– waits for a fixed amount of time to elapse.
Methods
suspend( ) – suspends execution of the currently running thread (the method is deprecated: deadlock for monitors).
join( ) – waits for this thread to die.
system call for an I/O
wait( ) – called on an object.
sleep( ) – puts the currently running thread to sleep for a specified amount of time (milliseconds).
For the wait( ) and sleep( ) methods, if a thread is interrupted while blocked, the method that blocked the thread throws an InterruptedException.
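A small sketch of these blocking methods (class and variable names are illustrative): the child thread blocks in sleep( ), the main thread blocks in join( ), and both calls must handle InterruptedException.

```java
// join() blocks the caller until the target thread dies; sleep() blocks the
// current thread for a given time; both throw InterruptedException if the
// blocked thread is interrupted.
public class BlockingDemo {
    static boolean finished = false;   // written by child, read after join()

    public static void main(String[] args) {
        Thread child = new Thread(() -> {
            try {
                Thread.sleep(100);     // Blocked: timed wait (milliseconds)
            } catch (InterruptedException e) {
                return;                // woken early by interrupt()
            }
            finished = true;
        });
        child.start();
        try {
            child.join();              // Blocked: wait for child to die
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("finished = " + finished);
    }
}
```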
Dead state: the thread exits (terminates).
If the thread terminates normally – the run method completes. If the thread terminates abnormally – stop( ) (deprecated).
System.exit(0) indicates that the program terminated normally; System.exit(1) indicates an error.
isAlive( ) returns a boolean value that determines whether a Thread is in the Dead state or not.

Scheduling in Java
Java uses a preemptive priority CPU scheduling algorithm.
MIN_PRIORITY (1)
MAX_PRIORITY (10)
NORM_PRIORITY (5)
t.setPriority( )
t.getPriority( )
Windows95/NT (of JDK 1.1) implements time slicing, while Solaris 2.x (of JDK 1.1) does not.
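A minimal sketch of the priority API (the thread body is empty; the point is the constants and accessors in java.lang.Thread):

```java
// Thread priority constants and accessors from java.lang.Thread.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});           // empty task, for illustration
        System.out.println(Thread.MIN_PRIORITY);   // 1
        System.out.println(Thread.NORM_PRIORITY);  // 5 (the default)
        System.out.println(Thread.MAX_PRIORITY);   // 10
        t.setPriority(Thread.MAX_PRIORITY);
        System.out.println(t.getPriority());       // 10
    }
}
```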
Topics:
The Critical-Section Problem
Two-Process Software Incomplete Solutions
Peterson's Solution for Two-Process Software
Two Process-Software Solution
Note: the order of the algorithms given in the notes is different than the one in the textbook.
No support of the hardware, operating system or programming language level is assumed.
1st Attempt (similar to algorithm 1 of the textbook): Processes communicate using a common variable turn. If turn == i then Pi is allowed to execute. Initially, turn = 0; (or 1)

P0:
while (true) {
    while (turn != 0) do no-op;
    CS
    turn = 1;
    remainder;
}

P1:
while (true) {
    while (turn != 1) do no-op;
    CS
    turn = 0;
    remainder;
}
Processes wait for access to the CS by busy waiting.
Mutual Exclusion – Satisfied
Negative aspects (for the first attempt):
Processes must strictly alternate in the use of their CS. => the pace of the execution is dictated by the slower of the two processes. This violates the Progress Condition.
Failing possibility
If one process fails, the other process is permanently blocked.
Note: in general, when a solution to a Critical-Section problem is discussed, the failing possibility is not considered.
Second Attempt:
Each process should have its own key to the CS so that if one process is executing outside the CS, the other one is able to get into its CS.
Replace the turn variable with a shared global array variable, flag, initialized to false:

boolean[] flag = new boolean[2];    // both elements initialized to 'false'
P0:
while (true) {
    while (flag[1]) do no-op;
    flag[0] = true;
    CS
    flag[0] = false;
    remainder section;
}

P1:
while (true) {
    while (flag[0]) do no-op;
    flag[1] = true;
    CS
    flag[1] = false;
    remainder section;
}
Mutual Exclusion is violated (why? Prove it).
The No Starvation condition is violated.
3rd Attempt (similar with 2nd algorithm of the textbook)
We might try to fix the 2nd attempt by swapping the first two lines. First the process sets its flag to true to signal that it is ready to enter the CS. Next, the process checks that the other process is not also ready to enter the CS.
flag is initialized to ‘false’
P0:
while (true) {
    flag[0] = true;
    while (flag[1]) do no-op;
    CS
    flag[0] = false;
    remainder section;
}

P1:
while (true) {
    flag[1] = true;
    while (flag[0]) do no-op;
    CS
    flag[1] = false;
    remainder section;
}
Mutual Exclusion condition – Satisfied (why? Prove it)
Progress condition is not satisfied (why? Prove it)
Correct Solution (Peterson's Solution) (similar to the 3rd algorithm of the textbook)
The state of both processes is provided by the global array variable 'flag'. We also need to impose some order on the activities of the two processes; the variable 'turn' resolves simultaneity conflicts.
Peterson’s solution
turn = 0; flag[0] = false; flag[1] = false;
P0:
while (true) {
    flag[0] = true;
    turn = 1;
    while (flag[1] and turn == 1) do no-op;
    CS
    flag[0] = false;
    remainder section;
}

P1:
while (true) {
    flag[1] = true;
    turn = 0;
    while (flag[0] and turn == 0) do no-op;
    CS
    flag[1] = false;
    remainder section;
}
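Peterson's algorithm can be sketched in Java, with one caveat: plain Java variables may be cached or reordered by the compiler and hardware, so this sketch makes turn volatile and stores the flags in an AtomicIntegerArray (array elements cannot be declared volatile). The class name and iteration count are illustrative.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Peterson's algorithm for two threads, protecting a shared counter.
public class Peterson {
    static final AtomicIntegerArray flag = new AtomicIntegerArray(2); // 0 = false
    static volatile int turn = 0;
    static int counter = 0;           // shared data protected by the entry/exit code

    static void enter(int i) {
        int other = 1 - i;
        flag.set(i, 1);               // I am ready to enter the CS
        turn = other;                 // but the other process goes first
        while (flag.get(other) == 1 && turn == other) { /* busy wait */ }
    }

    static void exit(int i) {
        flag.set(i, 0);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p0 = new Thread(() -> {
            for (int k = 0; k < 100000; k++) { enter(0); counter++; exit(0); }
        });
        Thread p1 = new Thread(() -> {
            for (int k = 0; k < 100000; k++) { enter(1); counter++; exit(1); }
        });
        p0.start(); p1.start();
        p0.join(); p1.join();
        System.out.println(counter);  // 200000 only if mutual exclusion held
    }
}
```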
Topics: Synchronization using hardware support
Test-and-Set (TS)
In a uniprocessor environment, the CS problem could be solved simply by disallowing interrupts while a shared variable is being modified. This solution is not feasible in a multiprocessor environment. Many machines provide special hardware instructions that allow one to test and modify the content of a word, or to swap the contents of two words, atomically (without interruption).
TS of a memory location causes the content of the specified memory location to be loaded into a CPU register, and the memory location to be written with the value True.
Definition of the Test-and-Set instruction:

boolean TS(target: boolean) {
    TS = target;       // return the old content of the location
    target = true;     // the location is set to true
}
Mutual-Exclusion implementation with TS:
Mutual Exclusion is satisfied:
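The TS-based entry/exit code is not reproduced in the notes; as a sketch, Java's AtomicBoolean.getAndSet(true) behaves exactly like the TS instruction (atomically return the old value and write true), so a spin lock can be built on it. Names and iteration count are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Mutual exclusion via Test-and-Set: spin until getAndSet returns false,
// meaning the lock was free and we atomically took it.
public class TsLock {
    static final AtomicBoolean lock = new AtomicBoolean(false);
    static int counter = 0;           // shared data protected by the lock

    static void acquire() {
        while (lock.getAndSet(true)) { /* busy wait: old value was true */ }
    }

    static void release() {
        lock.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable body = () -> {
            for (int k = 0; k < 100000; k++) { acquire(); counter++; release(); }
        };
        Thread a = new Thread(body), b = new Thread(body);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);  // 200000 only if mutual exclusion held
    }
}
```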
Topics: CPU – Scheduling algorithms
Nonpreemptive algorithms (FCFS, SJF, Priority)
Preemptive algorithms (Priority, Shortest Remaining Time)
Criteria for making scheduling decisions:
– I/O boundness of a process
– CPU boundness of a process
– process priority
– past history of a process:
  – how often the process has been preempted
  – elapsed time since its creation
  – predicted future behavior (whether the process will get blocked or not)
Waiting Time: the amount of time that the process spends waiting in the ready queue to use the processor.
Turnaround Time: the amount of the elapsed time between when a process is created and the moment that the process exits the running state by terminating.
Response Time: the amount of time it takes to start responding but not the time that it takes to output that response.
Goals:
– maximize the throughput. (# jobs/unit of time)
– maximize I/O utilization.
– maximize CPU utilization.
– minimize the response time.
– minimize turnaround time.

Modes of scheduling:
nonpreemptive – processes are allowed to remain on the CPU until they terminate or block themselves
(FCFS, the nonpreemptive SJF, priority scheduling)
preemptive – the scheduler can forcibly remove a process from the CPU (SRT, Round Robin, Multilevel Feedback Queue)
Nonpreemptive Scheduling Algorithms
First-Come First-Served (FCFS)
The ready queue contains the PCBs of the processes and is treated as a FIFO queue. Newly created processes and those becoming ready are added to the rear of the queue. The next process to be run is the one at the front of the queue.
+) easy to understand and implement
-) the waiting time might be too long; possibility of the convoy effect

Example:

PID   Service Time   Arrival Time
 1         3              0
 2         6              2
 3         4              4
 4         5              6
 5         2              8
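The FCFS example can be checked with a short computation over the table above (waiting time = start time - arrival time; the sketch assumes processes arrive in PID order, as in the table):

```java
// FCFS: run processes in arrival order; each waits from its arrival
// until the CPU finishes everything dispatched before it.
public class Fcfs {
    public static double averageWait(int[] service, int[] arrival) {
        int time = 0, totalWait = 0;
        for (int i = 0; i < service.length; i++) {
            if (time < arrival[i]) time = arrival[i];  // CPU idle until arrival
            totalWait += time - arrival[i];            // waiting in the ready queue
            time += service[i];                        // run to completion
        }
        return (double) totalWait / service.length;
    }

    public static void main(String[] args) {
        int[] service = {3, 6, 4, 5, 2};
        int[] arrival = {0, 2, 4, 6, 8};
        System.out.println(averageWait(service, arrival));  // waits 0,1,5,7,10 -> 4.6
    }
}
```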
Shortest-Job-First-Scheduling (SJF)
Suppose that the service time is known in advance.
The processes in the ready list, at the time when the CPU becomes available, are sorted in increasing order by the service time (next CPU burst) and scheduled for the CPU in that order.
The SJF may be either preemptive or non preemptive. A nonpreemptive algorithm will allow the currently running process to finish its CPU burst.
Example:
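The SJF example is completed in class; as a sketch, the following computes the nonpreemptive SJF average waiting time, reusing the numbers from the FCFS table above as an assumed workload. At each dispatch, the ready process with the shortest service time is chosen.

```java
// Nonpreemptive SJF: when the CPU becomes free, pick the ready process
// with the shortest service time and run it to completion.
public class Sjf {
    public static double averageWait(int[] service, int[] arrival) {
        int n = service.length;
        boolean[] done = new boolean[n];
        int time = 0, totalWait = 0;
        for (int scheduled = 0; scheduled < n; scheduled++) {
            int pick = -1;
            for (int i = 0; i < n; i++)           // shortest ready job
                if (!done[i] && arrival[i] <= time
                        && (pick == -1 || service[i] < service[pick]))
                    pick = i;
            if (pick == -1) {                     // CPU idle: jump to next arrival
                for (int i = 0; i < n; i++)
                    if (!done[i] && (pick == -1 || arrival[i] < arrival[pick]))
                        pick = i;
                time = arrival[pick];
            }
            totalWait += time - arrival[pick];
            time += service[pick];
            done[pick] = true;
        }
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        System.out.println(averageWait(new int[]{3, 6, 4, 5, 2},
                                       new int[]{0, 2, 4, 6, 8}));  // 3.6
    }
}
```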
+) the SJF is proved to be the optimal scheduling algorithm with respect to the average waiting time of the ready processes.
The SJF algorithm can be used for long-term scheduling in a batch system, using as the length of the next CPU request the time limit that a user specifies when the job is submitted.
-) SJF may penalize processes with high service time requests – starvation
SJF cannot be implemented at the level of short-term CPU scheduling. It is hard to predict the length of the next CPU burst.
One approach is to approximate SJF scheduling by predicting the value of the next CPU burst time.
Priority scheduling algorithms (6.3.3 from the book)
The processes are allocated to the CPU on the basis of their assigned priorities. Priority scheduling can be either preemptive or nonpreemptive.
-) indefinite blocking or starvation.
Solution for the starvation of low-priority processes is aging.
Preemptive Scheduling Algorithms:
In the discussion of nonpreemptive algorithms, the cost of context switching between processes is ignored. In preemptive scheduling systems, the cost of context switching can become a significant factor in computing optimal schedules.
SJF as a preemptive scheduling policy:
Preemptive SJF scheduling is sometimes called shortest remaining time first. The process in the ready list whose remaining time to completion is the shortest is scheduled next.
Example:
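The SRT example is also completed in class; this sketch simulates shortest remaining time one time unit at a time, reusing the FCFS table's numbers as an assumed workload (ties broken by lowest index).

```java
// SRT: at every time unit, run the arrived, unfinished process with the
// least remaining service time; new arrivals can preempt the running one.
public class Srt {
    public static double averageWait(int[] service, int[] arrival) {
        int n = service.length;
        int[] remaining = service.clone();
        int[] finish = new int[n];
        int left = n;
        for (int time = 0; left > 0; time++) {
            int pick = -1;
            for (int i = 0; i < n; i++)
                if (remaining[i] > 0 && arrival[i] <= time
                        && (pick == -1 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick == -1) continue;            // CPU idle this tick
            if (--remaining[pick] == 0) {
                finish[pick] = time + 1;
                left--;
            }
        }
        int totalWait = 0;
        for (int i = 0; i < n; i++)              // wait = turnaround - service
            totalWait += finish[i] - arrival[i] - service[i];
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        System.out.println(averageWait(new int[]{3, 6, 4, 5, 2},
                                       new int[]{0, 2, 4, 6, 8}));  // 3.2
    }
}
```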
Round-Robin Scheduling
A fixed quantum of time Q is set. A process is not allowed to use the CPU for a time larger than Q. If the CPU burst of a process exceeds Q, the process will be preempted. Ready processes are maintained in a FIFO queue. When a process is preempted, it is placed at the rear of the queue. The process in front is scheduled next.
The basic premise is to equally distribute the processing time among all processes that request it.
Turnaround time depends on the size of the time quantum.
The average waiting time under the RR policy, is often quite long.
Setting the value of Q
I/O bounded processes
CPU bounded processes
Mix of CPU bounded processes with I/O bounded processes:
Multilevel Queue Scheduling
A multilevel queue-scheduling algorithm partitions the ready queue into
separate queues. Each separate queue has its own policy.
In addition there is a scheduling algorithm between queues implemented as a priority preemptive algorithm. Each queue has priority over lower-priority queues.
Another possibility is to time-slice between the queues: each queue gets a specific portion of the CPU time.
We can consider the interactive or the I/O processes having a higher priority than the CPU bounded processes.
Example (from higher to lower priority):
    system processes
    interactive processes
    I/O bounded processes
    batch processes (CPU bounded processes)
Multilevel Feedback Queue Scheduling
Allows a process to move between queues. A process can be moved to a lower-priority queue (if it uses too much of the CPU) or to a higher-priority queue (if it has a large waiting time).
The following multilevel scheduling algorithm establishes a preference for shorter jobs. Scheduling is done on a preemptive basis, and a dynamic priority mechanism is used.
When a process first enters the system, it is placed in q0. When it returns to the Ready list, it is placed in q1. After each subsequent execution it is demoted to the next lower-priority queue.
A short process will complete without migrating very far downward. Newer, shorter processes are favored over older, longer processes.
Within each queue, except the lowest priority one, a Round Robin policy is used. The lowest-priority queue is treated in a FCFS fashion.
There are many flavors that can be used in implementing a Multilevel Feedback Queue Scheduling algorithm.
Let’s first define our policy:
We will consider a version that uses preemption in the same fashion as Round Robin, at periodic intervals of time, with preemption times that vary according to the queue.
When a process first enters the system, it is placed in q0. When it returns to the Ready list, it is placed in q1. After each subsequent execution it is demoted to the next lower-priority queue.
Only when q0 is empty will the scheduler execute processes in q1.
If a new process arrives in q0 while a process is running in q1, the process in q1 will be preempted ONLY AFTER it finishes its time quantum.
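The demotion policy above can be sketched as a small simulation. The quanta (q0 = 1 time unit, q1 = 2, lowest queue FCFS) and the two-job workload are assumptions for illustration, and all jobs are taken to arrive at time 0 (so mid-run arrivals and preemption of a lower queue are not modeled).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Multilevel feedback queue: new jobs enter q0; a job that uses up its
// quantum is demoted one level; the highest nonempty queue runs first.
public class Mlfq {
    static final int[] QUANTUM = {1, 2, Integer.MAX_VALUE}; // last queue = FCFS

    // jobs[i] = service time of job i; returns each job's finish time
    public static int[] run(int[] jobs) {
        int n = jobs.length;
        int[] remaining = jobs.clone(), finish = new int[n];
        @SuppressWarnings("unchecked")
        Queue<Integer>[] q = new Queue[QUANTUM.length];
        for (int l = 0; l < q.length; l++) q[l] = new ArrayDeque<>();
        for (int i = 0; i < n; i++) q[0].add(i);   // all jobs start in q0
        int time = 0, left = n;
        while (left > 0) {
            int level = 0;
            while (q[level].isEmpty()) level++;    // highest nonempty queue
            int job = q[level].poll();
            int slice = Math.min(QUANTUM[level], remaining[job]);
            time += slice;
            remaining[job] -= slice;
            if (remaining[job] == 0) { finish[job] = time; left--; }
            else q[Math.min(level + 1, q.length - 1)].add(job); // demote
        }
        return finish;
    }

    public static void main(String[] args) {
        int[] finish = run(new int[]{1, 4});  // a short job and a longer one
        System.out.println(finish[0] + " " + finish[1]);  // 1 5
    }
}
```

The short job finishes after one quantum in q0 without ever migrating downward, while the longer job is demoted twice, matching the "shorter jobs are favored" behavior described above.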
Topics:
Semaphores
We have seen how Mutual Exclusion can be implemented with no operating system support.
Semaphores provide a primitive yet powerful tool for enforcing mutual exclusion and for coordinating processes. Semaphores need operating system support.
Apart from initialization, two atomic operations can be performed on semaphores: wait and signal.
Initial definition of wait and signal (P and V)
wait(S):   while (S <= 0) do no-op;
           S--;

signal(S): S++;
Mutual Exclusion implementation: n processes share a semaphore, named mutex.
Initially mutex = 1;

while (true) {
    wait(mutex);      // P(mutex)
    CS
    signal(mutex);    // V(mutex)
    remainder section;
}
While a process is in its CS, any other process that attempts to enter its CS, must loop continuously in the entry code.
The process is busy waiting, wasting CPU cycles. This kind of semaphore is also called a spinlock semaphore.
Overcoming the need for busy waiting:
Rather than busy waiting, the process can block itself.
The Concept of Semaphores
Operations on Semaphores
Types of Semaphores
Implementation of Mutual Exclusion using semaphores
Implementation of Operation Serialization and Process Synchronization
The Active V operation
Definition of wait and signal for Counting semaphores
Note: this is the definition that will be considered from now on in the implementation of any problem.
There are two fields associated with the semaphore: semaphore value (integer) and semaphore queue (which contains a list of processes blocked on the semaphore).
P(S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.queue;
        block;
    }
}

V(S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.queue;
        wakeup(P);
    }
}
There are two kinds of semaphores:
Counting semaphores: can assume (be initialized to) nonnegative integer values.
    wait – a resource has been allocated.
    signal – a resource has been returned to the pool.
Binary semaphores: can assume only the values 1 and 0.
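Java's java.util.concurrent.Semaphore implements the blocking definition directly: acquire( ) is P/wait (the thread blocks rather than busy waits) and release( ) is V/signal. A mutual-exclusion sketch, with illustrative names and iteration count:

```java
import java.util.concurrent.Semaphore;

// Mutual exclusion with a semaphore initialized to 1 (one permit).
public class SemMutex {
    static final Semaphore mutex = new Semaphore(1);
    static int counter = 0;           // shared data protected by mutex

    public static void main(String[] args) throws InterruptedException {
        Runnable body = () -> {
            for (int k = 0; k < 100000; k++) {
                try {
                    mutex.acquire();  // P(mutex): blocks if no permit available
                } catch (InterruptedException e) {
                    return;
                }
                try {
                    counter++;        // critical section
                } finally {
                    mutex.release();  // V(mutex)
                }
            }
        };
        Thread a = new Thread(body), b = new Thread(body);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);  // 200000 only if mutual exclusion held
    }
}
```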
Lecture Homework
What will be the outcome of changing the Signal definition for Binary Semaphores to:
V(S) {
    S.value = 1;
    remove a process P from S.queue;
    wakeup(P);
}
Using semaphores for solving synchronization problems (completed in class)
Consider two concurrently running threads. T1 executes statement S1 and T2 executes statement S2.
Using semaphores, synchronize the execution of the two threads such that S2 will be executed only after S1 has been completed.
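One standard solution can be sketched as follows (names are illustrative): a semaphore initialized to 0 makes T2 block until T1 signals that S1 has completed, regardless of which thread happens to start first.

```java
import java.util.concurrent.Semaphore;

// S2 must run only after S1: T2 does P on a 0-initialized semaphore,
// and T1 does V after finishing S1.
public class OrderDemo {
    static final Semaphore s1done = new Semaphore(0);  // 0 permits: T2 must wait
    static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            log.append("S1 ");        // statement S1
            s1done.release();         // V: signal that S1 has completed
        });
        Thread t2 = new Thread(() -> {
            try {
                s1done.acquire();     // P: blocks until T1's release
            } catch (InterruptedException e) {
                return;
            }
            log.append("S2");         // statement S2
        });
        t2.start();                   // start order does not matter
        t1.start();
        t1.join(); t2.join();
        System.out.println(log);      // S1 S2
    }
}
```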
Serialization between read and write operations (completed in class)
Each of the threads below reads or writes a specific variable. The read operation should be done only after the variable has been updated (written).