Concurrency
Mitchell Chapter 14
Concurrent Programs
Two or more sequences of events occur in parallel
• Multiprogramming
  – A single computer runs several programs at the same time
  – Each program proceeds sequentially
  – Actions of one program may occur between two steps of another
• Multiprocessors
  – Two or more processors may be connected
  – Programs on one processor communicate with programs on another
  – Actions may happen simultaneously
process, thread, task: a sequential program running on a processor
2
Concurrency
• Allows different tasks to proceed at different speeds.
• Multiprogramming
– Allows one program to do useful work while another is waiting
for input
– More efficient use of a single processor
• Multiprocessing
– More raw processing power available
– Introduces new issues
• Reliability of network
• Some processor(s) proceeding while another crashes
• Interaction between separate sequential programs raises programming challenges (in both multiprogramming and multiprocessing)
Most of the concepts discussed apply to either kind of concurrency.
3
Concurrent Programming Languages
• Provide abstractions and control structures defined specifically for concurrent programming.
• Can provide “light-weight” processes that are less costly than operating system processes
• Can provide portability across operating systems.
– Historically, concurrent systems were written in languages that did not support concurrency, for example by using system calls specific to a particular operating system.
• We study basic concepts using particular example languages.
  – Concurrent ML (as an extension of Standard ML)
  – Java
4
Basic Concepts in Concurrency
• Execution Order and Nondeterminism
• Communication, Coordination, and Atomicity
• Mutual Exclusion and Locking
• Semaphores
• Monitors
5
Execution Order and Nondeterminism
• An early, limited concurrency primitive in Concurrent Pascal: the cobegin…coend statement, which executes sequential blocks in parallel
• Example
  x := 0;
  cobegin
    begin x := 1; x := x+1 end;
    begin x := 2; x := x+1 end;
  coend;
  print(x);
[Diagram: after x := 0, the two blocks (x := 1; x := x+1) and (x := 2; x := x+1) run in parallel, then print(x) executes. The example shows execution on a single processor.]
6
Execution Order and Nondeterminism
• Each assignment statement executes atomically: an assignment may take more than one low-level machine step, but it completes fully before another process is allowed to execute.
• Some possible orderings of the statements:
  x:=0; x:=1; x:=2; x:=x+1; x:=x+1; print(x)   Output: 4
  x:=0; x:=2; x:=1; x:=x+1; x:=x+1; print(x)   Output: 3
  x:=0; x:=1; x:=x+1; x:=2; x:=x+1; print(x)   Output: 3
  x:=0; x:=2; x:=x+1; x:=1; x:=x+1; print(x)   Output: 2
• Illustrates the problem of nondeterminism
7
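• The same nondeterminism is easy to reproduce in Java. A rough sketch (not from the slides; the class and variable names are illustrative) that mirrors the cobegin example; run it repeatedly and the printed value can vary:

  public class NondeterminismDemo {
      static int x = 0;    // shared variable, deliberately unsynchronized

      public static void main(String[] args) throws InterruptedException {
          Thread a = new Thread(() -> { x = 1; x = x + 1; });
          Thread b = new Thread(() -> { x = 2; x = x + 1; });
          a.start(); b.start();        // like cobegin: both blocks run in parallel
          a.join();  b.join();         // like coend: wait for both blocks
          System.out.println(x);       // typically 2, 3, or 4, depending on the interleaving
      }
  }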
Nondeterminism
• A program is deterministic if, for each sequence of program inputs, there is one sequence of program actions and resulting outputs.
• A program is nondeterministic if there is more than one possible sequence of actions corresponding to a given input sequence.
– Several possible execution orders
– Different results for different runs, even on the same
computer
– Difficult to design and debug programs
8
Example Illustrating Problem of Nondeterminism
• Cache coherence protocols in multiprocessors
  – A set of processors share memory
– Access to memory is slow, can be bottleneck
– Each processor maintains a memory cache
– The job of the cache coherence protocol is to maintain the processor caches, and to guarantee that the values returned by every load/store sequence generated by the multiprocessor are consistent with the memory model.
9
Cache filled by read
• PEA reads location x
  – Copy of x put in PEA’s cache.
• PEB also reads x
  – Copy of x put in PEB’s cache too.
10
Cache modified by write
• PEA adds 1 to x
  – x is in PEA’s cache, so there’s a cache hit
• If PEB reads x from its own cache, it may see a stale value
  – OK if the program semantics allows the PEB read to occur before the PEA write
• Need a protocol to avoid using stale values
11
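• The stale-value problem also shows up at the language level. A hedged Java analogue (illustrative; this is about the Java memory model, not the hardware protocol itself): without volatile or synchronization, the reader thread may keep using a cached copy of ready and never see the writer's update.

  public class StaleReadDemo {
      static boolean ready = false;   // declaring this volatile would guarantee visibility
      static int data = 0;

      public static void main(String[] args) throws InterruptedException {
          Thread reader = new Thread(() -> {
              while (!ready) { }      // may spin forever on a stale value
              System.out.println("saw data = " + data);
          });
          reader.start();
          Thread.sleep(100);          // let the reader start spinning
          data = 42;
          ready = true;               // without volatile, this write may never become visible to the reader
          reader.join();
      }
  }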
Communication, Coordination, and Atomicity
• Mechanisms in programming languages for explicit concurrency:
– Mechanism to initiate and terminate individual sequential processes (all concurrent programming languages provide this capability)
– Communication between processes
• Bufferedcommunicationchannels
• Synchronouscommunicationchannels
• Broadcast
• Shared variables or objects (as in Concurrent Pascal example)
– Coordination between processes
• May explicitly or implicitly cause one process to wait for another
before continuing
– Atomicity (mentioned earlier)
• Affects both interaction between processes and handling of errors
12
Interprocess Communication
• Shared variables are the most elementary form
• Can also have shared data structures or files
• Another form is message passing, with a variety of mechanisms (see next page)
13
Message Passing
– Buffering
  • If communication is buffered, then every data item that is sent remains available until it is received.
  • In unbuffered communication, a data item sent before the receiver is ready to accept it may be lost.
– Synchronicity
  • In synchronous communication, the sender cannot transmit data unless the receiver is ready to receive it.
  • In asynchronous communication, the sending process may transmit a data item and continue executing even if the receiver is not ready to receive the data.
– Message Order
  • A communication mechanism may preserve the order of transmission of messages, or it may not.
  • If it does, messages are received in the order they are sent.
14
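• These options correspond to familiar library abstractions. A rough Java sketch (illustrative, not from the slides): a LinkedBlockingQueue gives buffered, order-preserving message passing, while a SynchronousQueue gives synchronous message passing in which the sender waits for a matching receiver.

  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.SynchronousQueue;

  public class MessagePassingStyles {
      public static void main(String[] args) throws InterruptedException {
          // Buffered and order-preserving: put() returns at once, items wait in the queue
          LinkedBlockingQueue<Integer> buffered = new LinkedBlockingQueue<>();
          buffered.put(1);
          buffered.put(2);
          System.out.println(buffered.take());   // prints 1: received in the order sent
          System.out.println(buffered.take());   // prints 2

          // Synchronous: put() blocks until another thread performs a take()
          SynchronousQueue<Integer> sync = new SynchronousQueue<>();
          Thread receiver = new Thread(() -> {
              try { System.out.println("received " + sync.take()); }
              catch (InterruptedException e) { Thread.currentThread().interrupt(); }
          });
          receiver.start();
          sync.put(3);                            // completes only when the receiver takes the value
          receiver.join();
      }
  }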
Coordination
• Coordination mechanisms allow one process to wait for another, or to notify a waiting process that it may proceed.
  – Concurrent Pascal cobegin…coend: all processes started at the same cobegin must finish before the statement following coend may proceed (see the Java sketch below)
– Locking
– Semaphores
15
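• As a point of comparison, a minimal Java sketch of the cobegin…coend coordination pattern (names are illustrative): the parent starts both child threads and then joins on each, so it cannot pass the "coend" point until both have finished.

  public class CobeginCoend {
      public static void main(String[] args) throws InterruptedException {
          Thread t1 = new Thread(() -> System.out.println("block 1"));
          Thread t2 = new Thread(() -> System.out.println("block 2"));
          t1.start();      // "cobegin": start both blocks
          t2.start();
          t1.join();       // "coend": wait until every block has finished
          t2.join();
          System.out.println("after coend");
      }
  }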
Mutual Exclusion and Locking
• Issue: maintaining consistency of shared data
• Sample action
  procedure sign_up(person)
  begin
    number := number + 1;
    list[number] := person;
  end;
• Problem with parallel execution
  cobegin
    sign_up(fred);
    sign_up(julie);
  end;
• The shared list initially contains bob; in the intended execution, fred and julie are appended one at a time, giving bob, fred, julie.
16-20
Incorrect Ordering
cobegin
  begin n := n+1; list[n] := fred end;
  begin n := n+1; list[n] := julie end;
coend
One possible interleaving:
  n := n+1; n := n+1; list[n] := fred; list[n] := julie;
This results in an inconsistent state: n is incremented twice before either store, so fred and julie are written to the same slot (one overwrites the other) and another slot is left empty, which is not the intended behaviour.
21
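• A hedged Java version of the same race (illustrative names; run it many times to observe a failure): both threads execute the unsynchronized sign_up body, and an unlucky interleaving increments number twice before either store, so one name overwrites the other and a slot is left empty.

  public class SignUpRace {
      static String[] list = new String[10];
      static int number = 0;                      // index of the last used slot

      static void signUp(String person) {         // not synchronized: the two steps can interleave
          number = number + 1;
          list[number] = person;
      }

      public static void main(String[] args) throws InterruptedException {
          list[0] = "bob";                         // the list initially holds bob
          Thread t1 = new Thread(() -> signUp("fred"));
          Thread t2 = new Thread(() -> signUp("julie"));
          t1.start(); t2.start();
          t1.join();  t2.join();
          // Intended: bob plus fred and julie in separate slots;
          // a bad interleaving can leave list[1] == null with only one new name stored.
          System.out.println(list[0] + ", " + list[1] + ", " + list[2]);
      }
  }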
Mutual Exclusion
• Critical Section—a section of a program that accesses shared resources
– Two processes may access shared resource
– Inconsistent behavior if two actions are interleaved
• Mutual Exclusion
– One process at a time may be in its critical section
– Progress: If no processes are in their critical section and some process wants to enter a critical section, it becomes possible for one waiting process to enter its critical section.
– Bounded waiting: If one process is waiting, there must be a bound on the number of times that other processes are allowed to enter their critical sections before this one.
22
Locks and Busy Waiting
• Example using wait and signal actions
  cobegin
    begin
      wait(lock);
      sign_up(fred);    // critical section
      signal(lock);
    end;
    begin
      wait(lock);
      sign_up(julie);   // critical section
      signal(lock);
    end;
  end;
• Need atomic operations to implement wait
23
Lock and Signal Implemented as Integers
lock := 0;
cobegin
  begin
    while lock=1 do end;   // wait until lock is 0
    lock := 1;             // set lock to enter critical section
    sign_up(fred);         // critical section
    lock := 0;             // release lock
  end;
  begin
    while lock=1 do end;   // wait until lock is 0
    lock := 1;             // set lock to enter critical section
    sign_up(julie);        // critical section
    lock := 0;             // release lock
  end;
end;
24
Lock and Signal Implemented as Integers
  while lock=1 do end;   // wait until lock is 0
  lock := 1;             // set lock to enter critical section
  sign_up(…);            // critical section
• Using a loop to wait is called busy waiting.
• Problem with using a shared variable for mutual exclusion: the operation that reads the value of the variable is separate from the operation that sets it.
• It is possible for one process to test the variable and see that the lock is open (lock = 0), but before that process can lock the others out by setting it (lock := 1), another process can also see that the lock is open and set the variable first.
• Two processes can then call sign_up at once.
25
Atomic Test-and-Set Lock (TSL)
• The instruction atomically reads and writes some location
• It is a common hardware instruction
• Combine it with a busy-waiting loop to implement mutual exclusion (see the sketch below)
26
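• A hedged Java sketch of a test-and-set style lock (illustrative class; java.util.concurrent.atomic exposes the underlying atomic hardware operations): compareAndSet atomically checks that the lock is free and claims it, so the test and the set can no longer be separated by another thread.

  import java.util.concurrent.atomic.AtomicBoolean;

  public class SpinLock {
      private final AtomicBoolean locked = new AtomicBoolean(false);

      public void lock() {
          // Busy-wait until the flag is atomically flipped from false to true
          while (!locked.compareAndSet(false, true)) { }
      }

      public void unlock() {
          locked.set(false);    // release the lock
      }
  }

  A thread brackets its critical section with lock() and unlock(); the busy-waiting loop remains, but because the test and the set are one atomic step, two threads can no longer both see the lock as open.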
Deadlock
• Deadlock occurs when no process can proceed
• Example:
  – Process 1 sets Lock 1 and waits for Lock 2
  – Process 2 sets Lock 2 and waits for Lock 1
• A process may hold some locks while awaiting others
• Possible solution: 2-phase locking
  – A process is viewed as a sequence of independent tasks.
  – Locking phase: For each task, the process must acquire all the locks that could be needed.
  – Release phase: A process must release all locks before proceeding to the next task.
  – There must be an ordering on locks (a sketch follows).
27
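• A small Java sketch of lock ordering (illustrative; ReentrantLock simply makes the locks explicit): if every process acquires lock1 before lock2, the circular wait in the example above cannot occur.

  import java.util.concurrent.locks.ReentrantLock;

  public class LockOrdering {
      static final ReentrantLock lock1 = new ReentrantLock();
      static final ReentrantLock lock2 = new ReentrantLock();

      static void task(String name) {
          lock1.lock();                              // always acquire in the same order: lock1, then lock2
          try {
              lock2.lock();
              try {
                  System.out.println(name + " holds both locks");
              } finally { lock2.unlock(); }
          } finally { lock1.unlock(); }              // release everything before the next task
      }

      public static void main(String[] args) {
          new Thread(() -> task("process 1")).start();
          new Thread(() -> task("process 2")).start();
      }
  }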
Semaphore
– Avoids the busy-waiting loop
– Keeps a queue of waiting processes
– A standard semaphore is represented by an integer value: the maximum number of processes that may enter the critical section at the same time
– The scheduler has access to the semaphore; a waiting process sleeps rather than spinning
– Interrupts are disabled during semaphore operations
  • OK since the operations are short
– Processes call wait, and must then call signal when finished; otherwise deadlock could occur.
28
Semaphore Wait
procedure wait (s : Semaphore)
// This procedure must be executed atomically.
begin
  if s.value > 0 then
    s.value := s.value - 1;      // Enter section and decrement counter
  else
    suspend_on (s.queue);        // Wait for other processes to finish
end;
29
Semaphore Signal
procedure signal (s : Semaphore)
// This procedure must be executed atomically.
begin
  if length(s.queue) = 0 then
    s.value := s.value + 1;          // Increase count, allowing other processes to enter
  else
    allow_one_process (s.queue);     // Wake up one suspended process
end;
30
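• Java's library semaphore offers these two operations under the names acquire (wait) and release (signal). A hedged sketch reusing the sign_up example (names are illustrative):

  import java.util.concurrent.Semaphore;

  public class SemaphoreSignUp {
      static final Semaphore mutex = new Semaphore(1);   // at most one thread in the critical section
      static String[] list = new String[10];
      static int number = 0;

      static void signUp(String person) throws InterruptedException {
          mutex.acquire();                 // wait: decrement the count, or suspend if it is 0
          try {
              number = number + 1;         // critical section
              list[number] = person;
          } finally {
              mutex.release();             // signal: wake a waiting thread or increment the count
          }
      }

      public static void main(String[] args) throws InterruptedException {
          list[0] = "bob";
          Thread t1 = new Thread(() -> { try { signUp("fred"); } catch (InterruptedException e) { } });
          Thread t2 = new Thread(() -> { try { signUp("julie"); } catch (InterruptedException e) { } });
          t1.start(); t2.start();
          t1.join();  t2.join();
          System.out.println(list[0] + ", " + list[1] + ", " + list[2]);
      }
  }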
Monitor
• Synchronized access to private data
• Responsibility for synchronization placed on operations that access data
• Combines:
– private data
  – set of procedures (methods)
  – synchronization policy
• At most one process may execute a monitor procedure at a time; this process is said to be in the monitor.
• If one process is in the monitor, any other process that calls a monitor procedure will be delayed.
• Terminology: synchronized object
31
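• A minimal Java sketch of a monitor (illustrative; Java's synchronized methods plus wait/notifyAll realize the idea, as the Java slides later show): the private slot is reachable only through synchronized methods, and the synchronization policy makes a caller wait until the slot is in the right state.

  class OneSlotBuffer {                       // monitor: private data + synchronized operations
      private Object slot = null;             // private data, accessible only via the methods below

      public synchronized void put(Object item) throws InterruptedException {
          while (slot != null) wait();        // policy: wait until the slot is empty
          slot = item;
          notifyAll();                        // wake any thread waiting in get()
      }

      public synchronized Object get() throws InterruptedException {
          while (slot == null) wait();        // policy: wait until the slot is full
          Object item = slot;
          slot = null;
          notifyAll();                        // wake any thread waiting in put()
          return item;
      }
  }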
Concurrent Language Examples
• Language Examples
  – Cobegin/coend
  – Actors (not covered in this class)
  – Concurrent ML
  – Java
• Main Features to Compare
  – Threads
  – Communication
  – Synchronization
  – Atomicity
32
Properties of cobegin/coend
Concurrent Pascal
• Advantages
– Create concurrent processes
– Communication: shared variables
• Limitations
– Mutual exclusion: none
– Atomicity: not much (an assignment statement is atomic)
– Number of processes is fixed by program structure
– Cannot abort processes
• All must complete before parent process can go on
33
Concurrent ML
An Extension to Standard ML
• Threads
– New type of entity
• Communication
– Synchronous channels
– Communication and coordination combined
• Synchronization
  – Channels
– Events (allows users to define their own communication and synchronization abstractions)
• Atomicity
– No specific language support
34
Threads
• Thread creation
  – spawn : (unit → unit) → thread_id
• Example code
  CIO.print "begin parent\n";
  spawn (fn () => (CIO.print "child 1\n"));
  spawn (fn () => (CIO.print "child 2\n"));
  CIO.print "end parent\n"
• Result
  begin parent
  child 1
  child 2
  end parent
• There is no restriction on when the parent or a child terminates: either can terminate before the other without affecting it.
• "begin parent" is printed first; the other three lines may then appear in any order.
35-36
Infinite Threads
forever : 'a → ('a → 'a) → unit
• (forever x0 f) computes:
  x1 = f x0
  x2 = f x1
  x3 = f x2
  …
• The values x1, x2, x3, … are discarded
• f may communicate with other threads
• The thread can be terminated by other CML primitives; otherwise it loops forever.
37
Channels
• Channel creation (for communicating values of type 'a)
  – channel : unit → 'a chan
• Communication
  – recv : 'a chan → 'a
  – send : ('a chan * 'a) → unit
• Message passing is synchronous
  – Both sender and receiver must be ready to communicate.
  – If one thread executes a send and no thread is ready to execute recv on the same channel, the sending thread blocks (stops and waits) until some thread executes recv.
  – Similarly, a recv blocks if there is no send.
• Example
  – If c is an int chan, then send(c, 3) sends the integer 3 on channel c. The result type is unit, like an assignment.
38
Channels
• Example
  ch = channel();
  spawn (fn () => … send(ch, 0); … );
  spawn (fn () => … );
  [Diagram omitted: it labels four program points A, B, C, and D in the two spawned threads, connected by the send/recv on ch.]
• Result
  – The synchronous communication causes A and C to execute before B and D.
39
Sample CML Program
• Function to create a squaring process
  fun square (inCh, outCh) =
    forever () (fn () => send (outCh, square (recv inCh)));
• Put the processes together (assuming that numbers creates a thread that outputs numbers on its argument channel)
  fun mkSquares () =
    let
      val outCh = channel()
      and c1 = channel()
    in
      numbers(c1);
      square(c1, outCh);
      outCh
    end;
• If a thread has the name of a channel, it can send messages, receive messages, or both.
• If a channel is passed to more than one thread, each can send and receive messages on the channel.
40
Java Concurrency
• Threads
  – Create process by creating a thread object (see the sketch below)
• Communication
  – Shared variables
  – Method calls, e.g., one object calls an enqueue method and another calls a dequeue method
• Mutual exclusion and synchronization
  – Not provided by communication through a shared object
  – Has a semaphore primitive (maintains a queue of waiting processes)
  – Supports monitors directly in the form of synchronized objects
    • synchronized object: an object that allows only one thread to invoke a method at a time
41
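• A minimal sketch of "create process by creating a thread object" (illustrative): a Thread is constructed from a Runnable and started explicitly; only start() creates a new concurrent activity.

  public class ThreadCreation {
      public static void main(String[] args) throws InterruptedException {
          Runnable work = () ->
              System.out.println("child running in " + Thread.currentThread().getName());
          Thread child = new Thread(work);   // creating the thread object
          child.start();                     // starts the new thread
          child.join();                      // parent waits for the child to finish
      }
  }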
Synchronization Example
• Objects may have synchronized methods
• Can be used for mutual exclusion
– Two threads may share an object.
– If one calls a synchronized method, this locks object.
– If the other calls a synchronized method on same object, this thread blocks until object is unlocked.
46
Synchronized Methods
• Marked by keyword
  public synchronized void commitTransaction(…) {…}
• Provides mutual exclusion
  – At most one synchronized method can be active at a time
  – Unsynchronized methods can still be called
• Programmer must be careful because this allows interaction through shared variables
47
Example
class LinkedCell {                    // Lisp-style cons cell containing
  protected double value;             // value and link to next cell
  protected LinkedCell next;

  public LinkedCell (double v, LinkedCell t) {
    value = v; next = t;
  }

  public synchronized double getValue() {
    return value;
  }

  public synchronized void setValue(double v) {   // assignment not atomic
    value = v;
  }

  public LinkedCell next() {                      // no synchronization needed
    return next;
  }
}
48
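• A possible use of the class above (not from the slides): two threads share a cell; because getValue and setValue are synchronized, each call sees a consistent double even though a double assignment by itself is not guaranteed to be atomic.

  public class LinkedCellDemo {
      public static void main(String[] args) throws InterruptedException {
          LinkedCell cell = new LinkedCell(1.0, null);
          Thread writer = new Thread(() -> cell.setValue(2.5));
          Thread reader = new Thread(() -> System.out.println("value = " + cell.getValue()));
          writer.start(); reader.start();
          writer.join();  reader.join();     // prints 1.0 or 2.5, but never a torn value
      }
  }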