
Java Concurrency

Deadlock, Reader-Writer problem and Condition synchronization
Michelle Kuttel

Serial versus concurrent
Sequential correctness is mostly concerned with safety properties:
ensuring that a program transforms each before-state to the correct after-state.

Concurrent correctness is also concerned with safety, but the problem is much, much harder:
safety must be ensured despite the vast number of ways the steps of concurrent threads can be interleaved.

Also, concurrent correctness encompasses a variety of liveness properties that have no counterparts in the sequential world.

Concurrent correctness
There are two types of correctness properties:

Safety properties
The property must always be true.

Liveness properties
The property must eventually become true.

Deadlock versus livelock

Java Deadlocks
We use locking to ensure safety
but locks are inherently vulnerable to deadlock

indiscriminate locking can cause lock-ordering deadlocks

Database systems are designed to detect and recover from deadlock – by searching the is-waiting-for graph for cycles

Dining philosophers
Classic problem used to illustrate deadlock
proposed by Dijkstra in 1965
a table with five silent philosophers, five plates, five forks (or chopsticks) and a big bowl of spaghetti (or rice).
Each philosopher must alternately think and eat.
Eating is not limited by the amount of spaghetti left: assume an infinite supply.
However, a philosopher needs two forks to eat
A fork is placed between each pair of adjacent philosophers.

unrealistic, unsanitary and interesting

Dining philosophers
Basic philosopher loop:
while True:
    think()
    get_forks()
    eat()
    put_forks()
The problem is to design a concurrent algorithm such that no philosopher starves, i.e. each can forever continue to alternate between eating and thinking.
Some algorithms result in some or all of the philosophers dying of hunger: deadlock.

Dining philosophers in Java
class Philosopher extends Thread {
  int identity;
  Chopstick left;
  Chopstick right;

  Philosopher(int identity, Chopstick left, Chopstick right) {
    this.identity = identity;
    this.left = left;
    this.right = right;
  }

  public void run() {
    while (true) {
      try {
        sleep(…);                  // thinking
        right.get(); left.get();   // hungry
        sleep(…);                  // eating
        right.put(); left.put();
      } catch (InterruptedException e) {}
    }
  }
}
potential for deadlock: every philosopher may pick up their right chopstick and then wait forever for the left one

Chopstick Monitor
class Chopstick {
  boolean taken = false;

  synchronized void put() {
    taken = false;
    notify();
  }

  synchronized void get() throws InterruptedException {
    while (taken) wait();
    taken = true;
  }
}
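One standard fix is to impose a single global order on the chopsticks and have every philosopher acquire the lower-numbered one first, so a circular wait cannot form. A minimal sketch, assuming Chopstick gains an int id field (our addition, for illustration):

class OrderedPhilosopher extends Thread {
  private final Chopstick first, second;

  OrderedPhilosopher(Chopstick left, Chopstick right) {
    // acquire in a fixed global order (by id), regardless of seating
    if (left.id < right.id) { first = left; second = right; }
    else { first = right; second = left; }
  }

  public void run() {
    while (true) {
      try {
        sleep(100);                 // thinking
        first.get(); second.get();  // hungry: lower id always first
        sleep(100);                 // eating
        second.put(); first.put();
      } catch (InterruptedException e) {}
    }
  }
}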

Applet for diners
for (int i = 0; i < …) …

From the Java standard library
class StringBuffer {
  private int count;
  private char[] value;
  …
  synchronized append(StringBuffer sb) {
    int len = sb.length();
    if (this.count + len > this.value.length)
      this.expand(…);
    sb.getChars(0, len, this.value, this.count);
  }
  synchronized getChars(int x, int y,
                        char[] a, int z) {
    “copy this.value[x..y] into a starting at z”
  }
}


Two problems
Problem #1: The lock for sb is not held between calls to sb.length and sb.getChars
So sb could get longer
Would cause append to throw an ArrayIndexOutOfBoundsException

Problem #2: Deadlock potential if two threads try to append in opposite directions, just like in the earlier bank-account example

Not easy to fix both problems without extra copying:
Do not want unique ids on every StringBuffer
Do not want one lock for all StringBuffer objects

Actual Java library: fixed neither (left code as is; changed javadoc)
Up to clients to avoid such situations with own protocols

Perspective
Code like the account-transfer and string-buffer append examples is difficult to make deadlock-free

Easier case: different types of objects
Can document a fixed order among types
Example: “When moving an item from the hashtable to the work queue, never try to acquire the queue lock while holding the hashtable lock”

Easier case: objects are in an acyclic structure
Can use the data structure to determine a fixed order
Example: “If holding a tree node’s lock, do not acquire other tree nodes’ locks unless they are children in the tree”
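For the harder symmetric case (two objects of the same type, like transfer or append), a common remedy is to acquire the two locks in one canonical order. A minimal sketch, assuming a hypothetical Account class (all names ours):

class Account {
  private int balance;

  void transferTo(Account to, int amount) {
    // Order the two monitors by identity hash so two opposing transfers
    // acquire them in the same order and cannot deadlock. (A real library
    // would also need a tie-breaker lock for hash collisions.)
    Account first  = System.identityHashCode(this) < System.identityHashCode(to)
                     ? this : to;
    Account second = (first == this) ? to : this;
    synchronized (first) {
      synchronized (second) {
        this.balance -= amount;
        to.balance += amount;
      }
    }
  }
}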


Why are Thread.suspend and Thread.resume deprecated?

Thread.suspend is inherently deadlock-prone.
If the target thread holds a lock on the monitor protecting a critical system resource when it is suspended, no thread can access this resource until the target thread is resumed.
If the thread that would resume the target thread attempts to lock this monitor prior to calling resume, deadlock results.
Such deadlocks typically manifest themselves as “frozen” processes.

Checkpoint
The BirdsSpotted2 class is thread safe. Is it also deadlock free?
public final class BirdsSpotted2 {
  private long CapeStarling = 0;
  private long SacredIbis = 0;
  private long CapeRobinChat = 0;

  public synchronized long getStarling() { return CapeStarling; }
  public synchronized long getIbis() { return SacredIbis; }
  public synchronized long getRobin() { return CapeRobinChat; }

  public synchronized long spottedStarling() { return ++CapeStarling; }
  public synchronized long spottedIbis() { return ++SacredIbis; }
  public synchronized long spottedRobin() { return ++CapeRobinChat; }
}

Why do the getters have to be synchronized? – visibility (so readers see the latest values; reads of long are not otherwise guaranteed atomic)
Why do the setters have to be synchronized? – mutual exclusion (++ is a read-modify-write, not atomic)

Checkpoint
why can we have 2 separate locks here?
why is it desirable?
public class MsLunch {
  private long orc = 0;
  private long dragon = 0;
  private Object orcLock = new Object();
  private Object dragonLock = new Object();

  public void inc1() {
    synchronized (orcLock) {
      orc++;
    }
  }

  public void inc2() {
    synchronized (dragonLock) {
      dragon++;
    }
  }
}

Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a “monitor.”) Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object’s state and establishing happens-before relationships that are essential to visibility.

Synchronized statements are also useful for improving concurrency with fine-grained synchronization. Suppose, for example, class MsLunch has two instance fields, c1 and c2, that are never used together. All updates of these fields must be synchronized, but there’s no reason to prevent an update of c1 from being interleaved with an update of c2 — and doing so reduces concurrency by creating unnecessary blocking. Instead of using synchronized methods or otherwise using the lock associated with this, we create two objects solely to provide locks.

Advantage of using a private lock:
the lock is encapsulated, so client code cannot acquire it
clients incorrectly using a publicly accessible lock can cause liveness problems
verifying that a publicly accessible lock is used properly requires examining the entire program, compared to a single class for a private one


Progress Conditions
Deadlock-free: some thread trying to acquire the lock eventually succeeds.

Starvation-free: every thread trying to acquire the lock eventually succeeds.

Art of Multiprocessor Programming

The above are definitions of progress conditions we have used and will use in the coming lectures.
We give here informal definitions of progress conditions; formal ones need to talk about fair histories, which is beyond the scope of this lecture. For the above conditions:

A method implementation is deadlock-free if it guarantees minimal progress in every fair history, and maximal progress in some fair history.

A method implementation is starvation-free if it guarantees maximal progress in every fair history.

A method implementation is lock-free if it guarantees minimal progress in every history and maximal progress in some history.

A method implementation is wait-free if it guarantees maximal progress in every history.


Starvation
a much less common problem than deadlock

situation where a thread is unable to gain regular access to shared resources and is unable to make progress.
most commonly starved resource is CPU cycles
happens when shared resources are made unavailable for long periods by “greedy” threads.
For example:
suppose an object provides a synchronized method that often takes a long time to return.
If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.

Starvation
In Java, starvation can be caused by inappropriate use of thread priorities,
or by indefinite loops or resource waits that do not terminate while a lock is held

Livelock
A thread often acts in response to the action of another thread.
If the other thread’s action is also a response to the action of another thread, then livelock may result.
As with deadlock, livelocked threads are unable to make further progress.
A process is in a livelock if it is spinning while waiting for a condition that will never become true (busy-wait deadlock)
comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass.
Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They’re still blocking each other, so…
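The corridor dance can be reproduced in code. A minimal sketch (class and lock names ours): each worker grabs one lock, politely backs off if the other lock is busy, and retries. If both keep retrying in lockstep, both stay live but neither progresses. (In practice the scheduler usually breaks the symmetry eventually; the polite retry loop is the pattern that matters.)

import java.util.concurrent.locks.ReentrantLock;

public class LivelockDemo {
  static final ReentrantLock left = new ReentrantLock();
  static final ReentrantLock right = new ReentrantLock();

  static void work(ReentrantLock first, ReentrantLock second) {
    while (true) {
      first.lock();
      try {
        if (second.tryLock()) {         // got both: do the work and stop
          try {
            System.out.println(Thread.currentThread().getName() + " done");
            return;
          } finally { second.unlock(); }
        }
      } finally { first.unlock(); }     // back off politely and retry
      Thread.yield();
    }
  }

  public static void main(String[] args) {
    new Thread(() -> work(left, right), "Alphonse").start();
    new Thread(() -> work(right, left), "Gaston").start();
  }
}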

Readers/writer locks


Reading vs. writing
Recall:
Multiple concurrent reads of same memory: Not a problem
Multiple concurrent writes of same memory: Problem
Multiple concurrent read & write of same memory: Problem

So far:
If concurrent write/write or read/write might occur, use synchronization to ensure one-thread-at-a-time

But this is unnecessarily conservative:
Could still allow multiple simultaneous readers!

Readers and writers problem
variant of the mutual exclusion problem where there are two classes of processes:
writers which need exclusive access to resources
readers which need not exclude each other

Readers/Writers
Easy to solve with mutual exclusion
But mutual exclusion requires waiting
One waits for the other
Everyone executes sequentially

Performance hit!

It’s also easy with the producer-consumer, interrupt-bit based solution if we have one producer and one consumer.
Using mutual exclusion for large chunks of memory introduces performance problems. The surprising thing is that we can actually provide a “snapshot” of memory by reading memory locations one at a time, while others are continuously writing, all WITHOUT mutual exclusion. Stay tuned to see how we do this.

Example
Consider a hashtable with one coarse-grained lock
So only one thread can perform operations at a time

But suppose:
There are many simultaneous lookup operations
insert operations are very rare

Note: Important that lookup doesn’t actually mutate shared memory, like a move-to-front list operation would

Readers/writer locks
A new synchronization ADT: The readers/writer lock

A lock’s states fall into three categories:
“not held”
“held for writing” by one thread
“held for reading” by one or more threads

new: make a new lock, initially “not held”
acquire_write: block if currently “held for reading” or “held for writing”, else make “held for writing”
release_write: make “not held”
acquire_read: block if currently “held for writing”, else make/keep “held for reading” and increment readers count
release_read: decrement readers count, if 0, make “not held”

Invariant: 0 ≤ writers ≤ 1, 0 ≤ readers, writers*readers == 0

Pseudocode example (not Java)
class Hashtable {
  // coarse-grained, one lock for table
  RWLock lk = new RWLock();

  V lookup(K key) {
    int bucket = hasher(key);
    lk.acquire_read();
    … read array[bucket] …
    lk.release_read();
  }

  void insert(K key, V val) {
    int bucket = hasher(key);
    lk.acquire_write();
    … write array[bucket] …
    lk.release_write();
  }
}


Readers/writer lock details
A readers/writer lock implementation (“not our problem”) usually gives priority to writers:
Once a writer blocks, no readers arriving later will get the lock before the writer
Otherwise an insert could starve

Re-entrant? Mostly an orthogonal issue
But some libraries support upgrading from reader to writer

Why not use readers/writer locks with more fine-grained locking, like on each bucket?
Not wrong, but likely not worth it due to low contention


In Java

Java’s synchronized statement does not support readers/writer locks

Instead, the library provides
java.util.concurrent.locks.ReentrantReadWriteLock

Different interface: methods readLock and writeLock return objects that themselves have lock and unlock methods

Does not have writer priority or reader-to-writer upgrading
Always read the documentation
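A minimal sketch of that interface in use (the Counter class is ours, purely for illustration):

import java.util.concurrent.locks.ReentrantReadWriteLock;

class Counter {
  private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
  private int value = 0;

  int read() {
    rw.readLock().lock();        // many readers may hold this at once
    try { return value; }
    finally { rw.readLock().unlock(); }
  }

  void increment() {
    rw.writeLock().lock();       // exclusive: excludes readers and writers
    try { value++; }
    finally { rw.writeLock().unlock(); }
  }
}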

Condition variables


Condition variables: Producer-Consumer synchronization problem
In multithreaded programs there is often a division of labor between threads.
In one common pattern, some threads are producers and some are consumers.
Producers create items of some kind and add them to a data structure;
consumers remove the items and process them
a new coordination problem: Producer-Consumer

analogy: washing and drying up

Producer-Consumer
canonical example of a bounded buffer for sharing work among threads

Bounded buffer: A queue with a fixed size
(Unbounded still needs a condition variable, but 1 instead of 2)

For sharing work – think an assembly line:
Producer thread(s) do some work and enqueue result objects
Consumer thread(s) dequeue objects and do next stage
Must synchronize access to the queue

(Diagram: producers enqueue at the back of a fixed-size buffer; consumers dequeue from the front.)

The Little Book of Semaphores, by Allen Downey
Producer-consumer problem
Event-driven programs are a good example.
Whenever an event occurs, a producer thread creates an event object and adds it to the event buffer.
Concurrently, consumer threads take events out of the buffer and process them.


Producer-consumer problem
For this to work correctly:
Producers must not produce when the buffer is full – they must wait until there is a gap.
Consumers must not consume when the buffer is empty – they must wait until it is filled.


Code, attempt 1
class Buffer<E> {
  E[] array = (E[])new Object[SIZE];
  … // front, back fields, isEmpty, isFull methods

  synchronized void enqueue(E elt) {
    if (isFull())
      ???
    else
      … add to array and adjust back …
  }

  synchronized E dequeue() {
    if (isEmpty())
      ???
    else
      … take from array and adjust front …
  }
}


Waiting
enqueue to a full buffer should not raise an exception
Wait until there is room

dequeue from an empty buffer should not raise an exception
Wait until there is data

A bad approach is to spin (wasted work, and the waiter keeps grabbing the lock):

void enqueue(E elt) {
  while (true) {
    synchronized (this) {
      if (isFull()) continue;
      … add to array and adjust back …
      return;
    }
  }
}
// dequeue similar


What we want
Better would be for a thread to wait until it can proceed
Be notified when it should try again
In the meantime, let other threads run

Like locks, not something you can implement on your own
Language or library gives it to you, typically implemented with operating-system support

An ADT that supports this: condition variable
Informs waiter(s) when the condition that causes it/them to wait has varied

Terminology not completely standard; will mostly stick with Java

Java approach: not quite right
class Buffer<E> {
  synchronized void enqueue(E elt) {
    if (isFull())
      this.wait();        // releases lock and waits
    add to array and adjust back
    if (buffer was empty)
      this.notify();      // wake somebody up
  }

  synchronized E dequeue() {
    if (isEmpty())
      this.wait();        // releases lock and waits
    take from array and adjust front
    if (buffer was full)
      this.notify();      // wake somebody up
  }
}


Key ideas
Java weirdness: every object “is” a condition variable (and a lock)
other languages/libraries often make them separate

wait:
“register” running thread as interested in being woken up
then atomically: release the lock and block
when execution resumes, thread again holds the lock

notify:
pick one waiting thread and wake it up
no guarantee woken up thread runs next, just that it is no longer blocked on the condition – now waiting for the lock
if no thread is waiting, then do nothing


Bug #1
Between the time a thread is notified and it re-acquires the lock, the condition can become false again!

synchronized void enqueue(E elt) {
  if (isFull())
    this.wait();
  add to array and adjust back
  …
}

An interleaving that triggers the bug (time flows downward):

Thread 1 (enqueue):  if (isFull()) this.wait();
Thread 2 (dequeue):  take from array; if (was full) this.notify();
Thread 3 (enqueue):  add to array (makes the buffer full again)
Thread 1 (enqueue):  wakes, re-acquires the lock, adds to the full array!

Bug fix #1
Guideline: Always re-check the condition after re-gaining the lock
In fact, for obscure reasons, Java is technically allowed to wake a waiting thread spuriously (i.e., for no reason)
synchronized void enqueue(E elt) {
  while (isFull())
    this.wait();
  …
}

synchronized E dequeue() {
  while (isEmpty())
    this.wait();
  …
}


Bug #2
If multiple threads are waiting, we wake up only one
Sure only one can do work now, but can’t forget the others!
An interleaving that loses a waiter (time flows downward):

Thread 1 (enqueue):  while (isFull()) this.wait();
Thread 2 (enqueue):  while (isFull()) this.wait();
Thread 3 (dequeue #1):  if (buffer was full) this.notify();   // wakes one enqueuer, say Thread 1
Thread 3 (dequeue #2):  if (buffer was full) this.notify();   // buffer no longer full, so no notify: Thread 2 waits forever

Bug fix #2
notifyAll wakes up all current waiters on the condition variable

Guideline: If in any doubt, use notifyAll
Wasteful waking is better than never waking up

So why does notify exist?
Well, it is faster when correct…
synchronized void enqueue(E elt) {
  …
  if (buffer was empty)
    this.notifyAll(); // wake everybody up
}

synchronized E dequeue() {
  …
  if (buffer was full)
    this.notifyAll(); // wake everybody up
}


A new liveness hazard: missed signals
A missed signal occurs when a thread must wait for a specific condition that is already true, but fails to check the condition before waiting.
notifyAll is almost always better than notify, because it is less prone to missed signals.
For example, thread A could be waiting for conditions PA and PB, while thread B waits only for PA: if notify wakes thread A first but PB is not yet true, the signal is effectively lost.

Alternate approach
An alternative is to call notify (not notifyAll) on every enqueue / dequeue, not just when the buffer was empty / full
Easy: just remove the if statement

Alas, this makes our code subtly wrong, since it’s technically possible that an enqueuer and a dequeuer are both waiting.
See notes for the step-by-step details of how this can happen

Works fine if buffer is unbounded since then only dequeuers wait

Alternate approach fixed
The alternate approach works if the enqueuers and dequeuers wait on different condition variables
But for mutual exclusion both condition variables must be associated with the same lock

Java’s “everything is a lock / condition variable” doesn’t support this: each condition variable is associated with itself

Instead, Java has classes in java.util.concurrent.locks for when you want multiple conditions with one lock
class ReentrantLock has a method newCondition that returns a new Condition object associated with the lock
See the documentation if curious
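A minimal sketch of the two-condition pattern (the class layout and the capacity of 16 are ours, for illustration):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class Buffer2<E> {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition notFull  = lock.newCondition(); // enqueuers wait here
  private final Condition notEmpty = lock.newCondition(); // dequeuers wait here
  private final Object[] array = new Object[16];
  private int count, front, back;

  void enqueue(E elt) throws InterruptedException {
    lock.lock();
    try {
      while (count == array.length)
        notFull.await();               // releases the lock while waiting
      array[back] = elt;
      back = (back + 1) % array.length;
      count++;
      notEmpty.signal();               // safe to wake just one dequeuer
    } finally { lock.unlock(); }
  }

  @SuppressWarnings("unchecked")
  E dequeue() throws InterruptedException {
    lock.lock();
    try {
      while (count == 0)
        notEmpty.await();
      E elt = (E) array[front];
      front = (front + 1) % array.length;
      count--;
      notFull.signal();                // safe to wake just one enqueuer
      return elt;
    } finally { lock.unlock(); }
  }
}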

Last condition-variable comments
notify/notifyAll often called signal/broadcast, also called pulse/pulseAll

Condition variables are subtle and harder to use than locks

But when you need them, you need them
Spinning and other work-arounds don’t work well

Fortunately, like most things in a data-structures course, the common use-cases are provided in libraries written by experts
Example: java.util.concurrent.ArrayBlockingQueue
All uses of condition variables hidden in the library; client just calls put and take


Condition synchronization
Java has built-in mechanisms for waiting for a condition to become true:
wait() and notify()

They are tightly bound to intrinsic locking and can be difficult to use correctly
Often easier to use existing synchronizer classes:
coordinate control flow of cooperating threads
e.g. BlockingQueue and Semaphore

Java Blocking queues and the producer-consumer design pattern
BlockingQueue extends Queue with blocking insertion and retrieval operations
put and take methods
timed equivalents: offer and poll
If queue is empty, a retrieval (take) blocks until an element is available
If queue is full (for bounded queues), insertion (put) blocks until there is space available

use a bounded queue to avoid running out of memory
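A minimal producer-consumer sketch using the library (the class name and the 100-item workload are ours, for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
  public static void main(String[] args) {
    BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded: at most 10 items

    Thread producer = new Thread(() -> {
      try {
        for (int i = 0; i < 100; i++)
          queue.put(i);                       // blocks while the queue is full
      } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    });

    Thread consumer = new Thread(() -> {
      try {
        for (int i = 0; i < 100; i++)
          System.out.println(queue.take());   // blocks while the queue is empty
      } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    });

    producer.start();
    consumer.start();
  }
}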

Producer-consumer design pattern
separates identification of work to be done from execution of that work
work items are placed on “to do” list for later processing
removes code dependencies between producers and consumers
Most common design is a thread pool coupled with a work queue

A thread pool consists of a collection of threads, called workers, that are used to process work. Each worker looks for new work to be done. When it finds work to do, it does it, and when finished, it goes back to get more work.

Several implementations of blocking queue:
LinkedBlockingQueue, ArrayBlockingQueue:
FIFO queues
PriorityBlockingQueue:
priority-ordered queue
SynchronousQueue:
no storage: hands each item directly from a producer to a consumer, so it effectively queues THREADS, not elements

A synchronous queue reduces latency in work handoff.

The Executor Framework and Thread Pools
usually the easiest way to implement a producer-consumer design is to use a thread pool implementation as part of the Executor framework
public interface Executor {
  void execute(Runnable command);
}
An Executor object typically creates and manages a group of threads called a thread pool
threads execute the Runnable objects passed to the execute method
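A minimal sketch of a fixed-size thread pool consuming Runnable tasks (the pool size, task count and class name are ours, for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorDemo {
  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(4); // 4 worker threads

    for (int i = 0; i < 10; i++) {
      final int task = i;
      pool.execute(() ->                // Runnable goes onto the pool's work queue
          System.out.println("task " + task + " on " +
              Thread.currentThread().getName()));
    }
    pool.shutdown();                    // no new work; workers drain the queue and exit
  }
}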

Concurrency summary
Access to shared resources introduces new kinds of bugs
Data races
Critical sections too small
Critical sections use wrong locks
Deadlocks

Requires synchronization
Locks for mutual exclusion (common, various flavors)
Condition variables for signaling others (less common)

Guidelines for correct use help avoid common pitfalls

Not clear shared-memory is worth the pain
But other models (e.g., message passing) not a panacea

Java synchronizers
A synchronizer is any object that coordinates the control flow of threads based on its state. Java has:
Blocking Queues
Semaphores
Barriers
Latches

Java synchronizers
All synchronizers:
determine whether arriving threads should be allowed to pass or be forced to wait based on encapsulated state
provide methods to manipulate state
provide methods to wait efficiently for the synchronizers to enter the desired state

Latches
Acts as a gate: no thread can pass until the gate opens, and then all can pass
delays the progress of threads until it enters its terminal state
once open, it cannot change state again (open forever)
For example, can be used to wait until all parties involved in an activity are ready to proceed:
like all players in a multi-player game

CountDownLatch
CountDownLatch allows one or more threads to wait for a set of events to occur
Latch state is a counter, initialized to a positive number representing the number of events to wait for
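A minimal sketch of the multi-player-game example from the Latches slide (the class name and player count are ours):

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
  public static void main(String[] args) throws InterruptedException {
    final int PLAYERS = 3;
    CountDownLatch ready = new CountDownLatch(PLAYERS);

    for (int i = 0; i < PLAYERS; i++) {
      final int id = i;
      new Thread(() -> {
        System.out.println("player " + id + " ready");
        ready.countDown();            // one awaited event has occurred
      }).start();
    }

    ready.await();                    // gate stays shut until the count reaches 0
    System.out.println("all players ready: game on");
  }
}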

Semaphores
Counting semaphores are used to control the number of activities that can access a certain resource or perform a given action at the same time
like a set of virtual permits
activities can acquire permits and release them when they are done with them
Useful for implementing resource pools, such as database connection pools.
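A minimal resource-pool sketch (the limit of 3 and all names are ours, for illustration):

import java.util.concurrent.Semaphore;

public class BoundedResource {
  private static final Semaphore permits = new Semaphore(3); // at most 3 users at once

  static void useResource(int id) throws InterruptedException {
    permits.acquire();                // take a virtual permit; blocks if none left
    try {
      System.out.println("thread " + id + " using the resource");
      Thread.sleep(100);              // simulate work
    } finally {
      permits.release();              // hand the permit back
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
      final int id = i;
      new Thread(() -> {
        try { useResource(id); } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }).start();
    }
  }
}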

Barriers
Similar to latches – block a group of threads until an event has occurred – but:
latches wait for events
barriers wait for other threads

Everyone waits at the barrier until everyone shows up.

CyclicBarrier
Allows a fixed number of parties to rendezvous repeatedly at a barrier point
Threads call await when they reach the barrier point and await blocks until all threads have reached the barrier point.
Once all threads are there, the barrier is passed, all threads are released and the barrier is reset.

CyclicBarrier
Useful in parallel iterative algorithms that break down a problem into a fixed number of independent subproblems:
In many simulations, the work done in one step can be done in parallel, but all work in one step must be completed before the next step begins…

Conway’s game of life
Conway’s game of life is a cellular automaton first proposed by the British mathematician John Horton Conway in 1970.
The game is a simulation on a two-dimensional grid of cells. Each cell starts off as either alive or dead. The state of a cell changes depending on the state of its 8 neighbours in the grid. At each time-step, we update the state of each cell according to the following four rules.
A live cell with fewer than two live neighbors dies due to underpopulation.
A live cell with more than three live neighbors dies due to overpopulation.
A live cell with two or three live neighbors survives to the next generation.
A dead cell with exactly three live neighbors becomes a live cell due to breeding.

Multithreaded Conway’s game of life
The parallel program generates threads equal to the number of cells,
or, better, one thread per part of the grid,
and updates the status of each cell independently.
Before proceeding to the next time step, it is necessary that the whole grid has been updated.
This requirement can be ensured by using a global barrier for all threads.
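A minimal sketch of that barrier structure (the worker count, step count and names are ours; the actual grid update is elided):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class LifeBarrierDemo {
  static final int WORKERS = 4, STEPS = 10;
  // the barrier action runs once per step, after all workers arrive
  static final CyclicBarrier barrier =
      new CyclicBarrier(WORKERS, () -> System.out.println("step complete"));

  public static void main(String[] args) {
    for (int w = 0; w < WORKERS; w++) {
      new Thread(() -> {
        try {
          for (int step = 0; step < STEPS; step++) {
            // … update this worker’s part of the grid …
            barrier.await();          // rendezvous; the barrier resets for the next step
          }
        } catch (InterruptedException | BrokenBarrierException e) {
          Thread.currentThread().interrupt();
        }
      }).start();
    }
  }
}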

Causes of Efficiency Problems in Java
Too much locking
• Cost of using synchronized
• Cost of blocking waiting for locks
• Cost of thread cache flushes and reloads
Too many threads
• Cost of starting up new threads
• Cost of context switching and scheduling
• Cost of inter-CPU communication, cache misses
Too much coordination
• Cost of guarded waits and notification messages
• Cost of layered concurrency control
Too many objects
• Cost of using objects to represent state, messages, etc
