
Information Technology
FIT3143 – LECTURE WEEK 6 SYNCHRONIZATION, MUTEX, DEADLOCKS

▪ Real-time Clock Synchronization Methods
▪ Logical Clock Synchronization Techniques
▪ Mutual Exclusion Approaches


▪ Deadlock detection and handling
Learning outcome(s) related to this topic
• Explain the fundamental principles of parallel computing architectures and algorithms (LO1)
• Compare and contrast different parallel computing architectures, algorithms and communication schemes using research-based knowledge and methods (LO2)
FIT3143 Parallel Computing 2

Synchronisation in Distributed Systems
▪ A Distributed System consists of a collection of distinct processes that are spatially separated and run concurrently
▪ In systems with multiple concurrent processes, it is economical to share the system resources.
▪ Sharing may be cooperative or competitive
▪ Both competitive and cooperative sharing require adherence to certain rules of
behavior that guarantee that correct interaction occurs.
▪ The rules of enforcing correct interaction are implemented in the form of synchronization mechanisms.

Issues implementing synchronization in DS
▪ In single CPU systems, synchronization problems such as mutual exclusion can be solved using semaphores and monitors. These methods rely on the existence of shared memory.
▪ We cannot use semaphores and monitors in distributed systems since two processes running on different machines cannot expect to have shared memory.
▪ Even simple matters such as determining one event happened before the other event requires careful thought.

Issues implementing synchronization in DS
▪ In distributed systems, it is usually not possible or desirable to collect all the information about the system in one place and synchronization among processes is difficult due to the following features of distributed systems:
– The relevant information is scattered among multiple machines.
– Processes make decisions based only on local information.
– A single point of failure in the system should be avoided.
– No common clock or other precise global time source exists.

Time and Distribution: Why?
▪ External reasons: We often want to measure time accurately
◼ For billing: How long was computer X used?
◼ For legal reasons: When was credit card W charged?
◼ For traceability: When did this attack occur? Who did it?
– System must be in sync with an external time reference
• Usually the world time reference: UTC (Coordinated Universal Time)
▪ Internal reasons: many distributed algorithms use time
◼ Kerberos (authentication server) uses time-stamps
◼ This can be used to serialise transactions in databases
◼ This can be used to minimise updates when replicating data
– System must be in sync internally
• No need to be synchronised on an external time reference

Clock Synchronization
▪ Time is unambiguous in a centralized system.
▪ A process can just make a system call to know the time.
– If process A asks for the current time, and a little later process B asks for the time, the value of B_time > A_time.
▪ In a distributed system, if process A and B are on different machines, B_time may not be greater than A_time.

Imperfect Clocks
▪ Human-made clocks are imperfect
– They run slower or faster than “real” physical time
– How much faster or slower it runs is called the drift.
– A drift of 1% (i.e. 1/100 = 10^-2) means the clock gains or loses a second every 100 seconds.
▪ Suppose, when the real time is t the time value of a clock p is Cp(t). If the maximum drift rate allowable is ρ, a clock is said to be non-faulty if the following condition holds –
1 − ρ ≤ dCp/dt ≤ 1 + ρ
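As a worked example of what the drift bound implies, the sketch below computes how often machines must resynchronise. It relies on the standard result (an assumption not stated above) that two non-faulty clocks, each with maximum drift rate ρ, can move apart by at most 2ρ per unit of real time.

```python
# Sketch: how often must clocks resynchronise to stay within delta?
# Assumed bound: two clocks, each with drift rate at most rho, can
# drift apart by at most 2*rho seconds per second of real time.

def resync_interval(rho: float, delta: float) -> float:
    """Longest interval (seconds) between resynchronisations that keeps
    any two non-faulty clocks within delta seconds of each other."""
    return delta / (2 * rho)

# Example: rho = 10^-5 (the clock gains or loses ~0.864 s per day),
# required mutual skew delta = 1 ms.
interval = resync_interval(1e-5, 1e-3)
print(interval)  # ~50: resynchronise at least every 50 seconds
```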

Clock Synchronisation
[Figure: two machines, with clocks reading 12:05 and 12:11, negotiate a common time.]
"Let's synchronise. I've got 12:07."
"I've got 12:11. Let's agree on 12:09."
"OK?" "OK. Done." "Done."

Cristian’s Algorithm
▪ This algorithm synchronizes clocks of all other machines to the clock of one
machine, time server.
▪ If the clock of the time server is adjusted to the real time, all the other machines
are synchronized to the real time.
▪ Every machine requests the current time from the time server.
▪ The time server responds to the request as soon as possible.
▪ The requesting machine sets its clock to Cs + (T1 − T0 − I)/2, where T0 is the time the request was sent and T1 the time the reply arrived (both on the requester's clock), Cs is the time reported by the server, and I is the time the server spent handling the request. To avoid clocks moving backward, the clock adjustment must be introduced gradually.
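A minimal sketch of the client-side computation, with illustrative variable names (Cs the server's reported clock, T0/T1 the local send and receive times, I the server's handling time):

```python
# Sketch of Cristian's algorithm from the client's side.

def cristian_adjusted_time(Cs: float, T0: float, T1: float, I: float = 0.0) -> float:
    """Estimate the server's current time: Cs plus half the round trip,
    excluding the interval I the server spent handling the request."""
    return Cs + (T1 - T0 - I) / 2

# Example: request sent at T0 = 100.0, reply received at T1 = 101.0,
# server reported Cs = 205.0 and handling time I = 0.5.
# The one-way delay is estimated as (1.0 - 0.5)/2 = 0.25 s.
now = cristian_adjusted_time(205.0, 100.0, 101.0, 0.5)
print(now)  # 205.25
```

In practice the resulting offset would be applied gradually (clock slew) rather than by jumping the clock, as the slide notes.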

The Berkeley Algorithm
▪ Developed by Gusella and Zatti.
▪ Unlike Cristian's algorithm, the server process in the Berkeley algorithm, called the master, periodically polls the other processes, called slaves.
▪ Generally speaking the algorithm is as follows:
– A master is chosen with a ring based election algorithm (Chang and Roberts
algorithm).
– The master polls the slaves who reply with their time in a similar way to Cristian’s algorithm
– The master observes the round-trip time (RTT) of the messages and estimates the time of each slave and its own.
– The master then averages the clock times, ignoring any values it receives far outside the values of the others.
– Instead of sending the updated current time back to the other process, the master then sends out the amount (positive or negative) that each slave must adjust its clock. This avoids further uncertainty due to RTT at the slave processes.
– Every machine adjusts its clock accordingly.
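The master's round might be sketched as below. The median-based cut-off for discarding outliers is an illustrative choice, since the slides only say that values far outside the others are ignored; the offsets are invented example data.

```python
# Sketch of one Berkeley round at the master. Transport and RTT
# compensation are abstracted away; we only show the averaging step.

def berkeley_adjustments(times: dict, cutoff: float) -> dict:
    """Given each machine's reported clock (master included), average the
    values lying within `cutoff` of the median, and return the signed
    adjustment each machine must apply to reach that average."""
    vals = sorted(times.values())
    median = vals[len(vals) // 2]
    usable = [t for t in times.values() if abs(t - median) <= cutoff]
    avg = sum(usable) / len(usable)
    return {name: avg - t for name, t in times.items()}

# Example: the master polls itself and two slaves; one slave is wildly off.
reported = {"master": 180.0, "s1": 190.0, "s2": 900.0}
adj = berkeley_adjustments(reported, cutoff=60.0)
print(adj)  # s2's faulty value is excluded from the average (185.0)
```

Note the master sends out each signed adjustment rather than the new time, exactly as the slide describes, so slave-side RTT adds no further error.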

The Berkeley Algorithm
▪ With this method the average cancels out the individual clocks' tendencies to drift.
▪ Computer systems normally avoid rewinding their clock when they receive a negative clock alteration from the master. Doing so would break the property of monotonic time, which is a fundamental assumption in certain algorithms in the system itself or in some programs.
▪ A simple solution to this problem is to halt the clock for the duration specified by the master, but this simplistic solution can also cause problems, although they are less severe. For minor corrections, most systems slow the clock (known as “clock slew”), applying the correction over a longer period of time.

Averaging Algorithm
▪ Both Cristian's algorithm and the Berkeley algorithm are centralized algorithms, with disadvantages such as a single point of failure and high traffic volume around the server.
▪ “Averaging algorithm” is a decentralized algorithm.
▪ This algorithm divides time into resynchronization intervals of a fixed length.
▪ Every machine broadcasts the current time at the beginning of each interval according to its clock.
▪ A machine collects all other broadcasts for a certain interval and sets the local clock by the average of their arrival times.

Averaging Algorithm
▪ The clock on processor P0 should be advanced by Δt0, as below:
Δt0 = (t0 + t1 + t2 + t3)/4 − t0
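The decentralised averaging step can be sketched directly from the formula above; the broadcast timestamps below are invented example data.

```python
# Sketch of the averaging step on one machine: the correction is the
# average of all broadcast arrival times (own reading included) minus
# this machine's own reading.

def averaging_correction(own_time: float, others: list) -> float:
    """Correction delta_t to apply to this machine's clock."""
    all_times = [own_time] + others
    return sum(all_times) / len(all_times) - own_time

# Example: P0 reads t0 = 10.0 and hears broadcasts stamped 10.4, 10.8, 9.6.
delta = averaging_correction(10.0, [10.4, 10.8, 9.6])
print(round(delta, 3))  # 0.2 -> P0 advances its clock by 0.2
```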

Logical Clock and Physical Clock
▪ Lamport showed
– Clock synchronization need not be absolute.
– If two processes do not interact their clocks need not be synchronized.
– What matters is they agree on the order in which events occur.
▪ For many purposes, it is sufficient that all interacting machines agree on the same time. It is not essential that this time is the real time.
– The clocks that agree among certain computers but not necessarily with the real clock are logical clocks.
▪ Clocks that agree on time and within a certain time limit, are physical clocks.

Lamport’s Synchronization Algorithm
▪ This algorithm only determines event order, but does not synchronize clocks.
▪ “Happens-before” relation:
– “A →B” is read “A happens before B”: This means that all processes agree
that event A occurs before event B.
▪ The happens-before relation can be observed directly in two situations:
– If A and B are events in the same process, and A occurs before B, then A→B.
– If A is the event of a message being sent by one process, and B is the event of the message being received by another process, then A→B
▪ “Happens-before” is a transitive relation.
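These two situations can be enforced by a counter that every process keeps; a minimal sketch of Lamport's standard clock rule, so that A → B always yields C(A) < C(B):

```python
# A minimal Lamport clock: local events and message receives both
# advance the counter.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self) -> int:
        self.time += 1
        return self.time

    def send(self) -> int:
        # Sending is an event; the timestamp travels with the message.
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        # The receive must be later than both the send and local history.
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t_send = p.send()           # p's clock: 1
q.local_event()             # q's clock: 1
t_recv = q.receive(t_send)  # q's clock: max(1, 1) + 1 = 2
print(t_send, t_recv)       # 1 2  (send happens before receive)
```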

Lamport’s Synchronization Algorithm
▪ If two events, X and Y happen in different processes that do not exchange messages (not even indirectly via third parties), then neither X →Y nor Y →X is true. These events are “concurrent. ”
▪ What we need is a way to assign a time value C(A) on which all processes agree for every event A. The time value must have the following properties:
– If A → B, then C(A) < C(B).
– The clock time must always go forward, never backward.
▪ Suppose there are three processes which run on different machines, as in the following figure. Each processor has its own local clock, and the rates of the local clocks are different.

Lamport's Synchronization Algorithm
▪ By setting the clocks as below, we can define a total ordering of all events.
▪ If event A happens before event B within the same process, C(A) < C(B) is satisfied.
▪ If event A and event B represent the sending and receiving of a message, the clock of the receiving side is set so that C(A) < C(B).
▪ For all events, the clock is increased by at least 1 between two events.

Shortcoming of Lamport's clock
▪ Lamport's clock guarantees that if a → b then C(a) < C(b). The converse does not hold: C(a) < C(b) does not imply a → b, so Lamport timestamps cannot tell whether two events are causally related or merely concurrent.
▪ Vector clocks overcome this shortcoming. Each process Pi maintains a vector Ci, where Ci[i] counts the events on Pi and Ci[j] records Pi's latest knowledge of the event count at Pj. A process increments its own component at each event, attaches its vector to outgoing messages, and on receipt takes the componentwise maximum before incrementing its own component.
▪ C(a) < C(b) holds if and only if every component of C(a) is less than or equal to the corresponding component of C(b) and the vectors are not equal. With vector clocks, a → b if and only if C(a) < C(b).
▪ Example: by Lamport clocks, C(E24) > C(E32); but for the vectors, (2, 4, 1) NOT> (0, 0, 2), and thus E32 NOT→ E24. The two events are concurrent.
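The vector comparison in the example can be checked mechanically; a minimal sketch using the two vectors above:

```python
# Vector-clock comparison: two events are ordered only if one vector is
# componentwise <= the other (and the vectors differ); otherwise the
# events are concurrent.

def vc_leq(a, b) -> bool:
    """a happened-before-or-equals b under vector clocks."""
    return all(x <= y for x, y in zip(a, b))

def happened_before(a, b) -> bool:
    return vc_leq(a, b) and a != b

E32 = (0, 0, 2)   # 2nd event on process P3
E24 = (2, 4, 1)   # 4th event on process P2

print(happened_before(E32, E24))  # False: 2 > 1 in the third component
print(happened_before(E24, E32))  # False too -> E32 and E24 are concurrent
```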

Mutual exclusion in DS
▪ When multiple processes access shared resources, using the concept of critical sections is a relatively easy way to program access of the shared resources.
– Critical section is a section in a program that accesses shared resources.
– A process enters a critical section before accessing the shared resource to ensure that no other process will use the shared resource at the same time.
• Critical sections are protected using semaphores and monitors in single-processor systems. We cannot use these mechanisms in distributed systems.

A Centralized Algorithm
▪ This algorithm simulates mutual exclusion in single processor systems.
▪ One process is elected as the coordinator.
▪ When a process wants to enter a critical section, it sends a request to the coordinator stating which critical section it wants to enter.
▪ If no other process is currently in that critical section, the coordinator returns a reply granting permission.
▪ If a different process is already in the critical section, the coordinator queues the request.
▪ When the process exits the critical section, the process sends a message to the coordinator releasing its exclusive access.
▪ The coordinator takes the first item off the queue of deferred requests and sends that process a grant message.
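The coordinator's behaviour above can be sketched as a small state machine (message transport omitted; only the grant/queue/release logic is shown):

```python
# Sketch of the centralized coordinator for one critical section.
from collections import deque

class Coordinator:
    def __init__(self):
        self.holder = None        # process currently in the CS
        self.queue = deque()      # FIFO of deferred requests

    def request(self, pid):
        """Grant immediately if the CS is free, otherwise defer."""
        if self.holder is None:
            self.holder = pid
            return "grant"
        self.queue.append(pid)
        return "queued"

    def release(self, pid):
        """Holder leaves the CS; grant to the first deferred requester."""
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder  # who (if anyone) now receives a grant message

c = Coordinator()
print(c.request("P1"))  # grant
print(c.request("P2"))  # queued
print(c.release("P1"))  # P2  (P2 now receives its grant)
```

The FIFO queue is what makes the policy first-come first-served, matching the fairness claim on the next slide.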

A Centralized Algorithm

A Centralized Algorithm
Advantages
– Since the service policy is first-come first-serve, it is fair and no process waits forever.
– It is easy to implement.
– It requires only three messages, request, grant, and release, per use of a critical section.
Disadvantages
– If the coordinator crashes, the entire system may go down.
– Processes cannot distinguish a dead coordinator from “permission denied.”
– A single coordinator may become a performance bottleneck.

A Distributed Algorithm
▪ The distributed algorithm proposed by Ricart and Agrawala requires a total ordering of all events in the system. We can use Lamport's algorithm for the ordering.
▪ When a process wants to enter a critical section, the process sends a request message to all other processes. The request message includes
– Name of the critical section
– Process number
– Current time
▪ The other processes receive the request message.
– If the process is not in the requested critical section and also has not sent a request message for the same critical section, it returns an OK message to the requesting process.
– If the process is in the critical section, it does not return any response and puts the request to the end of a queue.
– If the process has sent out a request message for the same critical section, it compares the time stamps of the sent request message and the received message.

A Distributed Algorithm
▪ If the time stamp of the received message is smaller than the one of the sent message, the process returns an OK message.
▪ If the time stamp of the received message is larger than the one of the sent message, the request message is put into the queue.
▪ The requesting process waits until all processes return OK messages.
▪ When the requesting process receives all OK messages, the process enters the
critical section.
▪ When a process exits from a critical section, it returns OK messages to all requests in the queue corresponding to the critical section and removes the requests from the queue.
▪ Processes enter a critical section in time stamp order using this algorithm.
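The per-request decision rules above can be sketched as a single function. Breaking timestamp ties by process number is an assumption the slides leave implicit (the slides compare timestamps only), but some tie-breaker is needed for a total order:

```python
# Sketch of one Ricart-Agrawala decision: what a process does with an
# incoming request for a critical section it may also be interested in.

def on_request(state, own_req, incoming):
    """state: 'idle', 'wanted', or 'held'.
    own_req / incoming: (lamport_timestamp, process_id) tuples.
    Returns 'reply-ok' or 'defer' (defer = queue the request)."""
    if state == "idle":
        return "reply-ok"       # not interested: always reply
    if state == "held":
        return "defer"          # in the CS: queue until we exit
    # state == 'wanted': the earlier (smaller) request wins.
    return "reply-ok" if incoming < own_req else "defer"

print(on_request("idle", None, (8, 2)))      # reply-ok
print(on_request("held", None, (8, 2)))      # defer
print(on_request("wanted", (5, 1), (8, 2)))  # defer: our request is earlier
print(on_request("wanted", (5, 1), (3, 2)))  # reply-ok: theirs is earlier
```

A process enters the critical section once every other process has answered "reply-ok", which is why entry happens in timestamp order.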

A Distributed Algorithm

A Distributed Algorithm
▪ Advantage
– No deadlock or starvation happens.
▪ Disadvantages
– 2(n − 1) messages are required to enter a critical section. Here n is the
number of processes.
– If one of the processes fails, it does not respond to requests, which means no process can enter the critical section.
– The coordinator is the bottleneck in a centralized system. Since all processes send requests to all other processes in this distributed algorithm, all processes are bottlenecks in this algorithm.

Token Ring Algorithm
▪ We can construct a virtual ring by assigning a sequence number to processes.
– Processes can form a virtual ring even if the processes are not physically connected in a ring shape. The process #0 receives a token when the ring is initialized.
▪ The token is passed to the process with the next sequence number.
▪ A process can enter the critical section only if the process holds the corresponding token. The process passes the token to the next process when it is done.
▪ A process that does not need to enter the critical section when the token arrives simply passes it on to the next process.
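One circulation of the token around the virtual ring can be sketched as follows (message passing abstracted away; the ring size and requesting set are invented example data):

```python
# Sketch of one pass of the token around an n-process virtual ring.
# Processes wanting the critical section use the token on arrival;
# everyone else forwards it immediately.

def circulate(n: int, start: int, wants_cs: set) -> list:
    """Pass the token once around an n-process ring beginning at `start`;
    return the order in which processes enter the critical section."""
    entered = []
    for step in range(n):
        pid = (start + step) % n
        if pid in wants_cs:
            entered.append(pid)  # hold token, enter CS, exit, pass it on
    return entered

# Example: 5 processes, token starts at #0, processes 3 and 1 want the CS.
print(circulate(5, 0, {3, 1}))  # [1, 3] -> ring order, not request order
```

Note entry follows ring order, not request order, which is why the worst-case wait is one full circulation of the ring.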

Token Ring Algorithm
Processes don’t suffer from starvation. Before entering a critical section, a process waits at most for the duration that all other processes enter and exit the critical section.
Disadvantages
– If a token is lost for some reason, another token must be generated.
– Detecting a lost token is difficult, since there is no upper bound on the time a token takes to circulate the ring.
– The ring must be reconstructed when a process crashes.

Mutual Exclusion Algorithm: A Comparison
▪ Messages Exchanged (messages per entry/exit of a critical section)
• Centralized: 3
• Distributed: 2(n − 1)
• Ring: 2
▪ Reliability (problems that may occur)
• Centralized: coordinator crashes
• Distributed: any process crashes
• Ring: lost token, process crashes

Group Mutual Exclusion
▪ In the group mutual exclusion problem, which generalizes mutual exclusion, processes choose a 'session' when they want entry to the Critical Section (CS); processes are allowed to be in the CS simultaneously provided they request the same session.
[Figure: processes requesting sessions X, Y, Z, and P around a shared Critical Section.]
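The admission rule can be sketched as a simple predicate. This shows only the safety condition stated above; a real GME protocol also needs fairness and a rule for switching sessions:

```python
# Sketch of the group-mutual-exclusion admission rule: a process may
# enter the CS iff the CS is empty or everyone inside requested the
# same session.

def may_enter(current_session, occupants: int, requested_session) -> bool:
    return occupants == 0 or current_session == requested_session

print(may_enter(None, 0, "X"))  # True: CS empty, session X opens
print(may_enter("X", 2, "X"))   # True: same session may share the CS
print(may_enter("X", 2, "Y"))   # False: must wait until session X ends
```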

Applications of GME
❑In some applications such as Computer Supported Cooperative Work (CSCW)
❑Wireless application
❑An efficient GME solution could also help improve the quality of services of an Internet server. The GME protocol can be used to group different requests for the same service, and thereby, reduce the memory swapping.

Deadlocks in Distributed Systems
▪ A deadlock is a condition where a process cannot proceed because it needs to obtain a resource held by another process and it itself is holding a resource that the other process needs.
▪ We can consider two types of deadlock:
– A communication deadlock occurs when process A is trying to send a message to process B, which is trying to send a message to process C, which is trying to send a message to A.
– A resource deadlock occurs when processes are trying to get exclusive access to devices, files, locks, servers, or other resources.
▪ We will not differentiate between these types since we can consider communication channels to be resources without loss of generality.

Necessary conditions for Deadlock
▪ Four conditions have to be met for deadlock to be present:
– Mutual exclusion. A resource can be held by at most one process
– Hold and wait. Processes that already hold resources can wait for another resource.
– Non-preemption. A resource, once granted, cannot be taken away from a process.
– Circular wait. Two or more processes are waiting for resources held by one of the other processes.

Deadlock Modeling
▪ For deadlock modeling, a directed graph called a resource allocation graph is used, in which both the set of nodes and the set of edges are partitioned into two types, resulting in the following graph elements:
– Process nodes: P1, P2, P3
– Resource nodes: R1, R2, R3
– Assignment edges: (R1, P1), (R1, P3)
– Request edges: (P1, R2), (P2, R1)

Necessary Conditions for Deadlock
▪ In a resource allocation graph, a cycle is a necessary condition for a deadlock to exist.
[Figure: two resource allocation graphs, each containing a cycle. One is a deadlock, but the other is not.]

Sufficient condition for a deadlock
▪ If there is only one copy of all resources:
– A cycle in the resource allocation graph
▪ If there are multiple copies of some resources:
– A knot in the resource allocation graph
A knot in a directed graph is a collection of vertices and edges with the property that every vertex in the knot has outgoing edges, and all outgoing edges from vertices in the knot terminate at other vertices in the knot.

Sufficient condition for a deadlock
Cycles:
1. (P1, R2, P2, R1, P1)
2. (P3, R2, P2, R1, P3)
Knot:
1. {P1, P2, P3, R1, R2}
Thus there is a deadlock here.
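The knot can be found mechanically. The sketch below reconstructs the graph's edges from the listed cycles (request edges point process → resource, assignment edges resource → process) and checks the knot property: every vertex reachable from a knot vertex leads back into the knot.

```python
# Sketch: knot detection in a resource allocation graph.

def reachable(graph, v):
    """All vertices reachable from v by directed edges."""
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        for w in graph.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def find_knot(graph):
    """Return a knot (as a set of vertices) if one exists, else None."""
    for v in graph:
        r = reachable(graph, v)
        # v must lie on a cycle, every vertex in r must have outgoing
        # edges, and nothing reachable from r may escape r.
        if v in r and all(graph.get(u) and reachable(graph, u) <= r
                          for u in r):
            return r
    return None

# Edges from the cycles listed above.
g = {"P1": ["R2"], "P2": ["R1"], "P3": ["R2"],
     "R1": ["P1", "P3"], "R2": ["P2"]}
print(sorted(find_knot(g)))  # ['P1', 'P2', 'P3', 'R1', 'R2'] -> deadlock
```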
