
ECS150 FQ20
March 27, 2020
Lecture Notes 6 Multi-Object Synchronization
• Multiprocessor Performance – Locks can become a performance bottleneck for shared objects
• Correctness – Interactions of multiple shared objects must also be correct
• Deadlock – Threads are permanently stuck waiting
Multiprocessor Lock Performance
• Request Parallelism – Requests are handled by multiple threads running on different cores
• Causes of limited parallelism:
1. Locking – Provides mutual exclusion, but also limits parallelism
2. Communication of shared data – Moving data between caches takes time
3. False Sharing – Independent structures share the same cache line, so an update to one needlessly invalidates the others (see the padding sketch below)
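A minimal sketch of avoiding false sharing (hypothetical per-core counters, assuming 64-byte cache lines): aligning each counter to its own cache line keeps one core's updates from invalidating another core's line.

    #include <stdint.h>

    #define CACHE_LINE 64

    /* Without alignment, adjacent counters would share one cache line and
     * every increment by one core would invalidate that line in the other
     * cores' caches, even though the counters are logically independent. */
    struct padded_counter {
        _Alignas(CACHE_LINE) uint64_t count;  /* struct size rounds up to 64 bytes */
    };

    struct padded_counter counters[8];        /* one per core, no false sharing */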
Lock Design Patterns
• Fine-Grained Locking – Each subset of the shared object's state has its own lock (see the per-bucket sketch after this list)
• Per-Processor Data Structures – Each processor has its own structure to reduce contention
• Ownership Design Pattern – Object is removed from its container while being accessed, so it has exactly one “owner” at a time
• Staged Architecture – Break system up into subsystems that communicate through message queues
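As a sketch of fine-grained locking (hypothetical fixed-size hash table; bucket locks assumed initialized elsewhere with pthread_mutex_init), each bucket carries its own mutex, so inserts to different buckets proceed in parallel:

    #include <pthread.h>

    #define NBUCKETS 128

    struct entry {
        unsigned key;
        struct entry *next;
    };

    struct bucket {
        pthread_mutex_t lock;        /* protects only this bucket's list */
        struct entry *head;
    };

    static struct bucket table[NBUCKETS];

    void ht_insert(struct entry *e) {
        struct bucket *b = &table[e->key % NBUCKETS];
        pthread_mutex_lock(&b->lock);   /* contend only on this subset of state */
        e->next = b->head;
        b->head = e;
        pthread_mutex_unlock(&b->lock);
    }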
Lock Contention
• MCS Locks – Mellor-Crummey and Scott lock; uses an atomic swap/compare-and-swap on a queue tail pointer, and each waiter spins on its own local node (sketch below)
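The sketch below follows the published MCS algorithm using C11 atomics (names are hypothetical). Each waiter spins on a flag in its own queue node rather than on a shared word, so waiting generates no cross-core cache traffic; the lock itself is just an atomically updated tail pointer.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;
    };

    typedef _Atomic(struct mcs_node *) mcs_lock;   /* tail of the waiter queue */

    void mcs_acquire(mcs_lock *lock, struct mcs_node *me) {
        atomic_store(&me->next, NULL);
        atomic_store(&me->locked, true);
        struct mcs_node *prev = atomic_exchange(lock, me);  /* become new tail */
        if (prev != NULL) {
            atomic_store(&prev->next, me);         /* link behind predecessor */
            while (atomic_load(&me->locked))
                ;                                  /* spin on our own node */
        }
    }

    void mcs_release(mcs_lock *lock, struct mcs_node *me) {
        struct mcs_node *succ = atomic_load(&me->next);
        if (succ == NULL) {
            struct mcs_node *expected = me;
            if (atomic_compare_exchange_strong(lock, &expected, NULL))
                return;                            /* queue was empty */
            while ((succ = atomic_load(&me->next)) == NULL)
                ;                                  /* successor is mid-enqueue */
        }
        atomic_store(&succ->locked, false);        /* hand the lock off */
    }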
• RCU Locks – Read-Copy-Update lock, a Readers/Writers lock optimized for workloads that are mostly readers (sketch after this list)
1. Restricted update – Writer must “publish” changes with atomic memory write (often
done through atomically updating pointer to new version)
2. Multiple concurrent versions – Readers may see new or old versions
3. Integration with thread scheduler – Must keep multiple copies of state to avoid freeing old data too soon; the scheduler provides a “grace period” after which no reader can still hold the old version
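A minimal sketch of RCU's restricted-update rule, using a hypothetical config structure (not the Linux kernel API): readers dereference one atomic pointer and never block, while the writer copies, modifies, and then publishes the new version with a single atomic store.

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    struct config { int timeout; int retries; };

    static _Atomic(struct config *) current;  /* assume initialized at startup */

    int read_timeout(void) {
        struct config *c = atomic_load(&current);  /* may see old or new version */
        return c->timeout;
    }

    void update_timeout(int t) {
        struct config *old = atomic_load(&current);
        struct config *new = malloc(sizeof *new);
        memcpy(new, old, sizeof *new);             /* copy */
        new->timeout = t;                          /* update */
        atomic_store(&current, new);               /* publish atomically */
        /* free(old) must wait for the grace period; the scheduler
         * integration that detects when all readers are done is elided. */
    }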
Multi-Object Atomicity
• Acquire-All/Release-All – Acquire all locks that will be needed for the operation before proceeding (transfer sketch after this list)
• Serializability – The result of the program's execution is as if the operations were done in some sequential order
• Two-phase Locking – Acquire locks as needed, but release them only once the entire operation is complete
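A sketch of Acquire-All/Release-All on a hypothetical bank transfer: both account locks are taken before any shared state is touched and released only after the whole operation completes, so the transfer is serializable with respect to other threads.

    #include <pthread.h>

    struct account {
        pthread_mutex_t lock;
        long balance;
    };

    void transfer(struct account *from, struct account *to, long amount) {
        /* acquire phase: every lock the operation will need, up front
         * (a consistent acquisition order, covered under Deadlock below,
         * is still required to avoid deadlock) */
        pthread_mutex_lock(&from->lock);
        pthread_mutex_lock(&to->lock);

        from->balance -= amount;
        to->balance   += amount;

        /* release phase: only once the entire operation is complete */
        pthread_mutex_unlock(&to->lock);
        pthread_mutex_unlock(&from->lock);
    }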
Deadlock
• Deadlock – A cycle of waiting among a set of threads
• Mutually Recursive Locking – Locks are acquired in opposite orders by different threads
• Nested Waiting – A thread waits (e.g., on a condition variable) while inside a nested lock; the signaling thread must acquire the outer lock first
• Deadlock vs. Starvation – Deadlock is a form of starvation involving multiple threads (deadlocked threads starve, but a starving thread need not be deadlocked)
• Necessary Conditions for Deadlock
1. Bounded Resources – Finite number of threads can simultaneously use a resource
2. No Preemption – Once a thread acquires a resource, it cannot be revoked
3. Wait While Holding – A thread holds a resource and waits on another
4. Circular Wait – There is a cycle of waiting threads
• Preventing Deadlock
• Provide Sufficient Resources – Make sure there are sufficient resources for all threads
• Preempt Resources – Forcefully take the resources back
• Release Lock when calling out of module – Release locks before calling other modules
• Lock Ordering – Impose a strict global order on lock acquisition (sketch below)
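A sketch of lock ordering for two-object operations like the transfer above: acquiring locks in a single global order (here by ascending address, a common convention; names hypothetical) makes a circular wait impossible.

    #include <pthread.h>
    #include <stdint.h>

    struct obj { pthread_mutex_t lock; /* ... */ };

    void lock_pair(struct obj *a, struct obj *b) {
        struct obj *first  = ((uintptr_t)a < (uintptr_t)b) ? a : b;
        struct obj *second = ((uintptr_t)a < (uintptr_t)b) ? b : a;
        pthread_mutex_lock(&first->lock);     /* lower address always first */
        if (second != first)
            pthread_mutex_lock(&second->lock);
    }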
• Banker’s Algorithm for Avoiding Deadlock – Delay requests so the system always stays in a safe state (safety-check sketch after this list)
• Safe State – For any possible sequence of future resource requests, there is at least one order of granting them in which every thread can eventually finish
• Unsafe State – At least one sequence of future requests leads to deadlock no matter what the system does
• Deadlock State – At least one deadlock currently exists
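A sketch of the textbook safety check at the heart of the Banker's Algorithm (array sizes and contents are hypothetical): a state is safe exactly when the loop below can order all threads so that each finishes using the resources currently available plus those released by earlier finishers.

    #include <stdbool.h>
    #include <string.h>

    #define NTHREADS 4
    #define NRES     3

    bool is_safe(int avail[NRES],
                 int max[NTHREADS][NRES],      /* maximum possible demand */
                 int alloc[NTHREADS][NRES]) {  /* currently held */
        int work[NRES];
        bool done[NTHREADS] = { false };
        memcpy(work, avail, sizeof work);

        for (int finished = 0; finished < NTHREADS; ) {
            bool progress = false;
            for (int t = 0; t < NTHREADS; t++) {
                if (done[t]) continue;
                bool can_finish = true;
                for (int r = 0; r < NRES; r++)
                    if (max[t][r] - alloc[t][r] > work[r])
                        can_finish = false;    /* remaining need exceeds work */
                if (can_finish) {
                    for (int r = 0; r < NRES; r++)
                        work[r] += alloc[t][r];  /* t finishes, returns all */
                    done[t] = true;
                    finished++;
                    progress = true;
                }
            }
            if (!progress)
                return false;                  /* no safe sequence exists */
        }
        return true;
    }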
• Detecting and Recovering from Deadlock
• Proceed without the resource
• Transactions: rollback and retry – Commit when completed, or abort if error (or
preempted)
• Detecting Deadlock – Simple cycle detection in the wait-for graph suffices if there is only one instance of each resource type (sketch below)
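A sketch of that special case (hypothetical wait-for table): with one instance per resource type, each thread waits on at most one other thread, so the wait-for graph has out-degree at most one and a cycle can be found by simply chasing edges.

    #include <stdbool.h>

    #define NTHREADS 16

    /* waits_for[t] = the thread that t is waiting on, or -1 if t is runnable */
    bool deadlocked(int waits_for[NTHREADS], int start) {
        int slow = start, fast = start;
        /* Floyd's cycle detection along the single wait-for chain */
        while (fast != -1 && waits_for[fast] != -1) {
            slow = waits_for[slow];
            fast = waits_for[waits_for[fast]];
            if (slow == fast)
                return true;                   /* cycle: deadlock exists */
        }
        return false;                          /* chain ends: no deadlock */
    }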
Non-Blocking Synchronization
• Non-blocking Data Structures – No thread ever has to wait for another thread's read/write operations to complete
• Wait-free Data Structure – Guarantees progress for every thread
• Lock-free Data Structure – Guarantees progress for some thread (see the Treiber stack sketch after this list)
• Software Transactional Memory (STM) – General purpose transaction for in-memory data
structures
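A sketch of a lock-free data structure, the classic Treiber stack (hypothetical node type): push retries a compare-and-swap until it wins, so some thread always makes progress even though any particular thread can lose the race repeatedly. A production version would also have to handle the ABA problem on pop.

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    static _Atomic(struct node *) top;

    void push(int value) {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = atomic_load(&top);
        /* a failed CAS reloads the current top into n->next and retries */
        while (!atomic_compare_exchange_weak(&top, &n->next, n))
            ;
    }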