CS162: Operating Systems and Systems Programming

Spring 2022
University of California, Berkeley College of Engineering Computer Science Division EECS
Midterm II
Joseph & Kubiatowicz


March 17th, 2022
CS162: Operating Systems and Systems Programming
Your Name:
SID AND Autograder Login (e.g. student042):
Discussion Section Time:
General Information:
This is a closed book exam. You are allowed 2 pages of notes (both sides). You have 110 minutes to complete as much of the exam as possible. Make sure to read all of the questions first, as some of the questions are substantially more time consuming.
WRITE ALL OF YOUR ANSWERS IN YOUR ANSWER BOOK, NOT IN THIS BOOKLET! Make your answers as concise as possible. On programming questions, we will be looking for performance as well as correctness, so think through your answers carefully. If there is something about the questions that you believe is open to interpretation, please ask us about it!
Problem   Possible   Score
   1         20
   2         18
   3         30
   4         12
   5         20

CS 162 Spring 2022 Midterm II SOLUTION March 17th, 2022
[ This page left for π ] 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

Problem 1: True/False and Why[20 pts]
Please EXPLAIN your answer in TWO SENTENCES OR LESS (Answers longer than this may not get credit!). Also, answers without an explanation GET NO CREDIT.
Problem 1a[2pts]: Round Robin scheduling always results in a lower average response time than First Come First Serve scheduling.
⬜ TrueFalse
Explain:If jobs happen to arrive in sorted order, smallest burst time first, then FCFS will
produce the lowest average response time.
Problem 1b[2pts]: The Banker’s algorithm can be used to help avoid deadlocks (caused by resource cycles) from occurring between a set of threads.
☑ True  ⬜ False
Explain: The Banker’s algorithm double-checks each resource acquisition request to see if it
might have the potential to cause deadlock, and delays such acquisitions until they can be done safely.
Problem 1c[2pts]: Lottery scheduling is always more accurate at providing the intended proportional CPU shares than Stride scheduling.
⬜True  False
Explain: Lottery Scheduling relies on randomness to choose the next thread to run, so it
provides exact results only on average. Stride scheduling provides a deterministic scheduling algorithm, that is more accurate over short periods of time.
Problem 1d[2pts]: Memory management systems that use page-based memory allocation (for the lowest level) do not suffer from external fragmentation.
☑ True  ⬜ False
Explain: External fragmentation occurs when different-sized blocks are allocated from a
larger chunk of memory, leading to a challenge of finding new blocks after the system runs for a while. Page-based memory allocation avoids this problem because all pages are the same size.
Problem 1e[2pts]: A memory fault will always cause the faulting process to be terminated by the operating system.
⬜ TrueFalse
Explain:If the OS can fixup the fault, then it can restart the process. A good example of this
is a page-fault, in which the OS can load the missing page from disk, fix the page table, and restart the process.

Problem 1f[2pts]: Priority donation temporarily decreases the priority of a thread.
⬜ True  ☑ False
Explain: Priority donation avoids priority inversion by temporarily increasing the priority of
a thread holding a lock, not by decreasing it.
Problem 1g[2pts]: Page faults are handled in hardware by the MMU.
⬜ True  ☑ False
Explain: Page faults are handled in software by the OS.
Problem 1h[2pts]: Compulsory cache misses can never be avoided.
⬜ True  ☑ False
Explain: Compulsory cache misses can be avoided by prefetching.
Problem 1i[2pts]: A good scheduling algorithm can achieve high throughput and low latency for
any mix of jobs.
⬜ TrueFalse
Explain:High throughput is achieved by switching infrequently, whereas low latency is best achieved by interrupting long-running threads and switching to threads with the shortest burst.
Problem 1j[2pts]: A system that uses priority donation cannot deadlock and guarantees that all threads will eventually complete.
⬜ TrueFalse
Explain:Priority donation helps prevent priority inversion. However, it will not prevent
resource cycles involving threads running at the same priority.

Problem 2: Multiple Choice [18pts]
Problem 2a[2pts]: Which of the following are true about priority inversion?
(choose all that apply):
A: ⬜ Priority inversion could not occur if there were only two possible priorities for threads.
B: ☑ Priority donation is a solution to priority inversion.
C: ☑ Priority inversion could result in starvation of high priority threads.
D: ⬜ Priority inversion is only a problem on a multi-core system.
E: ☑ Priority inversion would not happen in a first come first serve scheduler.
Problem 2b[2pts]: What are the conditions necessary for deadlock? (choose all that apply):
A: ☑ Circular wait condition between threads and resources.
B: ⬜ Infinite number of each resource
C: ☑ Resources cannot be taken away from a thread until it releases them
D: ☑ Each resource can be held by only one thread
E: ☑ Threads will hold resources and wait while acquiring new ones
Problem 2c[2pts]: Which of the following is true about a multi-level feedback queue (MLFQ)? (choose all that apply):
A: ⬜ Starvation is never a problem for a MLFQ.
B: ⬜ Long-running tasks are given higher priority.
C: ☑ A MLFQ is an approximation of shortest remaining time first.
D: ⬜ A MLFQ is more fair than round robin.
E: ☑ One way to keep a job’s priority high in a MLFQ is to insert a bunch of short sleep operations.
Problem 2d[2pts]: Which of the following are true about the Linux Completely Fair Scheduler (CFS) (choose all that apply):
A: ⬜ CFS attempts to equalize the response time for all jobs.
B: ☑ A higher nice value results in CFS allocating a lower share of CPU time.
C: ⬜ Finding a new thread to run in CFS takes O(log n) time.
D: ☑ The target latency in CFS is the period of time over which every job should get service.
E: ⬜ Minimum granularity in CFS is the total number of nice values that are allowed.

Problem 2e[2pts]: Which of the following are potential ways to avoid cache misses?
(choose all that apply):
A: ☑ Increase cache size.
B: ☑ Increase cache associativity.
C: ☑ Decrease cache associativity.
D: ☑ Prefetching.
E: ☑ Rearrange the layout of data structures.
Problem 2f[2pts]: Which of the following are true about deadlocks? (choose all that apply):
A: ☑ Acquiring resources in a predetermined order is one way to avoid deadlock.
B: ⬜ Starvation is a form of deadlock.
C: ☑ Rolling back threads is one way to address deadlock once it occurs.
D: ⬜ Deadlock means that no threads are able to make progress.
E: ⬜ Banker’s algorithm is one way to recover from deadlock once it occurs.
Problem 2g[2pts]: Which of the following statements about memory management are true? (choose all that apply):
A: ⬜ In general, MIN is the best page replacement policy that can be implemented.
B: ⬜ A cache of 512 bytes always outperforms a cache of 256 bytes.
C: ⬜ Adding more physical memory to a system will always reduce the page miss rate.
D: ☑ The Clock algorithm is an approximation of LRU.
E: ☑ A hardware implemented modified bit is optional.

Problem 2h[2pts]: Consider a computer system with the following parameters:

Variable | Measurement                                                      | Value
P_TLB    | Probability of TLB miss                                          | App Dependent
P_F      | Probability of a page fault when a TLB miss occurs on user pages (assume page faults do not occur on page tables) | App Dependent
P_L1     | Probability of a first-level cache miss for all accesses         | App Dependent
P_L2     | Probability of a second-level cache miss for all accesses        | App Dependent
T_TLB    | Time to access TLB (hit)                                         | 1 ns
T_L1     | Time to access L1 cache (hit)                                    |
T_L2     | Time to access L2 cache (hit)                                    |
T_M      | Time to access DRAM                                              |
T_D      | Time to transfer a page to/from disk                             | 10 ms = 10,000,000 ns

The TLB operates in parallel with the cache lookup (at level 1) when there is a TLB hit. Further, the TLB is refilled automatically by the hardware on a miss. The 2-level page tables are kept in physical memory and are cached like other accesses. Assume that the costs of the page replacement algorithm and updates to the page table are included in the T_D measurement. Also assume that no dirty pages are replaced and that pages mapped on a page fault are not cached.
If the full working set of the application fits into DRAM, what is the effective access time (the time for an application program to do one memory reference) on this computer?
A: ⬜ T_TLB + T_L1 + P_L1·(T_L2 + P_L2·T_M) + P_TLB·(2·T_M + P_F·T_D)
B: ⬜ T_L1 + P_L1·(T_L2 + P_L2·T_M) + P_TLB·T_TLB
C: ⬜ T_TLB + (1 − P_TLB·P_F)·[T_L1 + P_L1·(T_L2 + P_L2·T_M)] +
      P_TLB·[2·[T_L1 + P_L1·(T_L2 + P_L2·T_M)] + P_F·(T_L1 + T_L2 + T_M + T_D)]
D: ☑ T_L1 + P_L1·(T_L2 + P_L2·T_M) + P_TLB·[T_TLB + 2·[T_L1 + P_L1·(T_L2 + P_L2·T_M)]]
E: ⬜ T_TLB + T_L1 + T_L2 + T_M + T_D
Problem 2i[2pts]: Earliest Deadline First (EDF) scheduling has an important advantage over other scheduling schemes discussed in class because (choose all that apply):
A: ⬜ It can hand out more total processor cycles to a mix of real-time and non-realtime tasks than other scheduling schemes.
B: ☑ It can allocate up to 100% of processor cycles to real-time periodic tasks while still providing a guarantee of meeting real-time deadlines.
C: ⬜ It can operate non-preemptively and is thus simpler than many other scheduling schemes.
D: ⬜ It can provide the lowest average responsiveness to a set of tasks under all circumstances, even in the presence of long-running computations.
E: ⬜ None of the above

Problem 3: Short Answer [30pts]
Please write your answer in THREE SENTENCES OR LESS (Answers longer than this may not
get credit!).
Problem 3a[4pts]: Suppose there are n jobs J1, J2 ... Jn ready to run, with no other jobs in the system. Assume job Ji will take Ti time to finish. Furthermore, suppose T1 < T2 < ... < Tn. What order should the jobs be run to minimize average completion time? Provide a brief mathematical proof (Hint: start by writing down an expression for average completion time).
The jobs should be run with an FCFS scheduler in the order J1, J2 ... Jn to minimize average completion time.
Proof Sketch: Assume FCFS and that the final completion times of the jobs are C1, C2 ... Cn. If we schedule them in order J1, J2 ... Jn, then the average completion time is:
    (1/n)·Σ_{i=1..n} C_i = (1/n)·Σ_{i=1..n} Σ_{j=1..i} T_j = (1/n)·Σ_{j=1..n} (n − j + 1)·T_j
The middle term arises because, for instance, C1 = T1, C2 = C1 + T2 = T1 + T2, etc. Notice that this is the best order for FCFS, because any swapping of the order of the Tj in the final expression will increase the value; if we swap Tx and Ty (with x < y), the sum will change by:
    (n − x + 1)·(T_y − T_x) + (n − y + 1)·(T_x − T_y) = (y − x)·(T_y − T_x) > 0
Further, anything other than FCFS (like RR) will only make the individual Cj larger by putting pieces of later threads before the completion.
Problem 3b[3pts]: In a typical cache, the tag bits are pulled from the top (most significant bits) of the address, while index and offset bits are pulled from less significant bits. Why wouldn’t it make sense to take the cache index from the most significant bits? Explain.
Because the resulting cache would not work well for access patterns whose spatial locality extends beyond a cache line. For instance, with the index in the normal place, a set of accesses to an array that is bigger than a cache line can still fit into the cache (many different index values would be in use). In contrast, if the index were taken from the top of the address, the index bits would stay constant when accessing such an array, so every array access that changed cache lines would automatically conflict with the other accesses to the array.
Problem 3c[3pts]: You just bought an expensive gaming PC with a 64-bit virtual address space, 32 GiB of RAM, an NVidia 3090ti GPU, and a 16 TiB M.2 SSD. You decided to write a custom 64-bit OS for it. To reduce memory translation latency, you decided to implement a single-level page table based scheme, but will use a standard memory layout (stack grows down from top of address space, heap grows up). Assume each page is 4 KiB. Will this approach work or not? Explain.
No, this won’t work. To handle the sparse address space, we will need a full page table. The page table will need 2^64/2^12 = 2^52 entries, one for each page in the address space. Further, each PTE will need 52 bits of physical page number, so it will need to be at least 52/8 = 6.5 bytes. Rounding up to 8 bytes (for access control bits), we will need 2^52 × 8 = 2^55 bytes for the page table, which is 32 PiB (pebibytes!) of physical storage just for the page table of one process!
Problem 3d[3pts]: Before implementing a virtual memory system, you measure that accessing a word that is resident in physical memory takes about 10 ns (with the hardware cache and TLB taken into account). You measure the time it takes the OS to service an access to an on-disk page (including page fault overhead, fetching the page from disk, and re-executing the instruction) to be about 10 ms. What is the highest page fault rate we can tolerate to prevent the average memory access time from exceeding 20 ns? Explain how you calculated your answer.
Don’t forget to equalize units by converting ms to ns:
    AMAT = 10 ns + p_fault × 10 ms × (10^6 ns/ms) = (10 + p_fault × 10^7) ns ≤ 20 ns, which gives p_fault ≤ 10^-6
So, we want to keep our page fault rate under 0.0001%.
Problem 3e[2pts]: In PINTOS, how do you calculate effective priority?
    Priority_eff(T) = max( Priority_base(T), max{ Priority_eff(C) : C ∈ T.donors } )
Must mention that effective priorities in PINTOS are computed as the max of the thread’s base priority and the max of each donating thread’s effective priority.
Problem 3f[2pts]: For the PINTOS alarm clock, in timer_sleep() how is synchronization enforced on the list of waiting threads?
Synchronization is enforced because timer_sleep() operates in a disabled-interrupt context. No credit for any other answer.
Problem 3g[8pts]: For the following problem, assume a hypothetical machine with 4 pages of physical memory and 7 pages of virtual memory. Given the access pattern:
    A B C D E A A E C F F G A C G D C F
Indicate in the following table which pages are mapped to which physical pages for each of the following policies. Assume that a blank box matches the element to the left. We have given the FIFO policy as an example. You may break ties in any order you wish.

    Access→  A  B  C  D  E  A  A  E  C  F  F  G  A  C  G  D  C  F
    FIFO  1  A           E                          C
          2     B           A                                D
          3        C                    F
          4           D                       G
    MIN   1  A
          2     B        E              F     G                 F*
          3        C
          4           D
    LRU   1  A           E                    A                 F
          2     B           A                 G
          3        C
          4           D                 F                 D

    (* Under MIN, the final F may be placed in any one of the four frames.)

Problem 3h[5pts]: Here is a table of processes and their associated arrival and running times:

    Process ID   Arrival Time   CPU Running Time
    Process A         0                2
    Process B         1                5
    Process C         4                1
    Process D         6                4

Show the scheduling order for these processes under 3 policies: First Come First Serve (FCFS), Shortest-Remaining-Time-First (SRTF), and Round-Robin (RR) with timeslice quantum = 1. Assume that context switch overhead is 0, that new processes are available for scheduling as soon as they arrive, and that new processes are added to the head of the queue except for FCFS, where they are added to the tail:

    Time Slot   FCFS   SRTF   RR
        0        A      A     A
        1        A      A     B
        2        B      B     A
        3        B      B     B
        4        B      C     C
        5        B      B     B
        6        B      B     D
        7        C      B     B
        8        D      D     D
        9        D      D     B
       10        D      D     D
       11        D      D     D

Problem 4: Pintos Scheduling [12pts]
Marcus, inspired from reading through the Linux scheduler, decides to add in a simplified version of scheduling classes into Pintos.
He decided that each thread will have a priority ranging from 0-139. Furthermore, the scheduler will be divided into two scheduling classes: threads with priorities 0-99 will be scheduled FIFO within a priority, whereas threads with priorities 100-139 will utilize lottery scheduling. Marcus decides that threads scheduled with FIFO will only be run after there are no more threads to be run in the lottery class. Assume that a larger priority number indicates a higher priority. Below are struct and function definitions within the kernel which may be helpful for this problem:

    struct thread {
      int priority;
      struct list_elem elem;
    };

    static struct list ready_list_lottery;
    static struct list ready_list_fifo[100];

    // Calculate (and return) the number of tickets a thread with base priority
    // ‘priority’ should receive. Don’t worry about how this is implemented.
    int tickets_from_priority(int priority);

    /* List operations */
    #define list_entry(LIST_ELEM, STRUCT, MEMBER)
    void list_push_back (struct list *, struct list_elem *);
    struct list_elem *list_pop_front (struct list *);
    struct list_elem *list_remove (struct list_elem *);
    bool list_empty (struct list *);

Problems 4A-4L[1pt each]: Complete the implementation for the scheduler. Only one piece of code should be written per blank (i.e. no multiple statements with semicolons and no blank lines). Use proper C syntax with the given APIs. Pseudocode or comments will not receive any credit. You may only use methods and data structures given in this question as well as built-in ones. Finally, assume that calls to any given method will succeed and that ready_list_lottery and every sub-list of ready_list_fifo[] are initialized. The solution code for blanks 4A-4L is filled in below.

    // Place a thread on the ready structure appropriate for the
    // current active scheduling policy.
    void thread_enqueue(struct thread *t) {
      if (t->priority < 100) {                                    /* 4A */
        // Handle FIFO scheduling with priority
        list_push_back(&ready_list_fifo[t->priority], &t->elem);  /* 4B */
      } else {
        list_push_back(&ready_list_lottery, &t->elem);            /* 4C */
      }
    }

    // This is the primary scheduling function.
    // Choose (and return) the next thread to execute.
    // Make sure to remove thread from list; will be added back after run
    static struct thread *thread_schedule (void) {
      if (!list_empty(&ready_list_lottery)) {                          /* 4D */
        int total_tickets = 0;
        // compute total_tickets in use
        for (struct list_elem *e = list_begin(&ready_list_lottery);
             e != list_end(&ready_list_lottery);
             e = list_next(e)) {
          struct thread *t = list_entry(e, struct thread, elem);
          total_tickets += tickets_from_priority(t->priority);         /* 4E */
        }
        // Random number between 0 and total_tickets - 1, inclusive
        int chosen_ticket = random_ulong() % total_tickets;
        // Find winning thread (from list of threads)
        for (struct list_elem *e = list_begin(&ready_list_lottery);
             e != list_end(&ready_list_lottery);
             e = list_next(e)) {
          struct thread *t = list_entry(e, struct thread, elem);
          chosen_ticket -= tickets_from_priority(t->priority);         /* 4F */
          if (chosen_ticket < 0) {                                     /* 4G */
            list_remove(e);                                            /* 4H */
            return t;                                                  /* 4I */
          }
        }
      }
      for (int p = 99; p >= 0; p--) {
        if (!list_empty(&ready_list_fifo[p])) {                        /* 4J */
          return list_entry(list_pop_front(&ready_list_fifo[p]),
                            struct thread, elem);                      /* 4K */
        }
      }
      return idle_t;                                                   /* 4L */
    }