CS 162 Operating Systems and Systems Programming Spring 2021 Midterm 2
INSTRUCTIONS
This is your exam. Complete it either at exam.cs61a.org or, if that doesn’t work, by emailing course staff with your solutions before the exam deadline.
This exam is intended for the student with email address
For questions with circular bubbles, you should select exactly one choice.
○ You must choose either this option
○ Or this one, but not both!
For questions with square checkboxes, you may select multiple choices.
☐ You could select this choice.
☐ You could select this one too!
You may start your exam now. Your exam is due at
Exam generated for
Do not communicate with anyone other than course staff regarding the exam questions or answers in any capacity.
When answering questions, make no assumptions except as stated in the problems. If there is anything that you believe is open to interpretation, please use the “Clarifications” button to request a clarification. We will issue an announcement if we believe your question merits one.
This exam has 4 parts of varying difficulty and length. Make sure you read through the exam completely before starting to work on the exam. Answering a question instead of leaving it blank will NOT lower your score. We will overlook minor syntax errors in grading coding questions.
Student ID
Please read the following honor code: “I understand that this is a closed book exam. I hereby promise that the answers that I give on the following exam are exclusively my own. I understand that I am allowed to use two 8.5×11, double-sided, handwritten cheat-sheets of my own making, but otherwise promise not to consult other people, physical resources (e.g. textbooks), or internet sources in constructing my answers.” Type your full name below to acknowledge that you’ve read and agreed to this statement.
Please EXPLAIN your answer in TWO SENTENCES OR LESS. Answers without any explanation GET NO CREDIT.
(a) (3.0 points) i.
In a page table-based addressing scheme, every virtual address in a process’ virtual address space always corresponds to a valid page table entry.
○ True
● False

Virtual addresses may not be mapped to a physical address (and hence have an invalid PTE) for multiple reasons. For example, the size of virtual memory may be greater than that of physical memory, which requires pages to be swapped out to disk to make room for others; this invalidates the PTEs of pages being evicted from memory. Another example: if a page corresponding to a requested virtual address hasn’t yet been accessed and fetched into memory, its PTE will be invalid.
(b) (3.0 points) i.
In a page table-based addressing scheme, the TLB is a cache for page table entries.
● True
○ False
The TLB stores a mapping of the virtual page numbers in a virtual address to the physical page number in the physical address. Therefore, the TLB acts as a cache for address translations.
(c) (3.0 points) i.
Page faults always cause a user program to terminate.
○ True
● False
Page faults normally just trigger a read from disk.
(d) (6.0 points) i.
In PintOS with priority donation, immediately after a thread releases a lock, the thread’s effective priority may be different from the thread’s base priority.
● True
○ False
There could be multiple locks and other threads with a priority higher than the base priority may have donated their priority. Example: Let’s say T1 is holding L1 and L2. T2 with higher priority waits on L1. T3 with even higher priority waits on L2. T1 releases L2.
In PintOS with priority donation, immediately after a thread releases a lock, the thread’s effective priority must be the same as the thread’s base priority.
ii.
○ True
● False
There could be multiple locks and other threads with a priority higher than the base priority may have donated their priority. Example: Let’s say T1 is holding L1 and L2. T2 with higher priority waits on L1. T3 with even higher priority waits on L2. T1 releases L2.
(e) (3.0 points) i.
Assume that Job A always bursts for 10 ticks at a time, and runs forever. Also assume that our system uses a MLFQS (Multi-Level Feedback Queue Scheduler), with the following structure:
(Round-Robin (Q=8), Round-Robin (Q=16), FCFS)
If Job A is initially placed on the top queue when it enters the system, it will eventually be placed on the middle queue; after this point, when the job is not running, it will always be placed on the middle queue.
○ True
● False
Job A bursts for 10 ticks, which means it runs for 8 ticks in the top queue before expiring and being moved to the middle queue. In the middle queue, it runs for another 2 ticks then finishes the burst and yields, thereby moving back up to the top queue. The job will continually move between the top two queues.
(f) (3.0 points) i.
Assume that, in our system, there are 10 jobs that will run for 100 ticks total (and never voluntarily yield the CPU or block). If our system uses a round robin scheduler, increasing the quantum from 10 ticks to 20 ticks will always result in higher throughput.
● True
○ False
This is true because the system will preempt and context switch between threads about half as many times. Since each switch takes some non-zero amount of time, the total time to run the 10 jobs will be less with a higher quantum, implying a higher throughput.
2. (18.0 points) Multiple Choice
In the following multiple choice questions, please select all options that apply.
(a) (3.0 pt)
Which of the following scheduling algorithms may cause starvation?
☑ First Come First Serve (FCFS)
☐ Round-Robin (RR)
☑ Shortest Remaining Time First (SRTF)
☐ Completely Fair Scheduler (CFS)
☐ None of the above
Starvation may occur in FCFS when a thread runs indefinitely, and in SRTF when the system has many bursty threads, so long-running threads never get to run. In RR and CFS, preemption occurs, and all threads will run.
(b) (3.0 pt)
Which of the following descriptions of Banker’s Algorithm are correct?
☐ Banker’s Algorithm is used in deadlock recovery.
☐ Banker’s Algorithm can be used to determine whether or not a system is guaranteed to deadlock.
☑ Using Banker’s Algorithm results in an execution that is guaranteed to avoid deadlock.
☑ Banker’s Algorithm relies on having known upper limits for resource requests per thread.
☐ None of the above.
Banker’s Algorithm is used in deadlock avoidance and determines whether or not a system is in an unsafe state. Being in an unsafe state does not mean a system is guaranteed to deadlock; however, Banker’s Algorithm does give us a sequence of safe serial executions that, while inefficient, avoids deadlock.
(c) (3.0 pt)
Which of the following scheduling algorithms are (1) deterministic and (2) will eventually preempt any infinitely looping thread in favor of a thread with finite run-time?
☐ First Come First Serve (FCFS)
☑ Round-Robin (RR)
☑ Multi-Level Feedback Queue Scheduling (MLFQS)
☐ Lottery Scheduling
☐ Strict Priority Scheduling
☑ Shortest Remaining-Time First Scheduling (SRTF)
☐ None of the above
Lottery Scheduling is not deterministic. FCFS will run an infinitely looping thread indefinitely. Strict Priority Scheduling will run a high-priority infinitely looping thread indefinitely.
(d) (3.0 pt)
Which of the following could be components of a page table entry (PTE)?
☑ Physical page number
☐ Virtual page number
☑ Valid bit
☐ None of the above
A PTE in a system of multi-level page tables may contain the PPN of the physical address or of the next page table. Metadata bits are also part of PTEs. The VPN is used to index into page tables, and the offset is used to index into a page; both are part of virtual addresses, not PTEs.
(e) (3.0 pt)
Assume that the goal for our page replacement policy is to minimize total page replacements. Which of the following statements are accurate?
☑ FIFO can have the same performance as MIN.
☐ Clock Algorithm always has the same performance as Second-Chance List Algorithm.
☐ LRU performs at least as well as any Nth-Chance Algorithm.
☐ An optimal page replacement policy prevents thrashing.
☐ None of the above.
FIFO can have the same performance as MIN – for example, in the trivial case where the working set is less than the size of physical memory. The Second-Chance List Algorithm uses half the pages of physical memory for a second-chance list, while the [Second-Chance] Clock Algorithm uses a separate use bit to access all pages of physical memory at full speed. The Nth-Chance Algorithm is an approximation of LRU, so there is no guarantee of better or worse. (For example, consider LRU vs. Clock for the memory access pattern ACABC in a system with 2 pages of physical memory.) Thrashing is dependent on memory size, not replacement policy.
(f) (3.0 pt)
Assume we make a single memory access on a processor which has no caches and no TLB. All else being equal, which of the following are true?
☑ Base-and-bound will on average be faster than a single-level page table.
☐ An inverted page table will on average be faster than a single-level page table.
☑ A single-level page table will on average be faster than a multi-level page table.
☐ A multi-level page table will on average be faster than an inverted page table.
☐ None of the above.
To translate the virtual address to a physical address, base-and-bound would require 0 memory accesses (only arithmetic operations), a single-level page table and an inverted page table would both require 1 memory access (to access the page table), and a multi-level page table would require multiple memory
accesses, depending on the number of page tables to traverse.
3. (38.0 points) Short Answer
(a) (11.0 points) Virtual Memory Bits and Pieces
Suppose we have virtual memory mapped with a two-level page table scheme, where each page table fits exactly inside a page, and each page table entry is exactly 4 (2^2) bytes in size. We also have a physical memory size of 16 MiB (2^24 B), and a page size of 4 KiB (2^12 B). Additionally, we have a fully-associative tagged TLB, where each entry includes 1 byte of metadata containing information including a valid bit, dirty bit, and a process ID.
i. (1.0 pt)
How many bits are in the offset of an address in this scheme?
log2(4 KiB) = 12 bits.
ii. (1.0 pt)
How many bits are in the first-level page table index in this scheme?
log2(4 KiB / 4 B) = 10 bits.
iii. (1.0 pt)
How many bits are in the second-level page table index in this scheme?
log2(4 KiB / 4 B) = 10 bits.
iv. (1.0 pt)
How many bits are in the physical page number in this scheme?
log2(16 MiB / 4 KiB) = 12 bits.
v. (2.0 pt)
How large of a virtual address space does this scheme address, in bytes? You may leave your answer as a power of 2.
Two-level page table: 2^10 * 2^10 * 4 KiB = 2^32 bytes = 4 GiB.
vi. (2.0 pt)
If our TLB had 8 entries, how big is it in bytes?
TLB entry size = VPN size (2 × 10 = 20 bits) + PPN size (12 bits) + 1 byte of metadata (8 bits) = 40 bits per entry. 40 bits × 8 entries / 8 bits per byte = 40 bytes.
vii. (2.0 pt)
Assume we start with an empty TLB and make the following memory accesses:
• Process 1 accesses 0x10000000
• Process 1 accesses 0x10001000
• Process 2 accesses 0x10000000
How many entries of our TLB will now be filled?
3 entries. We track the process ID in our TLB metadata, so multiple processes’ memory mappings can live in our TLB at once; Process 1’s two accesses hit two distinct virtual pages, and Process 2’s access carries a different PID, so all three occupy distinct entries.
viii. (2.0 pt)
How many different pages will now be in memory, assuming there were 0 valid pages in physical memory before these accesses?
A clarification was made during the exam to also consider the pages used by the page tables themselves, but we decided to accept both answers: those that count page tables and those that don’t. If we do NOT count page tables: since Process 2’s 0x10000000 maps to a different physical page than Process 1’s 0x10000000, we will have 3 valid physical pages. If we DO count page tables, there will be 7 pages in total:
• Process 1: because the first level index is the same for both accesses, we have 1 first level PT + 1 second level PT + 2 data pages
• Process 2: separate processes have separate page tables, so we have another 1 first level PT + 1 second level PT + 1 data page
(b) (9.0 points) Two students disagree about the best page replacement policies for demand paging – each is trying to throw a wrench into the other’s arguments! In the following problems, assume that the system has 3 pages of physical memory (1-3), and the program they run uses 4 pages of virtual memory (A-D). Provide lists as a comma-separated list of virtual memory accesses (e.g. A,B,C,D,A).

i. (3.0 pt)
One student is convinced that the best page replacement policy for demand paging is FIFO (First In First Out). Demonstrate the problems with FIFO by providing a list of 6 additional accesses (after the first two accesses that are given) that would cause every access to page fault.
Page \ Access: A, B (given)

Possible solutions: C,D,A,B,C,D or D,C,A,B,D,C

ii. (3.0 pt)
The other student is convinced that the best page replacement policy for demand paging is MRU (Most Recently Used). Demonstrate the problems with MRU by providing a list of 6 additional accesses (after the first two accesses that are given) that would cause every access to page fault.
Page \ Access: A, B (given)

Possible solutions: C,D,C,D,C,D or D,C,D,C,D,C
iii. (3.0 pt)
After putting their disagreements aside, they come up with an optimal page replacement policy and use it with a memory access pattern that contains 3 A’s, 2 B’s, 2 C’s, and 1 D (not necessarily in that order; you should not assume it begins with A and B). What is the maximum total number of page faults that could occur when using that optimal page replacement policy on such a memory access pattern?
5 page faults. The optimal page replacement policy is MIN (and one could construct an arbitrary heuristic that has the same behavior as MIN for any workload under the above constraints). With 4 pages of virtual memory, the minimum number of page faults with MIN is 4, once per page. To find the maximum number of page faults, we should space the memory accesses out as far as possible, therefore we have to maximize the sum of differences between accesses of the same virtual page. With the above constraints, all memory patterns have either 4 or 5 page faults. One example of 5 is shown below.
Page \ Access:  A  B  C  D  A  B  C  A
Page 1:         A
Page 2:         B
Page 3:         C → D → C
(Faults on A, B, C, D, and the second C: 5 in total.)
(c) (4.0 points)
Rosalina’s Round-Robin (RR) scheduler is a special implementation of RR that considers thread priorities. More specifically, the quantum of a thread (in ticks) is equal to its priority. Say there is an I/O-bound thread, Thread A, with a priority of 64 that always bursts for 1 tick in between I/O. Another CPU-bound thread, Thread B, has a priority of 2 and always bursts for 100 ticks in between I/O. Assume each I/O operation takes 1 tick.
i. (2.0 pt)
Assuming the two threads only yield for I/O, for what fraction of time will Thread A run on the CPU on average?

1/3. In steady state, A runs for 1 tick, then does 1 tick of I/O while B runs; B is preempted after its 2-tick quantum, at which point A (ready again) runs. Each cycle is 3 ticks, of which A uses 1.
ii. (2.0 pt)
Why doesn’t the higher priority thread (Thread A), with a higher quanta, run for a majority of the time?
Thread A never uses up its quantum (it yields for I/O after 1 tick), which means that it cannot take advantage of its larger quantum to gain more time on the CPU.
(d) (14.0 points) Mario’s Moves
Mario wrote a program that synchronizes three of his special moves as threads.
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t semaphore;
pthread_mutex_t lock;

void *threadA(void *arg) {
    sem_wait(&semaphore);        // line 1
    printf("threadA!");
    sem_post(&semaphore);
    return NULL;
}

void *threadB(void *arg) {
    sem_wait(&semaphore);        // line 1
    pthread_mutex_lock(&lock);   // line 2
    printf("threadB!");
    pthread_mutex_unlock(&lock);
    sem_post(&semaphore);
    return NULL;
}

void *threadC(void *arg) {
    sem_wait(&semaphore);        // line 1
    pthread_mutex_lock(&lock);   // line 2
    sem_wait(&semaphore);        // line 3
    printf("threadC!");
    sem_post(&semaphore);
    pthread_mutex_unlock(&lock);
    sem_post(&semaphore);
    return NULL;
}

int main() {
    sem_init(&semaphore, 0, /*initial value*/ 2);
    pthread_mutex_init(&lock, NULL);
    pthread_t threads[3];
    pthread_create(&threads[0], NULL, threadA, NULL);
    pthread_create(&threads[1], NULL, threadB, NULL);
    pthread_create(&threads[2], NULL, threadC, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
i.
What order of execution will cause this program to print nothing? Provide your answer as a list of entries of the form thread:N, specifying the order of execution of the lines of code of each thread.
NOTE: include any lines/expressions that start executing but may not finish executing.
ii. (2.0 pt)
What is the name of the condition that causes the program to print nothing?

Deadlock.
iii. (4.0 points)
Mario realizes his mistake, and decides to implement a new function to replace pthread_mutex_lock to prevent this from happening. Complete the function below to ensure that all three print statements are executed regardless of how the threads are scheduled (assume that every pthread_mutex_lock(&lock) line is replaced with pthread_mutex_lock_safe()). You may only use one statement/expression in each of the provided blanks.
void pthread_mutex_lock_safe() {
    ___[A]___;
    pthread_mutex_lock(&lock);
    ___[B]___;
}
Possible solutions to part i (brackets indicate any order is acceptable):
threadC:1 threadC:2 threadB:1 {threadA:1 threadB:2 threadC:3}
threadB:1 threadC:1 threadC:2 {threadA:1 threadB:2 threadC:3}
{threadB:1 threadC:1} threadA:1 threadC:2 {threadB:2 threadC:3}
threadC:1 threadB:1 threadC:2 {threadA:1 threadB:2 threadC:3}

Solution to part iii:
[A]: sem_post(&semaphore);
[B]: sem_wait(&semaphore);
iv.
Explain how your solution above changes the ordering of resource allocation to prevent the condition you noted in 4.2.
This new implementation ensures that every thread acquires the lock before “acquiring” (downing) the resource represented by the semaphore. Since resources are requested in a consistent order, there cannot be deadlock.
In order to reduce the latency of L1 cache hits, many modern CPU architectures employ a type of cache known as Virtually-Indexed, Physically-Tagged (VIPT) caches. In this system, rather than having to perform address translation to be able to look up an entry in the cache, the CPU instead selects the cache set based on the virtual address in parallel with address translation, only afterwards storing or comparing bits from the now-translated physical address for the tag. You have been contracted to help implement such a design for the L1 data cache of an application-specific operating system.
After much deliberation, you and your team of Koopalings have decided on a 256 byte, 2-way set associative cache with LRU replacement and 16-byte cache lines.