CS 162 Operating Systems and Systems Programming
Fall 2021 Midterm 2
INSTRUCTIONS
This is your exam. Complete it either at exam.cs61a.org or, if that doesn’t work, by emailing course staff with your solutions before the exam deadline.
This exam is intended for the student with email address
For questions with circular bubbles, you should select exactly one choice. You must choose either this option
Or this one, but not both!
For questions with square checkboxes, you may select multiple choices. You could select this choice.
You could select this one too!
You may start your exam now. Your exam is due at
(a) Full name
(b) Student ID number
(c) Autograder login (e.g. student123)
Please pick whether each statement is true or false and explain why. Explanations must be two sentences or less. You may not use semicolons to circumvent this limit. Any answer longer than two sentences will receive a 0.
(a) (3.0 points)
i. An unsafe state means that a deadlock is guaranteed to happen.
True False
ii. Explain.
An unsafe state means that there is an order of requests that leads to deadlock, but there can be orders of requests that don’t lead to deadlock.
(b) (3.0 points)
i. Processes may use the same virtual address but will actually access different physical addresses.
True False
ii. Explain.
Controlled overlap is the idea that the OS provides an abstraction where each process has the entire computer to itself, but in reality each process only has a portion of physical memory. Since each process has its own page table, the same virtual address can translate to different physical addresses.
(c) (3.0 points)
i. Compulsory cache misses can be fixed by increasing the cache size or associativity.
True False
ii. Explain.
Compulsory misses are unavoidable because they occur on the first access to a page. They can only be mitigated by prefetching.
(d) (3.0 points)
i. If the goal of a preemptive scheduler is to minimize the response time, it should use SJF.
True False
ii. Explain.
SJF is optimal for minimizing average response time among non-preemptive schedulers. SRTF should be used for preemptive schedulers.
(e) (3.0 points)
i. The main reason why performance degrades when a process thrashes is due to heavy context-switching.
True False
ii. Explain.
The main reason performance degrades when a process thrashes is frequent page faults, as the process’s working set does not fit in memory.
(f) (3.0 points)
i. In a one-level paging scheme, a single page table and a single TLB are shared by all processes.
True False
ii. Explain.
Each process has its own page table. The TLB, however, is still shared across all processes.
(g) (3.0 points)
i. SRTF scheduler can lead to starvation.
True False
ii. Explain.
SRTF can cause starvation if many small jobs keep arriving, preventing large jobs from ever running.
(h) (3.0 points)
i. When a system approaches a steady-state 100% utilization, its queue length will approach infinity.
True False
ii. Explain.
By Little’s Law, the queue length is L = λ × Tq, where Tq ∝ u/(1 − u). Therefore, as utilization u approaches 100%, the queue length approaches infinity.
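For a concrete illustration (numbers assumed here, not from the exam): with a 10 ms service time and λ = 90 requests/s, u = 0.9, so Tq = 10 ms × 0.9/(1 − 0.9) = 90 ms and the queue holds λ × Tq ≈ 8 requests; raising λ to 99 requests/s gives u = 0.99, Tq = 990 ms, and a queue of ≈ 98 requests, growing without bound as u → 1.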
i. When using demand paging, adding more memory to a system will always reduce the number of page faults.
True False
ii. Explain.
Bélády’s anomaly states that adding more memory can increase page faults for certain page replacement algorithms (e.g. FIFO).
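As a concrete illustration, the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 9 faults under FIFO with 3 frames but 10 faults with 4 frames. Below is a minimal, self-contained C simulation sketch (the reference string and frame counts are the standard textbook example, not values from this exam):

#include <stdio.h>

/* Count page faults under FIFO replacement with the given number of frames. */
static int fifo_faults(const int* refs, int n, int frames) {
    int mem[16];                          /* resident pages; assumes frames <= 16 */
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < frames) {
            mem[used++] = refs[i];        /* fill an empty frame first */
        } else {
            mem[next] = refs[i];          /* evict the oldest resident page */
            next = (next + 1) % frames;   /* the next-oldest now sits here */
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* prints 9 */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* prints 10 */
    return 0;
}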
For the following questions, select all that apply. If none should be selected, don’t select any. Selecting an incorrect choice or not selecting a correct choice will lead to point deductions, but the overall score for each question will be a minimum of 0.
(a) (3.0 pt) Which of the following are true about I/O?
Direct memory access (DMA) requires the processor to transfer the data between devices and memory.
A PCI bus (not PCI Express bus) can handle a single request at a time.
Polling is always more efficient than interrupts.
A blocking I/O operation leads to the process being suspended until the operation completes.
An asynchronous read operation is blocking.
i. DMA programs the device controller to transfer directly to memory via the bus.
ii. PCI Express can handle multiple requests at a time.
iii. Periodically polling for a sparse input is inefficient.
iv. This is the definition of blocking I/O.
v. An asynchronous read operation provides a pointer to the user’s buffer and returns immediately. Later, the kernel fills the buffer and notifies the user.
(b) (3.0 pt) Which of the following are true about I/O devices?
Port mapped I/O interacts with a device via special privileged in/out instructions.
A memory-mapped display controller allows the process to change the image on the screen by using memory store operations.
Seek time is the time it takes for the desired sector to move under the disk head.
Magnetic disks have much faster sequential access than random access.
Copy-on-write enables SSDs to reduce the write overhead.
i. Port mapped I/O uses in/out instructions to read/write to ports.
ii. With a memory-mapped display controller a process can write directly to the display buffer (also called frame buffer).
iii. This is the rotation time. Seek time is the time it takes to locate the correct track.
iv. Random access is much slower as we need to move the disk head around.
v. Copy-on-write avoids the overhead of the erasure operation, which is much more expensive than a write operation, by writing the updates to an empty page.
(c) (3.0 pt) Which of the following are true about deadlock?
It is possible to have a deadlock when there’s only one process in the system, and that process is single threaded.
A thread in a critical section cannot be preempted by the OS.
Banker’s algorithm ensures some deadlock-free execution order exists at all times.
A cycle in a resource allocation graph always implies deadlock.
Mutual exclusion, hold and wait, no preemption, and circular waiting guarantee that a deadlock will happen.
i. Acquiring a non-reentrant lock twice in a row before releasing will result in deadlock within the same thread.
ii. A thread in a critical section will be context-switched when its time quantum expires.
iii. Banker’s algorithm prevents deadlock.
iv. Reference lecture 12 slide 30 for an example.
v. All four conditions can be satisfied without a deadlock occurring when there are multiple instances of a resource type, so they are necessary but not sufficient.
(d) (3.0 pt) Which of the following are true about demand paging?
Belady’s anomaly states that with certain access patterns, LRU can have a worse hit rate than FIFO.
The working set of a process is the set of memory locations that it accesses throughout its lifetime.
Demand paging does not require a TLB.
On a page fault, the physical memory traps to the OS.
Under the working set model, cache hit rate grows linearly with respect to the cache size.
i. Belady’s anomaly is about the effect of increasing cache capacity on the hit rate.
ii. Working set is the subset of the address space the process is currently working through.
iii. TLB is just a cache for address translations.
iv. The memory management unit (MMU) traps to the OS.
v. The hit rate spikes every time a new working set fits in the cache, so it does not grow linearly.
(e) (3.0 pt) Which of the following are true about synchronization and deadlock?
A deadlock implies starvation but the converse is not true.
Most modern operating systems use Banker’s algorithm to avoid deadlocks.
A deadlock can occur with resources such as memory or sockets.
A thread can be blocked on more than one condition variable.
A preemptive CPU scheduler would eliminate the possibility of deadlock in the system.
i. Deadlock is a form of starvation with a stronger condition where threads fail to make progress due to cyclical waiting.
ii. In most modern operating systems, the number of processes is not fixed and the total amount of resources each process will need is unknown, making Banker’s algorithm impossible to implement.
iii. Deadlock is a circular waiting of any type of resource not limited to locks.
iv. A thread blocks when waiting on a condition variable.
v. A preemptive CPU scheduler would not release locks from threads that hold them since it only preempts the CPU.
(f) (3.0 pt) Which of the following are true about schedulers?
FCFS has throughput at least as good as RR.
Gang scheduling is a method to avoid deadlocks by scheduling all threads of a process at once.
Lottery scheduling will not result in starvation in expectation.
EDF schedules the task with the shortest completion time.
RR is the fairest scheduler with regards to wait time for CPU.
i. At the very least, RR incurs the same number of context switches as FCFS. In reality, RR will incur many more context switches, resulting in context switching overhead and suboptimal cache performance.
ii. Gang scheduling minimizes the overhead of spinlocks.
iii. Each job is given at least one ticket, so it should not starve in expectation.
iv. EDF schedules tasks with the earliest deadline per the name.
v. Fairness is well defined here, and each task waits the same amount of time for the CPU under RR.
Answer each question in three sentences or less unless otherwise specified. You may not use semicolons to circumvent this limit. Any answer longer than three sentences will receive a 0.
(a) (4.0 pt) Consider an SSD without a flash translation layer (FTL) having 4 KiB pages, 512 KiB erasure blocks, 3 ms erasure times, and 50 μs read and write times. Assuming no queueing and controller times, how long in ms does it take to write to an existing page? Explain.
(b) (4.0 pt) What is thrashing? How do you prevent it?
(c) (4.0 pt) In terms of speed or memory efficiency, what is one advantage and one disadvantage of using a single level page table compared to a multilevel page table?
(d) (4.0 pt) In a system with N > 1 jobs of equal length T that arrive at the same time, will FCFS always have a lower average response time than RR with quantum < T? Assume there is no I/O operation for any of the jobs. Explain.
(e) (4.0 pt) Does a direct-mapped cache always have a lower hit rate than a fully associative cache for the same access pattern? Explain.
(f) (4.0 pt) Explain priority inversion and how to correct a priority scheduler to avoid priority inversion.
15.8 ms. Every time a page is written to, the block containing that page needs to be read in, erased, and written back. This gives us: pages per erasure block × (read time per page + write time per page) + erasure time = (512 KiB ÷ 4 KiB/page) × (50 μs + 50 μs) + 3 ms = 128 × 100 μs + 3 ms = 15.8 ms.
Thrashing occurs when a cache is too small to hold the working set, resulting in constant page faults and cache misses. Thrashing can be prevented by increasing the cache size/associativity, optimizing memory accesses for better spatial/temporal locality, or using a better replacement algorithm.
Single level page tables require fewer memory lookups, resulting in faster translation times. However, a sparse virtual address space can result in wasted memory space when using single level paging due to internal fragmentation, since the entire table must be allocated up front.
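As a concrete illustration (parameters assumed for this sketch, not from the exam): with 32-bit virtual addresses, 4 KiB pages, and 4-byte entries, a single-level table always occupies 2^20 × 4 B = 4 MiB per process even if only a few pages are mapped, whereas a two-level design allocates second-level tables only for the regions actually in use.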
Yes. Since every job is longer than the quantum, RR keeps all N jobs unfinished until near the end, so every job completes within the last N time slices. Under FCFS, job i finishes at time i × T, giving a strictly lower average response time.
No. A bad replacement policy in a fully associative cache may result in a worse hit rate than a direct-mapped cache. Moreover, if there are no conflict misses, the direct-mapped cache works just as well as a fully associative one.
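As a concrete counterexample (cache size and access pattern chosen for illustration, not from the exam): with 2 cache blocks, the cyclic pattern 0, 1, 2, 0, 1, 2, ... gets zero hits under fully associative LRU, while a direct-mapped cache keeps block 1 resident and hits on it. A minimal C simulation sketch under those assumptions:

#include <stdio.h>

#define SLOTS 2

/* Fully associative cache with LRU replacement, SLOTS entries. */
static int fa_lru_hits(const int* refs, int n) {
    int tag[SLOTS] = {0}, last[SLOTS] = {0}, valid[SLOTS] = {0}, hits = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < SLOTS; i++)
            if (valid[i] && tag[i] == refs[t]) hit = i;
        if (hit >= 0) { hits++; last[hit] = t; continue; }
        int victim = 0;            /* pick an invalid slot, else the LRU one */
        for (int i = 0; i < SLOTS; i++) {
            if (!valid[i]) { victim = i; break; }
            if (last[i] < last[victim]) victim = i;
        }
        tag[victim] = refs[t]; last[victim] = t; valid[victim] = 1;
    }
    return hits;
}

/* Direct-mapped cache: block b can live only in slot b % SLOTS. */
static int dm_hits(const int* refs, int n) {
    int tag[SLOTS] = {0}, valid[SLOTS] = {0}, hits = 0;
    for (int t = 0; t < n; t++) {
        int s = refs[t] % SLOTS;
        if (valid[s] && tag[s] == refs[t]) { hits++; continue; }
        tag[s] = refs[t]; valid[s] = 1;
    }
    return hits;
}

int main(void) {
    int refs[] = {0, 1, 2, 0, 1, 2, 0, 1, 2};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("fully associative LRU: %d hits\n", fa_lru_hits(refs, n));  /* 0 */
    printf("direct-mapped:         %d hits\n", dm_hits(refs, n));      /* 2 */
    return 0;
}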
Priority inversion occurs when a high-priority thread blocks on a resource held by a low-priority thread, allowing medium-priority threads to run ahead of both. It can be corrected with priority donation, where the high-priority thread temporarily lends its priority to the low-priority thread holding the resource.
(g) (4.0 pt) What assumption does LRU make, and what kind of access pattern makes LRU perform poorly?
(h) (4.0 pt) Can a user program unfairly manipulate priority to take advantage of the MLFQ scheduler? Explain.
LRU assumes an entry that has not been used recently will not be used in the future. Access patterns that lack temporal locality will result in poor performance.
A program can add meaningless I/O (e.g. printf) to unfairly increase its priority. By “yielding” early, the user program will remain at high-priority queue levels.
Fill in the blank for each question with a word or phrase that is the most appropriate based on your design review. Each blank should consist of no more than 10 words. Any answers longer than 10 words will receive a 0.
(a) (3.0 pt) Any method that unblocks thread(s) needs to check if the unblocked thread(s) __________ for strict priority scheduler.
(b) (3.0 pt) To accomplish strict priority scheduling, all scheduling decisions should be made using the __________ of each thread.
(c) (3.0 pt) The data structure for user level locks and semaphores is similar to the __________ from Project 1.
(d) (3.0 pt) With multiple process-level synchronization primitives, deadlock should be avoided by __________.
(e) (3.0 pt) When the main thread calls pthread_exit, it must __________ threads.
(f) (3.0 pt) process_wait and pthread_join need to __________ before blocking on the semaphore.
have greater priority than the current thread
effective priority
file descriptor table
defining a global acquisition order
join on all unjoined
drop all locks
Inspired by the scheduling homework, Michael and Nathan decide to implement the clock algorithm. They’ve started to write the skeleton, but they need your help to finish the rest of it!
You are given the following data structures and methods.
struct clock {
    struct list entries;        /* Pintos list of blocks in this clock. */
    struct entry* curr_entry;   /* Current entry the clock hand is pointing to. */
};

struct entry {
    struct list_elem elem;
    int page;                   /* Page this entry holds, -1 if empty or invalid. */
    int use;                    /* Use (second-chance) bit for this entry. */
};

/* Evicts CLOCK->CURR_ENTRY. If entry is empty or invalid, does nothing. */
void evict_entry(struct clock* clock);

/* Brings in PAGE but does not update CLOCK->CURR_ENTRY. */
void admit_page(struct clock* clock, int page);

/* Advances CLOCK->CURR_ENTRY to next entry, wrapping around to the first entry if necessary. */
void advance_clockhand(struct clock* clock);

/* Runs an iteration of the clock algorithm.
   Return 1 on a cache hit and 0 on a miss. */
int clock_iteration(struct clock* clock, int target_page) {
    struct list_elem* e;
    for (__________[A]__________) {
        struct entry* entry = __________[B]__________;
        if (__________[C]__________) {
            __________[D]__________;
            __________[E]__________;
        }
    }
    while (1) {
        __________[F]__________;
        if (__________[G]__________) {
            __________[H]__________;
        } else {
            if (__________[I]__________) {
                __________[J]__________;
            }
            __________[K]__________;
            __________[L]__________;
            __________[L]__________;
            return 0;
        }
    }
    return -1;
}
The clock’s capacity must stay fixed (i.e. you are not allowed to change the number of entries).
You are only allowed to write one line of code per blank (fill in two lines for [L]). You may not use semicolons to write multiple lines (except for filling in the blank of a for loop). You are only allowed to use methods and data structures provided to you in this problem and the reference sheet.
(a) (2.0 pt) [A]
(b) (2.0 pt) [B]
(c) (3.0 pt) [C]
(d) (3.0 pt) [D]
(e) (2.0 pt) [E]
(f) (3.0 pt) [F]
(g) (3.0 pt) [G]
(h) (3.0 pt) [H]
(i) (3.0 pt) [I]
e = list_begin(&clock->entries); e != list_end(&clock->entries); e = list_next(e)
list_entry(e, struct entry, elem)
entry->page == target_page
entry->use = 1
return 1
advance_clockhand(clock)
clock->curr_entry->page != -1 && clock->curr_entry->use == 1
clock->curr_entry->use = 0
clock->curr_entry->page != -1
(j) (3.0 pt) [J]
(k) (3.0 pt) [K]
(l) (6.0 pt) [L]
evict_entry(clock)
admit_page(clock, target_page)
clock->curr_entry->page = target_page
clock->curr_entry->use = 1
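For reference, here is a sketch that assembles the answers above into runnable C, replacing the Pintos list with a fixed array and folding evict_entry/admit_page into a direct overwrite (the array layout and demo reference string are assumptions for illustration, not the exam’s Pintos interface):

#include <stdio.h>

#define NENTRIES 4

struct entry { int page; int use; };   /* page == -1 means empty/invalid */

static struct entry entries[NENTRIES]; /* the clock's fixed set of entries */
static int hand = 0;                   /* index of curr_entry */

/* One iteration of the clock algorithm: 1 on a cache hit, 0 on a miss. */
static int clock_iteration(int target_page) {
    /* [A]-[E]: scan all entries for a hit. */
    for (int i = 0; i < NENTRIES; i++) {
        if (entries[i].page == target_page) {
            entries[i].use = 1;
            return 1;
        }
    }
    /* [F]-[L]: advance the hand until a victim is found. */
    while (1) {
        hand = (hand + 1) % NENTRIES;                  /* advance_clockhand */
        if (entries[hand].page != -1 && entries[hand].use == 1) {
            entries[hand].use = 0;                     /* give a second chance */
        } else {
            /* evict_entry + admit_page collapse to an overwrite here. */
            entries[hand].page = target_page;
            entries[hand].use = 1;
            return 0;
        }
    }
}

int main(void) {
    for (int i = 0; i < NENTRIES; i++) entries[i].page = -1;
    int refs[] = {1, 2, 3, 4, 1, 5, 1, 2};
    for (int i = 0; i < (int)(sizeof(refs) / sizeof(refs[0])); i++)
        printf("page %d: %s\n", refs[i], clock_iteration(refs[i]) ? "hit" : "miss");
    return 0;
}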
Marcus is building a virtual memory scheme for his new operating system MarcOS. He designs a virtual memory paging scheme using single level page tables with 24-bit virtual addresses, 32-bit physical addresses, 64 KiB page size, and a page table entry of size 4 bytes. Assume each page table fits in a single page and memory addresses are byte-addressed.
(a) (6.0 points)
For the following questions, write a single decimal integer. Any answer not matching this format (e.g. using mathematical expressions) will receive a 0.
i. (2.0 pt) What is the least number of entries necessary per page table?
ii. (2.0 pt) How many levels of page tables would be required to map the entire virtual address space to the physical addresses?
The original intent was for the question not to give the assumption of a single-level paging scheme.
iii. (2.0 pt) How many bits of metadata can you fit in each page table entry?
256 entries. The page size is 64 KiB, meaning log2(64 KiB) = log2(2^6 · 2^10) = 16 bits of the virtual address are used for the offset. This leaves 24 − 16 = 8 bits for the virtual page number, meaning there are 2^8 = 256 virtual pages. Page tables are indexed by virtual page numbers, so the number of page table entries is the number of virtual pages.
1 level. From (i), there are 2^8 virtual pages. Since each page table entry is 4 bytes, the page table would be of size 2^10 bytes or 1 KiB, which fits in one page of size 64 KiB.
16 bits. From (i), we calculated there were 16 bits for the offset. The offset bits are the same in virtual and physical addresses, so this leaves 16 bits for the physical page number. Page table entries are composed of the physical page number and metadata bits. Since each page table entry is 4 bytes = 32 bits, this leaves 16 bits for the metadata bits.
Suppose the first six page table entries are as follows: 0x123442C3, 0x4320E8C4, 0xAD007DA2, 0x20200D6A, 0x7A511594, 0x6000CCE3. Assume all six are valid and the rest of the page table entries are invalid.
For the following questions, write the virtual address that maps to the given physical address as a hexadecimal integer with the 0x prefix (x lowercase) without any leading zeros (e.g. 0x1 not 0x01). Any hex letters must be written in capital letters (e.g. 0xC). If no such virtual address exists, write NONE in capital letters. Any answers not matching this format will receive a 0.
i. (2.0 pt) 0x20001234
ii. (2.0 pt) 0x20207A51
iii. (2.0 pt) 0x1234AD00
iv. (2.0 pt) 0x2020AD00
v. (2.0 pt) 0xAD002000
NONE. From (a.i), we calculated there were 16 bits for the offset, which is 4 hex digits. The hex digits left of the offset, 0x2000, form the physical page number, which does not appear in any of the valid page table entries.
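For reference, a C sketch of the translation this part exercises (the pte array holds the six entries given above; the lookup loop is an illustration, not code from the exam): the top 8 bits of the 24-bit virtual address index the page table, the top 16 bits of the PTE give the physical page number, and the 16 offset bits pass through unchanged.

#include <stdint.h>
#include <stdio.h>

/* MarcOS scheme: 24-bit VA = 8-bit VPN | 16-bit offset;
   32-bit PTE = 16-bit PPN | 16 bits of metadata. */
static const uint32_t pte[6] = {
    0x123442C3, 0x4320E8C4, 0xAD007DA2,
    0x20200D6A, 0x7A511594, 0x6000CCE3,
};

/* Translate VA to PA; valid only for vpn < 6 in this sketch. */
static uint32_t translate(uint32_t va) {
    uint32_t vpn    = (va >> 16) & 0xFF;     /* top 8 bits of the 24-bit VA */
    uint32_t offset = va & 0xFFFF;           /* low 16 bits pass through */
    return ((pte[vpn] >> 16) << 16) | offset; /* PPN is the PTE's top half */
}

int main(void) {
    printf("0x21234 -> 0x%X\n", (unsigned) translate(0x21234)); /* 0xAD001234 */

    /* Reverse lookup for (b)(i): which VA maps to PA 0x20001234? */
    uint32_t pa = 0x20001234;
    for (uint32_t vpn = 0; vpn < 6; vpn++)
        if ((pte[vpn] >> 16) == (pa >> 16))
            printf("VA 0x%X -> PA 0x%X\n",
                   (unsigned) ((vpn << 16) | (pa & 0xFFFF)), (unsigned) pa);
    /* No PTE has PPN 0x2000, so nothing prints: the answer is NONE. */
    return 0;
}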