
Spring 2019
University of California, Berkeley
College of Engineering, Computer Science Division (EECS)
Midterm II SOLUTIONS
John Kubiatowicz


April 4th, 2019
CS162: Operating Systems and Systems Programming
Your Name: Your SID:
Discussion Section Time:
General Information:
This is a closed book exam. You are allowed 2 pages of notes (both sides). You have 2 hours to complete as much of the exam as possible. Make sure to read all of the questions first, as some of the questions are substantially more time consuming. Write all of your answers directly on this paper. Make your answers as concise as possible. On programming questions, we will be looking for performance as well as correctness, so think through your answers carefully. If there is something about the questions that you believe is open to interpretation, please ask us about it!
Problem   Possible   Score
1         18
2         20
3         22
4         18
5         22
Total     100


Problem 1: True/False [18 pts]
Please EXPLAIN your answer in TWO SENTENCES OR LESS (Answers longer than this may not
get credit!). Also, answers without an explanation GET NO CREDIT.
Problem 1a[2pts]: A direct mapped cache can sometimes have a higher hit rate than a fully associative cache with an LRU replacement policy (on the same reference pattern).
☒ True ⬜ False
Consider a cache of N cache lines and an access pattern of N+1 sequential entries whose addresses stride across cache lines (i.e. with 32-byte cache lines: addresses 0, 32, 64, …, 32N, 0, 32, 64, …, 32N, …). The LRU cache would miss on every access (since the N+1st entry is never in the cache); the direct-mapped cache would miss on only 2 out of every N+1 accesses, with only 0 and 32N conflicting.
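As a concrete illustration, here is a minimal C sketch of that reference pattern; the line size, N, and the round count are assumed values chosen only for illustration:

    #include <stddef.h>

    #define LINE_SIZE 32              /* assumed 32-byte cache lines          */
    #define N         1024            /* assumed number of lines in the cache */

    static char data[(N + 1) * LINE_SIZE];

    /* Touch N+1 distinct cache lines, cyclically.  With a fully associative
     * LRU cache of N lines, every access after warm-up misses: the line just
     * evicted is always the next one the pattern will touch.  With a
     * direct-mapped cache of N lines, only addresses 0 and 32N share an
     * index, so each pass through the pattern misses just twice. */
    void touch_pattern(int rounds) {
        for (int r = 0; r < rounds; r++)
            for (size_t i = 0; i <= N; i++)
                (void) *(volatile char *) &data[i * LINE_SIZE];
    }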
Problem 1b[2pts]: If the Banker’s algorithm finds that it’s safe to allocate a resource to an existing thread, then all threads will eventually complete.
⬜ True  False
Threads fail to complete for many reasons other than cycles in the resource graph. For instance, a thread could go into an infinite loop independent of the Banker’s algorithm.
Problem 1c[2pts]: Even though most processors use a physical address to address the cache, it is possible to overlap the TLB lookup and cache access in hardware.
☒ True ⬜ False
Not all bits in a virtual address are translated during mapping to a physical address. Particularly, the page offset stays the same. If the cache index fits entirely in the page offset, then the SRAM lookup of the cache can happen in parallel with the TLB lookup, with the tag check happening after the TLB access finishes. [For instance, with 4K pages and a 16-way, 64K cache, the cache index would come entirely from the offset. For larger caches, those bits of the index that fit inside the offset could start the cache lookup, with the remainder happening after the TLB lookup completes.]
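The bracketed example can be checked with a quick calculation (the line size does not matter here, because each way of the cache is exactly one page in size):

\[
\frac{64\ \text{KiB cache}}{16\ \text{ways}} = 4\ \text{KiB per way} = \text{page size}
\;\Rightarrow\;
\log_2(4096) = 12 \text{ bits of set index + line offset} = \text{page-offset bits.}
\]

Every bit needed to select a cache set therefore comes from the untranslated page offset.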
Problem 1d[2pts]: The lottery scheduler prevents CPU starvation by assigning at least one ticket to each scheduled thread.
☒ True ⬜ False
By assigning at least one ticket to each scheduled thread, the kernel can ensure that the lottery scheduler will always give at least a little CPU to every thread (probabilistically).
Problem 1e[2pts]: You can always reduce the number of page faults by increasing the amount of memory available to a process.
⬜ True ☒ False
This result depends on the replacement policy. A FIFO page replacement policy exhibits Belady's anomaly, in which certain access patterns experience increased page faults with increased memory (e.g., the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 9 faults with 3 frames but 10 faults with 4 frames under FIFO).

CS 162 Spring 2019 Midterm II April 4th, 2019
Problem 1f[2pts]: The Shortest Remaining Time First (SRTF) algorithm is the best preemptive scheduling algorithm that can be implemented in an Operating System.
⬜ True  False
Since SRTF relies on knowledge of the future, it cannot be implemented. It is merely a strawman against which to compare practical scheduling algorithms.
Problem 1g[2pts]: A patch for the Meltdown security flaw can increase the cost of execution in Linux significantly (by over 800% in some reports) on some process/kernel combinations.
☒ True ⬜ False
The primary software patch for Meltdown involves separate page tables for a user and kernel (i.e. no more inaccessible kernel mappings in the user's page table). On processors which do not have process-ID tags in their TLB (or on OSes that don't use these bits), this means that every system call must flush the TLB twice – once on entrance to the kernel and once on exit. [The result is a huge performance hit – reported to be 800% in some cases.]
Problem 1h[2pts]: Anything that can be done with a monitor can also be done with semaphores.
☒ True ⬜ False
Since it is possible to create both locks and condition variables out of semaphores, one can simply build monitors out of semaphores, then use these monitors to do whatever we want, thus demonstrating the equivalence.
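A simplified sketch of that construction using POSIX semaphores (Mesa-style semantics). The names are illustrative only, and a fully robust condition variable built this way requires more care than shown here (see Birrell's classic notes on the subject):

    #include <semaphore.h>

    /* A lock is just a binary semaphore. */
    typedef struct { sem_t sem; } lock_t;
    void lock_init(lock_t *l)    { sem_init(&l->sem, 0, 1); }
    void lock_acquire(lock_t *l) { sem_wait(&l->sem); }
    void lock_release(lock_t *l) { sem_post(&l->sem); }

    /* A condition variable: a semaphore that queues sleepers, plus a count
     * of waiters that is only touched while holding the monitor lock. */
    typedef struct { sem_t queue; int waiters; } cond_t;
    void cond_init(cond_t *c) { sem_init(&c->queue, 0, 0); c->waiters = 0; }

    void cond_wait(cond_t *c, lock_t *l) {
        c->waiters++;            /* registered while still holding the lock   */
        lock_release(l);
        sem_wait(&c->queue);     /* a post issued here is remembered, so the
                                    wakeup is not lost                        */
        lock_acquire(l);         /* Mesa semantics: caller re-checks its condition */
    }

    void cond_signal(cond_t *c, lock_t *l) {
        (void) l;                /* caller is assumed to hold the lock        */
        if (c->waiters > 0) {
            c->waiters--;
            sem_post(&c->queue);
        }
    }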
Problem 1i[2pts]: Page Tables have an important advantage over simple Segmentation Tables (i.e. using base and bound) for virtual address translation in that they eliminate external fragmentation in the physical memory allocator.
☒ True ⬜ False
Since page tables operate on pages, which are of fixed size, the physical memory allocator can use any physical page for any part of a virtual address space; thus, there is no externally wasted physical memory. In a simple segmentation scheme, however, the chunks of physical memory vary widely in size (based on the size of the segments); thus, the physical memory allocator is forced to find variable sized chunks – leading to external fragmentation.

Problem 2: Multiple Choice [20pts]
Problem 2a[2pts]: When a process asks the kernel for resources that cannot be granted without causing deadlock (as determined by the Banker's algorithm), the kernel must (choose one):
A: ⬜ Preempt the requested resources from another process and give them to the requesting process, thereby avoiding cycles in the resource dependency graph.
B: ☒ Put the requesting process to sleep until the resources become available.
C: ⬜ Send a SIGKILL to the requesting process to prevent deadlock from occurring.
D: ⬜ This question does not make sense because the Banker's algorithm is run only when a new process begins execution.
Problem 2b[2pts]: Suppose that two chess programs are running against one another on the same CPU with equal "nice" values. Also assume that the programs are given real-time limits on the total time they spend making decisions (similar to a real chess tournament). Why might one of the programs want to perform a lot of superfluous I/O operations? (choose one):
A: ⬜ To hide the fact that the program was cheating by receiving information from a real chess master over the network.
B: ⬜ The random completion time for these I/O operations will serve as an important source of randomness, thereby improving the chess playing heuristics.
C: ☒ This will split up the long-running heuristic computation into many short bursts, thereby causing the chess program to be classified as "interactive" by the scheduler and thus receiving higher priority than the competing program.
D: ⬜ This question does not make sense, since the extra I/O operations will only slow down the chess program.
Problem 2c[2pts]: A processor which provides "Precise Exceptions" is one in which (choose one):
A: ⬜ There is never any ambiguity as to which type of exception occurred in the user's program.
B: ☒ At the time that an exception handler begins execution, there is a well-defined instruction address in the instruction stream for which all prior instructions have committed their results and no following instructions (including the excepting one) have modified processor state.
C: ⬜ Important exceptions are synchronous (always occurring at the same place in the instruction stream) as opposed to asynchronous. This helps during restart after the exception is handled.
D: ⬜ The Operating System can disable out-of-order execution during critical sections of the user's program, thereby preventing instructions following an exception from being partially executed.

Problem 2d[2pts]: The Meltdown security flaw (made public in 2018) was able to gain access to protected information in the kernel because (choose one):
A: ⬜ Data-specific variability in the processing of system-calls (related to how Intel processors handled synchronous exceptions) by one process (the victim) allowed another process (the attacker) to successfully guess values stored in the kernel stack of the victim process.
B: ⬜ A POSIX-compatible system call implemented by multiple operating systems neglected to properly check arguments and could thus be fooled into returning the contents of protected memory to users.
C: ⬜ There was a wide-spread bug in the x86 architecture that caused certain loads to ignore kernel/user distinctions in the page table under the right circumstances. This allowed user-code to directly load and use data that was supposed to be protected in kernel space.
D: ☒ Many processors allowed timing windows in which illegal accesses could be performed speculatively and made to impact cache state – even though the speculatively loaded data was later squashed in the pipeline and could not be directly used.
Problem 2e[2pts]: Earliest Deadline First (EDF) scheduling has an important advantage over other scheduling schemes discussed in class because (choose one):
A: ⬜ It can hand out more total processor cycles to an asynchronously arriving mix of real-time and non-realtime tasks than other scheduling schemes.
B: ☒ It can allocate up to 100% of processor cycles to real-time periodic tasks while still providing a guarantee of meeting real-time deadlines.
C: ⬜ It can operate non-preemptively and is thus simpler than many other scheduling schemes.
D: ⬜ It can provide the lowest average responsiveness to a set of tasks under all circumstances, even in the presence of long-running computations.
Problem 2f[2pts]: What is the Clock Algorithm and where is it used? (choose one)
A: ⬜ The Clock Algorithm provides fair queueing of CPU cycles within the Linux CFS scheduler; it uses virtual time to give each thread its fair fraction of cycles and is considered a simpler alternative to the Linux O(1) scheduler.
B: ⬜ The Clock Algorithm helps to synchronize time over the network and is capable of synchronizing to better than 10ms over the scale of a metropolitan area. It is used to help timestamp all activities within the operating system.
C: ☒ The Clock Algorithm is used to select replacement pages in the virtual memory subsystem. It places physical pages in a giant ring and scans through them for idle pages by manipulating the hardware "use" bit. Pages in the ring are marked as "valid" and accessible by running programs until they are chosen for replacement.
D: ⬜ The Clock Algorithm is used to select replacement pages in the virtual memory subsystem. It places pages into two groups – an active group that is mapped as "valid" and managed in a FIFO list and an inactive group that is mapped as "invalid" and managed as an LRU list.

Problem 2g[2pts]: Consider a computer system with the following parameters:

Variable   Measurement                                                           Value
P_TLB      Probability of a TLB miss                                             0.1
P_F        Probability of a page fault when a TLB miss occurs on user pages
           (assume page faults do not occur on page tables)
P_L1       Probability of a first-level cache miss for all accesses
P_L2       Probability of a second-level cache miss for all accesses
T_TLB      Time to access TLB (hit)
T_L1       Time to access L1 cache (hit)
T_L2       Time to access L2 cache (hit)
T_M        Time to access DRAM
T_D        Time to transfer a page to/from disk                                  10 ms = 10,000,000 ns

The TLB is refilled automatically by the hardware on a miss. The 2-level page tables are kept in physical memory and are cached like other accesses. Assume that the costs of the page replacement algorithm and updates to the page table are included in the TD measurement. Also assume that no dirty pages are replaced and that pages mapped on a page fault are not cached.
What is the effective access time (the time for an application program to do one memory reference) on this computer? Assume physical memory is 100% utilized and ignore any software overheads in the kernel.
A: ⬜  T_TLB + T_L1 + P_L1×(T_L2 + P_L2×T_M) + P_TLB×P_F×T_D
B: ⬜  T_TLB + T_L1 + P_L1×(T_L2 + P_L2×T_M) + P_TLB×[2×(T_L1 + P_L1×(T_L2 + P_L2×T_M)) + P_F×T_D]
C: ⬜  T_TLB + T_L1 + P_L1×(T_L2 + P_L2×T_M) + P_TLB×(2×T_M + P_F×T_D)
D: ☒  T_TLB + (1 − P_TLB×P_F)×(T_L1 + P_L1×(T_L2 + P_L2×T_M)) + P_TLB×[2×(T_L1 + P_L1×(T_L2 + P_L2×T_M)) + P_F×(T_L1 + T_L2 + T_M + T_D)]
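For concreteness, the following C sketch evaluates the selected expression. Every numeric value below is a made-up placeholder, so only the structure of the computation is meaningful:

    #include <stdio.h>

    int main(void) {
        /* Placeholder parameters (probabilities unitless, times in ns). */
        double P_TLB = 0.1, P_F = 0.00001, P_L1 = 0.05, P_L2 = 0.02;
        double T_TLB = 1, T_L1 = 1, T_L2 = 10, T_M = 100, T_D = 10000000;

        /* Average time for one reference that goes through the cache hierarchy. */
        double mem = T_L1 + P_L1 * (T_L2 + P_L2 * T_M);

        /* TLB hit: just the data reference.  TLB miss: two page-table references
         * (cached like any other access) plus, with probability P_F, a page fault
         * followed by an uncached re-access of the faulting data. */
        double eff = T_TLB
                   + (1 - P_TLB * P_F) * mem
                   + P_TLB * (2 * mem + P_F * (T_L1 + T_L2 + T_M + T_D));

        printf("Effective access time: %.3f ns\n", eff);
        return 0;
    }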
A:  B:  C: D: E:
Setting breakpoints in GDB to walk through specific function calls. Setting memory watchpoints in GDB to watch for variable modifications. Checking function pre- and post-conditions with ASSERT statements. Checking execution progress with PANIC statements.
Checking variables with print statements.
None of the above.
𝑃􏰂􏰃􏰄 􏰇􏰎2􏰏𝑇􏰃􏰆 􏰅𝑃􏰃􏰆 􏰇􏰈𝑇􏰃􏰉 􏰅𝑃􏰃􏰉 􏰇𝑇􏰊􏰋􏰐􏰅𝑃􏰌 􏰇􏰈𝑇􏰃􏰆 􏰅𝑇􏰃􏰉 􏰅𝑇􏰊 􏰅𝑇􏰍􏰋􏰑
Problem 2h[3pts]: Debugging in Pintos can often be very difficult due to the complexity of the code base and often just staring at the code is an ineffective strategy, Mark all of the following that are good general strategies to debug Pintos:

Problem 2i[3pts]: Which of the following schedulers gives priority to interactive processes (those getting input from users) over long-running ones and can be used in a workstation without requiring annotations from the programmer or other supplemental mechanisms (mark all that apply):
A: ☒ Linux CFS
B: ⬜
C: ⬜
D: ⬜ Round-Robin
E: ☒ Multi-Level Feedback Scheduler
F: ⬜ None of the above.

Problem 3: Potpourri [22pts]
Problem 3a[8pts]: For the following problem, assume a hypothetical machine with 4 pages
of physical memory and 7 pages of virtual memory. Given the access pattern:
ABCDEAAECFFGACGDCF
Indicate in the following table which pages are mapped to which physical pages for each of the following policies. Assume that a blank box matches the element to the left. We have given the FIFO policy as an example.
Access→  A  B  C  D  E  A  A  E  C  F  F  G  A  C  G  D  C  F

FIFO (given as the example):
  Frame 1: A, then E (access 5), then C (access 14)
  Frame 2: B, then A (access 6), then D (access 16)
  Frame 3: C, then F (access 10)
  Frame 4: D, then G (access 12)

MIN:
  Frame 1: A
  Frame 2: B, then E (access 5), then F (access 10), then G (access 12), then F (access 18)
  Frame 3: C
  Frame 4: D
  (On the final access to F, no resident page is ever used again, so F may equally be placed in any of the four frames.)

LRU:
  Frame 1: A, then E (access 5), then A (access 13), then F (access 18)
  Frame 2: B, then A (access 6), then G (access 12)
  Frame 3: C
  Frame 4: D, then F (access 10), then D (access 16)
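For readers who want to check a column mechanically, here is a small, self-contained C sketch (not part of the exam) that replays the trace through a 4-frame LRU cache and prints each replacement; swapping the victim rule for insertion order (FIFO) or farthest future use (MIN) reproduces the other two policies:

    #include <stdio.h>

    #define FRAMES 4

    int main(void) {
        const char *trace = "ABCDEAAECFFGACGDCF";   /* access pattern from Problem 3a      */
        char frame[FRAMES] = {0};                    /* resident page per frame, 0 = empty  */
        int  last_use[FRAMES] = {-1, -1, -1, -1};    /* time of most recent use per frame   */

        for (int t = 0; trace[t] != '\0'; t++) {
            char page = trace[t];
            int hit = -1, free_slot = -1, lru = 0;

            for (int f = 0; f < FRAMES; f++) {
                if (frame[f] == page) hit = f;                       /* page already resident */
                if (frame[f] == 0 && free_slot < 0) free_slot = f;   /* first empty frame     */
                if (last_use[f] < last_use[lru]) lru = f;            /* least recently used   */
            }

            int target = (hit >= 0) ? hit : (free_slot >= 0) ? free_slot : lru;
            if (hit < 0 && free_slot < 0)
                printf("access %2d (%c): %c replaces %c in frame %d\n",
                       t + 1, page, page, frame[target], target + 1);

            frame[target]    = page;
            last_use[target] = t;
        }
        return 0;
    }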
Problem 3b[2pts]: Suppose that you decide to try running PintOS on a real piece of hardware. Everything seems to work at first, but after testing for some time in user mode you realize that none of your syscalls are working. It turns out that the CPU you're testing is faulty; for some reason, the int (interrupt) instruction is treated as a no-op by your CPU. In one sentence, briefly explain how you could modify the Pintos kernel in order to service syscalls if the int instruction does not work on your CPU. Your modification is allowed to reduce the usability of the CPU in other ways.
You could change any of the synchronous exception handlers (e.g. divide by zero, bad instruction opcode, page fault to a particular unmapped address, etc.) to handle system calls. You could also potentially use x86 features like the sysenter/sysexit or call gate mechanisms to handle syscalls. The only requirements are that the operation synchronously enter the kernel and be recognizable as the intended syscall rather than as a fault that kills the process. So, if you used a divide-by-zero, the kernel would have to notice that the divide came from the syscall stub in libc in order to treat it as a syscall rather than an error.
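A minimal sketch of what the kernel side might look like, assuming Pintos' intr_register_int() interface, that the existing divide-error registration in exception.c is replaced, and the user-side calling convention shown in part (c) below (syscall number and argument pushed on the user stack); the handler name, the dispatch details, and the instruction-length adjustment are illustrative assumptions, not the official solution:

    #include <stdint.h>
    #include "threads/interrupt.h"
    #include "threads/thread.h"

    #define SYS_PRACTICE 1

    /* Vector 0 is the x86 divide-error exception.  Instead of killing the
     * process, treat a deliberate divide-by-zero as a system-call trap. */
    static void divide_as_syscall(struct intr_frame *f) {
        uint32_t *user_sp = (uint32_t *) f->esp;    /* user stack at the fault; validate
                                                       these pointers before dereferencing */
        uint32_t syscall_num = user_sp[0];          /* pushed last by the user stub        */
        uint32_t arg         = user_sp[1];

        switch (syscall_num) {
          case SYS_PRACTICE:
            f->eax = arg + 1;                       /* whatever practice() returns          */
            f->eip = (void (*)(void)) ((uint8_t *) f->eip + 2);  /* skip the faulting div
                                                                    (2-byte encoding assumed) */
            break;
          default:
            thread_exit();                          /* a genuine divide error: kill it      */
        }
    }

    void syscall_init(void) {
        intr_register_int(0, 0, INTR_ON, divide_as_syscall, "divide-as-syscall");
    }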
Problem 3c[2pts]: Assuming that you made the above modification to the kernel correctly, show an example of x86 code that a user could write in order to call the practice() system call, which takes one argument that is stored in register eax, and has a syscall number of 1. Do not worry about a return value or the exact syntax of x86 assembly code.
push eax      # push argument onto stack
push 1        # push syscall number onto stack
mov ebx, 0    # set up for divide by zero
div ebx       # divide by zero traps into the kernel (syscall!)
Any one of these

Problem 3d[4pts]: You are designing a kernel from scratch and need to make sure that system calls properly check user memory. Implement validate_buffer() so that it ensures that the memory between ptr and ptr + size – 1 points to valid user memory.
Assume you have access to the following:
size_t PGSIZE; // the size of a page
/* returns the byte offset of ptr within its page */
size_t pg_ofs(void *ptr);
/* returns true if the pointer is on a mapped, userspace page */
int validate_pointer(void *ptr);
Finish the implementation of validate_buffer() by adding code to the missing lines, below. Your code should be as efficient as possible and there should be no more than one semicolon per line. You will not be given full credit for solutions that check one byte at a time. Also, you cannot write any control flow statements that we have not laid out for you (e.g. if, while, for).
// Ensure memory between ptr and ptr+size-1 is valid user memory
void validate_buffer(void *ptr, size_t size) {
  while (size > 0) {
    if (!validate_pointer(ptr))
      thread_exit();                                // Invalid buffer: kill the thread
    size_t bytes_validated;
    size_t page_remaining = PGSIZE - pg_ofs(ptr);   // bytes left on ptr's page
    if (page_remaining > size)
      bytes_validated = size;
    else
      bytes_validated = page_remaining;
    ptr += bytes_validated;                         // advance to the start of the next page
    size -= bytes_validated;
  }
}
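As a usage illustration (a hypothetical syscall handler, not part of the exam), a write-style system call would validate the user's buffer before touching any of its bytes:

    /* Hypothetical dispatch fragment: the raw syscall arguments have already
     * been pulled off the user stack. */
    static int sys_write(int fd, void *buf, size_t length) {
        validate_buffer(buf, length);   /* kills the thread if any byte is invalid */
        /* ... safe to copy from buf to the file numbered fd now ... */
        return length;
    }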

Problem 3e[6pts]:
Here is a table of processes and their associated arrival and running times.
Process ID   Arrival Time   CPU Running Time
Process A    0              2
Process B    1              6
Process C    4              1
Process D    7              4
Process E    8              3
Show the scheduling order for these processes under 3 policies: First Come First Serve (FCFS), Shortest-Remaining-Time-First (SRTF), Round-Robin (RR) with timeslice quantum = 1. Assume that context switch overhead is 0, that new processes are available for scheduling as soon as they arrive, and that new processes are added to the head of the queue except for FCFS, where they are added to the tail.
Time Slot   FCFS   SRTF   RR
0           A      A      A
1           A      A      B
2           B      B      A
3           B      B      B
4           B      C      C
5           B      B      B
6           B      B      B
7           B      B      D
8           C      B      E
9           D      E      B
10          D      E      D
11          D      E      E
12          D      D      B
13          E      D      D
14          E      D      E
15          E      D      D


Problem 4: Deadlock and the Alien Nosh [18pts]
Consider a large table with multi-armed aliens. They are bilaterally symmetric, with the same number of left and right appendages. In the center of the table is a pile of Foons and Sporks. In order to eat, they must have a "Foon" in each left hand and a "Spork" in each right hand. The aliens are so busy talking that they can only grab one Foon or Spork at a time. In this problem, we are going to use monitor-style programming in combination with the Banker's Algorithm to design a deadlock-free API that allows an alien to grab utensils one at a time and to return all utensils after eating. The prototypes for our API calls are as follows:
// Allocate and initialize a new table (numarms is number on each side!)
alien_table_t *InitTable(int numAliens, int numArms,
int numFoons, int numSporks);
// Allow alien to grab utensil. Sleep if insufficient resources.
// utype = 0 for a Foon and 1 for a Spork
int GrabOne(alien_table_t *table, int alien, int utype);
// Return all of the given alien’s utensils to center table.
// Wake sleeping aliens.
void ReturnAll(alien_table_t *table, int alien);
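As a rough sketch of one way the monitor could be organized (not the official solution): the struct layout, field names, and the helper WouldBeSafe(), which would perform the Banker's single-step safety check, are all assumptions, and POSIX locks and condition variables stand in for whatever monitor primitives the exam expects:

    #include <pthread.h>

    typedef struct alien_table {
        pthread_mutex_t lock;            /* monitor lock                                  */
        pthread_cond_t  changed;         /* signaled whenever utensils come back          */
        int  num_aliens, num_arms;       /* maximum need per utensil type is num_arms     */
        int  avail[2];                   /* avail[0] = free Foons, avail[1] = free Sporks */
        int *alloc[2];                   /* alloc[u][a] = type-u utensils held by alien a */
    } alien_table_t;

    /* Hypothetical helper: returns nonzero iff granting one utensil of type
     * utype to alien leaves the system in a safe state (Banker's algorithm). */
    int WouldBeSafe(alien_table_t *t, int alien, int utype);

    int GrabOne(alien_table_t *t, int alien, int utype) {
        pthread_mutex_lock(&t->lock);
        while (t->avail[utype] == 0 || !WouldBeSafe(t, alien, utype))
            pthread_cond_wait(&t->changed, &t->lock);   /* sleep until it is safe to grant */
        t->avail[utype]--;
        t->alloc[utype][alien]++;
        pthread_mutex_unlock(&t->lock);
        return 0;
    }

    void ReturnAll(alien_table_t *t, int alien) {
        pthread_mutex_lock(&t->lock);
        for (int u = 0; u < 2; u++) {
            t->avail[u] += t->alloc[u][alien];
            t->alloc[u][alien] = 0;
        }
        pthread_cond_broadcast(&t->changed);            /* wake sleeping aliens */
        pthread_mutex_unlock(&t->lock);
    }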
