CS162: Operating Systems and Systems Programming

Spring 2020

University of California, Berkeley, College of Engineering, Computer Science Division (EECS)
Midterm I Solutions


February 27th, 2020
CS162: Operating Systems and Systems Programming
Your Name:
SID AND 162 Login (e.g. s042):
Discussion Section Time:
General Information:
This is a closed book exam. You are allowed 1 page of notes (both sides). You may use a calculator. You have 110 minutes to complete as much of the exam as possible. Make sure to read all of the questions first, as some of the questions are substantially more time consuming.
Write all of your answers directly on this paper. Make your answers as concise as possible. On programming questions, we will be looking for performance as well as correctness, so think through your answers carefully. If there is something about the questions that you believe is open to interpretation, please ask us about it!
Problem   Possible   Score
1         18
2         12
3         19
4         26
5         15
6         10
Total     100

CS 162 Spring 2020 Midterm I Solutions
February 27th, 2020

Problem 1: True/False [18 pts]
Please EXPLAIN your answer in TWO SENTENCES OR LESS (Answers longer than this may not
get credit!). Also, answers without an explanation GET NO CREDIT.
Problem 1a[2pts]: Threads within the same process can share data (since they live in the same address
space), but threads in different processes cannot share data. ⬜ True ☑ False
There are many ways for processes to share information including: (1) opening a shared file, (2) opening a pipe between them (before forking, for instance), (3) opening sockets between them (works even across machines), (4) setting up a shared-memory segment between them (we only mentioned this in the first couple of lectures; more later in the term).
Problem 1b[2pts]: Let n be the size of the virtual address space. On a fork() call, the OS does O(n) work to duplicate the parent’s address space for the child process.
☑ True ⬜ False
The OS copies all of the parent’s page table entries (to perform copy-on-write), which mostly scales with the size of the parent’s virtual address space. Note, however, that the question does say “virtual address space.” So, if the virtual address space is very sparsely populated (most of the virtual space is not mapped to actual physical addresses), then multi-level page tables would not have a number of page table entries that scale directly with the size of the virtual address space. If you point this out, we will take “False” as an answer.
Problem 1c[2pts]: A zombie process is one that got hung up trying to access a disk which is experiencing permanent read errors.
⬜ True ☑ False
A zombie process is one that is not running anymore (has exited) but has not yet relayed its exit code to its parent (i.e. the parent hasn’t executed a “wait()” call on it yet). Thus, this process’ PCB must wait around to hold the exit code.
Problem 1d[2pts]: The pipe() system call creates a file and returns file descriptors to read and write to it. Child processes created with fork() share file descriptors with their parents, so both parent and child now have access to this file through the pipe file descriptors.
⬜ True ☑ False
Pipes are implemented as buffers in memory, not as files. Some people pointed out that file descriptors are copied, not shared; we also gave credit for this.
Problem 1e[2pts]: Dual mode operation is an important mechanism for providing memory-level isolation between processes on the same machine.
☑ True ⬜ False
As discussed in class, Dual-mode operation permits distinguishing between the kernel and user, thus allowing the hardware to restrict the ability to change the virtual memory mappings to the kernel (i.e. when code is running in kernel mode). This, in turn, means that user-code running in processes cannot change their own mappings to violate process isolation.

Problem 1f[2pts]: One of the first things that Pintos (running on an Intel processor) does after a user executes a system call is to replace the user’s stack with the kernel’s special “system call
stack.” ⬜ True ☑ False
There are at least two reasons that this is false. First, PintOS provides a kernel thread (and kernel stack) for every user thread. So, rather than a single, special “system call stack”, PintOS starts running on the particular kernel stack associated with the calling user process. Second, with a modern Intel processor, the system call instruction causes the stack to be switched automatically, before entering PintOS. So, PintOS on an Intel processor doesn’t have to manipulate the stack – this has already been done.
Problem 1g[2pts]: The test&set instruction consists of two independent operations: (1) read the value from memory and (2) replace it with the value “1”. Thus, it must be used in a loop to protect against the case in which multiple threads execute test&set simultaneously and end up interleaving these two operations from different threads.
⬜ True ☑ False
The test&set instruction is atomic (that is the whole point), so there is no way for different test&set operations from different threads to interleave. If it were possible for this to happen, then test&set would be no better than a load followed by a store, and we would be back to the complexity of the “too much milk #3” solution from class…
Problem 1h[2pts]: When using a Monitor, a programmer must remember to release the lock before going to sleep (waiting) on a condition variable.
⬜ True ☑ False
The Monitor programming paradigm requires that programmers only sleep on condition variables inside the critical section, i.e. with the lock held. Thus, the programmer should never release the lock before waiting on a condition variable.
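The standard pattern looks like this when sketched with POSIX threads (an illustrative addition, assuming pthreads’ Mesa-style semantics): pthread_cond_wait() atomically releases the mutex while sleeping and reacquires it before returning, so the programmer never unlocks manually before waiting.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static bool ready = false;

void consumer(void) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* Mesa semantics: recheck after waking */
        pthread_cond_wait(&c, &m);  /* releases m while asleep; reacquires on wakeup */
    /* ... critical section: 'ready' is true and m is held ... */
    pthread_mutex_unlock(&m);
}

void producer(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&c);        /* signal with the lock held, per the monitor paradigm */
    pthread_mutex_unlock(&m);
}
```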
Problem 1i[2pts]: TCP/IP makes it possible for a server with IP address X and well-known port Y to simultaneously communicate with multiple remote clients without intermixing response streams.
☑ True ⬜ False
Each unique connection is defined by a “5-tuple” of an [IP address:port] at the client side, an [IP address:port] at the server side, and a protocol (which is TCP in this case). The socket layer can distinguish between connections as long as one of these pieces of information is different. In the case of the server using TCP/IP with [IP address X:port Y], it could have many streams, as long as they have different client IP addresses, different client ports, or both.

Problem 2: Multiple Choice [12pts]
Problem 2a[2pts]: What needs to be saved and restored on a context switch between two threads in two different processes (choose all that apply):
A: ☑ Computational registers (integer and floating point)
B: ☑ The Page-table root pointer
C: ⬜ Buffered data in streams that have been opened with fopen()
D: ⬜ Contents of the file descriptor table
E: ☑ Stack pointer register
Problem 2b[2pts]: What are some reasons that overuse of threads is bad (i.e. using too many threads at the same time)? (choose all that apply):
A: ⬜ The thread name-space becomes too large and fragmented, making it very difficult to efficiently track the current executing thread.
B: ☑ The overhead of switching between too many threads can waste processor cycles such that the overhead outweighs actual computation (i.e. thrashing)
C: ☑ Excessive threading can waste memory for stacks and TCBs
D: ⬜ Most schedulers have a scheduling cost (overhead) that is quadratic in the number of threads, thus making the mechanism for choosing the next thread to run extremely expensive.
E: ☑ There might not be enough parallelism in the algorithm to justify the additional threads.
Problem 2c[2pts]: What are the disadvantages of disabling interrupts to serialize access to a critical section? (choose all that apply):
A: ☑ User code cannot utilize this technique for serializing access to critical sections.
B: ⬜ Interrupt controllers have a limited number of physical interrupt lines, thereby making it problematic to allocate them exclusively to critical sections.
C: ☑ This technique would lock out other hardware interrupts, potentially causing critical events to be missed.
D: ☑ This technique is a very coarse-grained method of serializing, yielding only one such lock for the whole machine.
E: ☑ This technique could not be used to enforce a critical section on a multiprocessor.

Problem 2d[2pts]: In PintOS, every user thread is matched with a corresponding kernel thread (complete with a kernel stack). What is true about this arrangement (choose all that apply):
A: ☑ When the user-thread makes a system call that must block (e.g. a read to a file that must go to disk), the thread can be blocked at any time by putting the kernel thread (with its stack) to sleep and waking another kernel thread (with its stack) from the ready queue.
B: ⬜ The presence of the matched kernel thread makes the user thread run twice as fast as it would otherwise.
C: ☑ The kernel thread helps to make sure that the page table is constructed properly so that the user thread’s address space is protected from threads in other processes.
D: ⬜ While user code is running, the kernel thread manages cached data from the file system to make sure the most recent items are stored in the cache and ready when the user needs them.
E: ☑ The kernel gains safety because it does not have to rely on the correctness of the user’s stack pointer register for correct behavior.
Problem 2e[2pts]: Kernel mode differs from User mode in the following ways (choose all that apply):
A: ☑ The CPL (current processor level) is 0 in kernel mode and 3 in user mode for Intel processors
B: ☑ In Kernel mode, additional instructions become available, such as those that modify page table registers and those that enable/disable interrupts.
C: ⬜ Specialized instructions for security-related operations (such as for cryptographic signatures) are only available from Kernel mode.
D: ☑ Control for I/O devices (such as the timer, or disk controllers) is only available from kernel mode
E: ☑ Pages marked as Kernel-mode in the PTEs are only available in kernel mode.
Problem 2f[2pts]: Which of the following are true about semaphores (choose all that apply):
A: Semaphores can be initialized to any 32-bit value in the range -2³¹ to 2³¹-1
B: Semaphore.V() increments the value of the semaphore and wakes a sleeping thread if the value of the semaphore is > 0
C: Semaphores cannot be used for locking.
D: The interface for Semaphore.P() is specified in a way that prevents its implementation from busywaiting, even for a brief period of time.
E: The pure semaphore interface does not allow querying for the current value of the semaphore.
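For example (a sketch using POSIX semaphores, where the exam’s Semaphore.P() and Semaphore.V() correspond to sem_wait() and sem_post()), a semaphore initialized to 1 serves as a lock:

```c
#include <semaphore.h>

static sem_t mutex;
static int shared_counter = 0;

void init(void) {
    sem_init(&mutex, 0, 1);  /* initial value 1 => behaves as a binary lock */
}

void increment(void) {
    sem_wait(&mutex);        /* P(): decrement, sleeping while the value is 0 */
    shared_counter++;        /* critical section: at most one thread here */
    sem_post(&mutex);        /* V(): increment, waking a sleeper if one exists */
}
```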

Problem 3: Readers-Writers Access to Database [19 pts]
Reader() {
    // First check self into system
    lock.acquire();
    while ((AW + WW) > 0) {    // Is it safe to read?
        WR++;                  // No. Writers exist
        okToRead.wait(&lock);  // Sleep on condition
        WR--;                  // No longer waiting
    }
    AR++;                      // Now we are active!
    lock.release();
    // Perform actual read-only access
    AccessDatabase(ReadOnly);
    // Now, check out of system
    lock.acquire();
    AR--;                      // No longer active
    if (AR == 0 && WW > 0)     // No other active readers
        okToWrite.signal();    // Wake up one writer
    lock.release();
}

Writer() {
    // First check self into system
    lock.acquire();
    while ((AW + AR) > 0) {    // Is it safe to write?
        WW++;                  // No. Active users exist
        okToWrite.wait(&lock); // Sleep on condition
        WW--;                  // No longer waiting
    }
    AW++;                      // Now we are active!
    lock.release();
    // Perform actual read/write access
    AccessDatabase(ReadWrite);
    // Now, check out of system
    lock.acquire();
    AW--;                      // No longer active
    if (WW > 0) {              // Give priority to waiting writers
        okToWrite.signal();    // Wake up one writer
    } else if (WR > 0) {       // Otherwise, wake waiting readers
        okToRead.broadcast();  // Wake all readers
    }
    lock.release();
}
Problem 3a[2pts]: Above, we show the Readers-Writers example given in class. It used two condition variables, one for waiting readers and one for waiting writers. Suppose that all of the following requests arrive in very short order (while R1 and R2 are still executing):
Incoming stream: R1 R2 W1 R3 W2 W3 R4 R5 R6 W4 R7 W5 W6 R8 R9 W7 R10
In what order would the above code process the above requests? If you have a group of requests that are equivalent (unordered), indicate this clearly by surrounding them with braces ‘{}’. You can assume that the wait queues for condition variables are FIFO in nature (i.e. signal() wakes up the oldest thread on the queue). Explain how you got your answer.
SOLN: Processed as follows: {R1 R2} W1 W2 W3 W4 W5 W6 W7 {R3 R4 R5 R6 R7 R8 R9 R10}
Since this algorithm gives precedence to writes over reads, although the first two reads get to execute together and right away, the remaining reads are deferred until each write is processed (one at a time). After all writes are finished, then the reads execute as a group.
Problem 3b[2pts]: Let us define the logical arrival order by the order in which threads first acquire the monitor lock. Suppose that we wanted the results of reads and writes to the database to be the same as if they were processed one at a time – in their logical arrival order; for example,
W1 W2 R1 R2 R3 W3 W4 R4 R5 R6 would be processed as W1 W2 {R1 R2 R3} W3 W4 {R4 R5 R6} regardless of the speed with which these requests arrive. Explain why the above algorithm does not satisfy this constraint.
Once it starts buffering items (i.e. putting requests to sleep on condition variables), the above algorithm does not keep track of the arrival order. Or, to say it another way, the above algorithm loses all ordering between buffered reads and buffered writes. Thus, it cannot guarantee that reads execute after earlier writes and before later writes (as defined by logical arrival order) unless it never buffers items (in which case it is not really doing anything at all).

Assume that our system uses Mesa scheduling and that condition variables have FIFO wait queues. Also assume that the only reason that a thread will wake up on a condition variable is because of a signal() or broadcast().
Below is a sketch of a solution that uses only two condition variables and that does return results as if requests were processed in logical arrival order. Rather than separate methods for Reader() and Writer(), we have a single method which takes a “NewType” variable (0 for read, 1 for write):
1.  Lock MonitorLock;                  // Methods: acquire(), release()
2.  Condition waitQueue, onDeckQueue;  // Methods: wait(), signal(), broadcast()
3.  int Queued = 0, onDeck = 0;        // Counts of sleeping threads
4.  int CurType = 0, NumAccessing = 0; // About current DataBase Accessors
5.
6.  Accessor(int NewType) {            // type = 0 for read, 1 for write
        /* Monitor to control entry so that one writer or multiple readers */
7.      MonitorLock.acquire();
8.      /* Missing wait condition */ {
9.          Queued++;
10.         waitQueue.wait();
11.         Queued--;
12.     }
13.     /* Missing wait condition */
14.     {   onDeck++;
15.         onDeckQueue.wait();
16.         onDeck--;
17.     }
18.     /* Missing code */
19.     MonitorLock.release();
20.     // Perform actual data access
21.     AccessDatabase(NewType);
22.     /* Missing code */
23. }
The waitQueue condition variable keeps unexamined requests in FIFO order. The onDeckQueue keeps a single request that is currently incompatible with requests that are executing. We want to allow as many parallel requests to the database as possible, subject to the constraint of obeying logical arrival order. Here, logical arrival order is defined by the order in which requests acquire the lock at line #7.
Problem 3c[2pts]: Explain why you might not want to use a “while()” statement in Line #8 – despite the fact that the system has Mesa scheduling:
Because a while loop will potentially reorder requests on the waitQueue condition variable – and we want to keep requests in their original logical arrival order. Thus whatever we do, we do not want to execute waitQueue.wait()more than once per thread.

Problem 3d[2pts]: What is the missing code in Line #8? Hint: it is a single “if” statement.
If anyone in the system is waiting, then we want to wait so that we don’t get ahead of others (i.e. so that we preserve the logical arrival order):
if (Queued + onDeck > 0) // Wait unless no one waiting to be evaluated
Problem 3e[2pts]: What is the missing code in Line #13? This should be a single line of code.
If the current request is incompatible with the set of executing requests, then we have to wait:
while ((NumAccessing > 0) && (NewType == 1 || CurType == 1))
You could also use an “if” here, but then you would need to conditionalize the “onDeckQueue.signal()” statement in (3g).
Problem 3f[2pts]: What is the missing code in Line #18? You should not have more than three (3) actual lines of code.
By the time that a thread has fallen through to line #18, we know that it will access the database, and that it represents the new access type. Further, in case this thread came from the onDeckQueue, we should see if someone waiting in the waitQueue can make forward progress:
NumAccessing++;     // Have another thread accessing the database
CurType = NewType;  // In case we are first of new group
waitQueue.signal(); // Wake up next thread (if they exist)
Problem 3g[3pts]: What is the missing code in Line #22? You should not have more than five (5) actual lines of code.
MonitorLock.acquire();
NumAccessing--;       // One less accessing
onDeckQueue.signal(); // Wake up someone stalled on conflict
MonitorLock.release();
If you use an “if” in (3e), then you need to conditionalize your signaling of the onDeckQueue in this way (making sure there are no additional accessors):
if (NumAccessing == 0)
    onDeckQueue.signal(); // Wake up someone stalled on conflict
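Putting the answers to (3d) through (3g) together, the complete Accessor() would read as follows (an assembled sketch using the while-based variant from (3e), in the same pseudocode conventions as the listing above):

```
Accessor(int NewType) {            // type = 0 for read, 1 for write
    MonitorLock.acquire();
    if (Queued + onDeck > 0) {     // (3d) preserve logical arrival order
        Queued++;
        waitQueue.wait();
        Queued--;
    }
    while ((NumAccessing > 0) && (NewType == 1 || CurType == 1)) {
        onDeck++;                  // (3e) incompatible with current accessors
        onDeckQueue.wait();
        onDeck--;
    }
    NumAccessing++;                // (3f) one more thread accessing
    CurType = NewType;             //      in case we are first of a new group
    waitQueue.signal();            //      wake next queued thread, if any
    MonitorLock.release();

    AccessDatabase(NewType);       // Perform actual data access

    MonitorLock.acquire();         // (3g) check out of the system
    NumAccessing--;
    onDeckQueue.signal();          //      wake someone stalled on a conflict
    MonitorLock.release();
}
```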
Problem 3h[4pts]: Suppose that condition variables did not have FIFO ordered wait queues. What changes would you have to make to your code? Be explicit (you should not need more than 6 new or changed lines). Hint: consider assigning a sequentially increasing ID to each thread.
New variables to track sequential IDs added to Line #4:
int HeadID = 0, TailID = 0; // NEW: “Take a number”
New code for Line #8 (one new line, one changed line):
int MyCurrentID = TailID++;    // NEW: Next logical ID to new request
while (MyCurrentID != HeadID)  // CHANGED: Threads exit queue in order.
Finally, code for Line #18 is slightly different (only added 1 line, changed 1 line):
NumAccessing++;        // (UNCHANGED LINE)
CurType = NewType;     // (UNCHANGED LINE)
HeadID++;              // NEW: Next in-order thread can wake
waitQueue.broadcast(); // CHANGED: Wake up next thread in order
                       //   (All wake, but only HeadID advances)
See next page for some notes on the grading of 3h:

Grading notes for 3h: We accepted non-code answers for this question, HOWEVER, we did not just give credit for copying the hint. Further, although using a new data structure (such as a queue) and iterating to choose a thread to signal might be feasible, very few people who mentioned this actually did it correctly.

Problem 4: Pintos Word Count System Calls [26pts]
In this question you will partially implement a word count feature for Pintos! Word count is meant to be a feature for user programs. Here’s how it works.
There are 3 syscall handler functions with the following signatures:
void wc_open(char *word); // Open a word for the given thread
int wc_inc();             // Increment opened-word’s count; return result
void wc_close();          // Close the current word
The count for a word is shared between all user programs. A user must open a word to get access to it. Each program must open a word before it
