Computer Science 162
University of California, Berkeley Quiz 3
August 3, 2020
Name Student ID
This is an open-book exam. You may access existing materials, including online materials (such as the course website). During the quiz, you may not communicate with other people regarding the quiz questions or answers in any capacity. For example, posting to a forum asking for help is prohibited, as is sharing your exam questions or answers with other people (or soliciting this information from them).
You have 80 minutes to complete it. If there is something in a question that you believe is open to interpretation, please make a private post on Piazza asking for clarification. If you run into technical issues during the quiz, join our open Zoom call and we will try to help you out. The final page is for reference.
We will overlook minor syntax errors in grading coding questions. You do not have to add the necessary #include statements at the top.
Grade Table (for instructor use only)
Question:  1    2    3    4    5    6    7   Total
Points:    1   15   20   18   11   15    0    80
Score:
1. (1 point) Please check the boxes next to the following statements after you have read through and agreed with them.
√ I will complete this quiz with integrity, doing the work entirely on my own.
√ I will NOT attempt to obtain answers or partial answers from any other people.
√ I will NOT post questions about this quiz on online platforms such as StackOverflow.
√ I will NOT discuss any information about this quiz until 24 hours after it is over.
√ I understand the consequences of violating UC Berkeley’s Code of Student Conduct.
Quiz 3 – Page 2 of 18 Student ID:
2. (15 points) Operating System Concepts
Choose either True or False for the questions below. You do not need to provide justifications.
Each student received 15 questions, chosen randomly from the ones below.
(a) (1 point) Segmentation (without paging) is prone to internal fragmentation.
√ True ⃝ False
(b) (1 point) Segmentation (without paging) allows different processes to share parts of their address space.
√ True ⃝ False
(c) (1 point) Paging is prone to internal fragmentation.
√ True ⃝ False
(d) (1 point) For a fully associative cache using a Least-Recently-Used (LRU) eviction policy, increasing the cache size may decrease the hit rate.
⃝ True √ False
(e) (1 point) For a fully associative cache using a First-In-First-Out (FIFO) eviction policy, increasing the cache size may decrease the hit rate.
√ True ⃝ False
(f) (1 point) It is possible for a single memory access to result in a TLB hit and a page fault.
√ True ⃝ False
(g) (1 point) It is possible for a single memory access to result in a TLB miss and a cache hit.
√ True ⃝ False
(h) (1 point) Invalid PTEs may be cached in the TLB.
⃝ True √ False
(i) (1 point) On every page fault, the OS either brings in a page from disk to memory or terminates the process that issued the memory access.
⃝ True √ False
(j) (1 point) While the OS brings a page in from disk to memory in response to a page fault, the thread that caused the page fault is blocked, allowing the OS to schedule another thread in its place.
√ True ⃝ False
(k) (1 point) The clock algorithm will evict exactly the same pages as LRU, without the need to manipulate a linked list on every access.
⃝ True √ False
(l) (1 point) While a process is executing on the CPU, the page table for its address space must be held entirely in physical memory.
⃝ True √ False
(m) (1 point) A user program may interact directly with certain I/O devices using port-mapped I/O (i.e., issuing in and out instructions directly).
⃝ True √ False
(n) (1 point) The CPU may only issue commands to a device using memory-mapped I/O if the device is attached directly to the memory bus, without any intervening buses, bridges, or bus adapters.
⃝ True √ False
(o) (1 point) The commodity storage drives with the largest storage capacity are Hard-Disk Drives (HDDs).
√ True ⃝ False
(p) (1 point) Hard-Disk Drives typically provide similar performance on random access patterns as they do on sequential scans.
⃝ True √ False
(q) (1 point) The operating system is responsible for remapping pages on a Solid-State Drive (SSD) for the purpose of wear leveling.
⃝ True √ False
(r) (1 point) In the FAT file system, file metadata is stored directly in directory entries.
√ True ⃝ False
(s) (1 point) In Berkeley FFS, file metadata is stored directly in the directory entries.
⃝ True √ False
(t) (1 point) In Berkeley FFS, a highly fragmented file may span multiple inodes in the inode table.
⃝ True √ False
(u) (1 point) In Windows NTFS, a highly fragmented file may span multiple entries in the Master File Table.
√ True ⃝ False
(v) (1 point) A soft link in one file system may refer to a file in a different file system.
√ True ⃝ False
(w) (1 point) One file system may contain a hard link to a file in a different file system.
⃝ True √ False
(x) (1 point) Unix-like operating systems check a file’s permissions on open system calls, but not on read and write system calls.
√ True ⃝ False
(y) (1 point) In Unix-like operating systems, the data passed to write may not be visible to subsequent read syscalls to the same file, if the written data is in the buffer cache and has not yet reached the storage device.
⃝ True √ False
3. (20 points) File Systems
Consider a system with the following properties:
• The file system is Berkeley FFS with 10 direct pointers, and 1 each of indirect, doubly indirect, and triply indirect pointers.
• The block size is 512 bytes.
• Each disk address is 4 bytes.
• Assume that the contents of each directory fit within a single disk block.
• The buffer cache is initially empty, but is large enough to simultaneously contain all disk blocks accessed in this question.
• The buffer cache is not flushed until all disk I/Os for this question are complete.
Suppose that a process opens the file /cs162/file.txt and reads the first 5633 bytes of the file.
(a) (2 points) Where does the OS look up the inode number of the root directory?
Solution: The root directory is at a known inumber specific to the file system. (Alternatively, saying that the OS looks up this information in the superblock at a known disk address is also acceptable.)
(b) (4 points) How many disk reads are issued by the open system call? For each disk read, explain its purpose.
Solution: 5 total disk reads. Read root inode, read cs162 directory entry, read cs162 inode, read file.txt directory entry, read file.txt inode.
(c) (4 points) How many disk reads are issued by the read system call(s)? Assume the inode for file.txt is already in the buffer cache. Explain how you calculated your answer.
Solution: 13 disk reads. 5633 = 512 ∗ 11 + 1, so 12 data blocks must be read. There are only 10 direct pointers in the inode, so the indirect block must be accessed, adding one additional disk access.
(d) (4 points) Suppose that the file system is stored on a Hard Disk Drive (HDD). List two techniques used by Berkeley FFS to reduce the seek time when performing the above operations.
Solution: Berkeley FFS organizes the file system into block groups, so that blocks that are likely to be accessed together are nearby (for example, the inode for file.txt will be near the file’s data). Additionally, Berkeley FFS uses first-fit block allocation so that the data blocks for file.txt are likely to be contiguous on disk.
(e) (3 points) Of the four components of a file system (Directory Structure, Index, File Data Blocks, Free Blocks), which components must the operating system access when the above process issues the open system call? (You should count hits in the buffer cache as accesses to the component to which the block belongs.)
√ Directory Structure
√ Index
File Data Blocks
Free Blocks
None of These
Each student received one of the following two questions, selected randomly.
(f) (3 points) Of the four components of a file system (Directory Structure, Index, File Data Blocks, Free Blocks), which components must the operating system access when the above process issues the read system call(s)? (You should count hits in the buffer cache as accesses to the component to which the block belongs.)
Directory Structure
√ Index
√ File Data Blocks
Free Blocks
None of These
(g) (3 points) Suppose that, after reading the file, the process seeks to a known offset in the file by issuing an lseek system call with the SEEK_SET flag. Of the four components of a file system (Directory Structure, Index, Data Blocks, Free Blocks), which components must the operating system access when the above process issues the lseek system call? (You should count hits in the buffer cache as accesses to the component to which the block belongs.)
Directory Structure
Index
File Data Blocks
Free Blocks
√ None of These
4. (18 points) Demand Paging
Each student received either Part a or Part b, not both.
(a) (3 points) Under what workload characteristics does LRU perform comparably to MIN, the optimal page replacement algorithm?
Solution: LRU performs comparably to MIN for workloads exhibiting temporal locality. Under these assumptions, pages that have not been accessed in a long time (the least recently used pages in memory) won’t be used for a very long time in the future, meaning that they are good candidates for eviction.
(b) (3 points) Are there any workload characteristics for which FIFO would outperform LRU? Explain.
Solution: For workloads that exhibit very poor temporal locality, FIFO may outperform LRU. This is because LRU is only an approximation to MIN (the optimal algorithm) under the assumption of temporal locality.
(c) (4 points) Before implementing a virtual memory system, you measure that accessing a word that is resident in physical memory takes about 10 ns (with the hardware cache and TLB taken into account). You measure the time it takes the OS to service an access to an on-disk page (including page fault overhead, fetching the page from disk, and re-executing the instruction) to be about 10 ms. What is the highest page fault rate we can tolerate to prevent the average memory access time from exceeding 20 ns? Explain how you calculated your answer.
Solution: 0.0001%. Since 10 ms = 10,000,000 ns, the average access time is 10 + 10,000,000r ns for fault rate r. Setting 10 + 10,000,000r = 20 and solving for r gives r = 10⁻⁶, i.e., 0.0001%.
(d) (3 points) Suppose that, after implementing a virtual memory system, you find that it is thrashing on certain workloads. What changes might you make to the operating system to mitigate the problem?
Solution: Run fewer processes at a time. For example, suspend some processes for an extended period of time so that the working sets of others fit in memory simultaneously, allowing them to make progress.
For the remainder of this question, consider a virtual memory system based on the Second-Chance List algorithm discussed in class, with the following change. When a page that is not resident in memory is accessed, it is moved to the back of the active list, instead of the front of the active list. Accesses to a page on the second-chance list work just as explained in class: the accessed page is moved to the front of the active list. In both cases, the item previously at the back of the active list is moved to the second-chance list, and the least-recently-accessed item in the second-chance list is evicted.
We will refer to this modified algorithm as “Modified Second-Chance List” (MSCL), and the original unmodified version we covered in class as simply “Second-Chance List” (SCL).
(e) (2 points) Suppose that an on-disk page was just accessed, and that the OS has just finished bringing it into physical memory, according to MSCL. Should the OS mark the PTE referencing this page valid or invalid? Explain.
Solution: Valid, because the page is on the active list. (If we marked it as invalid, then we would just fault again when attempting to restart the program.)
(f) (3 points) Consider a workload where memory accesses are i.i.d. according to a Zipf distribution (i.e., the page accessed by each memory access follows a Zipf distribution, and furthermore each access is independent of all others). Explain why MSCL might outperform SCL for this workload.
Solution: With the Zipf distribution, there are a few pages that are very popular, but a significant fraction of accesses are to pages that are not popular. Because the accesses are independent (no temporal locality), recently accessed unpopular pages will not be accessed again for a very long time. By moving pages from disk to the back of the active list, MSCL ensures that such pages leave the active list quickly. The same will happen for popular pages, except that popular pages will likely be accessed again while they are on the second chance list, and therefore they will be moved to the front of the active list and spend a long time there. Thus, MSCL will try to fill the active list with the most popular pages in the Zipf distribution.
(g) (3 points) Under what workload characteristics might SCL outperform MSCL?
Solution: SCL might outperform MSCL for workloads that exhibit temporal locality: a page that was just accessed is likely to be accessed again in the near future, even after the next page fault.
Commentary:
One could potentially obtain more flexibility by splitting the second-chance list into two pieces. If a page near the front of the second-chance list is accessed, it is moved to the front of the active list, but if a page near the back of the second-chance list is accessed, it is moved to the back of the active list. The threshold between the front and back of the second-chance list would be a configurable parameter. This would decouple the threshold of what is considered a “frequent” access from the length of the second-chance list.
5. (11 points) Early Alarm
Bobby, William, Jonathan, and Kevin are in a group for their CS 162 project, and they have just finished Project 2. William would like to extend the Alarm Clock functionality from Project 2 by implementing void wakeup_early(void), a function that wakes up all threads currently blocked due to a call to timer_sleep, even if they have not slept for their full duration. The wakeup_early function must behave correctly whether it is called by a thread or an interrupt handler.
In their Project 2 implementation, the students declare and initialize the following global variable, intended to be a list of sleeping threads.
static struct list sleepers;
Additionally, they added the fields int64_t awaketime; and struct list_elem sleepelem;
to struct thread, and they implemented the timer_sleep function as follows:
void timer_sleep(int64_t ticks) {
  int64_t start = timer_ticks();
  ASSERT(intr_get_level() == INTR_ON);
  struct thread* current = thread_current();
  enum intr_level old_level = intr_disable();
  current->awaketime = start + ticks;
  list_insert(&sleepers, &current->sleepelem);
  thread_block();
  intr_set_level(old_level);
}
Finally, as part of Project 2, the group modified the timer_interrupt function to awaken all sleeping threads whose awaketime is less than or equal to the current number of timer ticks. The code for this is not shown.
(a) (3 points) Jonathan suggests that, instead of introducing the new field sleepelem, the group could have just reused the existing elem field in struct thread. In particular, the list_insert call above could be replaced with list_insert(&sleepers, &current->elem). Is Jonathan correct? Explain.
Solution: Yes, Jonathan is correct. Although elem is used for the waiters list in a semaphore and for the ready list, the thread being blocked will never be part of any of those lists while it is sleeping or executing the relevant code in timer_sleep. Therefore, it is safe to reuse elem for the list of sleeping threads.
(b) (8 points) Now, implement the void wakeup_early(void) function that the group desires. Your implementation should (1) work correctly with priority scheduling, and (2) work correctly in both thread context and external interrupt context. If necessary, you may assume that the priority field of struct thread contains the thread’s priority, including any priority donations.
void wakeup_early(void) {
  enum intr_level old_level = intr_disable();
  while (!list_empty(&sleepers)) {
    struct list_elem* curr = list_pop_front(&sleepers);
    struct thread* t = list_entry(curr, struct thread, sleepelem);
    thread_unblock(t);
  }
  intr_set_level(old_level);
  if (intr_context()) {
    intr_yield_on_return();
  } else {
    thread_yield();
  }
}
Commentary:
One could potentially make the above solution more efficient by only yielding if one of the awakened threads has higher priority than the current thread. This was not required, however.
6. (15 points) Address Translation
There are ten versions of this problem. Each student received only one version.
This document contains only one version.
Consider an x86 computer with the following memory architecture:
• 32-bit virtual address space
• 32-bit physical address space
• 4 KiB page size
• Two-level page table (equal size at each level)
• 4-entry, fully associative TLB, unified for both instructions and data
• 32-bit page table entry (PTE) size
(a) (5 points) For the virtual address of the first instruction, 0x1084 0104, fill in the value of each line. For each page accessed when fetching that instruction, give the byte offset of the reference into that page.
Index of entry in Root Page Table:    0x0000 0042 (or 0b0001 0000 10)
Byte offset of PTE in Root PT page:   0x0000 0108 (or 0b0001 0000 1000)
Index of entry in L2 Page Table:      0x0000 0040 (or 0b00 0100 0000)
Byte offset of PTE within L2 PT page: 0x0000 0100 (or 0b0001 0000 0000)
Code page offset:                     0x0000 0104
The state of the machine described on the previous page is shown below. The contents of several frames of physical memory are shown on the right with physical addresses. On the left are several machine registers, including the page table base register (cr3), which contains the physical frame number of the root page table. The TLB is initially empty (i.e., all TLB entries are initially invalid). Page table entries have the valid flag as the most significant bit and the physical frame number of valid entries in the low order bits. For this problem, you may ignore all other flags in each page table entry.
cr3 (PTBR)  0x0001 0020
eax         0x1004 1108
ebx         0x1044 2108
ecx         0x1044 2100
edx         0x1044 2104
Instructions
0x1084 0104 movl (%ebx), %edx
0x1084 0106 movl (%ecx), %eax
Physical Memory
Phys. Addr.   Contents
0x1001 0000
0x1001 0100 0x1044 0108
0x1001 0104 0x1084 2100
0x1001 0108 0x8001 0030
0x1002 0000
0x1002 0100 0x8001 0050
0x1002 0104 0x8001 0010
0x1002 0108 0x8001 0050
0x1003 0000
0x1003 0100 0x1084 2104
0x1003 0104 0x1084 2104
0x1003 0108 0x8001 0050
0x1004 0000
0x1004 0100 0x1084 0104
0x1004 0104 0x088b 1a8b
0x1004 0108 0x8001 0050
0x1005 0000
0x1005 0100 0x8001 0040
0x1005 0104 0x1004 1108
0x1005 0108 0x1004 2100
TLB Contents
Valid  Tag
1      0x10840
1      0x10442
0
(b) (10 points) You are to step through the instructions whose addresses and disassembly are shown on the previous page. In the space provided below, write down the operation, address, and value associated with every memory access performed by the instructions, by filling in the blank cells in the table. For reads, the “Value” field should contain