CS 162 Operating Systems and System Programming
Fall 2020 Midterm 2
INSTRUCTIONS
This is your exam. Complete it either at exam.cs61a.org or, if that doesn’t work, by emailing course staff with your solutions before the exam deadline.
This exam is intended for the student with email address
For questions with circular bubbles, you should select exactly one choice. # You must choose either this option
# Or this one, but not both!
For questions with square checkboxes, you may select multiple choices. 2 You could select this choice.
2 You could select this one too!
You may start your exam now. Your exam is due at
Exam generated for
This is a proctored, closed-book exam. During the exam, you may not communicate with other people regarding the exam questions
or answers in any capacity. If there is something in a question that you believe is open to interpretation, please use the “Clarifications”
button to request a clarification. We will issue an announcement if we believe your question merits one. We will overlook minor syntax errors in grading coding questions. You do not have to add the necessary #include statements. For coding questions, the number of blank
lines you see is a suggested guideline, but is not a strict minimum or maximum. There will be no limit on the length of your answer/solution.
Student ID
Please read the following honor code: “I understand that this is a
closed book exam. I hereby promise that the answers that I give on the following exam are exclusively my own. I understand that I am allowed to use two 8.5×11, double-sided, handwritten cheat-sheets of my own making, but otherwise promise not to consult other people, physical resources
(e.g. textbooks), or internet sources in constructing my answers.” Type your full name below to acknowledge that you’ve read and agreed to this statement.
1. (18.0 points) True/False
Please EXPLAIN your answer in TWO SENTENCES OR LESS
(Answers longer than this may not get credit!). Also, answers without any explanation GET NO CREDIT!
a) (2.0 points)
Paging solves internal fragmentation because all pages are the same size.
# True
False
Pages all being the same size solves external fragmentation, not internal: a process can still waste space inside its last, partially filled page.
b) (2.0 points)
The translation of virtual to physical addresses is done by the kernel.
# True
False
Address translation is done by the Memory Management Unit (MMU).
c) (2.0 points)
Single level page tables are more efficient at representing sparse address spaces than multi-level page tables.
# True
False
Multi-level page tables can represent ranges of unassigned (invalid)
virtual addresses by leaving whole second-level page tables empty (which can thus be left unused by marking the upper-level PTE as invalid). Consequently, they are much more memory-efficient at representing sparse address spaces than single-level page tables which must provide a PTE (marked invalid) for EVERY unused virtual page.
d) (2.0 points)
Multi-level page tables are better memory-wise for sparse addresses in comparison to single level page tables.
True
# False
Multi-level page tables can represent ranges of unassigned (invalid)
virtual addresses by leaving whole second-level page tables empty (which can thus be left unused by marking the upper-level PTE as invalid). Consequently, they are much more memory-efficient at representing sparse address spaces than single-level page tables which must provide a PTE (marked invalid) for EVERY unused virtual page.
e) (2.0 points)
The number of bits in a virtual address is always the same as the number of bits in its corresponding physical address.
# True
False
Even though the number of bits in the offset stays the same between the virtual address and the physical address, the number of bits in the VPN and the PPN do not necessarily have to be equal. The former is based on the number of virtual pages whereas the latter is based on the number of physical pages.
f) (2.0 points)
On a page fault, the MMU will invalidate previous virtual to physical address mappings if needed and create new mappings in the page table based on the requested data brought from the kernel.
# True
False
The MMU is in charge of the actual address translation. The kernel will do the evicting, invalidating of mappings, going to disk for requested data, and creation of new mappings.
g) (2.0 points)
For base & bound virtual memory, the two special registers BaseAddr and LimitAddr are stored in the Thread Control Block.
# True
False
All threads within the same process share the same base and bound, so the two registers will be stored in the Process Control Block.
h) (2.0 points)
Adding a TLB will always make memory lookups and accesses faster.
# True
False
In the case where all lookups to the TLB miss, there’s the added overhead from the TLB misses that would not be taken into account if there was no TLB at all.
i) (2.0 points)
The associativity of the TLB can be configured by modifying kernel source code.
# True
False
The TLB lookup is implemented in hardware, so modifying the associativity requires replacing the TLB hardware, not a software update.
j) (2.0 points)
Swapping is the term denoting the process of swapping PCBs and other housekeeping such as switching page table pointers.
# True
False
Swapping is an extreme form of context switching in which part or all of the previous process is moved to disk in order to make room for the incoming process.
k) (2.0 points)
Thrashing is characterized by slow performance and high CPU utilization.
# True
False
Since thrashing involves constantly switching out pages in our cache due to page faults, performance is slow and processes do not get to
progress, which leads to low CPU utilization.
l) (2.0 points)
Thrashing is characterized by slow performance and low CPU utilization.
True
# False
Since thrashing involves constantly switching out pages in our cache due to page faults, performance is slow and processes do not get to
progress, which leads to low CPU utilization.
m) (2.0 points)
Killing the process with the largest working set is a guaranteed solution to thrashing.
# True
False
It is quite possible that there are still too many processes with too much total memory, so that the system will still be thrashing after killing the initial process.
n) (2.0 points)
Write through caches do not need a dirty bit.
True
# False
Because write through caches update both the cache and memory simultaneously, the cache doesn’t need a dirty bit to signal whether the entry has changed since it was brought into the cache.
o) (2.0 points)
Write through caches need a dirty bit.
# True
False
Because write through caches update both the cache and memory simultaneously, the cache doesn’t need a dirty bit to signal whether the entry has changed since it was brought into the cache.
p) (2.0 points)
In general, larger caches or caches with higher associativity have a higher hit rate than smaller, direct-mapped caches.
True
# False
There are fewer capacity and conflict misses in larger caches / caches with higher associativity, so the hit rate is typically higher.
q) (2.0 points)
In general, larger caches or caches with higher associativity have a lower hit rate than smaller, direct-mapped caches.
# True
False
There are fewer capacity and conflict misses in larger caches / caches with higher associativity, so the hit rate is typically higher.
r) (2.0 points)
In deadlock, one process continually responds to another process’s changes but is unable to complete any work.
# True
False
In deadlock, a process can’t respond to another process’s changes/method calls because it’s blocked. Note: the statement describes livelock.
s) (2.0 points)
In deadlock, one process can’t respond to another process’s operating calls and is unable to complete any work.
True
# False
In deadlock, a process can’t respond to another process’s changes/method calls because it’s blocked. Note: This is the deadlock definition.
t) (2.0 points)
Pre-emptive schedulers fix the problem of deadlock in a system.
# True
False
The condition of “no preemption” in deadlock merely means that resources cannot be taken away from a process that owns them. This condition isn’t removed by the presence of a pre-emptive scheduler.
u) (2.0 points)
A cyclic use of resources leads to deadlock.
# True
False
A cycle is one of the four requirements for deadlock. It is only a necessary condition, not a sufficient condition.
v) (2.0 points)
It’s not possible to use the Banker’s algorithm to guarantee the completion of tasks in a real-life operating system.
True
# False
Even if they are prevented from deadlocking on resources, tasks can still go into infinite loops or otherwise refuse to complete for reasons having nothing to do with resources.
w) (2.0 points)
The only way to prevent deadlock caused by cyclic use of resources is to use a dynamic algorithm such as the Banker’s algorithm to mediate all resource acquisition.
# True
False
Resources can be ordered in some absolute fashion (e.g. alphabetical ordering) and acquired by all processes in that order – without use of the Banker’s algorithm.
x) (2.0 points)
Banker’s Algorithm can find more than one potential order of processes that result in a safe state.
True
# False
Banker’s Algorithm determines a safe state by finding AN ORDERING of processes. There are other orderings possible that can bring the system
to a safe state.
y) (2.0 points)
Banker’s Algorithm can only find one order of processes that results in a safe state.
# True
False
Banker’s Algorithm determines a safe state by finding AN ORDERING of processes. There are other orderings possible that can bring the system
to a safe state.
z) (2.0 points)
Multiprocessing networks with wormhole routing must use a dynamic scheduler built in hardware to implement the Banker’s Algorithm in order to avoid deadlock.
# True
False
As discussed in class, multiprocessor networks can use “dimension-ordered routing” (routing X, then Y, then Z) to prevent cycles and thus prevent deadlock.
aa) (2.0 points)
Assuming that proper feasibility checks have been performed on the workload, a real-time scheduler is not prone to starvation.
True
# False
The feasibility checking makes sure that the processor is not overloaded so that the scheduler can meet all deadlines.
ab) (2.0 points)
Real-time schedulers are prone to starvation even if proper feasibility checks have been performed on the workload.
# True
False
The feasibility checking makes sure that the processor is not overloaded so that the scheduler can meet all deadlines.
ac) (2.0 points)
There are scheduler workloads where a non-preemptive scheduler has a better average wait time than a preemptive scheduler.
True
# False
Consider a workload with N equal tasks run by either a round-robin scheduler (preemptive) or a FCFS scheduler (non-preemptive). The FCFS scheduler has a much better average wait time.
ad) (2.0 points)
The SRTF Algorithm is an example of a scheduler algorithm that can’t be implemented in a real-life system.
True
# False
We don’t know how long a process will run for in a real-life system, which is needed in SRTF.
ae) (2.0 points)
The SRTF Algorithm is an example of a scheduler algorithm that can be implemented in a real-life system.
# True
False
We don’t know how long a process will run for in a real-life system, which is needed in SRTF.
af) (2.0 points)
Priority Donation can help to prevent priority inversion under some circumstances.
True
# False
Consider the following priority inversion in a priority scheduler: assume there are three threads T1, T2, and T3 at priorities 1 (highest), 2, and 3 (lowest), and in which T3 has acquired a lock that T1 is sleeping on, with T2 running and effectively preventing T3 (and hence T1) from making progress. Priority donation from T1 to T3 would allow T3 to run and release the lock, thereby resolving the priority inversion.
ag) (2.0 points)
It is possible to build a scheduler that approximates SRTF using a moving average.
True
# False
An SRTF scheduler requires a way to predict the future burst time for each process on the ready queue. It can do this by keeping enough information to compute a moving average of the last few burst times for each process and using the result to predict the following burst time.
ah) (2.0 points)
It is possible to build a scheduler that approximates SRTF using a moving average.
True
# False
An SRTF scheduler requires a way to predict the future burst time for each process on the ready queue. It can do this by keeping enough information to compute a moving average of the last few burst times for each process and using the result to predict the following burst time.
2. (16.0 points) Multiple Choice
a) (2.0 pt)
Select all that apply: It is possible for the addition of physical memory to decrease performance when our page replacement policy is:
2 LRU
FIFO
Random
2 None of the above
2 It is never possible for more physical memory to be hurtful
Belady’s anomaly shows that performance may decrease with a FIFO algorithm. Random could potentially evict like FIFO.
b) (2.0 pt)
Suppose we have a 512 B single-page page table where each page table entry is 4 bytes. How big is the virtual address space?
64 KB
# Not enough information
# None of the above
Each page is 512 B. The page table is one page long and has entries of 4 bytes, so it contains 512 / 4 = 128 PTE’s. Each PTE maps one virtual page. Therefore, 128 PTE’s * 512 B per page = 64 KB total.
c) (2.0 pt)
Suppose we have a 512 B single-page page table where each page table entry is 4 bytes. How big is the physical address space?
Not enough information
# None of the above
We have no information on the physical address space size.
d) (2.0 pt)
Suppose that pages are 512 B and each page table entry is 4 bytes. Assume that somehow the virtual and physical address spaces were both 4 GB and that the page table begins at address 0x10000000. If we wanted to access the virtual address 0x00000345, what is the address of the PTE we would look at?
# 0x10000000
# 0x10000001
0x10000004
# 0x10000345
# Not enough information
# None of the above
Each page is 512 B, so we have 9 offset bits in our virtual address. Therefore, our VPN for 0x00000345 is 0x345 >> 9 = 1. We use the VPN to index into the page table, so we look at entry 1; since each PTE is 4 bytes, its address is 0x10000000 + 1 * 4 = 0x10000004.
e) (2.0 pt)
Select all that are true regarding inverted page tables.
2 It would be smart to use an inverted page table when our physical memory space is very large.
Inverted page tables make it difficult to implement shared memory.
Lookup times are generally longer than standard page tables.
2 Inverted page tables save memory when compared to a standard page table.
2 None of the above
Inverted page table size scales off physical memory, so we would not
want to use an inverted page table when physical memory is large.
f) (2.0 pt)
Which of the following are true regarding virtual memory and address translation?
It is possible to have a larger virtual memory space than physical memory space
It is possible to have a larger physical memory space than virtual memory space
2 Physical memory pages and virtual memory pages usually differ in size
Modern processors generally include dedicated hardware to assist with address translation
2 Address translation is managed entirely in hardware on x86
g) (2.0 pt)
Which of the following are true regarding virtual memory?
Adjacent bytes within the same virtual page are always also adjacent in physical memory
Adjacent bytes within the same physical page are always also adjacent in virtual memory
2 Adjacent virtual pages are always stored in adjacent physical pages
2 Adjacent physical pages always correspond to adjacent virtual pages
h) (2.0 pt)
Suppose a thread in Pintos is holding a lock called lock A. Which of the following could change the thread’s effective priority? Select all that apply.
2 The thread tries to acquire another lock, lock B, but has to wait.
The thread releases lock A.
Another thread tries to acquire lock A.
Some other thread dies.
The thread calls thread_set_priority and passes in a value less than its current base priority.
2 None of the above
i) (2.0 pt)
Which of the following are true?
Deadlock will always cause starvation of CPU resources
Livelock will always cause starvation of CPU resources
Shortest Run Time First scheduling may cause starvation of CPU resources
Longest Run Time First scheduling may cause starvation of CPU resources
j) (2.0 pt)
Consider the following simple alternative to demand paging: processes must declare ahead of time (i.e. by including this information in each executable) how much memory they will use, and the OS must hold this much physical memory in reserve for exclusive use by the process from the time the process begins running until the time it exits. Select all
of the following that are true regarding this alternative as compared to demand paging:
It will result in lower memory access latency on average for processes
2 It will result in overall higher CPU utilization on average for the system as a whole
It will reduce the amount of disk I/O performed on average for the system as a whole
2 It would require additional dedicated hardware support/features in order to be implementable
k) (2.0 pt)
Consider an OS running on a single-core processor which handles page faults using the following approach: when a process encounters a page fault, the OS waits for the page to be brought in from disk and then resumes the process, without ever putting the process to sleep.