
CS 162 Operating Systems and Systems Programming
Fall 2021 Midterm 1
INSTRUCTIONS
This is your exam. Complete it either at exam.cs61a.org or, if that doesn’t work, by emailing course staff with your solutions before the exam deadline.


This exam is intended for the student with email address . If this is not your email address, notify course staff immediately, as each exam is different. Do not distribute this exam PDF even after the exam ends, as some students may be taking the exam in a different time zone.
For questions with circular bubbles, you should select exactly one choice.
○ You must choose either this option
○ Or this one, but not both!
For questions with square checkboxes, you may select multiple choices.
□ You could select this choice.
□ You could select this one too!
You may start your exam now. Your exam is due at Pacific Time. Go to the next page to begin.

Preliminaries
(a) Full name
(b) Student ID number
(c) Autograder login (e.g. student123)

1. (27.0 points)
For the following questions, select whether they are true or false and explain in two sentences or less. Explanations longer than two sentences or lack of explanation will receive no credit.
(a) (3.0 points)
i. A process’s address space is in kernel space.
○ True ● False
ii. Explain.
A process’s address space is in user space.

(b) (3.0 points)
i. The OS directly schedules processes, not threads.
○ True ● False
ii. Explain.
Threads encapsulate concurrency and are scheduled by the OS. Processes are instances of programs and encapsulate isolation.

(c) (3.0 points)
i. Threads in the same process share the same stack.
○ True ● False
ii. Explain.
Each thread has its own stack.

(d) (3.0 points)
i. After forking, open files are preserved, so closing a file descriptor in the child process would make the parent process unable to use its own file descriptor that refers to the same file object.
○ True ● False
ii. Explain.
File descriptors index into file descriptions which are reference counted, so the parent can still use the same file descriptors even after a child closes them.
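As a rough illustration of this reference counting (a sketch, not part of the exam; the file name input.txt is hypothetical), the parent can still read from its descriptor after the child closes the shared file description:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd = open("input.txt", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); exit(1); }
    pid_t pid = fork();                      /* child inherits a reference to the same file description */
    if (pid == 0) {
        close(fd);                           /* drops only the child's reference */
        exit(0);
    }
    wait(NULL);
    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));  /* still valid in the parent */
    printf("parent read %zd bytes\n", n);
    close(fd);
    return 0;
}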

(e) (3.0 points)
i. Each user thread has its own separate kernel stack.
● True ○ False
ii. Explain.
There is a kernel stack for each thread.

(f) (3.0 points)
i. Programs which use multiple threads are always faster than programs that do not.
○ True ● False
ii. Explain.
Overhead of multithreading (e.g. context switching, thread creation) and synchronization can cause a multithreaded program to run slower than a single-threaded program, especially on a single-core machine where the threads would run concurrently instead of in parallel. If the multithreaded program requires synchronization, a large critical section may result in threads running sequentially instead of in parallel.

(g) (3.0 points)
i. During a system call, the kernel validates arguments on the user stack then copies them over to the kernel stack.
○ True ● False
ii. Explain.
The kernel copies over the arguments onto the kernel stack before validating them to prevent against time-of-check to time-of-use (TOCTOU) attacks.

(h) (3.0 points)
i. x86 calling convention as talked about in Discussion 0 requires the base pointer to be pushed onto the stack by the callee.
○ True ● False
ii. Explain.
This is suggested but not strictly required.

(i) (3.0 points)
i. The Linux exec system call creates a new process.
○ True ● False
ii. Explain.
Linux exec replaces the current process image with a new image. To spawn a new process in Linux, fork should be used instead. Pintos exec is similar to Linux exec and fork combined.
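A minimal sketch of the fork-then-exec pattern mentioned above (not part of the exam; the ls command is just an arbitrary example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                          /* fork creates the new process */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char*) NULL);  /* exec replaces the child's image; its PID is unchanged */
        perror("execlp");                        /* only reached if exec fails */
        exit(1);
    }
    int status;
    waitpid(pid, &status, 0);                    /* parent waits for the child to finish */
    return 0;
}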

2. (18.0 points) Select All
(a) (3.0 pt) Which of the following are true regarding I/O?
■ High level I/O deals with FILE* struct.
□ All I/O related syscalls are defined to all forms of I/O under the “everything is a file” uniformity idea from POSIX.
□ There is one file descriptor table per thread.
□ Low-level I/O is buffered in user space.
□ None of the above.
i. We have seen in lectures and assignments that FILE* is used for high level I/O (see the sketch after this list).
ii. The uniformity idea from POSIX covers open, close, read, and write but nothing more. For instance,
sockets do not support lseek.
iii. Each process has a file descriptor table which is shared by all the threads in that process.
iv. Low-level I/O is buffered in kernel space.
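A small sketch contrasting the two layers (an illustration only, with hypothetical file names): fwrite fills the FILE*'s user-space buffer until it is flushed, while write is a system call whose buffering happens in the kernel.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* High-level I/O: data sits in the FILE*'s user-space buffer until flushed. */
    FILE* f = fopen("high.txt", "w");
    fwrite("hello\n", 1, 6, f);            /* buffered in user space */
    fflush(f);                             /* the underlying write syscall happens here (or at fclose) */
    fclose(f);

    /* Low-level I/O: every call traps into the kernel; buffering is in kernel space. */
    int fd = open("low.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}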
(b) (3.0 pt) Which of the following are true regarding x86 calling convention as talked about in Discussion 0?
□ addl (%eax), %ebx adds the longword in memory at the address eax to the longword in ebx, and stores the result in eax.
■ When the callee returns, the return address is gone but the arguments are still on the stack.
□ Stack alignment means the stack is 16 byte aligned before we push arguments onto the stack.
□ The saved ebp memory address is lower than the argument’s.
■ The arguments are pushed onto the stack in reverse order.
i. Result is stored in the latter operand %ebx.
ii. Callee is responsible for popping the return address off the stack per step 4.
iii. Stack must be 16 byte aligned after we push the arguments.
iv. Saved ebp memory address refers to the base pointer of the previous stack frame. The arguments
being pushed onto the stack are for the next stack frame, meaning the saved ebp memory address is
above the arguments.
v. Step 1.

(c) (3.0 pt) Which of the following are true about synchronization primitives?
■ Semaphores can be used to implement monitors.
■ Monitors can be used to implement semaphores.
□ Semaphores can be used to achieve mutual exclusion but not scheduling constraints.
□ Calls to signal a condition variable can always be replaced with broadcast when using Hoare semantics assuming behavior of Hoare semantics broadcast is the same as the behavior of Mesa semantics broadcast.
□ None of the above.
i. Semaphores can be used to implement both locks and condition variables (see Fall 2019 Midterm 1 Question 4(c) for a thorough implementation).
ii. Keep track of a counter variable in our monitor as our shared data. When downing a semaphore, we wait using the condition variable until the counter becomes a positive integer. When upping a semaphore, we increment the counter and broadcast using the condition variable. Any time the counter variable is accessed, the lock of the monitor should be used (see the sketch after this list).
iii. Scheduling constraints can be accomplished as well with semaphores (e.g. shared data from Discussion 2).
iv. Mesa semantics broadcast doesn’t guarantee a thread will run right away. The if statements used to check the condition for Hoare semantics will no longer be valid.
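A minimal sketch of the monitor-based semaphore described in explanation ii (the name msem_t and the function names are made up for illustration):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   /* the monitor lock protecting count */
    pthread_cond_t cond;    /* condition variable for waiting on count > 0 */
    int count;              /* shared data of the monitor */
} msem_t;

void msem_init(msem_t* s, int value) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->count = value;
}

void msem_down(msem_t* s) {              /* semaphore "P" / wait */
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                /* Mesa semantics: re-check the condition after waking */
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void msem_up(msem_t* s) {                /* semaphore "V" / post */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_broadcast(&s->cond);    /* wake waiters; they re-check the condition */
    pthread_mutex_unlock(&s->lock);
}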
(d) (3.0 pt) Which of the following are true about dual mode operation?
■ A thread can execute entirely in user mode.
■ A thread can execute entirely in kernel mode.
□ Starting a new process is the only way to switch from kernel to user mode.
■ Page table entries cannot be modified in user mode.
□ None of the above.
i. Thread creation and exiting are ignored per Clarifications.
ii. Some threads are designated to do purely kernel operations.
iii. Resuming after an interrupt, exception, or system call switches from kernel to user mode.
iv. This privilege is limited to kernel mode for security reasons.
(e) (3.0 pt) Which of the following must be saved during a context-switch?
□ Page table
■ Base pointer
□ Current CPU privilege mode
□ Ready list
■ Program counter
i. Threads share the same address space, so there’s no need to save the page table. Moreover, even in a process context switch, the page table wouldn’t need to be saved since the memory management unit handles the page table.
ii. Each thread has its own base pointer.
iii. Current privilege level (CPL) is stored as part of the code segment (CS) register in x86. Full credit was given both for selecting and for not selecting this option since it was not covered in class.
iv. The ready list is stored in memory.
v. Each thread has its own program counter.

(f) (3.0 pt) Which of the following are true about interrupts?
■ The hardware saves the values for the stack pointer and program counter before jumping to the interrupt handler.
□ Interrupts are synchronous events independent of the process.
□ Disabling interrupts is guaranteed to implement mutual exclusion.
■ Lower priority interrupts may be put on hold by a higher priority interrupt.
□ None of the above
i. These values need to be saved when trapping from user mode to kernel mode.
ii. Interrupts are asynchronous.
iii. A multiprocessor could allow other CPUs to enter the critical section.
iv. Higher priority interrupts are able to preempt lower priority interrupts.

3. Short Answer
(a) (12.0 points)
Examine the following function copy_parts(const char *src, const char *dest), which copies certain bytes from the src file into the dest file.
void copy_parts(const char* src, const char* dest) {
    char* buffer = malloc(1000 * sizeof(char));
    int read_fd = open(src, O_RDONLY);
    lseek(read_fd, 500, SEEK_SET);
    read(read_fd, buffer, 150 * sizeof(char));
    int write_fd = open(dest, O_WRONLY);
    lseek(write_fd, 200, SEEK_SET);
    write(write_fd, buffer, 100 * sizeof(char));
    /* breakpoint */
    read(read_fd, buffer, 100 * sizeof(char));
    write(write_fd, buffer + 50, 100 * sizeof(char));
    close(read_fd);
    close(write_fd);
}
i. (4.0 pt) Is the following function’s behavior deterministic? If not, what is the issue and how do we fix it?
ii. (4.0 pt) If there was an issue with the function, suppose we have addressed the issue and the function behavior is now guaranteed to be deterministic. What is the current file position of the src and dest files at the breakpoint? Explain.
No. Since we are using low-level I/O, we have no guarantee that we are able to read or write the amount desired in one function call to read or write. Therefore, we need to call read and write in a while loop to ensure that we can finish reading and writing the same number of bytes each time we call copy_parts. Alternatively, we can use the high-level I/O (fread and fwrite) to guarantee deterministic behavior (assuming all calls succeed).
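A sketch of the kind of retry loop the answer above describes (the helper name read_fully is invented; a matching loop would be used for write):

#include <unistd.h>

/* Keep calling read until `count` bytes arrive, EOF, or an error. */
ssize_t read_fully(int fd, char* buf, size_t count) {
    size_t done = 0;
    while (done < count) {
        ssize_t n = read(fd, buf + done, count - done);
        if (n == 0) break;      /* EOF: fewer bytes were available than requested */
        if (n < 0) return -1;   /* error (a robust version would retry on EINTR) */
        done += n;
    }
    return done;
}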
We are 650 bytes into the src file and 300 bytes into the dest file. lseek(read_fd, 500, SEEK_SET) brings us to position 500 in the src file. read(read_fd, buffer, 150 * sizeof(char)) brings us to position 650 in the src file.
lseek(write_fd, 200, SEEK_SET) brings us to position 200 in the dest file. write(write_fd, buffer, 100 * sizeof(char)) brings us to position 300 in the dest file.

iii. (4.0 pt) If there was an issue with the function, suppose we have addressed the issue and the function behavior is now guaranteed to be deterministic. Suppose a file src.txt has 500 a’s followed by 200 b’s followed by 300 c’s followed by EOF. Suppose a file dest.txt has 100 x’s followed by EOF. What is the result of calling copy_parts(“src.txt”, “dest.txt”)? Explain. You may use up to ten sentences for this question to explain if necessary.
lseek(read_fd, 500, SEEK_SET) brings us to position 500 in src.txt. The function call read(read_fd, buffer, 150 * sizeof(char)) brings us to position 650 in src.txt, and reads 150 bs into buffer.
The function call lseek(write_fd, 200, SEEK_SET) brings us to position 200 in dest.txt, so dest.txt starts off with 100 xs followed by 100 null characters. The write(write_fd, buffer, 100 * sizeof(char)) copies 100 b’s into dest.txt, so now “dest.txt” consists of 100 xs followed by 100 null characters followed by 100 bs. The read(read_fd, buffer, 100 * sizeof(char)) starts at position 650 in src.txt and copies 50 bs and then 50 cs into the first 100 entries of buffer. The final write(write_fd, buffer + 50, 100 * sizeof(char)) copies 50 cs and then 50 bs into dest.txt. The bs from buffer[100] to buffer[149] were not overwritten by the previous read call.
In the end, dest.txt contains 100 xs followed by 100 null characters followed by 100 bs followed by 50 cs followed by 50 bs.

(b) (4.0 pt) Can a user thread acquire a lock without entering the kernel? Explain.
(c) (4.0 pt) Explain the purpose of system calls, and why they cannot be executed in user mode.
(d) (4.0 pt) Explain two reasons why high-level I/O uses a user space buffer.
(e) (4.0 pt) What happens when a multithreaded program calls fork?
(f) (4.0 pt) In Pintos, is there a need to switch page directory base register when switching from user to kernel mode? Why or why not?
(g) (4.0 pt) What does the $0x30 represent in int $0x30 in Pintos?
Yes. For example, you can use an atomic read-modify-write instruction.
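For illustration, a bare-bones test-and-set spinlock built on C11 atomics stays entirely in user mode (a sketch; a production lock would add backoff or fall back to a kernel sleep):

#include <stdatomic.h>

typedef struct { atomic_flag held; } spinlock_t;   /* initialize held with ATOMIC_FLAG_INIT */

void spin_lock(spinlock_t* l) {
    /* atomic read-modify-write: loop until the old value was "not held" */
    while (atomic_flag_test_and_set(&l->held))
        ;                                           /* busy-wait, no system call involved */
}

void spin_unlock(spinlock_t* l) {
    atomic_flag_clear(&l->held);
}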
We use system calls to request services from the OS. They can’t be executed in user mode because that may violate process isolation / cause the kernel to crash.
System calls to low-level I/O are much more expensive (25x) compared to ordinary function calls. Byte-oriented kernel buffers exacerbate this problem and greatly reduce throughput.
Moreover, low-level I/O lacks functionality (e.g. reading until a newline using fgets or getline) that is useful to users.
The child process inherits the single thread where fork was called. All other threads are not copied and therefore do not exist in the new process.
There is no need since kernel virtual memory is global.
int $0x30 is used to trap to the kernel to make a system call. $0x30 represents the index number in the interrupt vector table that stores the handler for system calls (i.e. syscall_handler).

4. (18.0 points) Shared Data Revisited
The following is a modified version of the shared data question from Discussion 2, and is related to the implementation of the wait syscall in Project 1. For this problem, we’re going to implement a wait function that allows one thread (the main thread) to wait for another thread to finish writing some data.
We’re going to assume we don’t have access to pthread_join for this problem. Instead, we’re going to use synchronization primitives (locks and semaphores). We need to design a struct for sharing information between the two threads.
We also need to implement three functions to initialize the shared struct and synchronize the 2 threads. initialize_shared_data will initialize our shared struct. wait_for_data (called by the main thread) will block until the data is available. save_data (called by the child thread) will write 162 to the struct. Another requirement is that the shared data needs to be freed once as soon as it is no longer in use. When using synchronization primitives, the critical section should be as few lines as possible while still ensuring a race free environment.
Fill in the blanks so that main will always print “Parent: Data is 162”. You may assume the program compiles without error and all library calls will succeed.
typedef struct shared_data {
    sem_t sem;
    int data;
    _______________________
} shared_data_t;

void initialize_shared_data(shared_data_t* shared_data) {
    sem_init(&shared_data->sem, 0, 0);
    shared_data->data = -1;
    _______________________
}

int wait_for_data(shared_data_t* shared_data) {
    sem_wait(&shared_data->sem);
    int data = shared_data->data;
    _______________________
    return data;
}

void* save_data(void* shared_pg) {
    shared_data_t* shared_data = (shared_data_t*)shared_pg;
    shared_data->data = 162;
    sem_post(&shared_data->sem);
    _______________________
    return NULL;
}

int main() {
    shared_data_t* shared_data = malloc(sizeof(shared_data_t));
    initialize_shared_data(shared_data);
    pthread_t tid;
    int error = pthread_create(&tid, NULL, &save_data, (void*)shared_data);
    int data = wait_for_data(shared_data);
    printf("Parent: Data is %d\n", data);
    _______________________
}

(a) (3.0 pt) Fill in the missing code (if any) for shared_data_t. You may use up to five lines.
int ref_cnt;
pthread_mutex_t lock;
ref_cnt is necessary to keep track of how many threads have access to the shared data and when the shared data can be freed. The lock provides mutual exclusion for ref_cnt since ref_cnt may be modified by both threads concurrently (see wait_for_data and save_data below). Alternatively, another semaphore or equally valid synchronization mechanism could have been used to achieve mutual exclusion.
(b) (3.0 pt) Fill in the missing code (if any) for initialize_shared_data. You may use up to five lines.
shared_data->ref_cnt = 2;
pthread_mutex_init(&shared_data->lock, NULL);
shared_data->ref_cnt needs to be initialized to 2 since there are two threads who will have access to
shared_data. The lock also needs to be initialized with pthread_mutex_init.
(c) (4.0 pt) Fill in the missing code (if any) for wait_for_data. You may use up to 10 lines.
pthread_mutex_lock(&shared_data->lock);
int ref_cnt = --shared_data->ref_cnt;
pthread_mutex_unlock(&shared_data->lock);
if (ref_cnt == 0) {
    free(shared_data);
}
The reference count needs to be decremented. The lock needs to be acquired before and released after decrementing the reference count. Finally, we need to check if the reference count is 0 and free shared_data if so.
We have to store the decremented shared_data->ref_cnt in a separate variable ref_cnt because a race condition could occur in a multitude of ways. For instance, let’s say both the child and parent thread have decremented shared_data->ref_cnt and are waiting to execute the if statement. The parent thread checks the condition of shared_data->ref_cnt == 0 and goes into the body of the if statement ready to free shared_data.
i. However, before it frees, we context switch into the child thread. The condition of shared_data->ref_cnt == 0 is satisfied as well, so the child thread goes into the body of the if statement ready to free shared_data. As a result, only one thread will truly get to free shared_data, while the other thread would be attempting to free a freed pointer. This results in undefined behavior.
ii. If the parent thread frees the shared_data, the child thread still needs to execute this check. As a result, the child thread will attempt to access shared_data->ref_cnt which is an illegal behavior since shared_data is freed.
Alternatively, you might propose to check shared_data->ref_cnt == 0 in the critical section. However, this holds on to the lock longer than necessary, which the question specifically asked not to do.

(d) (2.0 pt) Fill in the missing code (if any) for save_data. You may use up to 10 lines.
pthread_mutex_lock(&shared_data->lock);
int ref_cnt = --shared_data->ref_cnt;
pthread_mutex_unlock(&shared_data->lock);
if (ref_cnt == 0) {
    free(shared_data);
}
(e) (2.0 pt) Fill in the missing code (if any) for main. You may use up to 10 lines.
No code is missing from main since shared_data should have been freed within wait_for_data or save_data.
(f) (4.0 pt) What kind of synchronization, if any, is needed for shared_data->data in both wait_for_data and save_data? Explain.
No synchronization is necessary on shared_data->data because the data will never be read before it is written: save_data writes the data before calling sem_post, and wait_for_data reads it only after sem_wait returns, so the semaphore already enforces the required ordering.
