Feedback on Quiz # 5 (Summative)
Quiz # 5 was composed of 10 questions, randomly selected from a question bank of 20 questions. Answers and feedback comments for all 20 bank questions are given below:
Q01
Indirect deadlock prevention applies when the OS tries to avoid occurrence of any of the following deadlock conditions:
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
No Preemption
(b)
Atomic Operations
(c)
Mutual Exclusion
(d)
Circular Wait
(e)
Hold and Wait
Feedback:
Indirect deadlock prevention avoids the occurrence of one of these three conditions: Mutual Exclusion, Hold and Wait, or No Preemption. Direct deadlock prevention, in contrast, prevents the Circular Wait condition.
Q02
A race condition can occur due to the following reason(s):
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
The execution order of processes.
(b)
Arbitrary context switching.
(c)
The relative execution speed of processes.
(d)
When the processes use outdated memory values.
(e)
When processes do not share anything and are completely independent.
Feedback:
A race condition can occur due to any of the following reasons:
1) If the state of a shared resource depends on the precise execution order of processes.
2) If the processes’ context switching happens at arbitrary times during execution.
3) If the processes / threads operate with stale or dirty copies of memory values in registers / local variables.
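As a concrete illustration, here is a minimal Java sketch (the class and variable names are illustrative, not part of the quiz): two threads perform an unsynchronized read-modify-write on a shared counter, so the final value depends on how their operations interleave.

// Minimal sketch (illustrative names): two threads perform an unsynchronized
// read-modify-write on a shared counter, so the final value depends on the
// interleaving of their loads and stores, i.e. a race condition.
public class RaceDemo {
    static int counter = 0;              // shared state, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;               // load, add, store: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates caused by arbitrary context
        // switches usually leave the result lower, and different on each run.
        System.out.println("counter = " + counter);
    }
}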
Q03
Consider that you are designing a deadlock prevention mechanism, which imposes a total ordering of all resources, and requires that each process requests resources in increasing order of enumeration. This violates which one(s) of the condition(s) for a deadlock to be able to occur?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
Mutual Exclusion
(b)
Hold and Wait
(c)
No Preemption
(d)
Circular Wait
(e)
Atomic Operations
Feedback:
The choice to impose a total ordering of resources violates the Circular Wait condition: every process must request resources in increasing order of their indices, so no process can ever hold a higher-indexed resource while waiting for a lower-indexed one. A cycle of processes, each waiting for a resource held by the next, therefore cannot form.
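A minimal Java sketch of this idea, assuming two hypothetical resources guarded by ReentrantLock objects with indices 0 and 1: every thread acquires the locks in ascending index order, so no thread can hold a higher-indexed lock while waiting for a lower-indexed one.

import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch (hypothetical names): every thread must acquire resources in
// increasing index order, so a cycle of waiting threads cannot form.
public class OrderedLocking {
    // Resource i is protected by locks[i]; the index defines the total order.
    static final ReentrantLock[] locks = {new ReentrantLock(), new ReentrantLock()};

    // Acquire the two locks in ascending index order regardless of which
    // resource the caller "wants first"; this is what breaks circular wait.
    static void useBoth(int a, int b) {
        int low = Math.min(a, b), high = Math.max(a, b);
        locks[low].lock();
        try {
            locks[high].lock();
            try {
                // ... work with both resources ...
            } finally {
                locks[high].unlock();
            }
        } finally {
            locks[low].unlock();
        }
    }
}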
Q04
Which one(s) of the following condition(s) for deadlocks may lead to starvation of processes, if they are violated?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
Mutual Exclusion
(b)
Hold and Wait
(c)
No Preemption
(d)
Circular Wait
(e)
Atomic Operations
Feedback:
The violation of the Mutual Exclusion condition cannot lead to starvation, as it allows multiple processes to enter the critical section; the risk it introduces is race conditions, not starvation.
The violation of Hold and Wait means that we either allocate all of a process's requested resources at once (if they are available) or block the process until they all become available. A process with a large resource requirement may then wait indefinitely, because at least one of its resources is likely to be held by some other process at any given time. Therefore, violating the Hold and Wait condition may lead to the starvation of some processes.
The violation of the No Preemption condition means that a process must either release the resources it is already holding if its request for a new resource is denied, or it may preempt a resource that is already held by another process. In either case, the processes will not starve, as some of the processes will be able to complete and release their resources, which can then be used by the others.
The violation of the Circular Wait condition means that we associate an index with each resource and require processes to request resources in increasing order of indices. This ensures that no process can hold a resource with a higher index while requesting another resource with a lower index, thus avoiding circular wait. This also means that deadlock and starvation cannot happen in such a system.
Q05
Circuit switching and packet switching are two different ways of sharing links in a communication network. Which one(s) of the following statements are correct? (Select all that apply. Marks will be deducted for incorrect answers)
(a)
Under some circumstances, a circuit-switched network may prevent some nodes from starting a new connection (“call”).
(b)
In packet switching, we can guarantee a minimum level of performance as opposed to circuit switching networks.
(c)
Once a connection is correctly established, a switch in a circuit-switched network can forward data correctly without requiring data frames to include a destination address.
(d)
In circuit-switched networks, we can achieve a guaranteed level of performance as compared to packet-switched networks, where we cannot.
(e)
In circuit-switched networks, if a circuit segment is idle, its capacity can be used by other connections.
Feedback:
In circuit switching, a connection is established first, data is then passed over that connection with a guaranteed level of performance, and at the end the connection is closed. With packet switching, each packet is routed independently across a shared network.
Q06
Which of the following statements is/are correct about processes and threads? (Select all that apply. Marks will be deducted for incorrect answers)
(a)
A process is light weight, taking fewer resources than a thread.
(b)
Context switching between threads in the same process is much faster than switching between processes.
(c)
For kernel level threads, if one thread is blocked and waiting, a second thread in the same process can still run.
(d)
A thread cannot read, write or change the data of another thread within the same process.
(e)
Multiple instances of the same program will result in processes that can share the same code and read/write each other’s global data values.
Feedback:
1. A process is heavyweight and resource intensive, while a thread is lightweight, taking fewer resources than a process.
2. Thread switching is faster than process switching, as in the latter case the OS needs to load a new address space and process resources, which is not required for thread switching.
3. In the case of kernel level threads, if one thread is blocked the kernel can schedule another thread from the same process to run; in the case of user level threads, if one thread gets blocked, it will block the whole process as the kernel is not aware of user level threads.
4. With multiple processes each process operates independently of the others, while on the other hand a thread within a process can read, write or change the same data as another
thread in the same process.
5. Even if we have multiple instances of the same program, they will each have their own,
independent data (address space) which is not accessible to the other instances of the same program.
Q07
You have a banking system written in Java to allow people to check their account balances. Up to 100 users can access it concurrently. Each user’s request will run in a separate thread within the same process. In order to ensure correct behaviour, which of the following approach(es) would be appropriate?
(a)
Implement locks to ensure that only one thread can read the value of any account balance at one time.
(b)
Implement locks to ensure that only one thread can read the value of any single account balance at any one time
(c)
Establish a queueing system so that each request is queued and only starts processing when it is its turn
(d)
None of these options
(e)
Use the Java synchronized keyword to implement mutual exclusion on the method that reads and returns the value of the account balance
Feedback:
Control of concurrent access to a resource is only needed if there is a potential for conflict. In this example, each thread will only read a value. Therefore, we do not need to impose any form of concurrency control since multiple reads can safely take place concurrently even if their executions overlap. Implementing some mechanism to ensure mutual exclusion is unnecessary and will add extra processing overheads.
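For illustration, a sketch of the scenario with a hypothetical Account class: the plain getter is sufficient for concurrent balance checks, while the synchronized variant corresponding to option (e) would also behave correctly but adds locking overhead that pure reads do not need.

// Sketch (hypothetical Account class, not part of the quiz): a pure read
// needs no locking in this scenario, so the plain getter is sufficient.
// The synchronized variant mirrors option (e); it is still correct, but it
// adds lock acquisition overhead on every call.
public class Account {
    private double balance;

    public Account(double initialBalance) {
        this.balance = initialBalance;
    }

    // Concurrent read-only access: safe without any mutual exclusion,
    // because no thread modifies the balance in this scenario.
    public double getBalance() {
        return balance;
    }

    // Option (e) style: mutual exclusion on the reading method.
    // Unnecessary here, since multiple reads can safely overlap.
    public synchronized double getBalanceSynchronized() {
        return balance;
    }
}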
Q08
You have a multi-threaded application that is running on a multicore system. You have just added some code to implement mutual exclusion on some critical sections of the code. Now the system behaves correctly but real-time performance is 50% slower than before. Which of the following might explain this?
(a)
The overheads of obtaining and releasing locks are consuming a lot of processor cycles
(b)
Threads that could previously run in parallel are now forced to run consecutively
(c)
None of these options
(d)
Threads are getting blocked until a lock has been released by another thread and the CPU has a lot of idle time
(e)
Deadlocks are occurring
Feedback:
Locking does take time. Even if the work in the critical section is quite small it can add a significant overhead. Previously, threads could run in parallel (but possibly incorrectly) but they may now have to wait while another thread is in the critical section. This may mean that one or more CPU cores are
idle – this is the price of ensuring correct behaviour.
Q09
A system contains three processes, and each requires three tape units for its operation. What is the minimum number of tape units the system must have so that deadlock can never arise?
(a)
6
(b)
7
(c)
8
(d)
9
(e)
5
Feedback:
If there are 6 tape units, it is possible for all three processes to hold 2 units each and wait for 1 more unit to complete their execution, so all of them wait indefinitely. If 7 units are available, then at least one process must receive all 3 of its units, so deadlock cannot occur: when that process finishes, it releases its units and the other processes can proceed. In general, with n processes each needing up to m units, n(m - 1) + 1 units are sufficient; here 3 × (3 - 1) + 1 = 7.
Q10
The following two functions P1 and P2 share a variable B which has an initial value of 2. They execute concurrently.
What are the possible distinct value(s) that B can have after both functions have completed their execution?
P1() {
C = B – 1;
B = 2 * C; }
P2() {
D = 2 * B;
B = D – 1; }
(a)
3
(b)
2
(c)
5
(d)
4
(e)
6
Feedback:
Different execution orders of the instructions of P1 and P2 produce different results.
When both processes execute sequentially (P1 then P2 gives B = 3):
P1: C = B – 1; // B = 2, C = 1
P1: B = 2 * C; // B = 2, C = 1
P2: D = 2 * B; // B = 2, D = 4
P2: B = D – 1; // B = 3, D = 4
When both processes execute sequentially (P2 then P1 gives B = 4):
P2: D = 2 * B; // B = 2, D = 4
P2: B = D – 1; // B = 3, D = 4
P1: C = B – 1; // B = 3, C = 2
P1: B = 2 * C; // B = 4, C = 2
When both processes execute alternatively (P1, P2, P1, P2 gives B = 3):
P1: C = B – 1; // B = 2, C = 1
P2: D = 2 * B; // B = 2, D = 4
P1: B = 2 * C; // B = 2, C = 1
P2: B = D – 1; // B = 3, D = 4
When both processes execute alternatively (P2, P1, P2, P1 gives B = 2):
P2: D = 2 * B; // B = 2, D = 4
P1: C = B – 1; // B = 2, C = 1
P2: B = D – 1; // B = 3, D = 4
P1: B = 2 * C; // B = 2, C = 1
When P1 starts, but it is interrupted by P2, which executes completely before P1 runs again (P1, P2, P2, P1 gives B = 2):
P1: C = B – 1; // B = 2, C = 1
P2: D = 2 * B; // B = 2, D = 4
P2: B = D – 1; // B = 3, D = 4
P1: B = 2 * C; // B = 2, C = 1
When P2 starts, but it is interrupted by P1, which executes completely before P2 runs again (P2, P1, P1, P2 gives B = 3):
P2: D = 2 * B; // B = 2, D = 4
P1: C = B – 1; // B = 2, C = 1
P1: B = 2 * C; // B = 2, C = 1
P2: B = D – 1; // B = 3, D = 4
From here, the distinct values that may be produced are 2, 3 and 4.
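The six traces above can also be reproduced mechanically. The following Java sketch (names are illustrative) enumerates every interleaving of P1's and P2's statements that preserves each function's internal order and collects the distinct final values of B.

import java.util.TreeSet;

// Sketch: enumerate all interleavings of P1's and P2's two statements
// (keeping each function's internal order) and collect the distinct final
// values of B, reproducing the traces above.
public class InterleavingDemo {
    public static void main(String[] args) {
        // Each interleaving is a string of 1s and 2s saying whose statement runs next.
        String[] orders = {"1122", "2211", "1212", "2121", "1221", "2112"};
        TreeSet<Integer> finals = new TreeSet<>();
        for (String order : orders) {
            int B = 2, C = 0, D = 0;
            int p1Step = 0, p2Step = 0;          // next statement of P1 / P2
            for (char who : order.toCharArray()) {
                if (who == '1') {
                    if (p1Step++ == 0) C = B - 1; else B = 2 * C;
                } else {
                    if (p2Step++ == 0) D = 2 * B; else B = D - 1;
                }
            }
            System.out.println(order + " -> B = " + B);
            finals.add(B);
        }
        System.out.println("Distinct final values of B: " + finals); // [2, 3, 4]
    }
}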
Q11
Let’s assume we have two hosts A & B connected through 2 routers with zero propagation delay. We would like to transmit a 10 Mbit file from A to B, where the link bandwidths are given as R1 = 5Mbps, R2 = 2Mbps and R3 = 10Mbps. Assume that the file is completely stored by a router before it starts sending it to the next link. What would be the end-to-end delay between the hosts A & B?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
11 seconds
(b)
8 seconds
(c)
6 seconds
(d)
5 seconds
(e)
9 seconds
Feedback:
The transmission delay from host A to router #1: 10 Mbits / 5 Mbps = 2 seconds
The transmission delay from router #1 to router #2: 10 Mbits / 2 Mbps = 5 seconds
The transmission delay from router #2 to host B: 10 Mbits / 10 Mbps = 1 second
Total end-to-end delay = 2 + 5 + 1 = 8 seconds
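The same calculation as a small Java sketch (values hard-coded from this question): with zero propagation delay, the end-to-end delay of a store-and-forward path is simply the sum of the per-link transmission delays.

// Sketch of the store-and-forward calculation above: each router stores the
// whole file before forwarding it, so the end-to-end delay is the sum of the
// per-link transmission delays (file size / link bandwidth).
public class StoreAndForwardDelay {
    public static void main(String[] args) {
        double fileMbits = 10;                       // 10 Mbit file
        double[] linkMbps = {5, 2, 10};              // R1, R2, R3
        double total = 0;
        for (double rate : linkMbps) {
            total += fileMbits / rate;               // transmission delay per link
        }
        System.out.println("End-to-end delay = " + total + " s"); // 8.0 s
    }
}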
Q12
Let’s assume we have two hosts A & B connected through 2 routers with zero propagation delay. We would like to transmit a 20 Mbit file from A to B, where the link bandwidths are given as R1 = 4Mbps, R2 = 10Mbps and R3 = 5Mbps. Assume that the file is completely stored by a router before it starts sending it to the next link. What would be the end-to-end delay between the hosts A & B?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
11 seconds
(b)
8 seconds
(c)
6 seconds
(d)
5 seconds
(e)
9 seconds
Feedback:
The transmission delay from host A to router #1: 20 Mbits / 4 Mbps = 5 seconds
The transmission delay from router #1 to router #2: 20 Mbits / 10 Mbps = 2 seconds
The transmission delay from router #2 to host B: 20 Mbits / 5 Mbps = 4 seconds
Total end-to-end delay = 5 + 2 + 4 = 11 seconds
Q13
Suppose that four long-running processes, P1, P2, P3 and P4, are running on a system. None of the processes performs any system calls that might cause it to block, and there are no other processes in the system. P1 has 4 threads, P2 has 1 thread, P3 has 2 threads and P4 also has 2 threads. The system has been deployed with a user level threading library and the processes are creating threads using functions from this library. What percentage of the CPU time will be given to each of the processes, if all processes have equal priority?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
P1 = 25.00%, P2 = 25.00%, P3 = 25.00%, P4 = 25.00%
(b)
P1 = 40.00%, P2 = 10.00%, P3 = 20.00%, P4 = 20.00%
(c)
P1 = 44.44%, P2 = 11.11%, P3 = 22.22%, P4 = 22.22%
(d)
P1 = 35.00%, P2 = 15.00%, P3 = 25.00%, P4 = 25.00%
(e)
P1 = 45.00%, P2 = 15.00%, P3 = 20.00%, P4 = 20.00%
Feedback:
As the system uses user level threads, the kernel is not aware of them. Therefore, each process is given an equal share of the CPU time, i.e. 25.00% per process.
Q14
Suppose that four long-running processes, P1, P2, P3 and P4, are running on a system. None of the processes performs any system calls that might cause it to block, and there are no other processes in the system. P1 has 4 threads, P2 has 1 thread, P3 has 2 threads and P4 also has 2 threads. The system has been deployed using 1-to-1 mapping with kernel level threads. What percentage of the CPU time will be given to each of the processes, if all processes have equal priority?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
P1 = 25.00%, P2 = 25.00%, P3 = 25.00%, P4 = 25.00%
(b)
P1 = 40.00%, P2 = 10.00%, P3 = 20.00%, P4 = 20.00%
(c)
P1 = 44.44%, P2 = 11.11%, P3 = 22.22%, P4 = 22.22%
(d)
P1 = 35.00%, P2 = 15.00%, P3 = 25.00%, P4 = 25.00%
(e)
P1 = 45.00%, P2 = 15.00%, P3 = 20.00%, P4 = 20.00%
Feedback:
As the system has been deployed using kernel level threads with a 1-to-1 mapping, the kernel schedules individual threads, so each process receives CPU time in proportion to its number of threads. For P1 this is 4/9 = 44.44%, for P2 it is 1/9 = 11.11%, and for both P3 and P4 it is 2/9 = 22.22%.
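The same arithmetic as a short Java sketch (thread counts hard-coded from this question): with a 1-to-1 mapping, CPU time is shared per thread, so each process's share is its thread count divided by the total number of threads.

// Sketch of the arithmetic above: with 1-to-1 kernel-level threads the
// scheduler shares CPU time per thread, so each process gets
// (its thread count) / (total thread count).
public class CpuShare {
    public static void main(String[] args) {
        int[] threads = {4, 1, 2, 2};                // P1..P4
        int total = 0;
        for (int t : threads) total += t;            // 9 threads in total
        for (int i = 0; i < threads.length; i++) {
            double share = 100.0 * threads[i] / total;
            System.out.printf("P%d = %.2f%%%n", i + 1, share);
        }
    }
}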
Q15
You and your friend David are working on a Parallel Programming project, which is a compute-intensive application. You have been told by your project manager that your software should achieve at least an 8-times speedup compared to its sequential implementation. You have also been told that your application will run on a system with 16 CPUs. Estimate how much of the code (as a percentage) needs to be parallelized in your application in order to meet the above performance requirement.
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
93.33%
(b)
88.89%
(c)
95.42%
(d)
92.90%
(e)
86.75%
Feedback:
Let’s denote Speedup by S. Amdahl’s law states that: S = 1 / ((1 - f) + f/N)
We have been given S = 8 and N = 16, and we are required to estimate f (the fraction of parallel code). We can rewrite Amdahl’s law and solve for f as shown below:
f = (N (S - 1)) / (S (N - 1))
f = (16 × (8 - 1)) / (8 × (16 - 1)) = 112 / 120 ≈ 0.9333 = 93.33%
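The rearranged formula can be checked with a short Java sketch (a hypothetical helper, not part of the quiz) that computes the required parallel fraction for both this question and Q16.

// Sketch: rearranging Amdahl's law S = 1 / ((1 - f) + f/N) for the parallel
// fraction f gives f = N(S - 1) / (S(N - 1)), as used above.
public class AmdahlFraction {
    // Required parallel fraction to reach speedup S on N processors.
    static double requiredParallelFraction(double S, double N) {
        return (N * (S - 1)) / (S * (N - 1));
    }

    public static void main(String[] args) {
        System.out.println(requiredParallelFraction(8, 16));   // 0.9333... (Q15)
        System.out.println(requiredParallelFraction(10, 32));  // 0.9290... (Q16)
    }
}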
Q16
You and your friend Jacob are working on a Parallel Programming project, which is a compute-intensive application. You have been told by your project manager that your software should achieve at least a 10-times speedup compared to its sequential implementation. You have also been told that your application will run on a system with 32 CPUs. Estimate how much of the code (as a percentage) needs to be parallelized in your application in order to meet the above performance requirement.
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
93.33%
(b)
88.89%
(c)
95.42%
(d)
92.90%
(e)
86.75%
Feedback:
Let’s denote Speedup by S. Amdahl’s law states that: S = 1 / ((1 - f) + f/N)
We have been given S = 10 and N = 32, and we are required to estimate f (the fraction of parallel code). We can rewrite Amdahl’s law and solve for f as shown below:
f = (N (S - 1)) / (S (N - 1))
f = (32 × (10 - 1)) / (10 × (32 - 1)) = 288 / 310 ≈ 0.9290 = 92.90%
Q17
Consider the following resource allocation graph:
Now, choose which one(s) of the following statements are correct for the above graph? (Select all that apply. Marks will be deducted for incorrect answers)
(a)
The system is deadlocked, and we cannot complete any of the processes
(b)
The system is not deadlocked, and we can complete processes in the P2, P1 and P3 order.
(c)
The system is not deadlocked, and we can complete processes in the P1, P3 and P2 order.
(d)
The system is not deadlocked, and we can complete processes in the P3, P1 and P2 order.
(e)
The system is not deadlocked, and we can complete processes in the P2, P3 and P1 order.
Feedback:
The system is not deadlocked, and we can execute processes in any of the following orders:
P2, P1, P3
P2, P3, P1
The key point to note is that, at the start, P1 needs one more instance of R2 and P3 needs one more instance of R3, so only P2 can complete, as it already holds all the resources it needs. Once P2 completes, either P1 or P3 can proceed to completion.
Q18
Consider the following two resource allocation graphs:
Now, choose which one(s) of the following statement(s) are correct for the above graphs? (Select all that apply. Marks will be deducted for incorrect answers)
(a)
The processes are deadlocked in both graphs
(b)
Deadlock occurs in graph (1) but not in (2)
(c)
Deadlock occurs in graph (2) but not in (1)
(d)
The processes are free of deadlocks in both graphs
(e)
Provided information is not enough to decide whether a deadlock will occur or not
Feedback:
The processes P1 and P2 are deadlocked in graph (1), as P1 needs one instance of R1, which is held by P2, while P2 needs one more instance of R2, which is held by P1. So there exists a circular dependency between processes P1 and P2, leading to a deadlock. P3 does not contribute to this deadlock.
In graph (2) there is no deadlock. P1 has all the needed resources allocated to it, so it can complete and release one instance of R1 and one instance of R3. P2 needs one more instance of R1, which is available so it can complete as well. P3 needs one more instance of R3, which is available, and needs one more instance of R2, which will be available once P2 completes, so it will also be able to complete. P4 needs only one instance of R2, which is allocated to it, so it can complete as well.
Q19
A resource allocation system that uses the Banker’s algorithm for 3 resource types (A, B, C) and 4 processes (P1, P2, P3, P4) is currently in the following state.
(Claim Matrix: max need of each process. Allocation Matrix: resources held by each process. Need Matrix: resources needed to complete. Available: free resources.)
Claim Matrix:
        A   B   C
P1      3   4   5
P2      1   2   4
P3      5   0   2
P4      3   5   1

Allocation Matrix:
        A   B   C
P1      2   4   3
P2      1   2   2
P3      2   0   2
P4      1   4   1

Need Matrix:
        A   B   C
P1      1   0   2
P2      0   0   2
P3      3   0   0
P4      2   1   0

Available Resource Vector:
A   B   C
2   2   0
Presuming that the system is currently in Safe State, which one(s) of the following safe sequence(s) is/are possible?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
P4, P3, P1, P2
(b)
P4, P3, P2, P1
(c)
P2, P1, P3, P4
(d)
P3, P2, P1, P4
(e)
The presumption is incorrect, the system is not in Safe State
Feedback:
Only P4 can run to completion now, as its Need (2, 1, 0) fits within the Available vector (2, 2, 0). After P4 releases its allocation, the Available vector becomes:
A   B   C
3   6   1
Only P3 can run to completion now. After it releases its allocation, the Available vector becomes:
A   B   C
5   6   3
Now either P1 or P2 can complete; let’s take P1 first:
A   B   C
7   10  6
Let’s complete P2 now:
A   B   C
8   12  8
So, we have two possible safe sequences, i.e. P4, P3, P1, P2 and P4, P3, P2, P1.
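The safety check itself can be sketched in Java as below (the Q19 matrices are hard-coded and the names are illustrative). The loop repeatedly picks any unfinished process whose Need fits within Available, pretends it completes, and returns its Allocation to Available; note that this greedy version reports one safe sequence rather than enumerating all of them.

import java.util.Arrays;

// Sketch of the safety check used above (Q19 data hard-coded): repeatedly pick
// any unfinished process whose Need fits in Available, pretend it completes,
// and return its Allocation to Available. The system is safe if all processes
// can finish this way.
public class BankersSafety {
    public static void main(String[] args) {
        int[][] need  = {{1,0,2}, {0,0,2}, {3,0,0}, {2,1,0}};   // P1..P4, resources A,B,C
        int[][] alloc = {{2,4,3}, {1,2,2}, {2,0,2}, {1,4,1}};
        int[] avail   = {2, 2, 0};

        boolean[] done = new boolean[need.length];
        StringBuilder sequence = new StringBuilder();
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < need.length; p++) {
                if (!done[p] && fits(need[p], avail)) {
                    for (int r = 0; r < avail.length; r++) avail[r] += alloc[p][r];
                    done[p] = true;
                    progress = true;
                    sequence.append("P").append(p + 1).append(" ");
                    System.out.println("P" + (p + 1) + " completes, available = "
                            + Arrays.toString(avail));
                }
            }
        }
        boolean safe = true;
        for (boolean d : done) safe &= d;
        System.out.println(safe ? "Safe sequence: " + sequence : "Not in a safe state");
    }

    static boolean fits(int[] need, int[] avail) {
        for (int r = 0; r < need.length; r++) if (need[r] > avail[r]) return false;
        return true;
    }
}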
Q20
A resource allocation system that uses the Banker’s algorithm for 3 resource types (A, B, C) and 4 processes (P1, P2, P3, P4) is currently in the following state.
(Claim Matrix: max need of each process. Allocation Matrix: resources held by each process. Need Matrix: resources needed to complete. Available: free resources.)
Claim Matrix:
        A   B   C
P1      3   4   5
P2      1   2   4
P3      5   0   2
P4      3   5   2

Allocation Matrix:
        A   B   C
P1      2   4   3
P2      1   2   2
P3      2   0   2
P4      1   5   1

Need Matrix:
        A   B   C
P1      1   0   2
P2      0   0   2
P3      3   0   0
P4      2   0   1

Available Resource Vector:
A   B   C
2   2   0
Presuming that the system is currently in Safe State, which one(s) of the following safe sequence(s) is/are possible?
(Select all that apply. Marks will be deducted for incorrect answers)
(a)
P4, P3, P1, P2
(b)
P4, P3, P2, P1
(c)
P2, P1, P3, P4
(d)
P3, P2, P1, P4
(e)
The presumption is incorrect, the system is not in Safe State
Feedback:
After carefully analyzing the requirements of each process, we can see that
P1 cannot complete, as it needs (1, 0, 2) whereas only (2, 2, 0) are available. Resource C is lacking!
P2 cannot complete, as it needs (0, 0, 2) whereas only (2, 2, 0) are available. Resource C is lacking!
P3 cannot complete, as it needs (3, 0, 0) whereas only (2, 2, 0) are available. Resource A is lacking!
P4 cannot complete, as it needs (2, 0, 1) whereas only (2, 2, 0) are available. Resource C is lacking!
Therefore, our presumption that the system is in a Safe State is incorrect! There is no safe sequence.