Week 1 Exercises: Processes
From “Operating Systems Concepts” Silberschatz, Galvin and Gagne.
1. A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given n processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of n.
2. Define the difference between preemptive and nonpreemptive scheduling.
3. Suppose that the following processes arrive for execution at the times indicated. Each process will run the listed amount of time. In answering the questions, use nonpreemptive scheduling and base all decisions on the information you have at the time the decision must be made.
a. What is the average turnaround time for these processes with the FCFS scheduling algorithm?
b. What is the average turnaround time for these processes with the SJF scheduling algorithm?
c. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0 because we did not know that two shorter processes would arrive soon. Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit and then SJF scheduling is used. Remember that processes P1 and P2 are waiting during this idle time, so their waiting time may increase. This algorithm could be known as future-knowledge scheduling.
4. What advantage is there in having different time-quantum sizes on different levels of a multilevel queueing system?
5. Many CPU-scheduling algorithms are parametrised. For example, the RR algorithm requires a parameter to indicate the time slice. Multilevel feedback queues require parameters to define the number of queues, the scheduling algorithms for each queue, the criteria used to move processes between queues, and so on.
These algorithms are thus really sets of algorithms (for example, the set of RR algorithms for all time slices, and so on). One set of algorithms may include another (for example, the FCFS algorithm is the RR algorithm with an infinite time quantum). What (if any) relation holds between the following pairs of sets of algorithms?
a. Priority and SJF
b. Multilevel feedback queues and FCFS
c. Priority and FCFS
d. RR and SJF
6. Suppose that a scheduling algorithm (at the level of short-term CPU scheduling) favours those processes that have used the least processor time in the recent past. Why will this algorithm favor I/O-bound programs and yet not permanently starve CPU-bound programs?
Table for Exercise 3:

Process    Arrival Time    Burst Time
P1         0               8
P2         0.4             4
P3         1               1
7. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound programs?
8. Discuss how the following pairs of scheduling criteria conflict in certain settings.
a. CPU utilization and response time
b. Average turnaround time and maximum waiting time
c. I/O device utilization and CPU utilization
9. Consider the following set of processes, with the length of the CPU burst given in milliseconds:
Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             3
P4         1             4
P5         5             2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts (like the pictures in the lecture slides) that illustrate the execution of these processes using the following scheduling algorithms:
1. FCFS;
2. SJF;
3. Non-pre-emptive priority (a smaller priority number implies a higher priority),
4. RR (quantum = 1).
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the algorithms in part a results in the minimum average waiting time (over all processes)?
10. Which of the following scheduling algorithms could result in starvation?
a. First-come, first-served
b. Shortest job first
c. Round robin
d. Priority
11. Consider a variant of the RR scheduling algorithm in which the entries in the ready queue are pointers to the PCBs.
a. What would be the effect of putting two pointers to the same process in the ready queue?
b. What would be two major advantages and two disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers?
12. Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context-switching overhead is 0.1 millisecond and that all processes are long-running tasks.
What is the CPU utilization for a round-robin scheduler when:
a. The time quantum is 1 millisecond
b. The time quantum is 10 milliseconds
13. Consider a system implementing multilevel queue scheduling. What strategy can a computer user employ to maximize the amount of CPU time allocated to the user’s process?
14. Consider a pre-emptive priority scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority.
When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate a; when it is running, its priority changes at a rate p. All processes are given a priority of 0 when they enter the ready queue. The parameters a and p can be set to give many different scheduling algorithms.
a. What is the algorithm that results from p > a > 0?
b. What is the algorithm that results from a < p < 0?
15. Explain the differences in the degree to which the following scheduling algorithms discriminate in favour of short processes:
a. FCFS
b. RR
c. Multilevel feedback queues
16. Using the Windows XP scheduling algorithm, what is the numeric priority of a thread for the following scenarios?
a. A thread in the REALTIME_PRIORITY_CLASS with a relative priority of HIGHEST
b. A thread in the NORMAL_PRIORITY_CLASS with a relative priority of NORMAL
c. A thread in the HIGH_PRIORITY_CLASS with a relative priority of ABOVE_NORMAL
Solutions to Week 1 Exercises
1. n! (n factorial). For example, with n = 3 processes there are 3! = 6 possible schedules.
2. Pre-emptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive Scheduling ensures that a process relinquishes control of the CPU only when it finishes its current CPU burst.
3. Solutions are:
(a) 10.53 (b) 9.53 (c) 6.86
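For instance, part (a) follows directly from the table for Exercise 3: under FCFS, P1 runs from 0 to 8, P2 from 8 to 12 and P3 from 12 to 13, so the average turnaround time is ((8 - 0) + (12 - 0.4) + (13 - 1)) / 3 = 31.6 / 3 ≈ 10.53.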
4. Processes that need more frequent servicing, for instance, interactive processes such as editors, can be in a queue with a small time quantum. Processes with no need for frequent servicing can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, and thus making more efficient use of the computer.
5. Solutions are:
(a) The shortest job has the highest priority.
(b) The lowest level of MLFQ is FCFS.
(c) FCFS gives the highest priority to the job having been in existence the longest.
(d) None.
6. It will favour the I/O-bound programs because of the relatively short CPU bursts requested by them; however, the CPU-bound programs will not starve because the I/O-bound programs will relinquish the CPU relatively often to do their I/O.
7. I/O-bound programs have the property of performing only a small amount of computation before performing I/O. Such programs typically do not use up their entire CPU quantum. CPU-bound programs, on the other hand, use their entire quantum without performing any blocking I/O operations. Consequently, one could make better use of the computer's resources by giving higher priority to I/O-bound programs and allowing them to execute ahead of the CPU-bound programs.
8.
a. CPU utilization and response time: CPU utilization is increased if the overheads associated with context switching are minimized. The context-switching overheads could be lowered by performing context switches infrequently. This could, however, result in increasing the response time for processes.
b. Average turnaround time and maximum waiting time: Average turnaround time is minimized by executing the shortest tasks first. Such a scheduling policy could however starve long-running tasks and thereby increase their waiting time.
c. I/O device utilization and CPU utilization: CPU utilization is maximized by running long-running CPU-bound tasks without performing context switches. I/O device utilization is maximized by scheduling I/O-bound jobs as soon as they become ready to run, thereby incurring the overheads of context switches.
9.
(a) Gantt charts: not reproduced here.
(b) Turnaround Times:
            FCFS    RR    SJF    Priority
P1          10      19    19     16
P2          11      2     1      1
P3          13      7     4      18
P4          14      4     2      19
P5          19      14    9      6
(c) Waiting Times
            FCFS    RR    SJF    Priority
P1          0       9     9      6
P2          10      1     0      0
P3          11      5     2      16
P4          13      3     1      18
P5          14      9     4      1
(d) Shortest Job First.
10. Shortest Job First and Priority.
11.
(a) In effect, that process will have increased its priority since by getting time more often it
is receiving preferential treatment.
(b) The advantage is that more important jobs could be given more time, in other words,
higher priority in treatment. The consequence, of course, is that shorter jobs will suffer.
(c) Give a longer amount of time to processes deserving higher priority. In other words,
have two or more quanta possible in the Round-Robin scheme.
12. Answer:
(a) The time quantum is 1 millisecond: Irrespective of which process is scheduled, the
scheduler incurs a 0.1 millisecond context-switching cost for every context-switch. This
results in a CPU utilization of 1/1.1 * 100 = 91%.
(b) The time quantum is 10 milliseconds: The I/O-bound tasks incur a context switch after
using up only 1 millisecond of the time quantum. The time required to cycle through all the processes is therefore 10*1.1 + 10.1 milliseconds (each I/O-bound task executes for 1 millisecond and then incurs a context switch, whereas the CPU-bound task executes for 10 milliseconds before incurring a context switch). The CPU utilization is therefore 20/21.1 * 100 ≈ 95%.
13. The program could maximize the CPU time allocated to it by not fully utilizing its time quanta. It could use a large fraction of its assigned quantum, but relinquish the CPU before the end of the quantum, thereby increasing the priority associated with the process.
14.
(a) FCFS
(b) LIFO (Last in First out)
15. Answer:
(a) FCFS —discriminates against short jobs since any short jobs arriving after long jobs will
have a longer waiting time.
(b) RR —treats all jobs equally (giving them equal bursts of CPU time) so short jobs will
be able to leave the system faster since they will finish first.
(c) Multilevel feedback queues—work similar to the RR algorithm—they discriminate
favourably toward short jobs.
16. Answers:
(a) 26
(b) 8
(c) 14
Week 4 Exercises
1. List 4 benefits of multithreading and explain why each is possible.
2. A computer game, controlled by the keyboard, has a character that the user must control and a single 'enemy' that will chase the user. Suggest five different threads this program could use to parallelise its activity.
3. Why might a web server need to be multithreaded?
4. Explain what it means for an instruction to be atomic, and why incrementing the value of a counter is not an atomic instruction.
5. Explain why the sharing of an array between two processes that add data to it might lead to information loss.
6. Consider the following solution to the critical section problem (Attempt 1)
a) Show that this solution satisfies mutual exclusion without using a state diagram (i.e. the
way we did for attempts 3 and 4, or by induction if you prefer).
b) Which properties are not satisfied by this solution? Explain your answer.
integer turn ← 1

Process p:
loop forever
    p1: non-critical section
    p2: await turn = 1
    p3: critical section
    p4: turn ← 2

Process q:
loop forever
    q1: non-critical section
    q2: await turn = 2
    q3: critical section
    q4: turn ← 1
7. Show mutual exclusion for the following solution to the critical section problem (Attempt 3) by drawing the state diagram. For brevity, note that you can achieve the same result whilst only considering the lines relevant to synchronisation (the numbered lines). Which properties are not satisfied by this solution? Explain your answer.
boolean wantp ← false, wantq ← false

Process p:
loop forever
    non-critical section
    p1: wantp ← true
    p2: await wantq = false
    p3: critical section
    p4: wantp ← false

Process q:
loop forever
    non-critical section
    q1: wantq ← true
    q2: await wantp = false
    q3: critical section
    q4: wantq ← false
8. State whether the following solution to the critical section problem (Attempt 4) satisfies each of the 3 properties of a solution to the aforementioned problem. If the property is satisfied provide a proof (in the style of the lecture slides); if it is not show an interleaving that breaks that property.
boolean wantp ← false, wantq ← false

Process p:
loop forever
    p1: non-critical section
    p2: wantp ← true
    p3: while wantq
    p4:     wantp ← false
    p5:     wantp ← true
    p6: critical section
    p7: wantp ← false

Process q:
loop forever
    q1: non-critical section
    q2: wantq ← true
    q3: while wantp
    q4:     wantq ← false
    q5:     wantq ← true
    q6: critical section
    q7: wantq ← false
9. The following code can be used to implement an atomic assignment in Java:
public class AtomicInt {
    int value;

    public AtomicInt(int initialValue) {
        value = initialValue;
    }

    public synchronized void assign(int newValue) {
        value = newValue;
    }
}
a) Create a similar class AtomicBoolean that provides the same functionality for a boolean variable.
b) Create a buffer class, similar to the one in the lecture slides, that has two methods allowing threads to add ints to the buffer (add) and print the buffer (printBuffer) respectively. You should not make the add or print methods synchronized; instead the Producer class is going to handle mutual exclusion using Peterson's algorithm (part c). You may assume that the buffer will be sufficiently large that it does not become full (do not implement remove).
c) Now create a Producer class (based on the one in the slides) and implement Peterson's algorithm within this class (and without using the keyword synchronized in the buffer class) to ensure that two producer threads cannot execute add or printBuffer simultaneously (you may consider both calls to be within the same critical section). Hint 1: recall you can implement await using while and sleep. Hint 2: if you find this difficult to do as a single class, feel free to implement two separate thread classes named p and q that extend Producer, and have different implementations of the run method. (A generic sketch of Peterson's entry/exit protocol is given after part f below.)
d) Write a main method that:
i. Creates an instance of your buffer object of size 10;
ii. Creates two producer threads p and q that reference this buffer and loop 5 times
adding the number 1 or 2 respectively to the buffer, printing the buffer after each
addition.
iii. Starts these threads running.
e) Test the output of your code (remember to make sure you put the waste some time code in the run() method of the threads in order to make it more likely that you will spot bugs).
f) You can extend this exercise by implementing the other three solutions in the lecture and observing the output.
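A generic sketch of the Peterson entry/exit protocol referred to in part c (this is not the full Producer class; the class and field names are illustrative, and volatile is used so that the flag and turn writes are visible between the two threads):

public class PetersonLock {
    // Shared synchronisation variables; volatile so updates are seen by both threads.
    private volatile boolean want0 = false;
    private volatile boolean want1 = false;
    private volatile int turn = 0;

    // id must be 0 for one thread and 1 for the other.
    public void enter(int id) {
        int other = 1 - id;
        if (id == 0) { want0 = true; } else { want1 = true; }
        turn = other; // politely give way to the other thread
        // await: spin (sleeping briefly) while the other thread also wants in
        // and it is its turn
        while ((other == 0 ? want0 : want1) && turn == other) {
            try { Thread.sleep(1); } catch (InterruptedException e) { }
        }
    }

    public void exit(int id) {
        if (id == 0) { want0 = false; } else { want1 = false; }
    }
}

A producer thread with id 0 would then wrap its calls as lock.enter(0); buffer.add(1); buffer.printBuffer(); lock.exit(0); and the other thread likewise with enter(1)/exit(1).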
10. Revision exercises: for each of the solutions to the Mutual Exclusion problem:
a) Write out the solution then close your notes;
b) State which of the properties the solution satisfies and provide a proof for each;
c) State which of the properties the solution does not satisfy and provide an example
interleaving to demonstrate this.
Solutions to Week 2 Exercises
1. See lecture 2 slide 14.
2. A thread to do keyboard I/O; A thread to update the enemy position; a thread to update the character position; a thread to update the user interface; a main thread to start the other threads and do any other necessary computation.
3. If a web server were not multi-threaded then only one user would be able to connect at once, and others would have to wait until that user had completed all tasks and disconnected before they could connect. This would be very inefficient.
4. An atomic instruction is one whose execution on a CPU cannot be interrupted part way through. Incrementing a counter is not atomic because it requires loading the value of the counter into a register, then adding to it, then storing it back in memory. Two threads could both execute this and be interleaved thus:
• load x into register;
• load x into register;
• increment x;
• store x in memory;
• increment x;
• store x in memory.
In this scenario x will only appear to have been incremented once because the second thread loaded its value before the first had stored the result.
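A small runnable sketch of this interleaving (the class and variable names are illustrative): two threads each increment a plain int many times, and because each increment is a separate load, add and store, some updates are usually lost.

public class LostUpdate {
    static int count = 0; // deliberately unsynchronised shared counter

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 10000; i++) {
                count++; // load count, add 1, store count - not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints something less than 20000 because of lost updates.
        System.out.println("Expected 20000, got " + count);
    }
}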
5. If two processes attempt to update the array simultaneously then data loss can occur if the process of adding items to the array is not forced to occur under mutual exclusion. Suppose that both threads are going to store something at the next available position in the array:
• Thread 1: int i = get next available position;
• Thread 2: int i = get next available position;
• Thread 1: array[i] = myData;
• Thread 2: array[i] = myData;
• Thread 1: increment next available position;
• Thread 2: increment next available position.
Now the data from Thread 2 has overwritten the data from Thread 1, so the data Thread 1 put in the array is lost.
6.
• For mutual exclusion to be violated we must reach the state (p3,q3,_,_). That is, one
process must remain at line 3 whilst the other transitions to line 3. Since the processes are symmetrical we will assume p is at p3 and show that q cannot transition to q3;
• If p is at p3 then turn = 1:
• This is because the only statement that sets turn = 2 is p4, but p2 (await turn = 1) must execute between p4 and p3 (inspection of the program).
• If turn = 1 then:
    • if q is at q4 it will simply set it to 1 again and continue to q1;
    • if q is at q1 it will continue to q2; turn is unaffected;
    • if q is at q2 (or when it reaches q2) it will have to wait and cannot proceed.
Therefore mutual exclusion holds.
• The solution does not satisfy freedom from starvation. If either process terminates in its
non-critical section then the other can execute its critical section once. Following that it will set turn such that the terminated process can proceed and it cannot. Since the
terminated process has, in fact, terminated, it will never set turn to allow the executing process to run. An execution trace that leads to starvation of p is: q1 (q terminates), p1,p2,p3,p4.
7. State diagram (not reproduced here): since the state (p3, q3, _, _) never arises, mutual exclusion is satisfied.
Deadlock can occur: the state (p2, q2, t, t) is reachable, but there is no path out of it: if the system enters that state then it can never exit.
8.
• Mutual Exclusion: satisfied. For mutual exclusion to be violated we would require both processes to be in their critical sections, i.e. one must be at line 6 and the other must transition to line 6. Since the processes are symmetrical, let's consider the case where p is at p6 and q must transition to q6:
• If p is at p6 then want p = true:
• The only process to change wantp is p. Since p is at p6 the last statement
executed to assign a value to wantp was p5: wantp = true.
• If q is at q7,q1, or q2 it will proceed until it reaches q3 and no change will be made
to wantp.
• If q is at q3 (or reaches it from earlier) then it will remain in the while loop q3-q5,
altering the value of wantq, but it cannot exit this loop as q cannot change wantp,
only p can do that once it has left p6. Therefore mutual exclusion holds.
• Freedom from Deadlock: satisfied. For deadlock to occur both processes must be executing the while loop continuously without either being able to escape. The following is an extract of the state diagram for this region: we know that when p is at p3 wantp must be true, since p3 follows execution of either p2 or p5 (both of which are wantp = true) and q does not change wantp. Similarly at q3 wantq = true, therefore we need only consider this in the state diagram (no need for p3,q3,f,t or p3,q3,t,f). The dotted lines and ... represent joins to the remainder of the state diagram.
Observe that from every state in the state diagram it is possible to reach a state where either p or q enters its critical section (p3,q6,t,f or p6,q3,t,f). Therefore there is no deadlock in the system (for deadlock to occur there would need to exist a state from which it was not possible to reach one of these).
• Freedom from Starvation: not satisfied. Continued execution of the following cycle leads to starvation:
Week 7 Exercises

1. Consider the Bakery Algorithm for 2 processes shown below:

integer np ← 0, integer nq ← 0

Process p:
loop forever
    p1: non-critical section
    p2: np ← nq + 1
    p3: await nq = 0 or np <= nq
    p4: critical section
    p5: np ← 0

Process q:
loop forever
    q1: non-critical section
    q2: nq ← np + 1
    q3: await np = 0 or nq < np
    q4: critical section
    q5: nq ← 0
i. Show that Mutual Exclusion holds for this algorithm;
ii. Show that the algorithm is free from deadlock;
iii. Show that the algorithm is free from starvation.
2. Why does it no longer matter that assignment is atomic in the test and set and swap approaches?
3. Show how each of the following can be used to solve the critical section problem:
i. Test and Set; ii. Swap.
4. What are the possible outputs of the following program:
Semaphore S ← 1, boolean b ← false

Process p:
p1: wait(S)
p2: b ← true
p3: signal(S)

Process q:
q1: wait(S)
q2: while not b
q3:     print("*")
q4: signal(S)
5. Consider the following scenario. A swing bridge can be used by cars to cross the river. It can also be raised to allow tall ships to pass underneath it. The bridge must not be raised when cars are on it; and it must not be lowered whilst ships are passing underneath.
i. Write pseudocode to show how semaphores can be used to solve this problem.
ii. You will now implement this in Java. First create a Java class SwingBridge:
a) Add the following to the class Vehicle (given at the bottom of these exercises):
• A Semaphore that monitors whether the bridge is clear and allows one vehicle to use (or pass under) the bridge at once.
• A boolean bridgeOpen, representing the state of the bridge, which is initialised to false (meaning the bridge is down and can be used by road users).
b) Create a class Car which extends Vehicle:
• It should have a constructor that sets needsBridgeOpen to false;
• It should be able to run as a thread;
• Create the method crossBridge() which:
◦ Closes the bridge if it is open;
◦ Prints a message that v.getName() is crossing the bridge;
◦ Sleeps for 100ms;
• Implement the run method for this thread so that it:
◦ Loops 5 times:
◦ Waits for the bridge to be clear;
◦ Executes crossBridge()
◦ Signals that the bridge is clear.
c) Create a class Ship which extends Vehicle:
• It should have a constructor that sets needsBridgeOpen to true;
• It should be able to run as a thread;
• Create a method crossBridge() which:
◦ Opens the bridge if it is closed;
◦ Prints a message that v.getName() is sailing underneath the bridge;
◦ Sleeps for 150ms;
• Implement the run method for this thread so that it:
◦ Loops 3 times:
◦ Waits for the bridge to be clear;
◦ Executes crossBridge()
◦ Signals that the bridge is clear.
d) Write a main method that creates 3 ships, named ship1, ship2 and ship3. Creates
5 Cars named car1, car2,...car5. Runs all of these as threads.
• Test your code to make sure only one vehicle can use/pass underneath the
bridge at once.
6. Modify your code above so that 3 cars can use the bridge at once, but only one ship can go under the bridge at once.
• Hint 1: you need only modify the car class;
• Hint 2: you will need a class variable to count the number of cars on the bridge and you
will need to use a semaphore to protect access/modification to it.
7. Merge sort splits an array into two parts, sorts each of these parts and then merges the resulting sorted arrays. Implement merge sort in Java using threads and Semaphores.
Code for class Vehicle:
public abstract class Vehicle {
protected boolean needsBridgeOpen;
protected String name;
public Vehicle(String nameIn, boolean landBasedIn){
name = nameIn;
needsBridgeOpen = !landBasedIn;
}
}
Solutions to Week 7 Exercises

1.
integer np ← 0, integer nq ← 0

Process p:
loop forever
    p1: non-critical section
    p2: np ← nq + 1
    p3: await nq = 0 or np <= nq
    p4: critical section
    p5: np ← 0

Process q:
loop forever
    q1: non-critical section
    q2: nq ← np + 1
    q3: await np = 0 or nq < np
    q4: critical section
    q5: nq ← 0
i. Mutual Exclusion:
◦ For this to be violated we must get to the state (p4,q4,_,_);
◦ Either p is at p4 and q transitions to q4 (the processes are symmetrical modulo the <= tie-break), so let's assume p is at p4 and q is elsewhere:
▪ When p transitioned to p4 either nq = 0 or np <=nq;
▪ Further, np > 0 since it was assigned nq + 1 at p2 (neither np nor nq can ever become negative: by the structure of the program, values are only ever set to zero or increased) and q does not change the value of np.
▪ For q to transition to q4 we require np = 0 or nq < np;
• We know np != 0 from above.
• So now show it cannot be the case that nq < np:
◦ Since p must remain at p4 np cannot change (it is only modified by p), further note that the only thread that can change the value of nq is q.
◦ nq can change either by execution of line q5 or q2.
◦ If q executes q5 this makes nq < np (as np always >= 0, and np !=0) but then
q2 must execute before q reaches q3, which ensures that nq > np, hence the condition nq < np cannot be satisfied.
◦ A symmetric argument exists the other way round for q at q4 and p transitioning to p4.
ii. Freedom from Deadlock
• For this to be violated both processes must be waiting, that is p must be at p3 and q
at q3.
• In this case the following conditions must be simultaneously true:
◦ (nq != 0 and np > nq) and (np != 0 and np <= nq)
◦ This means we require: np > nq and np <= nq simultaneously which is trivially
false.
◦ Therefore deadlock cannot occur.
iii. Freedom from Starvation:
• For a process to starve it must be continuously executing its pre-protocol and unable to
enter its critical section. The processes are symmetrical apart from a tie break in favour
of p, therefore if we show that q cannot starve then we know neither can.
• For q to starve it must be waiting indefinitely at q3: await np = 0 or nq < np.
◦ If p is at p3 and q is waiting at q3 then p can proceed to p4 (absence of deadlock proof above); so follow reasoning below.
◦ If p is at p4 it will exit its critical section in finite time (standard assumption),
therefore it will execute p5 setting np to zero.
◦ P can then execute p1 where it can either terminate (q can run since np = 0), remain
infinitely (again q can run) or continue to p2.
◦ If p proceeds to p2 then np = nq + 1, and nq cannot change (it is only changed by q at q2 and q5, and q remains at q3 if it is starved). Therefore nq < np, so the await condition at p3 (nq = 0 or np <= nq) is false and p must wait at p3.
◦ From the deadlock proof above we now know that q cannot be forced to wait at q3
so it will proceed.
2. Because the only non-atomic assignment is setting lock to false, it doesn't matter whether two processes do this at the same time with interleaved instructions: the final result will always be that lock = false. Test-and-set is atomic, so the value of lock cannot be updated part way through the execution of that statement. The same argument applies to swap.
3. Test and Set:
i. Test and set changes the value of a variable (e.g. lock) to true, and returns what its
original value was. So if test and set returns false then we know lock was false (no one was in their critical section) but now it is true so we can go into our critical section, having set it to true to stop others doing the same. Lock must be set to false on exiting the critical section so that other processes are able to progress into their critical sections.
await(!testAndSet(lock))
critical section
lock = false;
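As an illustration (not part of the original exercise), the same protocol can be written in Java using java.util.concurrent.atomic.AtomicBoolean, whose getAndSet(true) behaves like testAndSet(lock):

import java.util.concurrent.atomic.AtomicBoolean;

public class TestAndSetLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);

    public void enter() {
        // Spin until getAndSet returns false, i.e. we were the thread that
        // flipped lock from false to true.
        while (lock.getAndSet(true)) {
            // busy-wait
        }
    }

    public void exit() {
        lock.set(false); // release the lock on leaving the critical section
    }
}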
ii. Swap exchanges the values of two boolean variables (e.g. lock and key). If we want to know that lock was false in order to enter the critical section then we can set key to true and call swap(lock,key). Key will remain true if lock is true, so we know that only when key becomes false we can proceed into our critical section. Further, since we know that key was true, and we just swapped its value with that of lock (which was false) then we know that lock is now true, so no other processes will enter their critical section.
key = true
while(key = true) swap(lock,key);
critical section
lock = false;
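Again as an illustration only: in Java, AtomicBoolean.getAndSet(key) atomically stores key into the shared variable and returns its old value, which is exactly the swap(lock, key) used above:

import java.util.concurrent.atomic.AtomicBoolean;

public class SwapLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);

    public void enter() {
        boolean key = true;
        while (key) {
            key = lock.getAndSet(key); // swap(lock, key)
        }
    }

    public void exit() {
        lock.set(false);
    }
}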
4. Either no output or an infinite number of *s
5. See source code.
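Since the referenced source code is not reproduced in this handout, here is a minimal sketch of the idea behind part i of exercise 5, using java.util.concurrent.Semaphore; the class and method names beyond those given in the exercise are illustrative only:

import java.util.concurrent.Semaphore;

public class SwingBridge {
    // One permit: only one vehicle may use (or pass under) the bridge at a time.
    private final Semaphore bridgeClear = new Semaphore(1);
    private boolean bridgeOpen = false; // false = bridge down, usable by cars

    public void cross(String name, boolean needsBridgeOpen, long millis)
            throws InterruptedException {
        bridgeClear.acquire(); // wait for the bridge to be clear
        try {
            if (bridgeOpen != needsBridgeOpen) {
                bridgeOpen = needsBridgeOpen; // raise or lower the bridge as needed
            }
            System.out.println(name + (needsBridgeOpen
                    ? " is sailing underneath the bridge"
                    : " is crossing the bridge"));
            Thread.sleep(millis); // time taken to cross or pass under
        } finally {
            bridgeClear.release(); // signal that the bridge is clear again
        }
    }
}

A Car thread would call cross(name, false, 100) five times and a Ship thread cross(name, true, 150) three times.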
6. No answer given; if you have an answer written and you want it checked, please email it to amanda.coles@kcl.ac.uk.
Week-3 Quiz (7 questions)

1. A race condition is:
• results when several threads try to access the same data concurrently
• results when several threads try to modify the same data concurrently
• results only when the outcome of execution does not depend on instruction order
• none of the above

2. An atomic operation:
• must consist of only one machine instruction
• cannot be used to solve the critical section problem
• executes as a single, uninterruptible unit
• none of the above

3. A semaphore is:
• essentially an integer variable
• accessed through only one standard operation
• able to be modified simultaneously by multiple threads

4. In Peterson's solution, which variable indicates if a process is ready to enter its critical section?
• turn
• lock
• flag[i]
• turn[i]

5. The first readers-writers problem requires that:
• once a writer is ready, that writer performs its write as soon as possible
• no reader will be kept waiting unless a writer has already obtained permission

6. A solution to the critical section problem does not have to satisfy which of the following requirements?
• mutual exclusion
• progress
• bounded waiting
• atomicity

7. How many philosophers may eat simultaneously in the Dining Philosophers problem with 5 philosophers?
• 1
• 2
• 3
Monitors and Concurrency in Java Exercises

1. What problems might occur if threads were allowed to call notifyAll outside of the monitor?

2. The dining philosophers problem can be solved using a monitor. The monitor maintains an array of ints representing fork availability and a pair of methods, one that allows philosophers to pick up both forks if both are free and another that allows a philosopher to put down both forks. You may assume each philosopher operates as follows:
    loop forever:
        think
        takeForks(i)
        eat
        releaseForks(i)
i. Show how this monitor can be implemented in pseudocode.
ii. Implement your solution in Java.

3. Implement the final version of the readers and writers monitor solution from the lecture slides in Java.
4. Write pseudocode (or modify your Java code) to implement the following policies in the readers and writers problem. You should do these as three independent exercises, not implement one program that does them all.
i. Readers always have priority over writers.
ii. No more than 4 readers can read from the database at once, but writers have priority.
iii. If any readers are waiting no more than two writers will be allowed to write before they
are allowed to read.
5. The London River Bus is a boat service on the Thames. In this question you will complete a model of the route of a single river bus service. The service in question calls at the following piers from east to west: Bankside, Blackfriars, Embankment, Millbank, St George Wharf. Download the Java source files for this question from KEATS, then answer the questions (the code is pasted below for your convenience):
i. Run the code; you will notice that passengers can board and leave the boat when it is sailing between piers. By changing only the headers of methods, and not the body, modify the code so that passengers cannot board or disembark whilst the boat is sailing. Other problems will be addressed in subsequent parts of the question.
ii. Now we need to address the requirement that passengers can only board the boat when it is at the stop they are waiting at, and should only leave the boat when it is at their desired destination. Modify the board and leave methods in the RiverBus class to force passengers to wait until the boat is in the correct location. (Don't forget: you will also need to add an appropriate statement elsewhere to cause waiting threads to wake up when the boat arrives at a new location.) A rough sketch of one possible guarded board method is given after the code listings below.
iii. You may now notice that passengers can board the boat even if it is not travelling in the correct direction. Since the boats can get full at busy hours, tickets do not permit passengers to board the boat and take the long way round, instead they may only board if the boat is going in the right direction. Impose this restriction in your code (Hint: if a passenger is going eastbound their location will be greater than their destination).
iv. It is generally considered polite to allow passengers to leave before new passengers board. Implement within your code the restriction that all passengers who are boarding at a given destination must allow anyone leaving to leave first (Hint: you will need to store an array of counters of the number of passengers currently on board who will be leaving at each destination).
public class RiverBus {
private String[] locations = {"Bankside", "Blackfriars", "Embankment", "Millbank",
"St George Wharf"};
boolean eastbound = true;
int currentLocation = 0;
int numberOfPassengers = 0;
public void sail(){
System.out.print("River Bus Leaving " + locations[currentLocation] + " with " +
numberOfPassengers + " passengers going");
if(eastbound){
System.out.println(" eastbound");
} else {
System.out.println(" westbound");
}
try{
Thread.sleep(100);
} catch(InterruptedException e){
e.printStackTrace();
}
if(eastbound){
++currentLocation;
} else {
--currentLocation;
}
if(currentLocation == 0 || currentLocation == locations.length -1){
eastbound = !eastbound; //turn around if we've reached the end of the route
}
System.out.println("River Bus Arrived at " + locations[currentLocation]);
}
public void board(Passenger p){
++numberOfPassengers;
System.out.println("Passenger " + p.getPassengerName() + " travelling " +
p.getDirection() + " from " + locations[p.getLocation()] + " to " +
locations[p.getDestination()] + " boarding the bus");
}
public void leave(Passenger p){
--numberOfPassengers;
System.out.println("Passenger " + p.getPassengerName() + " travelling " +
p.getDirection() + " from " + locations[p.getLocation()] + " to " +
locations[p.getDestination()] + " leaving the bus");
} }
public class Timetable extends Thread {
RiverBus boat;
public Timetable(RiverBus toControl) {
boat = toControl;
}
public void run (){
for(int i = 0; i < 20; ++i){
try{
Thread.sleep(20);
} catch(Exception e){
e.printStackTrace();
}
boat.sail();
} }
}
public class Passenger extends Thread {
private String name;
private int location;
private int destination;
RiverBus boat;
public Passenger(RiverBus transport, String nameIn, int from, int to){
name = nameIn;
location = from;
destination = to;
boat = transport;
}
public void run(){
boat.board(this);
boat.leave(this);
}
public String getPassengerName() {
return name;
}
public int getDestination() {
return destination;
}
public int getLocation() {
return location;
}
public String getDirection(){
if(location > destination){
return "eastbound";
}
return "westbound";
}
}
import java.util.Random;
public class RiverBusSimulation {
public static void main(String[] args) {
RiverBus boat = new RiverBus();
Timetable t = new Timetable(boat);
t.start();
Random r = new Random();
for(int i = 0; i < 20; ++i){
int from = r.nextInt(5);
int to = r.nextInt(5);
while(from == to){
to = r.nextInt(5);
}
Passenger p = new Passenger(boat, "passenger" + i, from, to);
p.start(); }
} }
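As a rough hint for part ii (one possible approach, not the model answer), the board method can be turned into a guarded method using Java's intrinsic monitor; sail would also need to be declared synchronized and call notifyAll() after updating currentLocation:

// Sketch: passengers wait until the boat is docked at their starting pier.
public synchronized void board(Passenger p) {
    while (currentLocation != p.getLocation()) {
        try {
            wait(); // woken when sail() calls notifyAll() at the next pier
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    ++numberOfPassengers;
    System.out.println("Passenger " + p.getPassengerName() + " boarding at "
            + locations[p.getLocation()]);
}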
Week 5 Exercises
From “Operating Systems Concepts” Silberschatz, Galvin and Gagne.
1. Consider the gridlock example in the lecture slides:
a) Show that each of the four necessary conditions for deadlock hold;
b) State a simple rule for avoiding deadlocks in this system.
2. Consider the deadlock situation that could occur in the dining philosophers problem when the philosophers obtain the forks one at a time.
a) Show that each of the four necessary conditions for deadlock hold in this setting.
b) Show how deadlocks could be avoided by eliminating any one of the four conditions.
3. A possible solution for preventing deadlocks is to have a single, higher-order resource that must be requested before any other resource. For example, if multiple threads attempt to access the synchronization objects A...E, deadlock is possible. (Such synchronization objects may include mutexes, semaphores, condition variables, etc.) We can prevent the deadlock by adding a sixth object F. Whenever a thread wants to acquire the synchronization lock for any object A...E, it must first acquire the lock for object F. This solution is known as containment: the locks for objects A...E are contained within the lock for object F. Compare this scheme with the circular-wait scheme (imposing a total order on all resources and requiring that they are all requested in this order).
4. Compare the circular-wait scheme with the various deadlock-avoidance schemes (like the banker's algorithm) with respect to the following issues:
a) Runtime overheads.
b) System throughput.
5. In a real computer system, neither the resources available nor the demands of processes for resources are consistent over long periods (months). Resources break or are replaced, new processes come and go, new resources are bought and added to the system. If deadlock is controlled by the banker's algorithm, which of the following changes can be made safely (without introducing the possibility of deadlock), and under what circumstances?
a) Increase Available (new resources added).
b) Decrease Available (resource permanently removed from system).
c) Increase Max for one process (the process needs or wants more resources than allowed).
d) Decrease Max for one process (the process decides it does not need that many resources).
e) Increase the number of processes.
f) Decrease the number of processes.
6. Consider a system consisting of four resources of the same type that are shared by three processes, each of which needs at most two resources. Show that the system is deadlock free.
7. Consider the dining-philosophers problem where the chopsticks are placed at the centre of the table and any two of them could be used by a philosopher. Assume that requests for chopsticks are made one at a time. Describe a simple rule for determining whether a particular request could be satisfied without causing deadlock given the current allocation of chopsticks to philosophers.
8. Consider the same setting as the previous problem. Assume now that each philosopher requires three chopsticks to eat and that resource requests are still issued separately. Describe some simple rules for determining whether a particular request could be satisfied without causing deadlock given the current allocation of chopsticks to philosophers.
9. We can obtain the banker's algorithm for a single resource type from the general banker's algorithm simply by reducing the dimensionality of the various arrays by 1. Show through an example that the multiple resource-type banker's scheme cannot be implemented by individual application of the single-resource-type scheme to each resource type.
10. Consider the following snapshot of a system, then answer the following questions using the banker's algorithm:
a) Is the system in a safe state?
b) If a request from process Pi arrives for (0,4,2,0), can the request be granted immediately?
        Allocated       Max             Available
        A  B  C  D      A  B  C  D      A  B  C  D
P0      0  0  1  2      0  0  1  2      1  5  2  0
P1      1  0  0  0      1  7  5  0
P2      1  3  5  4      2  3  5  6
P3      0  6  3  2      0  6  5  2
P4      0  0  1  4      0  6  5  6
11. What is the optimistic assumption made in the deadlock-detection algorithm? How could this assumption be violated?
Solutions to Week 5 Exercises
1. Answer:
a) The four necessary conditions for a deadlock are mutual exclusion; hold-and-wait; no preemption;
and circular wait. The mutual exclusion condition holds as only one car can occupy a space in the roadway. Hold-and-wait occurs where a car holds onto its place in the roadway while it waits to advance. A car cannot be removed (i.e. preempted) from its position in the roadway. Lastly, there is indeed a circular wait as each car is waiting for a subsequent car to advance. The circular wait condition is also easily observed from the graphic.
b) A simple rule that would avoid this traffic deadlock is that a car may not advance into an intersection if it is clear it will not be able to immediately clear the intersection.
2. Deadlock is possible because the four necessary conditions hold in the following manner: 1) mutual exclusion is required for chopsticks, 2) the philosophers tend to hold onto the chopstick in hand while they wait for the other chopstick, 3) there is no pre-emption of chopsticks in the sense that a chopstick allocated to a philosopher cannot be forcibly taken away, and 4) there is a possibility of circular wait. Deadlocks could be avoided by overcoming the conditions in the following manner: 1) allow simultaneous sharing of chopsticks, 2) have the philosophers relinquish the first chopstick if they are unable to obtain the other chopstick, 3) allow for chopsticks to be forcibly taken away if a philosopher has had a chopstick for a long period of time, and 4) enforce a numbering of the chopsticks and always obtain the lower numbered chopstick before obtaining the higher numbered one.
3. This is probably not a good solution because it yields too large a scope. It is better to define a locking policy with as narrow a scope as possible.
4. A deadlock-avoidance scheme tends to increase the runtime overheads due to the cost of keeping track of the current resource allocation. However, a deadlock-avoidance scheme allows for more concurrent use of resources than schemes that statically prevent the formation of deadlock. In that sense, a deadlock-avoidance scheme could increase system throughput.
5.
a) Increase Available (new resources added) - This could safely be changed without any problems.
b) Decrease Available (resource permanently removed from system) - This could have an effect on the
system and introduce the possibility of deadlock as the safety of the system assumed there were a
certain number of available resources.
c) Increase Max for one process (the process needs more resources than allowed, it may want more) -
This could have an effect on the system and introduce the possibility of deadlock.
d) Decrease Max for one process (the process decides it does not need that many resources) - This could
safely be changed without any problems.
e) Increase the number of processes - This could be allowed assuming that resources were allocated to
the new process(es) such that the system does not enter an unsafe state.
f) Decrease the number of processes - This could safely be changed without any problems.
6. Suppose the system is deadlocked. This implies that each process is holding one resource and is waiting for one more. Since there are three processes and four resources, one process must be able to obtain two resources. This process requires no more resources and, therefore it will return its resources when done.
7. The following rule prevents deadlock: when a philosopher makes a request for their first chopstick, deny the request only if no other philosopher currently holds two chopsticks and only one chopstick remains (i.e. never hand out the last chopstick as somebody's first unless another philosopher already has a pair and will therefore eventually finish eating and release chopsticks); otherwise the request can be satisfied.
8. When a philosopher makes a request for a chopstick, allocate the request if: 1) the philosopher has two chopsticks and there is at least one chopstick remaining, 2) the philosopher has one chopstick and there are at least two chopsticks remaining, 3) there is at least one chopstick remaining and there is at least one philosopher with three chopsticks, or 4) the philosopher has no chopsticks, there are two chopsticks remaining, and there is at least one other philosopher with two chopsticks assigned. A predicate capturing these rules is sketched below.
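Expressed as code (a sketch only; the method, array and parameter names are illustrative), the four rules above amount to the following predicate, where held[i] is the number of chopsticks philosopher i currently holds and remaining is the number still in the centre of the table:

// Returns true if philosopher p's request for one more chopstick can be
// granted without risking deadlock, following rules 1-4 above.
static boolean canGrant(int p, int[] held, int remaining) {
    boolean someoneHasThree = false;
    boolean someoneElseHasTwo = false;
    for (int i = 0; i < held.length; i++) {
        if (held[i] >= 3) someoneHasThree = true;
        if (i != p && held[i] >= 2) someoneElseHasTwo = true;
    }
    return (held[p] == 2 && remaining >= 1)                       // rule 1
        || (held[p] == 1 && remaining >= 2)                       // rule 2
        || (remaining >= 1 && someoneHasThree)                    // rule 3
        || (held[p] == 0 && remaining >= 2 && someoneElseHasTwo); // rule 4
}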
9. Consider a system with resources A, B, and C and processes P0, P1, P2, P3, and P4 with the following values of Allocation:
        A  B  C
P0      0  1  0
P1      3  0  2
P2      2  0  2
P3      2  1  1
P4      0  0  2

And the following remaining need for resources (max need minus allocation):

        A  B  C
P0      7  4  3
P1      0  2  0
P2      6  0  0
P3      0  1  1
P4      4  3  1

If the value of Available is (2 3 0), we can see that a request from process P0 for (0 2 0) cannot be satisfied, as this lowers Available to (2 1 0) and no process could safely finish.

However, if we were to treat the three resources as three single-resource-type instances of the banker's algorithm, we get the following. For resource A (of which we have 2 available):

        Allocated   Need
P0      0           7
P1      3           0
P2      2           6
P3      2           0
P4      0           4

Processes could safely finish in the order P1, P3, P4, P2, P0.

For resource B (of which we now have 1 available, as 2 were assumed assigned to process P0):

        Allocated   Need
P0      3           2
P1      0           2
P2      0           0
P3      1           1
P4      0           3

Processes could safely finish in the order P2, P3, P1, P0, P4.

And finally, for resource C (of which we have 0 available):

        Allocated   Need
P0      0           3
P1      2           0
P2      2           0
P3      1           1
P4      2           1

Processes could safely finish in the order P1, P2, P0, P3, P4.
As we can see, if we use the banker’s algorithm for multiple resource types, the request for resources (0 2 0) from process P0 is denied as it leaves the system in an unsafe state. However, if we consider the banker’s algorithm for the three separate resources where we use a single resource type, the request is granted. Therefore, if we have multiple resource types, we must use the banker’s algorithm for multiple resource types.
10.
a) Yes. With Available equal to (1, 5, 2, 0), either process P0 or P3 could run. Once process P3 runs, it releases its resources, which allows all other existing processes to run.
b) Yes. This results in the value of Available being (1, 1, 0, 0). One ordering of processes that can finish
is P0, P2, P3, P1, and P4.
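For reference, the Need matrix (Max minus Allocation) implied by the snapshot above, and a worked safe sequence for part a, are:

        Need
        A  B  C  D
P0      0  0  0  0
P1      0  7  5  0
P2      1  0  0  2
P3      0  0  2  0
P4      0  6  4  2

With Available = (1, 5, 2, 0), the sequence P0, P3, P2, P1, P4 is safe: at each step the process's Need is covered by Available, and the process then returns its allocation, increasing Available for the next process.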
11. The optimistic assumption is that there will not be any form of circular wait in terms of resources allocated and processes making requests for them. This assumption could be violated if a circular wait does indeed occur in practice.
Operating Systems Week 6 – Solutions
Q1. Briefly explain why paging suffers from internal fragmentation but not external fragmentation.
Answer: Internal fragmentation occurs as a full frame is allocated to a page, even though only a few bytes may be needed by the process to store the data it requested memory for. There is no external fragmentation as there are no gaps between pages: each frame is exactly one page in size, so frames are stored adjacently in memory.

Q2. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?

Answer:
– First-fit:
212K is put in 500K partition (leaving a 288K partition)
417K is put in 600K partition
112K is put in 288K partition
426K must wait (cannot be allocated)
– Best-fit:
212K is put in 300K partition
417K is put in 500K partition
112K is put in 200K partition
426K is put in 600K partition
– Worst-fit:
212K is put in 600K partition (leaving a 388K partition)
417K is put in 500K partition
112K is put in 388K partition
426K must wait (cannot be allocated)
In this example, best-fit makes the most efficient use of memory: it is the only algorithm that is able to place all four processes.
Q3.
Size of the objects: 4 bytes per variable x 3 variables = 12 bytes.
Memory used after the loop (which creates 16 objects) = 16 * 12 = 192 bytes.
Using a contiguous memory allocation model: 16 holes were created, i.e. one per object. These are 16 bytes each (it says in the question). 16 * 16 = 256 bytes for memory control blocks.
NB. The importance of the 10000 was just to make sure it was a very large hole. If the hole was 192 + 15 * 16 = 432 bytes, it still would have been just large enough, and in fact, only 15 memory control blocks would have been created: after storing the 16th Coordinate, there would be literally 0 memory left, so no space for another memory control block.
Q4. Consider a paging system with the page table stored in memory.
a. If a memory reference takes 200 nanoseconds, how long does a paged memory
reference take?
b. If we add associative registers, and 75 percent of all page-table references are found in
the associative registers, what is the effective memory reference time? (Assume that finding a page-table entry in the associative registers takes zero time, if the entry is there.)
Answer:
a. 400 nanoseconds; 200 nanoseconds to access the page table and 200 nanoseconds to access the word in memory.
b. Effective access time = 0.75 (200 nanoseconds) + 0.25 (400 nanoseconds) = 250 nanoseconds.
Q5
The object pool is of size 10, so for the 16 coordinates, 2 pools are created. Thus:
a) Memory for memory control blocks = one per pool = 2 * 16 bytes = 32 bytes
b) Coordinate objects that are in use = 16 * 12 bytes = 192 bytes, as before
c) Spare coordinate objects: 20 were created, 16 were used, so 4 spare: 4 * 12 = 48 bytes
d) Single-linked list nodes: one for each spare = 4 * 8 = 32 bytes
e) Total overhead = 32 + 32 + 48 = 112 bytes
(Compare this to Q3 where the overheads were 256 bytes.)
Activity- 4.
Consider the following Java code:
1236: public class example {
1237: public static int x = 7;
1238: public static int y = 10;
1239: public static void a() {
1240: int t = 3;
1241: x = x + t;
1242: t = t * 2;
1243: y = y + t;
1244: b(t);
1245: System.out.println(x + y);
1246: }
1247: public static void b(int param) {
1248: int s = 7;
1249: for (int i = 0; i < 3; ++i) {
1250: s = s + param;
1251: }
1252: x = x + 1;
1253: // What is on the stack here?
1254: }
1255: public static void main(String[] args) {
1256: int r = 30;
1257: r = r -4;
1258: a();
1259: System.exit(0);
1260: }
1261: }
What is printed to the screen when this code executes? 27 (x + y = 11 + 16).
What values are on the stack when execution reaches line 1253? In the order listed, from the most recent frame (b) down to main: 25 (s), 1245 (return address back into a), 6 (param), 6 (t), 1259 (return address back into main), 26 (r).
5CCS2OSC Lecture 7 – Exercises
Q1. Discuss the hardware support required to support virtual memory via paging.
Q2. A certain computer provides its users with a virtual-memory space of 2^32 bytes. The computer has 2^18 bytes of physical memory. The virtual memory is implemented by paging, and the page size is 4096 bytes.
• Which bits in a virtual-memory address are page offsets and which are used to index into the page table?
• If each physical memory address is stored in 32-bits, how large is the page table?
Q3. Consider a system with 3 frames and the following sequence of page accesses:
1, 2, 3, 1, 2, 4, 1, 2, 3, 1, 2, 4, 1, 2, 3.
When do page faults occur using FIFO, LRU and optimal replacement algorithms?
Q4. Consider a system with virtual memory, and the following time-measured statistics:
CPU utilization 20%
Swapping disk 97.7%
Other I/O devices 5%
Which (if any) of the following will (probably) improve CPU usage? Explain your answer.
a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add pre-paging to the page fetch algorithms.
h. Increase the page size.
Exercises for Lecture 7 – Solutions

Q1. Discuss the hardware support required to support demand paging.

Answer: For every memory access operation, the page table needs to be consulted to check whether the corresponding page is resident or not and whether the program has read or write privileges for accessing the page. These checks would have to be performed in hardware. A TLB could serve as a cache and improve the performance of the lookup operation.

Q2. A certain computer provides its users with a virtual-memory space of 2^32 bytes. The computer has 2^18 bytes of physical memory. The virtual memory is implemented by paging, and the page size is 4096 bytes. Which bits in a virtual-memory address are page offsets and which are used to index into the page table? How large is the page table?

Answer: Since the page size is 4096 bytes, the low-order 12 bits of an address are used to offset into the page. The remaining 20 bits are used for the page number: an index into the page table. The page table has up to 2^20 address-translation entries. Each is 32 bits (4 bytes), so the page table size is 4 * 2^20 bytes = 4 MB.

Q3. Consider a system with 3 frames and the following sequence of page accesses: 1, 2, 3, 1, 2, 4, 1, 2, 3, 1, 2, 4, 1, 2, 3. When do page faults occur using FIFO, LRU and optimal replacement algorithms?

Answer (F marks a page fault, - a hit):

Reference:  1  2  3  1  2  4  1  2  3  1  2  4  1  2  3
FIFO:       F  F  F  -  -  F  F  F  F  -  -  F  F  F  F
LRU:        F  F  F  -  -  F  -  -  F  -  -  F  -  -  F
OPT:        F  F  F  -  -  F  -  -  F  -  -  F  -  -  F

Q4. Consider a demand-paging system with the following time-measured utilizations:
CPU utilization 20%
Paging disk 97.7%
Other I/O devices 5%
Which (if any) of the following will (probably) improve CPU utilization? Explain your answer.

a. Install a faster CPU.
No – the CPU would just sit at e.g. 10%, whilst still waiting for paging.

b. Install a bigger paging disk.
No – the paging disk would just have more free space, which doesn't make paging any faster.

c. Increase the degree of multiprogramming.
No – this would increase the total size of the processes' working sets, making thrashing worse.

d. Decrease the degree of multiprogramming.
Yes – less multiprogramming → smaller total working set size → less paging → if reduced enough, eliminate thrashing.

e. Install more main memory.
Yes – unless memory usage is obscene, this will help, as it increases the likelihood that a process's working set will be in memory.

f. Install a faster hard disk or multiple controllers with multiple hard disks.
Yes – this reduces the page-fault-handling time. (Though in practice, adding more memory is usually better.)

g. Add prepaging to the page fetch algorithms.
Maybe – if the page accesses can be anticipated, this will help, as rather than page-faulting, the pages will already be in memory at the point they are needed.

h. Increase the page size.
Could go either way. This increases the working set size of a process if it needs data scattered across several pages – e.g. if it needs data on 20 pages, it needs all 20 in memory, and if they are 4 MB not 4 KB, that increases the amount of memory it needs. If the data it needs is all on one page, it could help, as it is quicker to swap in/out one large page than several smaller pages, reducing page-fault-handling time. As we don't know a priori which of these two cases is true, we can't predict what will happen.
Week-8 Tutorials
Q1. Explain the meaning of the term object as it relates to protection in a computer system. What are the two general types of objects in a system?
Q2. A process is said to operate within a protection domain which specifies the resources that the process may access. List the ways that a domain can be realized.
Q3. What is an access matrix and how can it be implemented?
Q4. Why is a global table implementation of an access matrix not typically implemented?
Q5. Explain how Java provides protection through type safety.
Week-8 Tutorials – Solutions
Q1. Explain the meaning of the term object as it relates to protection in a computer system. What are the two general types of objects in a system?
Ans: A computer system is a collection of processes and objects. Each object has a unique name that differentiates it from all other objects in the system, and each can be accessed only through well-defined and meaningful operations. Objects are essentially abstract data types and include hardware objects (such as the CPU, memory segments, printer, and disks) and software objects (such as files, programs, and semaphores).
Q2. A process is said to operate within a protection domain which specifies the resources that the process may access. List the ways that a domain can be realized.
Ans: A domain may be realized where each user, process, or procedure may be a domain. In the first case, the set of objects that can be accessed depends on the identity of the user. In the second case, the set of objects that can be accessed depends upon the identity of the process. Finally, the third case specifies that the set of objects that can be accessed depends on the local variables defined with the procedure.
Q3. What is an access matrix and how can it be implemented?
Ans: An access matrix is an abstract model of protection where the rows represent domains and the columns represent objects. Each entry in the matrix consists of a set of access rights. Access matrices are typically implemented using a global table, an access list for objects, a capability list for domains, or a lock-key mechanism.
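As a rough sketch of the per-object access-list option (the struct layout, names, and rights encoding below are illustrative assumptions, not the book's notation):

    #include <stdio.h>

    /* Access rights encoded as bit flags. */
    enum { RIGHT_READ = 1, RIGHT_WRITE = 2, RIGHT_EXECUTE = 4 };

    /* One access-list entry: a <domain, rights> pair stored with the object. */
    struct ace { int domain; unsigned rights; };

    /* Access list for a single object, e.g. one file. */
    struct ace file_acl[] = {
        { 1, RIGHT_READ | RIGHT_WRITE },   /* domain 1 may read and write */
        { 2, RIGHT_READ },                 /* domain 2 may only read      */
    };

    /* Check whether 'domain' holds 'right' on the object owning this list. */
    int allowed(struct ace *acl, int n, int domain, unsigned right)
    {
        for (int i = 0; i < n; i++)
            if (acl[i].domain == domain && (acl[i].rights & right))
                return 1;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", allowed(file_acl, 2, 2, RIGHT_WRITE)); /* 0: domain 2 may not write */
        return 0;
    }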
Q4. Why is a global table implementation of an access matrix not typically implemented?
Ans: The global table implementation suffers from a couple of drawbacks that keep it from being a popular implementation type. The first drawback is that the table is usually large and cannot be stored in main memory. If the table cannot be stored in main memory, extra I/O must be used to access this table. In addition, a global table makes it difficult to take advantage of special groupings of objects or domains.
Q5. Explain how Java provides protection through type safety.
Ans: Java's load-time and run-time checks enforce type safety of Java classes. Type safety ensures that classes cannot treat integers as pointers, write past the end of an array, or otherwise access memory in arbitrary ways. Rather, a program can access an object only via the methods defined on that object by its class. This enables a class to effectively encapsulate its data and methods from other classes loaded in the same JVM.
Week-9 Tutorial
1. What are the four levels of security measures that are necessary for system protection?
2. What is a trap door? Why is it problematic?
3. How does a virus differ from a worm?
4. What is the most common way for an attacker outside of the system to gain unauthorized access to the target system?
5. What are the two main methods used for intrusion detection?
6. What is port scanning and how is it typically launched?
7. What role do keys play in modern cryptography?
8. What is the difference between symmetric and asymmetric encryption?
9. What are the two main varieties of authentication algorithms?
10. What is the practice of safe computing? Give two examples.
Week-9 Tutorial
1. What are the four levels of security measures that are necessary for system protection?
Ans: To protect a system, security measures must take place at four levels: physical (machine rooms, terminals, and workstations); human (user authorization, avoidance of social engineering); operating system (protection against accidental and purposeful security breaches); and network (leased, Internet, and wireless connections).
Section: 15.1
2. What is a trap door? Why is it problematic?
Ans: A trap door is an intentional hole left in software by the designer of a program or system. It can allow circumvention of security features for those who know about the hole. Trap doors pose a difficult problem because, to detect them, we have to analyze all the source code for all components of a system.
Section: 15.2.2
3. How does a virus differ from a worm?
Ans: A worm is structured as a complete, standalone program whereas a virus is a fragment of code embedded in a legitimate program.
Section: 15.3
4. What is the most common way for an attacker outside of the system to gain unauthorized access to the target system?
Ans: The stack- or buffer-overflow attack is the most common way for an attacker outside the system to gain unauthorized access to a system. This attack exploits a bug in the software in order to overflow some portion of the program and cause the execution of unauthorized code.
Section: 15.2.4
5. What are the two main methods used for intrusion detection?
Ans: The two most common methods employed are signature-based detection and anomaly detection. In signature-based detection, system input or network traffic is examined for specific behavior patterns known to indicate attacks. In anomaly detection, one attempts, through various techniques, to detect anomalous behavior within computer systems.
Section: 15.6.3
6. What is port scanning and how is it typically launched?
Ans: Port scanning is not an attack but rather is a means for a cracker to detect a system's vulnerabilities to attack. Port scanning typically is automated, involving a tool that attempts to create a TCP/IP connection to a specific port or a range of ports. Because port scans are detectable, they are frequently launched from zombie systems.
Section: 15.3.2
7. What role do keys play in modern cryptography?
Ans: Modern cryptography is based on secrets called keys that are selectively distributed to computers in a network and used to process messages. Cryptography enables a recipient of a message to verify that the message was created by some computer possessing a certain key - the key is the source of the message. Similarly, a sender can encode its message so that only a computer with a certain key can decode the message, so that the key becomes the destination.
Section: 15.4
8. What is the difference between symmetric and asymmetric encryption?
Ans: In a symmetric encryption algorithm, the same key is used to encrypt and to decrypt. In an asymmetric encryption algorithm, there are different encryption and decryption keys. Asymmetric encryption is based on mathematical functions instead of the transformations used in symmetric encryption, making it much more computationally expensive to execute.
Section: 15.4.1
9. What are the two main varieties of authentication algorithms?
Ans: The first type of authentication algorithm, a message-authentication code (MAC), uses symmetric encryption. In MAC, a cryptographic checksum is generated from the message using a secret key. The second type of authentication algorithm, a digital-signature algorithm, uses a public and private key. The authenticators thus produced are called digital signatures.
Section: 15.4.1
10. What is the practice of safe computing? Give two examples.
Ans: The best practice against computer viruses is prevention, or the practice of safe computing. Purchasing unopened software from vendors and avoiding free or pirated copies from public sources or disk exchange offer the safest route to preventing infection. Another defense is to avoid opening any e-mail attachments from unknown users.
Section: 15.6.4
Week-9 Security - Additional Notes
Introduction
In many applications, ensuring the security of the computer system is worth considerable effort. For example, large commercial systems containing payroll or other financial data could be a target of theft, and systems that contain data related to a company's operations may be of interest to its competitors.
Furthermore, loss of such data, whether by accident or fraud, can seriously impair the ability of the corporation to function. We say that a system is secure if its resources are used and accessed as intended under all circumstances.
Security Violations:
1. Breach of confidentiality
This type of violation involves unauthorized reading of data (or theft of information). Typically, a breach of confidentiality is the goal of an intruder. Capturing secret data from a system or a data stream, such as credit-card information or identity information for identity theft, can result directly in money for the intruder.
2. Breach of integrity
This violation involves unauthorized modification of data. Such attacks can, for example, result in passing of liability to an innocent party or modification of the source code of an important commercial application.
3. Breach of availability
This violation involves unauthorized destruction of data. Some crackers would rather wreak havoc and gain status or bragging rights than gain financially. Web-site defacement, in which the cracker breaks into the web server and changes the visual appearance of the page, is a common example of this type of security breach.
4. Theft of service
This violation involves unauthorized use of resources. For example, an intruder (or intrusion program) may install a daemon on a system that acts as a file server.
5. Denial of service
This violation involves preventing legitimate use of the system. Denial-of-service, or DOS, attacks are sometimes accidental. The original Internet worm turned into a DOS attack when a bug failed to delay its rapid spread.
Methods:
1. Masquerading: in which one participant in a communication pretends to be someone else (another host or another person). By masquerading, attackers breach authentication, the correctness of identification; they can then gain access that they would not normally be allowed or escalate their privileges—obtain privileges to which they would not normally be entitled.
2. Replay attack consists of the malicious or fraudulent repeat of a valid data transmission. Sometimes the replay comprises the entire attack— for example, in a repeat of a request to transfer money. But frequently it is done along with message modification, again to escalate privileges.
3. Man-in-the-middle: in which an attacker sits in the data flow of a communication, masquerading as the sender to the receiver, and vice versa. In a network communication, a man-in-the-middle attack may be preceded by a session hijacking, in which an active communication session is intercepted.
Program Threats
Processes, along with the kernel, are the only means of accomplishing work on a computer. Therefore, writing a program that creates a breach of security, or causing a normal process to change its behavior and create a breach, is a common goal of crackers. In fact, even most nonprogram security events have as their goal causing a program threat. For example, while it is useful to log in to a system without authorization, it is quite a lot more useful to leave behind a back-door daemon that provides information or allows easy access even if the original exploit is blocked. In this section, we describe common methods by which programs cause security breaches.
1. Trojan Horse: A text-editor program, for example, may include code to search the file to be edited for certain keywords. If any are found, the entire file may be copied to a special area accessible to the creator of the text editor.
A variation of the Trojan horse is a program that emulates a login program. An unsuspecting user starts to log in at a terminal and notices that he has apparently mistyped his password. He tries again and is successful. What has happened is that his authentication key and password have been stolen by the login emulator, which was left running on the terminal by the thief.The emulator stored away the password, printed out a login error message, and exited; the user was then provided with a genuine login prompt. This type of attack can be defeated by having the operating system print a usage
message at the end of an interactive session or by a non-trappable key sequence.
Another variation on the Trojan horse is spyware. Spyware sometimes accompanies a program that the user has chosen to install. Most frequently, it comes along with freeware or shareware programs, but sometimes it is included with commercial software. The goal of spyware is to download ads to display on the user’s system, create pop-up browser windows when certain sites are visited, or capture information from the user’s system and return it to a central
site. This latter practice is an example of a general category of attacks known as covert channels, in which surreptitious communication occurs. For example, the installation of an innocuous-seeming program on a Windows system could result in the loading of a spyware daemon. The spyware could contact a central site, be given a message and a list of recipient addresses, and deliver the spam message to those users from the Windows machine. This process continues until the user discovers the spyware.
2. Trap door: The designer of a program or system might leave a hole in the software that only he is capable of using. This type of security breach (or trap door) was shown in the movie WarGames. For instance, the code might check for a specific user ID or password, and it might circumvent normal security procedures. Programmers have been arrested for embezzling from banks by including rounding errors in their code and having the occasional half-cent credited to their accounts. This account crediting can add up to a large amount of money, considering the number of transactions that a large bank executes.
A clever trap door could be included in a compiler. Trap doors pose a difficult problem because, to detect them, we have to analyze all the source code for all components of a system.
3. Logic Bomb: A programmer, for example, might write code to detect whether she was still employed; if that check failed, a daemon could be spawned to allow remote access, or code could be launched to cause damage to the site.
4. Stack or buffer overflow: attack exploits a bug in a program. The bug can be a simple case of poor programming, in which the programmer neglected to code bounds checking on an input field. In this case, the attacker sends more data than the program was expecting. By using trial and error, or by examining the source code of the attacked program if it is available, the attacker determines the vulnerability and writes a program to do the following:
1. Overflow an input field, command-line argument, or input buffer—for example, on a network daemon—until it writes into the stack.
2. Overwrite the current return address on the stack with the address of the exploit code loaded in step 3.
3. Write a simple set of code for the next space in the stack that includes the commands that the attacker wishes to execute—for instance, spawn a shell.
Consider the simple C program shown in slide-9. This program creates a character array of size BUFFER_SIZE and copies the contents of the parameter provided on the command line—argv[1]. As long as the size of this parameter is less than BUFFER_SIZE (we need one byte to store the null terminator), this program works properly. But consider what happens if the parameter provided on the command line is longer than BUFFER_SIZE. In this scenario, the strcpy() function will begin copying from argv[1] until it encounters a null terminator (\0) or until the program crashes. Thus, this program suffers from a potential buffer-overflow problem in which copied data overflow the buffer array.
Note that a careful programmer could have performed bounds checking on the size of argv[1] by using the strncpy() function rather than strcpy(), replacing the line "strcpy(buffer, argv[1]);" with "strncpy(buffer, argv[1], sizeof(buffer)-1);".
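A minimal sketch in the spirit of that program (the exact slide-9 listing may differ; BUFFER_SIZE, the argument handling, and the printf are assumptions):

    #include <stdio.h>
    #include <string.h>

    #define BUFFER_SIZE 256

    int main(int argc, char *argv[])
    {
        char buffer[BUFFER_SIZE];

        if (argc < 2)
            return -1;

        /* Vulnerable version: copies argv[1] with no bounds check.           */
        /* strcpy(buffer, argv[1]);                                            */

        /* Bounded version: copy at most BUFFER_SIZE-1 bytes, then terminate. */
        strncpy(buffer, argv[1], sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';

        printf("%s\n", buffer);
        return 0;
    }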
Virus is a fragment of code embedded in a legitimate program. Viruses are self-replicating and are designed to “infect” other programs. They can wreak havoc in a system by modifying or destroying files and causing system crashes and program malfunctions.
Viruses are a particular problem for users of PCs. UNIX and other multiuser operating systems generally are not susceptible to viruses because the executable programs are protected from writing by the operating system.
File. A standard file virus infects a system by appending itself to a file. It changes the start of the program so that execution jumps to its code. After it executes, it returns control to the program so that its execution is not noticed. File viruses are sometimes known as parasitic viruses, as they leave no full files behind and leave the host program still functional.
Boot. A boot virus infects the boot sector of the system, executing every time the system is booted and before the operating system is loaded. It watches for other bootable media (that is, floppy disks) and infects them. These viruses are also known as memory viruses, because they do not appear in the file system. Figure 15.5 shows how a boot virus works.
Macro. Most viruses are written in a low-level language, such as assembly or C. Macro viruses are written in a high-level language, such as Visual Basic. These viruses are triggered when a program
capable of executing the macro is run. For example, a macro virus could be contained in a spreadsheet file.
Source code. A source code virus looks for source code and modifies it to include the virus and to help spread the virus.
Polymorphic. A polymorphic virus changes each time it is installed to avoid detection by antivirus software. The changes do not affect the virus's functionality but rather change the virus's signature. A virus signature is a pattern that can be used to identify a virus, typically a series of bytes that make up the virus code.
Encrypted. An encrypted virus includes decryption code along with the encrypted virus, again to avoid detection. The virus first decrypts and then executes.
Stealth. This tricky virus attempts to avoid detection by modifying parts of the system that could be used to detect it. For example, it could modify the read system call so that if the file it has modified is read, the original form of the code is returned rather than the infected code.
Tunneling. This virus attempts to bypass detection by an antivirus scanner by installing itself in the interrupt-handler chain. Similar viruses install themselves in device drivers.
Multipartite. A virus of this type is able to infect multiple parts of a system, including boot sectors, memory, and files. This makes it difficult to detect and contain.
Armored. An armored virus is coded to make it hard for antivirus researchers to unravel and understand. It can also be compressed to avoid detection and disinfection. In addition, virus droppers and other full files that are part of a virus infestation are frequently hidden via file attributes or unviewable file names.
System and network threats involve the abuse of services and network connections. System and network threats create a situation in which operating-system resources and user files are misused.
The more open an operating system is—the more services it has enabled and the more functions it allows—the more likely it is that a bug is available to exploit. Increasingly, operating systems strive to be secure by default. For example, Solaris 10 moved from a model in which many services (FTP, telnet, and others) were enabled by default when the system was installed to a model in which almost all services are disabled at installation time and must specifically be enabled by system administrators.
Worms: A worm is a process that uses the spawn mechanism to duplicate itself. The worm spawns copies of itself, using up system resources and perhaps locking out all other processes. On computer networks, worms are particularly potent, since they may reproduce themselves among systems and thus shut down an entire network. Such an event occurred in 1988 to UNIX systems on the Internet, causing the loss of system and system-administrator time worth millions of dollars.
Port Scanning: Port scanning is not an attack but rather a means for a cracker to detect a system's vulnerabilities to attack.
For example, suppose there is a known vulnerability (or bug) in sendmail. A cracker could launch a port scanner to try to connect, say, to port 25 of a particular system or to a range of systems. If the connection was successful, the cracker (or tool) could attempt to communicate with the answering service to determine if the service was indeed sendmail and, if so, if it was the version with the bug.
There are tools that perform subsets of that functionality. For example, nmap (from http://www.insecure.org/nmap/) is an open-source utility for network exploration and security auditing.
When pointed at a target, it will determine what services are running, including application names and versions. It can identify the host operating system. It can also provide information about defenses, such as what firewalls are defending the target. It does not exploit any known bugs.
In computing, a zombie is a computer connected to the Internet that has been compromised by a hacker, computer virus or trojan horse program and can be used to perform malicious tasks of one sort or another under remote direction.
Denial of Service: Denial-of-service attacks are aimed not at gaining information or stealing resources but rather at disrupting legitimate use of a system or facility. Attacks of the first category use up so many facility resources that no useful work can be done; for example, a Web-site click could download a Java applet that proceeds to use all available CPU time or to pop up windows infinitely. The second category involves disrupting the network of the facility. There have been several successful denial-of-service attacks of this kind against major Web sites. These attacks result from abuse of some of the fundamental functionality of TCP/IP. For instance, if the attacker sends the part of the protocol that says "I want to start a TCP connection," but never follows with the standard "The connection is now complete," the result can be partially started TCP sessions.
It is generally impossible to prevent denial-of-service attacks, because the attacks use the same mechanisms as normal operation. Even more difficult to prevent and resolve are distributed denial-of-service (DDOS) attacks. These attacks are launched from multiple sites at once, toward a common target, typically by zombies. DDOS attacks have become more common and are sometimes associated with blackmail attempts: a site comes under attack, and the attackers offer to halt the attack in exchange for money.
Morris Internet Worm: In November 1988, Robert Tappan Morris, a first-year Cornell graduate student, released a worm program on one or more hosts connected to the Internet, targeting workstations running UNIX. The worm quickly spread over great distances within a few hours of its release. Although Morris designed the self-replicating program for rapid reproduction and distribution, some of the features of the UNIX networking environment provided the means to propagate the worm throughout the system. The worm program exploited flaws in the UNIX operating system's security routines and took advantage of UNIX utilities that simplify resource sharing in local-area networks to gain unauthorized access to thousands of other connected sites. Morris's methods of attack are outlined next.
The worm was made up of two programs: a grappling hook (also called a bootstrap or vector) program and the main program. Named l1.c, the grappling hook consisted of 99 lines of C code compiled and run on each machine it accessed. Once established on the computer system under attack, the grappling hook connected to the machine where it originated and uploaded a copy of the main worm onto the hooked system. The main program then proceeded to search for other machines to which the newly infected system could connect easily. In these actions, Morris exploited the UNIX networking utility rsh for easy remote task execution. By setting up special files that list host–login name pairs, users can omit entering a password each time they access a remote account on the paired list. The worm searched these special files for site names that would allow remote execution without a password.
The bug exploited in sendmail also involved using a daemon process for malicious entry. sendmail sends, receives, and routes electronic mail.
Debugging code in the utility permits testers to verify and display the state of the mail system. The debugging option was useful to system administrators
and was often left on. Morris included in his attack arsenal a call to debug that —instead of specifying a user address, as would be normal in testing—issued a set of commands that mailed and executed a copy of the grappling-hook program.
The action has been characterized as both a harmless prank gone awry and a serious criminal offense. Based on the complexity of the attack, it is unlikely that the worm’s release or the scope of its spread was unintentional. The worm program took elaborate steps to cover its tracks and to repel efforts to stop its spread. Yet the program contained no code aimed at damaging or destroying the systems on which it ran. A federal court convicted Morris and handed down a sentence of three years’ probation, 400 hours of community service, and a $10,000 fine. Morris’s legal costs probably exceeded $100,000.
Cryptography is used to constrain the potential senders and/or receivers of a message. Modern cryptography is based on secrets called keys that are selectively distributed to computers in a network and used to process messages. Cryptography enables a recipient of a message to verify that the message was created by some computer possessing a certain key—the key is the source of the message. Similarly, a sender can encode its message so that only a computer with a certain key can decode the message, so that the key becomes the destination.
Data Encryption Standard: DES works by taking a 64-bit value and a 56-bit key and performing a series of transformations. Messages longer than 64 bits are broken into 64-bit chunks. Because DES works on a chunk of bits at a time, it is known as a block cipher. If the same key is used for encrypting an extended amount of data, it becomes vulnerable to attack. Consider, for example, that the same source block would result in the same ciphertext if the same key and encryption algorithm were used. Therefore, the chunks are not just encrypted but also exclusive-or'ed (XORed) with the previous ciphertext block before encryption. This is known as cipher-block chaining.
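The chaining idea can be sketched as follows (a toy illustration only: block_encrypt below is a stand-in for DES, and only the 8-byte block size is kept; all names are assumptions):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK 8   /* DES works on 64-bit (8-byte) blocks */

    /* Placeholder for a real block cipher; here it just XORs with the key. */
    static void block_encrypt(uint8_t out[BLOCK], const uint8_t in[BLOCK],
                              const uint8_t key[BLOCK])
    {
        for (int i = 0; i < BLOCK; i++)
            out[i] = in[i] ^ key[i];
    }

    /* Cipher-block chaining: XOR each plaintext block with the previous
     * ciphertext block (or the IV for the first block) before encrypting. */
    static void cbc_encrypt(uint8_t *ct, const uint8_t *pt, size_t nblocks,
                            const uint8_t key[BLOCK], const uint8_t iv[BLOCK])
    {
        uint8_t prev[BLOCK], tmp[BLOCK];
        memcpy(prev, iv, BLOCK);
        for (size_t b = 0; b < nblocks; b++) {
            for (int i = 0; i < BLOCK; i++)
                tmp[i] = pt[b * BLOCK + i] ^ prev[i];
            block_encrypt(&ct[b * BLOCK], tmp, key);
            memcpy(prev, &ct[b * BLOCK], BLOCK);
        }
    }

    int main(void)
    {
        uint8_t key[BLOCK] = {1, 2, 3, 4, 5, 6, 7, 8};
        uint8_t iv[BLOCK]  = {9, 9, 9, 9, 9, 9, 9, 9};
        uint8_t pt[16], ct[16];
        memset(pt, 'A', 16);                 /* two identical plaintext blocks */
        cbc_encrypt(ct, pt, 2, key, iv);
        /* Chaining makes the two identical blocks encrypt differently. */
        printf("%02x vs %02x\n", ct[0], ct[8]);
        return 0;
    }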
DES is now considered insecure for many applications because its keys can be exhaustively searched with moderate computing resources. Rather than giving up on DES, though, NIST created a modification called triple DES, in which the DES algorithm is repeated three times (two encryptions and one decryption) on the same plaintext using two or three keys—for example, c = E_k3(D_k2(E_k1(m))). When three keys are used, the effective key length is 168 bits. Triple DES is in widespread use today.
AES can use key lengths of 128, 192, and 256 bits and works on 128-bit blocks. It works by performing 10 to 14 rounds of transformations on a matrix
formed from a block. Generally, the algorithm is compact and efficient.
The Twofish algorithm is fast, compact, and easy to implement. It can use a variable key length of up to 256 bits and works on 128-bit blocks.
A stream cipher is designed to encrypt and decrypt a stream of bytes or bits rather than a block. This is useful when the length of a communication would make a block cipher too slow.
The key is input into a pseudo–random-bit generator, which is an algorithm that attempts to produce random bits. The output of the generator when fed a key is a keystream. A keystream is an infinite set of keys that can be used for the input plaintext stream. RC4 is used in encrypting streams of data, such as in WEP, the wireless LAN protocol.
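The keystream idea can be sketched in the same spirit (a toy illustration: the generator below is a simple stand-in, not RC4, and is not cryptographically secure; all names are assumptions):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy keystream generator seeded by the key (NOT cryptographically secure). */
    static uint32_t state;
    static uint8_t next_keystream_byte(void)
    {
        state = state * 1103515245u + 12345u;   /* simple LCG as a stand-in */
        return (uint8_t)(state >> 16);
    }

    int main(void)
    {
        uint8_t msg[] = "attack at dawn";
        size_t  n = sizeof(msg) - 1;

        state = 0xC0FFEE;                       /* the "key" seeds the generator */
        for (size_t i = 0; i < n; i++)
            msg[i] ^= next_keystream_byte();    /* encrypt in place              */

        state = 0xC0FFEE;                       /* re-seed: same keystream       */
        for (size_t i = 0; i < n; i++)
            msg[i] ^= next_keystream_byte();    /* XOR again to decrypt          */

        printf("%s\n", (char *)msg);            /* prints: attack at dawn        */
        return 0;
    }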
Week-10 Tutorial
Q1. Linux runs on a variety of hardware platforms. What steps must the Linux developers take to ensure that the system is portable to different processors and memory-management architectures, and to minimize the amount of architecture-specific kernel code?
Q2. What are the advantages and disadvantages of making only some of the symbols defined inside a kernel accessible to a loadable kernel module?
Q3. Discuss how the clone() operation supported by Linux is used to support both processes and threads.
Week-10 Tutorial
Q1. Linux runs on a variety of hardware platforms. What steps must the Linux developers take to ensure that the system is portable to different processors and memory-management architectures, and to minimize the amount of architecture-specific kernel code?
Answer: The organization of architecture-dependent and architecture-independent code in the Linux kernel is designed to satisfy two design goals: to keep as much code as possible common between architectures and to provide a clean way of defining architecture-specific properties and code. The solution must of course be consistent with the overriding aims of code maintainability and performance. There are different levels of architecture dependence in the kernel, and
different techniques are appropriate in each case to comply with the design requirements. These levels include:
a. CPU word size and endianness. These are issues that affect the portability of all software written in C, but especially so for an operating system, where the size and alignment of data must be carefully arranged.
b. CPU process architecture. Linux relies on many forms of hardware support for its process and memory management. Different processors have their own mechanisms for changing between protection domains (e.g., entering kernel mode from user mode), rescheduling processes, managing virtual memory, and handling
incoming interrupts.
Q2. What are the advantages and disadvantages of making only some of the symbols defined inside a kernel accessible to a loadable kernel module?
Answer: The advantage of making only some of the symbols defined inside a kernel accessible to a loadable kernel module is that there is a fixed set of entry points made available to the kernel module. This ensures that loadable modules cannot invoke arbitrary code within the kernel and thereby interfere with the kernel's execution. By restricting the set of entry points, the kernel is guaranteed that the interactions with the module take place at controlled points where certain invariants hold. The disadvantage of making only a small set of the defined symbols accessible to the kernel module is a loss of flexibility; it might also lead to performance issues, as some of the details of the kernel are hidden from the module.
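A rough sketch of how this looks in Linux kernel source (the function names are made up for illustration; EXPORT_SYMBOL is the real mechanism that controls which symbols loadable modules may reference):

    #include <linux/module.h>

    static int my_private_helper(int x)   /* static and not exported:            */
    {                                     /* invisible to loadable modules       */
        return x - 1;
    }

    int my_exported_helper(int x)         /* part of the module-visible interface */
    {
        return my_private_helper(x) + 2;
    }
    EXPORT_SYMBOL(my_exported_helper);    /* makes only this entry point available */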
Q3. Discuss how the clone() operation supported by Linux is used to support both processes and threads.
Answer: In Linux, threads are implemented within the kernel by a clone mechanism that creates a new process within the same virtual address space as the parent process. Unlike some kernel-based thread packages, the Linux kernel does not make any distinction between threads and processes: a thread is simply a process that did not create a new virtual address space when it was initialized. The main advantages of implementing threads in the kernel rather than in a user-mode library are that:
• kernel-threaded systems can take advantage of multiple processors if they are available; and
• if one thread blocks in a kernel service routine (for example, a system call or page fault), other threads are still able to run.
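A small user-space sketch of the same idea, assuming the glibc clone() wrapper (the flag combination shown is a typical thread-like set, not the exact flags used by any particular thread library; the stack size and variable names are assumptions):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    static int shared_counter = 0;   /* visible to the child only because CLONE_VM is set */

    static int child_fn(void *arg)
    {
        (void)arg;
        shared_counter++;            /* updates the parent's memory when the VM is shared */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (!stack)
            return 1;

        /* Thread-like task: share address space, filesystem info, open files,
         * and signal handlers with the parent.  Omitting the CLONE_* sharing
         * flags would instead behave much like fork(). */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

        /* clone() takes the top of the child's stack (stacks grow downward). */
        pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, NULL);
        if (pid == -1)
            return 1;

        waitpid(pid, NULL, 0);
        printf("shared_counter = %d\n", shared_counter);   /* 1: memory was shared */
        free(stack);
        return 0;
    }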