HW5: Extensions to the Multitasking Kernel
Assigned: 29 November 2017. Due: class time, 13 December 2017.
- Introduction
The objective of this assignment is to extend the multitasking kernel obtained in hw3 to include a preemptive scheduler. Copy the files to your cs444/hw5 directory. Provide a README with a guide to your sources.
- Preemptive scheduler
Convert the scheduler to be preemptive, and let process 0 run an idle loop at normal process level, preempted as necessary for the user processes. Of course, this requires a ticking clock.
1) You will need to set up and run the system timer, the PIT (programmable interval timer). An example of how to do this is in $pcex/timetest.c. The timer (or “clock”) in timetest.c will take up to approximately 55 ms to run down and cause an interrupt. Change it to interrupt every 10 ms, as Linux does. When you first fold this in, it will print a “.” for every tick; leave that in for a while to make sure it’s ticking away, but take out the dots before the final testing.
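As a starting point, here is a minimal sketch of programming PIT channel 0 for 100 Hz ticks. It assumes the outpt() port-output helper used in timetest.c; the port numbers and command byte are the standard PC PIT values.

    /* Sketch: program PIT channel 0 for ~10 ms ticks (100 Hz).
     * Assumes the outpt(port, value) helper from the course library,
     * as used in $pcex/timetest.c. */
    #define TIMER_CNTRL 0x43      /* PIT command port */
    #define TIMER0      0x40      /* PIT channel-0 data port */
    #define PIT_HZ      1193182   /* PIT input clock rate, Hz */
    #define TICK_HZ     100       /* desired tick rate: 10 ms period */

    void init_timer(void)
    {
        unsigned short count = PIT_HZ / TICK_HZ;   /* = 11931 */

        outpt(TIMER_CNTRL, 0x36);             /* channel 0, lo/hi byte, mode 3 */
        outpt(TIMER0, count & 0xff);          /* low byte of the count */
        outpt(TIMER0, (count >> 8) & 0xff);   /* high byte of the count */
    }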
2) Add a CPU quantum to each process. Use a quantum between 4×10 ms and 10×10 ms, that is, 40 to 100 ms. Decrement it in the tick handler and call the scheduler when it reaches zero. Confirm that when there are two CPU-bound processes, they alternate in execution. The preemptions of user processes are recorded in the debug_log and printed at the end of the run, along with the marks for input, output, and process switch from zombie.
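The quantum handling in the tick handler might look like the following sketch. The curproc pointer, quantum field, and QUANTUM constant are illustrative names, not part of the hw3 kernel as given; match them to your own code.

    #define QUANTUM 5    /* ticks per quantum: 5 x 10 ms = 50 ms */

    void tick_handler(void)
    {
        /* ... acknowledge the interrupt (EOI), as in timetest.c ... */
        if (--curproc->quantum <= 0) {
            curproc->quantum = QUANTUM;   /* recharge for its next turn */
            schedule(2);                  /* preempt; '2' is this call site's id (see item 4) */
        }
    }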
3) Keep counts of how much CPU each uprog uses and how many characters each outputs. To do this, add appropriate fields to the proc structure (see the sketch after this item). At each tick, increment the cpu field of the running process.
Also, each time there is a successful enqueue, increment the output character count. Print the counts when the whole program exits (that is, when you print the exit values).
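One plausible layout for the new accounting fields, appended at the end of the proc structure per the note at the bottom of this handout; all field names here are illustrative:

    struct proc {
        /* ... existing hw3 fields used by asmswtch come first ... */
        int quantum;     /* ticks remaining in the current quantum (item 2) */
        int cpu_ticks;   /* total ticks charged to this process (item 3) */
        int chars_out;   /* characters successfully enqueued for output (item 3) */
    };

With these fields, the tick handler does curproc->cpu_ticks++ for the running process, and the output path does curproc->chars_out++ after each successful enqueue.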
4) As a debugging tool, add an int parameter to the scheduler function. Each call site of the scheduler should pass a different integer for this parameter (e.g., if there are three call sites, use 0, 1, and 2). Create an int array in the kernel data area whose size is the number of integers used. The idea is that when the scheduler is called from call site 0, you increment the 0th element of the array by 1; when called from call site 1, you increment the 1st element; and so on. Print the accumulated values at the end of the run. This will allow you to confirm that the scheduler is being called from the clock interrupt handler.
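A sketch of the instrumented scheduler, assuming three call sites and a kprintf-style kernel printf; the names NSCHEDCALLS, schedcalls, and print_sched_counts are all illustrative:

    #define NSCHEDCALLS 3
    int schedcalls[NSCHEDCALLS];   /* in kernel data, so zeroed at startup */

    void schedule(int callsite)
    {
        schedcalls[callsite]++;    /* tally which site asked for a reschedule */
        /* ... existing hw3 scheduling logic ... */
    }

    /* at the final shutdown, along with the exit values: */
    void print_sched_counts(void)
    {
        int i;

        for (i = 0; i < NSCHEDCALLS; i++)
            kprintf("schedule called %d times from site %d\n", schedcalls[i], i);
    }

If the counter for the tick-handler call site is nonzero, the clock really is driving preemptions.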
Preemption in the kernel may introduce new race conditions there. For example, “number_of_zombie++” in sysexit needs protection unless we can prove it is implemented with just one instruction. Otherwise, between the load of the old count and the store of the incremented one, there could be a preemption causing another process to run and exit, reading the old count. Both processes then write the same count, losing one increment. This is called the lost update problem. Most of the ++’s are acting on local variables; these are process-private and thus immune from such shared access.
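The standard fix is to mask interrupts around the read-modify-write. This sketch assumes cli()/sti() wrappers for the x86 instructions, as in the course’s CPU library; if the code can be reached with interrupts already disabled, use a save-and-restore variant instead.

    void sysexit(int exitval)
    {
        /* ... existing exit bookkeeping ... */
        cli();                  /* no preemption between the load and the store */
        number_of_zombie++;     /* shared kernel variable: read-modify-write */
        sti();
        /* ... mark this process a zombie, then reschedule ... */
    }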
- Scheduler Testing
Testing: all scripts should show debug_log output.
- Using the same programs as in hw3, now all in uprog123.c, confirm that process 1 is preempted during its big idle loop. The scheduler as provided will write “|(1-2)” or “|(1-3)” to the debug_log to mark this preemption. Make sure your quantum is large enough to give a process at least 40 ms of CPU, but no more than 100 ms. Make a script of a run showing this preemption in uprog123.script. This script may have tick reports as well if you want (see the next item).
- There are two CPU-intensive programs in this directory to use in place of two of the uprogs of hw3. This pair of programs, plus the old uprog1, is in sequences.c. You should see the two programs get different shares of the CPU, while the program with output is also able to run. Add debug reports such as “*2” for a tick (while process 2 is running) to the debug_log, as sketched below. Then it will be obvious how many ticks each process gets to run. Make a script of this run in sequences.script.
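One way to produce the tick marks, assuming the debug_log() from hw3 takes a string and that the proc structure has a small integer pid; check both assumptions against your own kernel:

    /* at the top of the tick handler: */
    char mark[3];

    mark[0] = '*';
    mark[1] = '0' + curproc->pid;   /* works for single-digit pids */
    mark[2] = '\0';
    debug_log(mark);                /* e.g. "*2" while process 2 is running */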
Note: all new fields added to the proc structure should be placed at the end, to ensure that asmswtch will still work properly.