Overview
In this assignment you are required to take the protected mode code you developed from MemOS (notably memos-2) and develop a new version of the system, which we will call FIFOS (the First-In-First-Out System). FIFOS extends memos-2 with capabilities to schedule threads in a FIFO (or FCFS) order. That is, any newly created or awoken threads are added to the back of a ready queue and when the scheduler is invoked, it picks the thread at the front of the ready queue for dispatching. This assignment requires you to build a standalone system rather than a solution on top of an existing system, so you cannot use pre-existing thread libraries such as Pthreads or a runtime environment such as a Java Virtual Machine (although you can implement your own versions of Pthreads or a JVM as part of your system if you want to give yourself a challenge!).
Details
Using code from memos-2 that you developed for your primer assignment, you should start by adding functionality to support thread scheduling. For those who did not complete memos-2, we will provide template source code. You should extend the code to allow threads to work within the default protection domain that is created when GRUB passes control to your system. For simplicity, you can assume a pool of N threads is created statically (where N is specified in a header or some boot parameter of your choosing). You can assume the thread pool exists at boot time, and no further threads need to be created dynamically.
For each thread, you should define a thread control block (TCB), which maintains appropriate state information. Example state information includes copies of machine register values such as the stack and instruction pointers (possibly others too), and a thread ID (TID). Each TID should be a unique integer value in the range [1,N] where N is the number of threads in your pool. The pool itself might simply be an array of pointers to TCBs, with some flag identifying whether or not a thread in the pool has been assigned work.
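As a rough illustration only, the TCB and pool might be sketched in C along the following lines; the field names and the N_THREADS macro are assumptions, not a required layout:

    #define N_THREADS 8                     /* N: size of the static thread pool       */

    typedef struct tcb {
        unsigned int *sp;                   /* saved stack pointer (esp)               */
        unsigned int  ip;                   /* saved instruction pointer, if it is not
                                               kept on the thread's own stack          */
        int           tid;                  /* unique thread ID in [1, N_THREADS]      */
        int           busy;                 /* 1 = assigned work, 0 = idle             */
        void        (*func)(void);          /* function the thread is bound to         */
    } tcb_t;

    static tcb_t  tcbs[N_THREADS];          /* backing storage for the TCBs            */
    static tcb_t *thread_pool[N_THREADS];   /* the pool: pointers to the TCBs; at boot,
                                               thread_pool[i] = &tcbs[i], tid = i + 1  */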
Initially, threads in your thread pool are all idle and they all have the same priority. You can then assign threads specific work by calling a function thread_create(). The function should look something like:
int thread_create (void *stack, void *func);
In the above, the thread creation function binds a thread in the pool to specific stack and function addresses. It then returns the TID of the thread associated with this call. For simplicity you can assume threads are taken from the pool in order from lowest to highest TID until all threads have been assigned work. Each thread assigned work is then marked as busy using a flag in the
corresponding TCB. If all threads are busy, you cannot create any more threads and should deal with this case accordingly. If a thread returns from the function it is associated with, you can place it back into an idle state, so that it can be assigned new functionality via a subsequent thread_create() call.
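A minimal sketch of thread_create() under these assumptions might look as follows; the -1 error return and the ready_enqueue() helper (a FIFO queue sketch appears under Scheduling below) are illustrative choices, not requirements:

    void ready_enqueue(tcb_t *t);           /* sketched under Scheduling below          */

    int thread_create (void *stack, void *func)
    {
        int i;

        for (i = 0; i < N_THREADS; i++) {
            tcb_t *t = thread_pool[i];      /* lowest TID is considered first           */
            if (!t->busy) {
                t->busy = 1;
                t->func = (void (*)(void)) func;
                t->sp   = (unsigned int *) stack;  /* for stackful threads (second step),
                                                      prepare the stack before it runs  */
                ready_enqueue(t);           /* becomes runnable in FIFO order           */
                return t->tid;
            }
        }
        return -1;                          /* all threads busy: creation fails         */
    }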
Scheduling
You should define a ready queue for a scheduler that simply operates in FIFO order. That is, each time an idle thread becomes busy, it is
added to the FIFO queue in the order of calls to thread_create(). Any thread that later becomes idle is not put back in the ready queue.
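One simple way to realize the ready queue is a fixed-size circular buffer of TCB pointers; the sketch below assumes the TCB layout and N_THREADS introduced earlier, and the helper names are illustrative:

    static tcb_t *ready_queue[N_THREADS];   /* circular buffer of runnable threads */
    static int    rq_head, rq_tail, rq_count;

    void ready_enqueue(tcb_t *t)            /* add to the back of the queue        */
    {
        if (rq_count == N_THREADS)
            return;                         /* cannot happen with only N threads   */
        ready_queue[rq_tail] = t;
        rq_tail = (rq_tail + 1) % N_THREADS;
        rq_count++;
    }

    tcb_t *ready_dequeue(void)              /* remove from the front of the queue  */
    {
        tcb_t *t;
        if (rq_count == 0)
            return 0;                       /* nothing is ready to run             */
        t = ready_queue[rq_head];
        rq_head = (rq_head + 1) % N_THREADS;
        rq_count--;
        return t;
    }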
First step
You should start by implementing threads that run to completion in a non-preemptive manner and are stackless. That is, they have no stack for local variables, arguments, or return addresses for function calls. They call a simple function that performs some task and then return to the idle state. To tackle this, you should look for information on programming “protothreads” or coroutines. Sample code is available on the course webpage but you are free to search the Internet for additional sources of information. Make sure, however, that any help you find is properly cited.
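As a rough illustration of this first step, a stackless dispatcher can simply call each ready thread's function to completion on the scheduler's own stack (true protothreads additionally record a continuation point so a function can block and resume mid-way); this sketch reuses the helper names assumed above:

    void schedule_stackless(void)
    {
        tcb_t *t;

        while ((t = ready_dequeue()) != 0) {
            t->func();                      /* runs to completion on this stack     */
            t->busy = 0;                    /* back to idle; reusable by a later
                                               thread_create() call                 */
        }
    }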
Second step
If you complete the first step, you should try to add support for threads with their own stacks, so they can operate on local variables and/or call other functions. Here, you need to create a region of memory to act as a stack for each thread, and then set the stack pointer to reference that stack when setting up the execution context for the next thread to run.
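The sketch below, assuming 32-bit x86 and the helper names used earlier, shows one way to do this: thread_stack_init() lays out a fresh stack so that a later switch_to() “returns” into the thread's function, and switch_to() (written as a top-level asm block so no compiler prologue interferes) saves the callee-saved registers and stack pointer of the outgoing thread before resuming the next one. The thread_exit() routine, which would mark the caller idle and reschedule, is left hypothetical:

    void thread_exit(void);                  /* hypothetical: mark caller idle, reschedule */

    unsigned int *thread_stack_init(void *stack_top, void (*func)(void))
    {
        unsigned int *sp = (unsigned int *) stack_top;

        *--sp = (unsigned int) thread_exit;  /* where func returns when it finishes  */
        *--sp = (unsigned int) func;         /* popped by switch_to's final ret      */
        *--sp = 0;                           /* initial ebp                          */
        *--sp = 0;                           /* initial ebx                          */
        *--sp = 0;                           /* initial esi                          */
        *--sp = 0;                           /* initial edi                          */
        return sp;                           /* store this value in the TCB's sp     */
    }

    /* void switch_to(unsigned int **old_sp, unsigned int *new_sp); */
    __asm__ (".globl switch_to          \n"
             "switch_to:                \n"
             "    pushl %ebp            \n"
             "    pushl %ebx            \n"
             "    pushl %esi            \n"
             "    pushl %edi            \n"
             "    movl  20(%esp), %eax  \n"   /* first argument: old_sp              */
             "    movl  %esp, (%eax)    \n"   /* save the outgoing stack pointer     */
             "    movl  24(%esp), %esp  \n"   /* second argument: new stack pointer  */
             "    popl  %edi            \n"
             "    popl  %esi            \n"
             "    popl  %ebx            \n"
             "    popl  %ebp            \n"
             "    ret                   \n");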
Third step
If you get this far, congratulations! The next step is to add support for preemption. You can do this cooperatively, by implementing a yield() function that suspends the calling thread and picks the next busy thread in the FIFO queue. Any yielded thread simply adds itself to the back of the FIFO queue.
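Building on the switch_to() sketch above, a cooperative yield() might look like the following; the global current pointer and the helper names are assumptions:

    void switch_to(unsigned int **old_sp, unsigned int *new_sp);

    static tcb_t *current;                   /* the thread now running on the CPU  */

    void yield(void)
    {
        tcb_t *prev = current;
        tcb_t *next;

        ready_enqueue(prev);                 /* go to the back of the FIFO queue   */
        next = ready_dequeue();              /* pick the thread at the front       */
        if (next == prev)
            return;                          /* nothing else is ready to run       */
        current = next;
        switch_to(&prev->sp, next->sp);      /* resumes here when scheduled again  */
    }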
Testing
For testing purposes, assume the functions invoked by your thread creation routine simply write a string to the screen, stating which thread is running. For example, “Running thread <1>” or just “<1>” will suffice for output from the execution of a thread with ID=1. If desired, you can model the execution time of a thread by a repeated loop that prints multiple text strings to the display. Alternatively, you can implement a function of your choosing to consume CPU time when a thread runs. Once a thread terminates, you should print a message such as “Done <1>”.
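For instance, a function bound to a thread by thread_create() might look like the sketch below, assuming a screen-printing routine from memos-2 (called print_string() here purely for illustration):

    void test_task(void)
    {
        int i;

        for (i = 0; i < 5; i++)
            print_string("Running thread <1>\n");   /* hypothetical output routine */
        print_string("Done <1>\n");
        /* returning here puts the thread back into the idle state */
    }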
It is highly recommended that you develop your code using the Puppy Linux environment and QEMU. Specifically, you can test the execution of fifos using qemu-system-i386 within Puppy Linux.
Adding Preemption
You should now attempt to implement a second version of FIFOS, which supports a preemptive thread scheduler using a timeout mechanism. This way, we convert FIFO scheduling into a form of round-robin scheduling, where every thread has a pre-defined timeslice (say, 100 milliseconds). You need to program the 8253/4 programmable interval timer (PIT) chip to do this, along with setting up an interrupt descriptor table (IDT) entry for the timer interrupt delivered via the 8259 programmable interrupt controller (PIC). On a timeout, the timer interrupt handler should switch control to the thread at the head of the FIFO queue and place the current thread at the back of the queue.
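A minimal sketch of programming the PIT for a periodic tick is given below. The port numbers and the 1193182 Hz input clock are fixed by the hardware; the outb() helper, the tick rate, and the function name are assumptions, and the IDT entry and PIC setup are not shown:

    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void pit_init(unsigned int hz)
    {
        unsigned int divisor = 1193182 / hz;   /* PIT input clock / desired tick rate */

        outb(0x43, 0x36);                      /* channel 0, lo/hi byte, mode 3       */
        outb(0x40, divisor & 0xFF);            /* low byte of the divisor             */
        outb(0x40, (divisor >> 8) & 0xFF);     /* high byte of the divisor            */
    }

Because the divisor is only 16 bits, the slowest programmable rate is roughly 18.2 Hz, so a 100-millisecond timeslice is typically implemented by ticking faster (for example, pit_init(100) for a 10 ms tick) and switching threads after a fixed number of ticks.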
To differentiate from the FCFS implementation, call the second version of FIFOS “fifos-2”.
Style and Extra Credit
Extra credit will be given to elegant solutions, especially those that support “pluggable” scheduling policies. That is, if your code is well-written it should be easy to replace the scheduler function with a different policy.
As a test of the ability to implement a different scheduler, a bonus will be given to anyone who implements “processor capacity reserves”. A processor capacity reserve is a constraint (Ci, Ti) for a thread τi, such that τi is limited to a budget of at most Ci units of CPU time every period Ti. If the thread blocks before the end of its period, having consumed Ai actual time (where Ai