CITS2002 Systems Programming
Lecture 12, 31st August 2021
The Principle of Referential Locality
Numerous studies of the memory accesses of processes have observed that memory references cluster in certain parts of the program: over long periods, the centres of the clusters move, but over shorter periods, they are fairly static.
For most types of programs, it is clear that:
Except for infrequent branches and function/procedure invocation, program execution is sequential. The next instruction to be fetched usually follows the last one executed.
Programs generally operate at the same “depth” of function-invocation. References to instructions cluster within (and between) a small collection of functions.
Most iterative control flow (looping) is over short instruction sequences. Instructions from the same memory locations are fetched several times in succession.
Access to memory locations holding data is similarly constrained to a few frequently required data structures, or to sequential steps through memory (e.g. when traversing arrays).
With reference to paging schemes, this locality of reference suggests that, within a process, the next memory reference will very likely be from the same page as the last memory reference.
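As a small illustrative sketch of the last two observations (not part of the original notes; the array and its size are invented for the example), consider summing a large two-dimensional array in C:

#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int matrix[ROWS][COLS];          // 4MB of data, spanning many pages

long sum_row_major(void)
{
    long sum = 0;
    for (int r = 0; r < ROWS; ++r)      // the loop instructions are fetched again
        for (int c = 0; c < COLS; ++c)  //   and again from the same few locations
            sum += matrix[r][c];        // data is read from consecutive addresses
    return sum;
}

int main(void)
{
    printf("%ld\n", sum_row_major());
    return 0;
}

The loop body is fetched repeatedly from the same few instruction locations, and the data references step sequentially through memory, so successive references usually fall within the same page. Swapping the two loops (column-major order) makes each data reference jump COLS * sizeof(int) bytes, touching far more distinct pages and weakening locality.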
This will impact heavily on our next enhancement to memory management: the use of virtual memory.
Paging vs Partitioning
When we compare paging with the much simpler technique of partitioning, we see two clear differences:
As processes are swapped-out and then back in, they may occupy different regions of physical memory.
This is possible because hardware efficiently translates each logical address to a physical address, at run-time (a sketch of this translation appears below).
The operating system’s memory management software manipulates the hardware (page table registers) to facilitate the translation.
A process is broken into pages and these need not be contiguous in physical memory. In combination with the principle of program locality, we now have a significant breakthrough:
If the above two characteristics are present, then it is not necessary for all pages of a process to be in memory at any one time during its execution.
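A hedged sketch of that run-time translation (the page size, the page-table contents, and the example address below are invented purely for illustration; the real translation is performed by the MMU hardware, not by C code):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE    4096                        // 4KB pages -> 12-bit offset
#define OFFSET_BITS  12

static uint32_t page_table[] = { 5, 9, 2, 7 };   // page-number -> frame-number (invented)

uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical >> OFFSET_BITS;    // high-order bits select the page
    uint32_t offset = logical & (PAGE_SIZE - 1); // low-order bits pass through unchanged
    uint32_t frame  = page_table[page];          // the lookup the hardware performs
    return (frame << OFFSET_BITS) | offset;
}

int main(void)
{
    // logical 0x1A7F is page 1, offset 0xA7F; page 1 maps to frame 9,
    // so the physical address becomes 0x9A7F
    printf("logical 0x%X -> physical 0x%X\n", 0x1A7Fu, (unsigned)translate(0x1A7F));
    return 0;
}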
Advantages of Paging
Execution of any process can continue provided that the instruction it next wants to execute, or the data location it next wants to access, is in physical memory.
If not, the operating system must load the required memory from the swapping (or paging) space before execution can continue.
However, the swapping space is generally on a slow device (a disk), so the paging I/O request forces the process to be Blocked until the required page of memory is available. In the interim, another process may be able to execute.
Before we consider how we can achieve this, and introduce additional efficiency, consider what advantages are now introduced:
More processes (or pieces of processes) may be maintained in main physical memory (either Ready or Running).
Most processes do not require all of their memory before they can execute: memory may be loaded on demand.
If the swapping space is larger than the physical memory, any single process may now demand more memory than the amount of physical memory installed.
This last aspect gives the technique its name: virtual memory.
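A rough worked example (figures chosen only for illustration): with 4KB (2^12-byte) pages, a 32-bit logical address space spans

    2^32 / 2^12  =  2^20 pages, i.e. 4GB of virtual memory,

so a single process may demand far more than, say, the 1GB of physical memory actually installed; pages that are not currently resident remain in the swapping space until they are referenced.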
Virtual Memory and Resident Working Sets
The principle of program locality again tells us that at any time, only a small subset of a process’s instructions and data will be required.
We define a process’s set of pages, or segments, in physical memory, as its resident (or working) memory set.
For example, the RSS column in the output of ps(1) reports each process's current resident set size (in kilobytes), while VSZ reports its total virtual size:

USER     PID %CPU %MEM   VSZ   RSS TTY    STAT START   TIME COMMAND
chris   3865  0.0  0.6  2924  1644 pts/1  S    Sep15   0:01 -zsh
chris  25366  0.0  6.0 23816 15428 ?      S    14:34   0:06 /usr/bin/firefox
chris  25388  0.0  1.4 17216  3660 ?      S    14:34   0:00 (dns helper)
chris  26233  0.0  0.2  2604   688 pts/1  R    19:11   0:00 ps aux
           4  0.0  0.0     0     0 ?      SW   Sep12   0:04 [kswapd]
         692  0.0  0.2  1576   604 ?      S    Sep12   0:00 crond
         742  0.0  0.8  5212  2228 ?      S    Sep12   0:23 xfs -droppriv -da
         749  0.0  0.1  1344   340 tty1   S    Sep12   0:00 /sbin/mingetty tt
           1  0.0  0.1  1372   432 ?      S    Sep12   0:04 init
In the steady state, the memory will be fully occupied by the working sets of the Ready and Running processes, but:
If the processes’ working sets are permitted to be too large, fewer processes can ever be Ready.
If the processes’ working sets are forced to be too small, then additional requests must be made of the swapping space to retrieve required pages or segments.
All modern operating systems employ virtual memory based on paging (Q: can you determine the macOS or Linux page size?). Windows systems also employ virtual memory based on segmentation.
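One possible way to answer that question (a small sketch, not part of the original notes): POSIX systems, including Linux and macOS, report their page size through sysconf(); the same value is available from the shell as getconf PAGESIZE.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);   // commonly 4096 bytes on x86-64 Linux,
                                             // 16384 bytes on Apple-silicon macOS
    printf("page size: %ld bytes\n", pagesize);
    return 0;
}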
Virtual Memory Hardware using Page Tables
We saw that with simple paging, each process has its own page table. When a process's (complete) set of pages was loaded into memory, the current (hardware) page tables were saved and restored by the operating system.
Using virtual memory, the same approach is taken, but the contents of the page tables become more complex. Page table entries must include additional control information (one possible entry layout is sketched below), indicating at least:
if the page is present in physical memory (a P bit), and
if the page has been modified since it was brought into physical memory (an M bit).
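As a sketch only (the field widths below are invented; every MMU defines its own entry format), a page-table entry holding these control bits might be declared in C as:

#include <stdint.h>

typedef struct {
    uint32_t present  : 1;      // P bit - is the page currently in physical memory?
    uint32_t modified : 1;      // M bit - has the page been written to since it was loaded?
    uint32_t frame    : 20;     // physical frame number, meaningful only if present
    uint32_t unused   : 10;     // padding to a 32-bit entry
} pagetable_entry;

The M bit matters when a page is later evicted: an unmodified page can simply be discarded, whereas a modified page must first be written back to the swapping space.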
[Figure: Address Translation in a Paging System]
Virtual Memory Hardware using Page Tables, continued
The total size of the page table entries also becomes an issue, because the number of pages that a process may access greatly exceeds the number of actual frames. This is addressed using a two-level addressing scheme:
[Figure: Address Translation in a Two-Level Paging System]
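A worked illustration with typical 32-bit figures (not specific to any one machine): with 4KB pages and 4-byte page-table entries, a single flat table covering a 4GB logical address space would need 2^20 entries, i.e. 4MB, per process. A two-level scheme instead splits each address as:

    32-bit logical address  =  10-bit index into the top-level (root) page table
                            +  10-bit index into one second-level page table
                            +  12-bit offset within the 4KB page

Each table then holds 2^10 four-byte entries (4KB, exactly one frame), and second-level tables for regions of the address space that the process never uses need not be allocated at all.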
Virtual Memory Page Replacement
When the Running process requests a page that is not in memory, a page fault results, and (if the memory is currently ‘full’) the contents of one of the frames currently in memory must be replaced by the required page.
To make room for the required page, one or more existing pages must be “evicted” (to the swap space). Clearly, the working set of some process must be reduced.
However, if a page is evicted just before it is required (again), it’ll just need to be paged back in! If this continues, the activity of page thrashing is observed.
We hope that the operating system can avoid thrashing with an intelligent choice of the page to discard.
Virtual Memory Implementation Considerations
The many different implementations of virtual memory differ in their treatment of some common considerations:
1. When should a process’s pages be fetched?
A process initially requires the first page containing its starting address (and some initial data structures), but thereafter when should each page be allocated?
A VM system can employ demand paging in which a page is allocated only when a reference to it is made, or predictive pre-paging where pages are “intelligently” allocated before they are required.
2. Where in physical memory should pages be allocated?
Should we use policies such as first-fit, best-fit, or next-fit (which we saw when discussing basic memory partitioning)? Does it matter?
Virtual Memory Implementation Considerations, continued
3. Which existing blocks should be replaced?
i.e. what is the replacement policy?
To avoid thrashing, we wish to replace only pages unlikely to be required soon, but this must be balanced against how many frames can be allocated to a process, and against whether only the Running process's own pages should be displaced (a local policy) or whether other processes' pages may also be displaced (a global policy).
A number of replacement algorithms exist (seemingly a preoccupation of 1970s OS research) which select the pages to be replaced.
Fundamental algorithms include first-in, first-out (obvious, but it disadvantages long-running programs with high locality) and least-recently-used (almost ideal, but it requires time-expensive hardware to maintain time-stamps on page usage). A small simulation of the least-recently-used policy is sketched after this list.
4. How many processes should be admitted to the Ready and Running states?
The degree of multi-programming permitted must balance processor utilisation (minimising idle time due to I/O blocking) against utility (many processes executing with small resident set sizes and possible thrashing).
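A minimal simulation of the least-recently-used policy mentioned under point 3 (the frame count and reference string are invented for the example, and real kernels use far cheaper approximations, such as per-page reference bits, rather than true per-access timestamps):

#include <stdio.h>

#define NFRAMES 3

typedef struct {
    int  page;          // which page occupies this frame (-1 means empty)
    long last_used;     // pseudo-timestamp of the most recent reference
} frame_t;

int main(void)
{
    int     refs[]  = { 0, 1, 2, 0, 3, 0, 4, 2, 3, 0 };    // invented reference string
    int     nrefs   = sizeof(refs) / sizeof(refs[0]);
    int     faults  = 0;
    frame_t frames[NFRAMES] = { { -1, 0 }, { -1, 0 }, { -1, 0 } };

    for (int t = 0; t < nrefs; ++t) {
        int page = refs[t], hit = -1, victim = 0;

        for (int f = 0; f < NFRAMES; ++f) {                 // search the resident set
            if (frames[f].page == page)
                hit = f;                                     // page is already in memory
            if (frames[f].last_used < frames[victim].last_used)
                victim = f;                                  // least-recently-used frame so far
        }
        if (hit < 0) {                                       // page fault: evict the LRU page
            ++faults;
            frames[victim].page = page;
            hit = victim;
        }
        frames[hit].last_used = t + 1;                       // record this reference
    }
    printf("%d references caused %d page faults\n", nrefs, faults);
    return 0;
}

Updating last_used only when a page is loaded, rather than on every reference, turns the same victim-selection loop into first-in, first-out.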
No-one, yet, claims memory management is easy.