Chapter 10: Virtual Memory
Background
Code needs to be in memory to execute, but the entire program is rarely used
Error code, unusual routines, large data structures
Entire program code not needed at the same time
Consider the ability to execute a partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running -> more programs run at the same time
Increased CPU utilization and throughput with no increase in response time or turnaround time
Less I/O needed to load or swap programs into memory -> each user program runs faster
Background
Virtual memory – separation of user logical memory from physical memory
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
More programs running concurrently
Less I/O needed to load or swap processes
Virtual address space – logical view of how a process is stored in memory
Usually starts at address 0, contiguous addresses until end of space
Meanwhile, physical memory is organized in page frames
MMU must map logical to physical
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Shared Library Using Virtual Memory
Demand Paging
Bring a page into memory only when it is needed
Less I/O needed
Less memory needed
Faster response
More users
Page is needed -> reference to it
invalid reference -> abort
not-in-memory -> bring to memory
Lazy swapper – never swaps a page into memory unless that page will be needed
Swapper that deals with pages is a pager
Transfer of a Paged Memory to Contiguous Disk Space
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated (v in-memory, i not-in-memory)
Initially valid–invalid bit is set to i on all entries
Example of a page table snapshot: each entry holds a frame # and a valid–invalid bit; resident pages are marked v, the rest i
During address translation, if the valid–invalid bit in the page table entry is i -> page fault
Page Table When Some Pages Are Not in Main Memory
Page Fault
1. Is the reference a valid memory access? If no, terminate the process.
2. If valid but the page is not in memory, suspend the process (context switch to another process).
3. Find an empty frame in memory.
4. Load the desired page from disk into the empty frame.
5. Modify the page table entry to point to this frame, valid bit = v.
6. Restart the process at the instruction that caused the page fault. The process can now access the page as though it had always been in memory.
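As a rough illustration of these steps, here is a small, self-contained C simulation (the page table, "disk" array, and helper names are invented for this sketch; a real kernel's fault handler is far more involved):

/* Toy user-space simulation of the page-fault steps; all structures are illustrative. */
#include <stdio.h>
#include <string.h>

#define NPAGES   8   /* pages in the process's logical address space */
#define NFRAMES  4   /* physical frames available                    */

struct pte { int valid; int frame; };

static struct pte page_table[NPAGES];
static char disk[NPAGES][16];     /* backing store: one "block" per page */
static char memory[NFRAMES][16];  /* physical memory frames              */
static int  next_free = 0;        /* naive free-frame allocator          */

static void page_fault(int vpn)
{
    if (vpn < 0 || vpn >= NPAGES) { puts("invalid reference: abort"); return; }
    int frame = next_free++;                      /* assume a free frame exists   */
    memcpy(memory[frame], disk[vpn], 16);         /* load page from "disk"        */
    page_table[vpn].frame = frame;                /* update page table entry      */
    page_table[vpn].valid = 1;
    printf("page %d loaded into frame %d\n", vpn, frame);
}

static char *mem_access(int vpn)
{
    if (!page_table[vpn].valid)                   /* valid bit = i -> page fault  */
        page_fault(vpn);
    return memory[page_table[vpn].frame];         /* restart the access           */
}

int main(void)
{
    strcpy(disk[3], "hello");
    printf("%s\n", mem_access(3));   /* first access faults; later accesses would not */
    return 0;
}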
Steps in Handling a Page Fault
Performance of Demand Paging
– p is the page-fault rate (0 ≤ p ≤ 1)
– m is the memory access time
– f is the page-fault service time (overhead)
Then the effective memory access time is: EAT = (1 – p) × m + p × f
Example: m = 100 ns; f = 25 ms
EAT = 100 + 24,999,900 × p (nanoseconds)
If p = 1/1000, then EAT ≈ 25,100 nanoseconds (about 25 microseconds) – a slowdown factor of roughly 250!
To keep the slowdown under 10%, we can allow less than 1 memory access out of 2,500,000 to page-fault!
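A quick throwaway C calculation, using the slide's values, that reproduces these figures:

#include <stdio.h>

int main(void)
{
    double m = 100.0;            /* memory access time in ns              */
    double f = 25e6;             /* page-fault service time: 25 ms in ns  */
    double probs[] = { 1.0 / 1000, 1.0 / 2500000 };

    for (int i = 0; i < 2; i++) {
        double p   = probs[i];
        double eat = (1 - p) * m + p * f;          /* EAT = (1-p)*m + p*f */
        printf("p = %.2e  EAT = %.1f ns  slowdown = %.1fx\n", p, eat, eat / m);
    }
    return 0;
}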
Process Creation
Virtual memory allows other benefits during process creation:
– Copy-on-Write
– Memory-Mapped Files (later)
Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation as only modified pages are copied
Free pages are allocated from a pool of zeroed-out pages
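A minimal POSIX C sketch of COW in action (the buffer size and output are illustrative; the per-page copying itself is done transparently by the kernel):

/* After fork(), parent and child share the buffer's pages until one of them writes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    size_t size = 16 * 1024 * 1024;          /* 16 MB buffer */
    char *buf = malloc(size);
    memset(buf, 'A', size);                  /* touch every page in the parent */

    pid_t pid = fork();                      /* child shares these pages, marked read-only (COW) */
    if (pid == 0) {
        buf[0] = 'B';                        /* first write: kernel copies just this one page */
        printf("child sees:  %c\n", buf[0]); /* 'B' in the child's private copy */
        _exit(0);
    }
    wait(NULL);
    printf("parent sees: %c\n", buf[0]);     /* still 'A' – parent's page was never modified */
    free(buf);
    return 0;
}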
Before Process 1 Modifies Page C
After Process 1 Modifies Page C
What happens if there is no free frame?
Page replacement – find some page in memory that is not really in use and swap it out
Need a page-replacement algorithm
Performance – want an algorithm which will result in the minimum number of page faults
Same page may be brought into memory several times
Page Replacement
Prevent over-allocation of memory by modifying page-fault service routine to include page replacement
Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
– If there is a free frame, use it
– If there is no free frame, use a page-replacement algorithm to select a victim frame
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the process
Page Replacement
(Figure: page replacement – the victim frame's page is swapped out to disk and its page-table entry changed to invalid; the desired page is then swapped in and the page table updated to point to the new frame.)
Page Replacement Algorithms
Want lowest page-fault rate
Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process): 9 page faults
4 frames: 10 page faults
Belady’s Anomaly: more frames can mean more page faults
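A small C simulation of FIFO replacement (purely illustrative) that reproduces the 9-fault and 10-fault counts for this reference string:

/* Simple FIFO page-replacement simulation. */
#include <stdio.h>

static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, faults = 0;
    for (int i = 0; i < nframes; i++) frames[i] = -1;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];          /* replace the oldest resident page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}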
FIFO Page Replacement
Optimal Algorithm
Replace the page that will not be used for the longest period of time
4 frames example with reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults
How do you know this? (requires knowledge of future references)
Used for measuring how well your algorithm performs
Optimal Page Replacement
Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 4 frames, LRU yields 8 page faults
Counter implementation
Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
When a page needs to be replaced, look at the counters to determine which one to replace
Need to search through all entries to find the LRU page
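A sketch of the counter idea in C (a user-space simulation with invented names, not how hardware actually stamps pages):

/* Counter-based LRU: stamp each page with a logical clock on every reference. */
#include <stdio.h>

#define NFRAMES 4

struct frame { int page; unsigned long last_used; };

static int lru_faults(const int *refs, int n)
{
    struct frame frames[NFRAMES];
    unsigned long clock = 0;
    int faults = 0;
    for (int i = 0; i < NFRAMES; i++) { frames[i].page = -1; frames[i].last_used = 0; }

    for (int i = 0; i < n; i++) {
        clock++;
        int slot = -1, victim = 0;
        for (int j = 0; j < NFRAMES; j++) {
            if (frames[j].page == refs[i]) { slot = j; break; }
            if (frames[j].last_used < frames[victim].last_used) victim = j;  /* smallest counter = LRU */
        }
        if (slot < 0) { slot = victim; frames[slot].page = refs[i]; faults++; }
        frames[slot].last_used = clock;     /* copy the clock into the counter */
    }
    return faults;
}

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    printf("LRU, 4 frames: %d faults\n", lru_faults(refs, 12));  /* 8 */
    return 0;
}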
LRU Page Replacement
LRU Algorithm (Cont.)
Stack implementation – keep a stack of page numbers in a doubly linked form
Page referenced: move it to the top (requires up to 6 pointers to be changed)
No search for replacement
A tail pointer points to the bottom of the stack, which is the LRU page
The updating of the clock fields or stack must be done for every memory reference
Without hardware assistance, this would slow every memory reference by a factor of at least 10, and few systems provide sufficient hardware for true LRU page replacement
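A minimal C sketch of the doubly linked stack (node layout and helper names are made up for illustration):

/* Doubly linked "stack" of page numbers: MRU at head, LRU at tail; no search on replacement. */
#include <stdio.h>
#include <stdlib.h>

struct node { int page; struct node *prev, *next; };
static struct node *head, *tail;   /* head = most recently used, tail = LRU victim */

/* Move an already-resident page to the top of the stack (up to 6 pointer updates). */
static void move_to_top(struct node *n)
{
    if (n == head) return;
    n->prev->next = n->next;                           /* unlink n        */
    if (n->next) n->next->prev = n->prev; else tail = n->prev;
    n->prev = NULL;                                    /* relink at head  */
    n->next = head;
    head->prev = n;
    head = n;
}

static void push(int page)          /* bring a new page in at the top */
{
    struct node *n = malloc(sizeof *n);
    n->page = page; n->prev = NULL; n->next = head;
    if (head) head->prev = n; else tail = n;
    head = n;
}

int main(void)
{
    for (int p = 1; p <= 4; p++) push(p);           /* stack (top..bottom): 4 3 2 1  */
    move_to_top(head->next->next);                  /* reference page 2              */
    printf("LRU victim is page %d\n", tail->page);  /* -> 1, found with no search    */
    return 0;
}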
Use Of A Stack to Record The Most Recent Page References
LRU Approximation Algorithms
Reference bit
With each page associate a bit, initially = 0
When the page is referenced, the bit is set to 1
Replace a page whose bit is 0 (if one exists)
We do not know the order of use, however
Additional-Reference-Bits Algorithm
– eight bits per page, initially all zero
– at periodic intervals, a timer interrupt transfers control to the OS; the OS shifts the reference bit into the high-order bit, shifts the other bits right by 1, discarding the low-order bit, and then resets each reference bit to zero
– interpret the 8-bit byte as an unsigned integer; the page with the lowest number is the LRU page to be replaced
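An illustrative C sketch of the shifting step and victim selection (the page count and reference pattern are invented):

/* Additional-reference-bits: one 8-bit history byte per page, shifted at each timer tick. */
#include <stdio.h>
#include <stdint.h>

#define NPAGES 4

static uint8_t history[NPAGES];    /* high-order bit = most recent interval   */
static int     ref_bit[NPAGES];    /* hardware reference bit, set on each use */

static void timer_tick(void)
{
    for (int p = 0; p < NPAGES; p++) {
        history[p] = (uint8_t)((ref_bit[p] << 7) | (history[p] >> 1)); /* shift in, discard low bit */
        ref_bit[p] = 0;                                                /* reset for next interval   */
    }
}

static int lru_victim(void)        /* page with the smallest history value */
{
    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (history[p] < history[victim]) victim = p;
    return victim;
}

int main(void)
{
    ref_bit[0] = 1; ref_bit[2] = 1; timer_tick();   /* interval 1: pages 0 and 2 used */
    ref_bit[2] = 1;                 timer_tick();   /* interval 2: only page 2 used   */
    for (int p = 0; p < NPAGES; p++) printf("page %d history = 0x%02x\n", p, history[p]);
    printf("victim = page %d\n", lru_victim());     /* an untouched page (0x00) is chosen */
    return 0;
}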
LRU Approximation Algorithms
Second chance
Need reference bit
Clock replacement
If page to be replaced (in clock order) has reference bit = 1 then:
set reference bit 0
leave page in memory
replace next page (in clock order), subject to same rules
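A compact C sketch of the clock hand (the resident pages and reference bits below are made-up sample values):

/* Second-chance (clock): advance the hand, clear reference bits, evict the first 0-bit page. */
#include <stdio.h>

#define NFRAMES 4

static int page_in[NFRAMES] = { 10, 11, 12, 13 };  /* pages currently resident (illustrative) */
static int ref_bit[NFRAMES] = {  1,  0,  1,  1 };  /* set by hardware on each reference       */
static int hand = 0;                               /* clock hand: next frame to inspect       */

static int pick_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {            /* no second chance left: this is the victim */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;                   /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    int v = pick_victim();
    printf("evict page %d from frame %d\n", page_in[v], v);  /* frame 1 (page 11) in this setup */
    return 0;
}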
Second-Chance (clock) Page-Replacement Algorithm
Counting Algorithms
Keep a counter of the number of references that have been made to each page
LFU Algorithm: replaces page with smallest count
MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used
Allocation of Frames
Each process needs a minimum number of frames
Example: IBM 370 – 6 pages to handle the SS MOVE instruction:
instruction is 6 bytes, might span 2 pages
2 pages to handle from
2 pages to handle to
Two major allocation schemes: fixed allocation and priority allocation
Fixed Allocation
Equal allocation – For example, if there are 100 frames and 5 processes, give each process 20 frames.
Proportional allocation – Allocate according to the size of process
si = size of process pi
S = Σ si
m = total number of frames
ai = allocation for pi = (si / S) × m
Example: m = 64, s1 = 10, s2 = 127, so S = 137
a1 = (10 / 137) × 64 ≈ 5
a2 = (127 / 137) × 64 ≈ 59
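A tiny C check of this arithmetic (values from the slide):

#include <stdio.h>

int main(void)
{
    int m = 64;                       /* total frames           */
    int s[] = { 10, 127 };            /* process sizes s1, s2   */
    int S = s[0] + s[1];              /* S = sum of sizes = 137 */

    for (int i = 0; i < 2; i++) {
        double a = (double)s[i] * m / S;                     /* ai = (si / S) * m */
        printf("a%d = %.2f -> about %.0f frames\n", i + 1, a, a);
    }
    return 0;
}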
Priority Allocation
Use a proportional allocation scheme using priorities rather than size
If process Pi generates a page fault, either
select for replacement one of its own frames, or
select for replacement a frame from a process with lower priority number
Global vs. Local Allocation
Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another
– with global replacement, a process cannot control its own page-fault rate; it depends on the paging behaviour of other processes
Local replacement – each process selects from only its own set of allocated frames
– with local replacement, the set of pages in memory for a process is affected only by the paging behavior of that process
– but local replacement may prevent a process from using other, less-used pages of memory
Global replacement generally results in greater throughput and is more common
Thrashing
If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
low CPU utilization
operating system thinks that it needs to increase the degree of multiprogramming
another process added to the system
Thrashing ≡ a process is busy swapping pages in and out
Thrashing (Cont.)
Demand Paging and Thrashing
Why does paging work? Locality model
A locality is a set of pages that are actively used together.
Process migrates from one locality to another, e.g. on subroutine calls
Localities may overlap.
Why does thrashing occur?
Σ size of locality > total memory size
Locality In A Memory-Reference Pattern
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references, e.g. 10,000 instructions
WSSi (working-set size of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
if Δ too small, it will not encompass the entire locality
if Δ too large, it will encompass several localities
if Δ = ∞, it will encompass the entire program
D = Σ WSSi ≡ total demand for frames
if D > m -> thrashing (m: total number of available frames)
Policy: if D > m, then suspend one of the processes
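A simple C sketch that computes WSS over a sliding window of the last Δ references (DELTA, the reference string, and the sample times are illustrative):

/* Working set: number of distinct pages referenced in the last DELTA references. */
#include <stdio.h>

#define DELTA 10

static int wss(const int *refs, int t)   /* working-set size at time t (0-based index) */
{
    int seen[32] = { 0 }, count = 0;
    int start = (t + 1 >= DELTA) ? t + 1 - DELTA : 0;
    for (int i = start; i <= t; i++)
        if (!seen[refs[i]]) { seen[refs[i]] = 1; count++; }
    return count;
}

int main(void)
{
    int refs[] = { 2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4 };
    printf("WSS at t = 9:  %d\n", wss(refs, 9));    /* distinct pages in refs[0..9]   -> 5 */
    printf("WSS at t = 19: %d\n", wss(refs, 19));   /* distinct pages in refs[10..19] -> 5 */
    return 0;
}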
Working-set model
Use of the working set model:
– the OS monitors the working set of each process and allocates to that process enough frames to cover its working-set size
– if there are enough extra frames, add another process
– if the sum of the working-set sizes increases, exceeding the total number of available frames, then suspend a process
– difficulty: keeping track of the working set, as the working-set window is a moving window
Keeping Track of the Working Set
Approximate with interval timer + a reference bit + in-memory bits
Example: Δ = 10,000
Timer interrupts after every 5,000 time units
Keep 2 in-memory bits for each page
Whenever the timer interrupts, for each page: the in-memory bit 1 value is copied into in-memory bit 2, the reference bit value is copied into in-memory bit 1, and then the reference bit is set to 0
If one of the bits = 1 immediately prior to the interrupt -> the page is in the working set
Determines whether a page was used within the last 10,000 to 15,000 references
Not completely accurate
Improvement: use 10 in-memory bits and interrupt every 1,000 time units, but the cost to service these more frequent interrupts will be correspondingly higher
In the following example, Δ = 10; the timer interrupts after every 5 time units; 2 in-memory bits are kept for each page
Keeping Track of the Working Set (cont.)
∆ = 10, interrupt every 5 time units
Reference string: …2615777751623412344434344413234443444
Working set after successive interrupts: {1,2,5,6,7} {1,2,5,6,7} {1,2,3,4,5,6,7} {1,2,3,4,5,6,7} {1,2,3,4,6} {1,2,3,4} {1,2,3,4}
(Figure: per-page reference bit and two in-memory bits sampled at each interrupt.)
Page-Fault Frequency Scheme
Establish “acceptable” page-fault rate.
– If the actual rate is too high, allocate the process another frame; if no free frame is available, suspend a process and allocate its frames to the processes with high page-fault rates
– If the actual rate is too low, remove a frame from the process
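A sketch of the control policy in C (the thresholds and fault counts are invented; real systems tune these empirically):

/* Page-fault-frequency: grow or shrink a process's frame allocation to stay inside a band. */
#include <stdio.h>

#define LOWER 2    /* faults per interval considered "too low"  */
#define UPPER 8    /* faults per interval considered "too high" */

static int adjust_frames(int frames, int faults_this_interval)
{
    if (faults_this_interval > UPPER) return frames + 1;  /* give the process another frame   */
    if (faults_this_interval < LOWER) return frames - 1;  /* reclaim a frame from the process */
    return frames;                                        /* rate acceptable: leave it alone  */
}

int main(void)
{
    int frames = 10;
    int faults[] = { 12, 9, 5, 1 };    /* measured fault counts over four intervals (made up) */
    for (int i = 0; i < 4; i++) {
        frames = adjust_frames(frames, faults[i]);
        printf("interval %d: %2d faults -> %d frames\n", i, faults[i], frames);
    }
    return 0;
}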
Working Sets and Page Fault Rates
Allocating Kernel Memory
Treated differently from user memory
Often allocated from a free-memory pool
Kernel requests memory for structures of varying sizes
Some kernel memory needs to be contiguous
Buddy System
Allocates memory from a fixed-size segment consisting of physically contiguous pages
Memory allocated using a power-of-2 allocator
Satisfies requests in units sized as powers of 2
Request rounded up to next highest power of 2
When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
Continue splitting until an appropriately sized chunk is available
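A small C sketch of the rounding and splitting steps, using a 256 KB segment and a 21 KB request as sample numbers (the code only prints the splits; it does not manage a free list):

#include <stdio.h>

static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void)
{
    size_t segment = 256 * 1024;            /* initial physically contiguous chunk */
    size_t request = 21 * 1024;             /* e.g. kernel asks for 21 KB          */
    size_t need = next_pow2(request);       /* rounded up to 32 KB                 */

    printf("request %zu B -> allocate %zu B\n", request, need);
    for (size_t chunk = segment; chunk > need; chunk >>= 1)
        printf("split %6zu B into two %6zu B buddies\n", chunk, chunk >> 1);
    /* one 32 KB buddy satisfies the request; its buddy plus the larger leftovers stay free */
    return 0;
}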
Buddy System Allocator
Slab Allocator
Alternate strategy, used for kernel data structures
Slab is one or more physically contiguous pages
Cache consists of one or more slabs
Single cache for each unique kernel data structure
Each cache is filled with objects – instantiations of the data structure
When a cache is created, it is filled with objects marked as free
When structures are stored, objects are marked as used
If a slab is full of used objects, the next object is allocated from an empty slab
If there are no empty slabs, a new slab is allocated
Benefits include no fragmentation and fast satisfaction of memory requests
Slab Allocation
Other Issues – Prepaging and Page Size
Prepaging: each time a process is started or resumed, try to bring into memory at one time all the pages that will be needed.
Page size selection
– increasing page size increases internal fragmentation.
– increasing page size decreases page table size.
– increasing page size decreases I/O overhead.
– increasing page size prevents each page from matching program locality as accurately.
– increasing page size decreases the number of page faults.
The trend is towards larger page sizes.
Other Issues – TLB Reach
TLB Reach – the amount of memory accessible from the TLB
TLB Reach = (TLB Size) × (Page Size)
Ideally, the working set of each process is stored in the TLB
Otherwise there is a high degree of TLB misses (references resolved through the page table instead)
Increase the Page Size
– This may lead to an increase in fragmentation, as not all applications require a large page size
Provide Multiple Page Sizes
– This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation
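For example (illustrative figures): a 64-entry TLB with 4 KB pages has a reach of 64 × 4 KB = 256 KB, while the same TLB with 2 MB pages would cover 64 × 2 MB = 128 MB.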
Other Issues – Program Structure
Program structure
int data[128][128];
Each row is stored in one page
Program 1
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
128 x 128 = 16,384 page faults (column-order traversal touches a different page on every access)
Program 2
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
128 page faults (row-order traversal finishes one page before moving to the next)
Other Issues – I/O interlock
I/O Interlock – Pages must sometimes be locked into memory
Consider I/O - Pages that are used for copying a file from a device must be locked from being selected for eviction by a page replacement algorithm
Reason Why Frames Used For I/O Must Be In Memory
End of Chapter 10