Introduction to Computer Systems 15-213/18-243, spring 2009
Virtual Memory
Acknowledgement: These slides are based on the textbook
(Computer Systems: A Programmer’s Perspective) and its slides.
Multiprocessing: The Reality
Single processor executes multiple processes concurrently
Process executions interleaved (multitasking)
Address spaces managed by virtual memory system (the topic of this lecture)
Register values for nonexecuting processes saved in memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation
A System Using Physical Addressing
Used in “simple” systems
E.g., embedded microcontrollers in devices like cars, elevators, and digital picture frames
[Figure: the CPU places a physical address directly on the memory bus to access main memory]
A System Using Virtual Addressing
Used in all modern servers, laptops, and smart phones
An important concept for virtual memory (see the following slides)
[Figure: the CPU issues a virtual address; the memory management unit (MMU) translates it into the physical address used to access main memory]
Address Spaces
Linear address space: consecutive non-negative integer addresses:
{0, 1, 2, 3 … }
Virtual address space: use n-bit integers as virtual addresses,
total N = 2^n virtual addresses:
{0, 1, 2, 3, …, N-1}
Physical address space: use m-bit integers as physical addresses,
total M = 2^m physical addresses:
{0, 1, 2, 3, …, M-1}
In general, N is larger than M
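As a quick worked example, here is a small C sketch of how N, M, and the page counts relate; the parameter values (48-bit virtual addresses, 40-bit physical addresses, 4 KB pages) are illustrative, not taken from the slides:

#include <stdio.h>

int main(void) {
    /* Illustrative parameters (not from the slides): 48-bit VAs, 40-bit PAs, 4 KB pages */
    unsigned n = 48, m = 40, p = 12;
    unsigned long long N = 1ULL << n;   /* number of virtual addresses  = 2^48 */
    unsigned long long M = 1ULL << m;   /* number of physical addresses = 2^40 */
    unsigned long long P = 1ULL << p;   /* page size in bytes           = 4096 */
    printf("virtual pages:  %llu\n", N / P);   /* 2^36 virtual pages  */
    printf("physical pages: %llu\n", M / P);   /* 2^28 physical pages */
    return 0;
}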
Why Virtual Memory (VM)?
Uses main memory efficiently
Use DRAM as a cache for parts of a virtual address space
Simplifies memory management
Each process gets the same uniform linear address space
Isolates address spaces
One process can’t read/write another process’s memory
VM as a Tool for Caching
Conceptually, virtual memory is an array of N consecutive bytes stored on disk
The contents of the array on disk are cached in physical memory (DRAM cache)
These cache blocks are called pages (size is P = 2^p bytes)
[Figure: virtual memory as an array of virtual pages (VPs) stored on disk; some VPs are cached as physical pages (PPs) in DRAM, others are uncached or unallocated]
DRAM Cache Organization
DRAM cache organization driven by the enormous miss penalty
Disk is about 10,000x slower than DRAM
Consequences
Large page (block) size: typically 4 KB to 4 MB
Fully associative
Any virtual page (VP) can be placed in any physical page (PP)
Sophisticated replacement algorithms for DRAM cache
Beyond the scope of this course
Write-back
Writes are not immediately propagated to disk
A modified (dirty) page is written back to disk only when it is evicted from the DRAM cache
Enabling Data Structure: Page Table
Page table: an array of page table entries (PTEs) that maps virtual pages to physical pages
Per-process kernel data structure in DRAM
[Figure: a memory-resident page table; each PTE holds a valid bit plus either a physical page number (page cached in DRAM) or a disk address (page stored on disk)]
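A minimal sketch in C of what a page table entry and a page-table lookup could look like; the field names, the dirty bit, and the flat-table layout are illustrative, since real PTE formats are architecture-specific:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_BITS 12   /* assume 4 KB pages for illustration */

/* Illustrative PTE: a valid bit, a dirty bit, and either a PPN or a disk address. */
typedef struct {
    bool     valid;       /* 1: page is cached in DRAM                */
    bool     dirty;       /* 1: page has been modified since paged in */
    uint64_t ppn;         /* physical page number (when valid)        */
    uint64_t disk_addr;   /* location on disk (when not valid)        */
} pte_t;

/* Index a flat, per-process page table with the virtual page number. */
pte_t *lookup_pte(pte_t *page_table, uint64_t vaddr) {
    uint64_t vpn = vaddr >> PAGE_BITS;
    return &page_table[vpn];
}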
Page Hit
Page hit: reference to VM word that is in physical memory (DRAM cache hit)
[Figure: the virtual address selects a PTE whose valid bit is set, so the reference is satisfied by the physical page cached in DRAM]
Page Fault
Page fault: reference to VM word that is not in physical memory (DRAM cache miss)
[Figure: the virtual address selects a PTE whose valid bit is clear; the referenced page lives only on disk, so the access misses in DRAM]
Handling Page Fault
1. Page miss causes page fault (an exception)
2. Page fault handler selects a victim page to be evicted (here VP 4), writes it back to disk if dirty, and copies the referenced page from disk into the freed physical page, updating both PTEs
3. Offending instruction is restarted: page hit!
[Figure: the page table before and after the fault; the victim page's PTE now holds only a disk address, while the referenced page's PTE points to its new physical page in DRAM]
Demand paging: waiting until the miss to copy the page to DRAM
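The following is a toy, user-level simulation of these steps in C; the page counts, the FIFO victim choice, and all names are made up for illustration, and a real OS handler works on hardware page tables and performs actual disk I/O:

#include <stdio.h>
#include <stdbool.h>

#define NVP 4   /* virtual pages in the toy address space */
#define NPP 2   /* physical pages in the toy "DRAM"       */

typedef struct { bool valid; int ppn; } pte_t;

static pte_t page_table[NVP];   /* all pages start out "on disk" (invalid) */
static int   resident_vp[NPP];  /* which VP occupies each PP; -1 if free   */
static int   next_victim;       /* trivial FIFO replacement                */

static void access_page(int vp) {
    if (page_table[vp].valid) {                /* page hit */
        printf("VP %d: hit in PP %d\n", vp, page_table[vp].ppn);
        return;
    }
    /* Page fault: select a victim PP and evict whatever VP lives there.  */
    int pp = next_victim;
    next_victim = (next_victim + 1) % NPP;
    int old_vp = resident_vp[pp];
    if (old_vp >= 0)
        page_table[old_vp].valid = false;      /* write-back would happen here if dirty */

    /* "Page in" the referenced VP, update its PTE, and restart the access. */
    resident_vp[pp] = vp;
    page_table[vp].valid = true;
    page_table[vp].ppn = pp;
    if (old_vp >= 0)
        printf("VP %d: page fault, evicted VP %d, now in PP %d\n", vp, old_vp, pp);
    else
        printf("VP %d: page fault (cold miss), now in PP %d\n", vp, pp);
}

int main(void) {
    for (int pp = 0; pp < NPP; pp++) resident_vp[pp] = -1;
    int trace[] = {0, 1, 0, 2, 3, 0};          /* demand paging: pages load only on first touch */
    for (int i = 0; i < 6; i++) access_page(trace[i]);
    return 0;
}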
Allocating Pages
Allocating a new page (VP 5) of virtual memory.
[Figure: the page table after allocation; a new PTE for VP 5 points to newly created space on disk]
Locality to the Rescue Again!
Virtual memory works efficiently because of locality
At any point in time, programs tend to access a set of active virtual pages called the working set
Programs with better temporal locality will have smaller working sets
If (working set size < main memory size)
Good performance for one process after compulsory misses
If ( SUM(working set sizes) > main memory size )
Thrashing: Performance meltdown where pages are swapped (copied) in and out continuously
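As an illustrative (made-up) calculation: with 4 GB of main memory, eight active processes whose working sets are each 1 GB need 8 GB of resident pages, so every process keeps evicting the others' working sets and the machine thrashes; with only three such processes (3 GB total), the working sets fit and each process runs at roughly DRAM speed after its compulsory misses.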
VM as a Tool for Memory Management
Key idea: each process has its own virtual address space
It can view memory as a simple linear array
Mapping function scatters addresses through physical memory
[Figure: address translation maps each process's linear virtual address space onto scattered physical pages in DRAM; a physical page holding read-only library code appears in both processes' mappings]
VM as a Tool for Memory Management
Simplifying memory allocation
Each virtual page can be mapped to any physical page
A virtual page can be stored in different physical pages at different times
Sharing code and data among processes
Map virtual pages to the same physical page (here: PP 6)
[Figure: both processes' page tables map a virtual page to the same physical page, PP 6, which holds the shared library code]
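A toy C illustration of this sharing: two per-process page tables, modeled as plain arrays purely for illustration, map different virtual pages onto the same physical page, PP 6:

#include <stdio.h>

#define NVP 8
#define UNMAPPED -1

int main(void) {
    /* Toy per-process page tables: index = VPN, value = PPN (or UNMAPPED). */
    int pt1[NVP], pt2[NVP];
    for (int i = 0; i < NVP; i++) pt1[i] = pt2[i] = UNMAPPED;

    pt1[0] = 2;   /* process 1: a private data page in PP 2 */
    pt2[0] = 7;   /* process 2: a private data page in PP 7 */

    /* Shared read-only library code: both processes map it to PP 6,
       possibly at different virtual page numbers.                   */
    pt1[3] = 6;
    pt2[5] = 6;

    printf("P1 VP 3 -> PP %d, P2 VP 5 -> PP %d (same physical page)\n", pt1[3], pt2[5]);
    return 0;
}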
VM as a Tool for Memory Protection
Extend page table entries (PTEs) with permission bits
Memory management unit (MMU) checks these bits on each access
[Figure: page tables for processes i and j; each PTE carries permission bits (e.g., SUP, READ, WRITE) alongside the physical page number, checked by the MMU on every access]
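A minimal sketch in C of the kind of check the MMU performs; the bit names SUP/READ/WRITE and the struct layout are illustrative, not a real hardware format:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative PTE with permission bits (not a real hardware layout). */
typedef struct {
    bool valid;    /* page is mapped                              */
    bool sup;      /* accessible only in supervisor (kernel) mode */
    bool read;     /* reads allowed                               */
    bool write;    /* writes allowed                              */
    uint64_t ppn;  /* physical page number                        */
} pte_t;

typedef enum { ACCESS_READ, ACCESS_WRITE } access_t;

/* Returns true if the access is permitted; a real MMU would raise a
 * protection fault (surfacing as SIGSEGV) instead of returning false. */
bool access_ok(const pte_t *pte, access_t kind, bool kernel_mode) {
    if (!pte->valid)                         return false;  /* page fault case */
    if (pte->sup && !kernel_mode)            return false;
    if (kind == ACCESS_READ  && !pte->read)  return false;
    if (kind == ACCESS_WRITE && !pte->write) return false;
    return true;
}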
VM Address Translation
Virtual Address Space
V = {0, 1, …, N–1}
Physical Address Space
P = {0, 1, …, M–1}
Address Translation
MAP: V → P ∪ {∅}
For virtual address a:
MAP(a) = a′  if data at virtual address a is at physical address a′ in P
MAP(a) = ∅  if data at virtual address a is not in physical memory (either invalid or stored on disk)
Summary of Address Translation Symbols
Basic Parameters
N = 2^n : Number of addresses in virtual address space
M = 2^m : Number of addresses in physical address space
P = 2^p : Page size (bytes)
Components of the virtual address (VA)
TLBI: TLB index
TLBT: TLB tag
VPO: Virtual page offset
VPN: Virtual page number
Components of the physical address (PA)
PPO: Physical page offset (same as VPO)
PPN: Physical page number
Address Translation With a Page Table
[Figure: the virtual address splits into a virtual page number (VPN) and a virtual page offset (VPO); the page table base register locates the current process's page table, and the VPN indexes a PTE. If the valid bit is 0, the page is not in memory (page fault); if 1, the PTE's physical page number (PPN) is concatenated with the VPO (= PPO) to form the physical address.]
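A minimal C sketch of the VPN/VPO split and PPN/VPO concatenation described above; the flat page table and 4 KB pages are assumed purely for illustration, and real MMUs use multi-level tables:

#include <stdint.h>
#include <stdbool.h>

#define P_BITS 12u   /* p = 12: 4 KB pages (illustrative) */

typedef struct { bool valid; uint64_t ppn; } pte_t;

/* Translate a virtual address via a flat page table.
 * Returns false when the valid bit is clear (page fault). */
bool translate(const pte_t *page_table, uint64_t vaddr, uint64_t *paddr) {
    uint64_t vpn = vaddr >> P_BITS;                 /* virtual page number */
    uint64_t vpo = vaddr & ((1ULL << P_BITS) - 1);  /* virtual page offset */

    const pte_t *pte = &page_table[vpn];            /* page table base + VPN selects the PTE */
    if (!pte->valid)
        return false;                               /* page fault */

    *paddr = (pte->ppn << P_BITS) | vpo;            /* PPO is identical to VPO */
    return true;
}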
Address Translation: Page Hit
1) Processor sends virtual address to MMU
2-3) MMU fetches PTE from page table in memory
4) MMU sends physical address to cache/memory
5) Cache/memory sends data word to processor
Address Translation: Page Fault
1) Processor sends virtual address to MMU
2-3) MMU fetches PTE from page table in memory
4) Valid bit is zero, so MMU triggers page fault exception
5) Handler identifies victim (and, if dirty, pages it out to disk)
6) Handler pages in new page and updates PTE in memory
7) Handler returns to original process, restarting faulting instruction
Integrating VM and Cache
VA: virtual address, PA: physical address, PTE: page table entry, PTEA: PTE address
The MMU's page-table accesses (PTEA and PTE) go through the same cache/memory hierarchy as ordinary data accesses (PA and data)
Speeding up Translation with a TLB
Page table entries (PTEs) are cached in L1 like any other memory word
PTEs may be evicted by other data references
PTE hit still requires a small L1 delay
Solution: Translation Lookaside Buffer (TLB)
Small set-associative hardware cache in MMU
Maps virtual page numbers to physical page numbers
Contains complete page table entries for small number of pages
Accessing the TLB
MMU uses the VPN portion of the virtual address to access the TLB:
The VPN is split into a TLB tag (TLBT) and a TLB index (TLBI)
With T = 2^t sets, the TLBI (the low-order t bits of the VPN) selects the set
The TLBT (the remaining VPN bits) is matched against the tag of each line in that set
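A sketch in C of how the VPN could be split into TLBI and TLBT to probe a small set-associative TLB; the set count, associativity, and structure layout are illustrative:

#include <stdint.h>
#include <stdbool.h>

#define TLB_SETS 16u   /* T = 2^t sets with t = 4 (illustrative) */
#define TLB_WAYS 4u    /* 4-way set associative                  */

typedef struct { bool valid; uint64_t tag; uint64_t ppn; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

/* Probe the TLB with a virtual page number; on a hit, return the PPN
 * without touching the page table in memory.                         */
bool tlb_lookup(uint64_t vpn, uint64_t *ppn) {
    uint64_t tlbi = vpn % TLB_SETS;   /* low t bits of the VPN select the set */
    uint64_t tlbt = vpn / TLB_SETS;   /* remaining VPN bits form the tag      */

    for (unsigned way = 0; way < TLB_WAYS; way++) {
        if (tlb[tlbi][way].valid && tlb[tlbi][way].tag == tlbt) {
            *ppn = tlb[tlbi][way].ppn;
            return true;              /* TLB hit: no extra memory access      */
        }
    }
    return false;                     /* TLB miss: MMU must fetch the PTE     */
}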
A TLB hit eliminates a memory access
A TLB miss incurs an additional memory access (the PTE)
Fortunately, TLB misses are rare. Why? Because of locality: each TLB entry covers an entire page, so repeated references to nearby addresses keep hitting the same few entries
Summary
Programmer’s view of virtual memory
Each process has its own private linear address space
Cannot be corrupted by other processes
System view of virtual memory
Uses memory efficiently by caching virtual memory pages
Efficient only because of locality
Simplifies memory management and programming
Simplifies protection by providing a convenient interpositioning point to check permissions