Carnegie Mellon
Chapter 9: Virtual Memory

Today
 Address spaces
 VM as a tool for caching
 VM as a tool for memory management
 VM as a tool for memory protection
 Address translation

A System Using Physical Addressing
[Figure: the CPU sends a physical address (PA), here 4, directly to main memory (addresses 0 through M-1), and the addressed data word is returned.]
 Used in “simple” systems like embedded microcontrollers in devices like cars, elevators, and digital picture frames

A System Using Virtual Addressing
[Figure: inside the CPU chip, the CPU issues a virtual address (VA), here 4100, to the MMU; the MMU translates it to a physical address (PA), here 4, which is sent to main memory (addresses 0 through M-1), and the addressed data word is returned.]
 Used in all modern servers, desktops, and laptops
 One of the great ideas in computer science

Address Spaces
 Linear address space: Ordered set of contiguous non-negative integer addresses: {0, 1, 2, 3, ...}
 Virtual address space: Set of N = 2^n virtual addresses {0, 1, 2, 3, ..., N-1}
 Physical address space: Set of M = 2^m physical addresses {0, 1, 2, 3, ..., M-1}
 Clean distinction between data (bytes) and their attributes (addresses)
 Each object can now have multiple addresses
 Every byte in main memory: one physical address, one (or more) virtual addresses

Why Virtual Memory (VM)?
 Uses main memory efficiently
   Use DRAM as a cache for the parts of a virtual address space
 Simplifies memory management
   Each process gets the same uniform linear address space
 Isolates address spaces
   One process can’t interfere with another’s memory
   User program cannot access privileged kernel information

VM as a Tool for Caching
 Virtual memory is an array of N contiguous bytes stored on disk
 The contents of the array on disk are cached in physical memory (DRAM cache)
   These cache blocks are called pages (size is P = 2^p bytes)
[Figure: virtual memory (addresses 0 through N-1) is divided into virtual pages VP 0 through VP 2^(n-p)-1, each unallocated, cached, or uncached; physical memory (addresses 0 through M-1) is divided into physical pages PP 0 through PP 2^(m-p)-1, some holding cached virtual pages and some empty. Virtual pages (VPs) are stored on disk; cached ones also reside in DRAM physical pages (PPs).]

DRAM Cache Organization
 DRAM cache organization driven by the enormous miss penalty
   DRAM is about 10x slower than SRAM
   Disk is about 10,000x slower than DRAM
 Consequences
   Large page (block) size: typically 4-8 KB, sometimes 4 MB
   Fully associative: any VP can be placed in any PP
     Requires a “large” mapping function – different from CPU caches
   Highly sophisticated, expensive replacement algorithms
     Too complicated and open-ended to be implemented in hardware
   Write-back rather than write-through