
CS162 Operating Systems and Systems Programming Lecture 14
Memory 2: Virtual Memory (Con’t), Caching and TLBs
March 8th, 2022, Profs. Joseph and Kubiatowicz, http://cs162.eecs.Berkeley.edu


Recall: General Address Translation
(Figure: the CPU issues virtual addresses; the translation box converts them into the physical addresses seen by memory; untranslated reads or writes go directly to physical memory.)
• Consequently, two views of memory:
– View from the CPU (what program sees, virtual memory)
– View from memory (physical memory)
– Translation box (Memory Management Unit or MMU) converts between the two views
• Translation ⇒ much easier to implement protection!
– If task A cannot even gain access to task B’s data, no way for A to adversely affect B
– Extra benefit: every program can be linked/loaded into same region of user address space

Recall: Multi-Segment Model
(Figure: the virtual address is split into a segment # and an offset; the segment map holds base/limit pairs plus V/N valid bits; the valid and limit checks raise an access error on failure, and base + offset forms the physical address.)
• Segment map resides in processor
– Segment number mapped into base/limit pair
– Base added to offset to generate physical address
– Error check catches offset out of range
• As many chunks of physical memory as entries
– Segment addressed by portion of virtual address
– However, could be included in instruction instead:
» x86 Example: mov [es:bx],ax.
• What is “V/N” (valid / not valid)?
– Can mark segments as invalid; requires check as well

What if not all segments fit in memory?
• Extreme form of Context Switch: Swapping
– To make room for next process, some or all of the previous process is moved to disk
» Likely need to send out complete segments
– This greatly increases the cost of context-switching
• What might be a desirable alternative?
– Some way to keep only active portions of a process in memory at any one time
– Need finer granularity control over physical memory

Problems with Segmentation
• Must fit variable-sized chunks into physical memory
• May move processes multiple times to fit everything
• Limited options for swapping to disk
• Fragmentation: wasted space
– External: free gaps between allocated chunks
– Internal: don’t need all memory within allocated chunks

Recall: General Address Translation
(Figure: Virtual Address Space 1 and Virtual Address Space 2 each have their own Translation Map into a shared Physical Address Space, which also holds the OS heap & stacks.)

Paging: Physical Memory in Fixed Size Chunks
• Solution to fragmentation from segments?
– Allocate physical memory in fixed size chunks (“pages”)
– Every chunk of physical memory is equivalent
» Can use simple vector of bits to handle allocation:
00110001110001101 … 110010
» Each bit represents page of physical memory
1 ⇒ allocated, 0 ⇒ free
• Should pages be as big as our previous segments?
– No: Can lead to lots of internal fragmentation
» Typically have small pages (1K-16K)
– Consequently: need multiple pages/segment
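As a rough illustration of the bit-vector allocation mentioned above, here is a minimal C sketch; the names (frame_bitmap, NFRAMES, alloc_frame) are made up for this example, not from the lecture:

    #include <stdint.h>

    #define NFRAMES  (1 << 20)                 /* e.g. 4 GB of RAM / 4 KB pages */
    static uint8_t frame_bitmap[NFRAMES / 8];  /* 1 bit per physical page: 1 = allocated */

    /* Find a free physical page, mark it allocated, return its frame number (-1 if none). */
    static long alloc_frame(void) {
        for (long f = 0; f < NFRAMES; f++) {
            if (!(frame_bitmap[f / 8] & (1 << (f % 8)))) {
                frame_bitmap[f / 8] |= (1 << (f % 8));
                return f;
            }
        }
        return -1;                             /* out of physical memory */
    }

    /* Return a physical page to the free pool. */
    static void free_frame(long f) {
        frame_bitmap[f / 8] &= ~(1 << (f % 8));
    }

Because every page is the same size, allocation is just “find a clear bit”; there is no external fragmentation to manage.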

How to Implement Simple Paging?
(Figure: the virtual address is split into a virtual page # and an offset; the PageTablePtr and PageTableSize registers locate the page table; an out-of-bounds page # raises an access error, the PTE permissions are checked (access error on failure), and the physical page # plus the offset form the physical address.)
• Page Table (One per process)
– Resides in physical memory
– Contains physical page and permission for each virtual page (e.g. Valid bits, Read, Write, etc.)
• Virtual address mapping
– Offset from Virtual address copied to Physical Address
» Example: 10 bit offset ⇒ 1024-byte pages
– Virtual page # is all remaining bits
» Example for 32-bits: 32-10 = 22 bits, i.e. 4 million entries
» Physical page # copied from table into physical address
– Check Page Table bounds and permissions
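A minimal C sketch of this lookup, assuming 10-bit offsets (1024-byte pages) and a flat page table; the PTE field names are illustrative, not real hardware:

    #include <stdint.h>
    #include <stdbool.h>

    #define OFFSET_BITS 10                       /* 1024-byte pages, as in the example */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    typedef struct {
        uint32_t ppn   : 20;                     /* physical page number */
        uint32_t valid : 1;
        uint32_t write : 1;
    } pte_t;

    /* Translate a virtual address; returns false on a bounds or permission error. */
    bool translate(pte_t *page_table, uint32_t table_size,
                   uint32_t vaddr, bool is_write, uint32_t *paddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);

        if (vpn >= table_size)      return false;   /* beyond PageTableSize */
        pte_t pte = page_table[vpn];
        if (!pte.valid)             return false;   /* invalid page */
        if (is_write && !pte.write) return false;   /* permission violation */

        *paddr = ((uint32_t)pte.ppn << OFFSET_BITS) | offset;
        return true;
    }

Note how the offset passes through untouched; only the page number is remapped.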

Simple Page Table Example
Example (4-byte pages): the figure walks a few virtual addresses (e.g. 0x04 = 0000 0100) from virtual memory through the page table, combining the looked-up physical page # with the offset to locate the same data in physical memory.

What about Sharing?
(Figure: a virtual address from Process A is translated via PageTablePtrA, and a virtual address from Process B via PageTablePtrB; both page tables contain an entry pointing to the same shared page, so this physical page appears in the address space of both processes.)

Where is page sharing used ?
• The “kernel region” of every process has the same page table entries
– The process cannot access it at user level
– But on a U→K switch, kernel code can access it as well as the region for this user
» What does the kernel need to do to access other user processes?
• Different processes running same binary!
– Execute-only, but do not need to duplicate code segments
• User-level system libraries (execute only)
• Shared-memory segments between different processes
– Can actually share objects directly between processes
» Must map page into same place in address space!
– This is a limited form of the sharing that threads have within a single process
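For the shared-memory case, two unrelated processes can ask the OS to map the same physical pages. A minimal POSIX sketch (the object name "/demo_region" is just an example, and error checks are omitted):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Both processes open the same named object; the kernel backs it with
           the same physical pages and installs them in each page table. */
        int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                       /* one page */

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        p[0] = 'x';                                /* visible to the other process */

        munmap(p, 4096);
        close(fd);
        return 0;
    }

If the processes want to exchange pointers into the region, they would map it at the same fixed address in both address spaces (the “same place in address space” point above) rather than letting the kernel pick.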

Recall: Memory Layout for Linux 32-bit (Pre-Meltdown patch!)
http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png

Some simple security measures
• Address Space Randomization
– Position-Independent Code ⇒ can place user code anywhere in address space
» Random start address makes it much harder for an attacker to cause a jump to the code it seeks to execute
– Stack & Heap can start anywhere, so randomize placement
• Kernel address space isolation
– Don’t map whole kernel space into each process, switch to kernel page table
– Meltdown ⇒ map none of kernel into user mode!

Summary: Paging
(Figure: the virtual memory view, with code, stack, and other regions and addresses split into page # and offset, is mapped by a flat page table, most of whose entries are null, onto scattered pages of the physical memory view.)

Summary: Paging
(Figure: the same paging example as the previous slide.)
What happens if the stack grows to 1110 0000?

Summary: Paging
(Figure: the same paging example.)
Allocate new pages where there is room: the page table gains entries for the grown stack, pointing at whatever physical pages happen to be free.

How big do things get?
• 32-bit address space ⇒ 2^32 bytes (4 GB)
– Note: “b” = bit, and “B” = byte
• Typical page size: 4 KB
– How many bits of the address is that? (remember 2^10 = 1024)
– Answer: 4 KB = 4×2^10 = 2^12 ⇒ 12 bits of the address
• So how big is the simple page table for each process?
– 2^32/2^12 = 2^20 (that's about a million entries) × 4 bytes each ⇒ 4 MB
– When 32-bit machines got started (VAX 11/780, Intel 80386), 16 MB was a LOT of memory
• How big is a simple page table on a 64-bit processor (x86_64)?
– 2^64/2^12 = 2^52 (that's 4.5×10^15, i.e. 4.5 peta-entries) × 8 bytes each =
36×10^15 bytes, or 36 petabytes!!!! This is a ridiculous amount of memory!
– This is really a lot of space, for only the page table!!!
• The address space is sparse, i.e. it has holes that are not mapped to physical memory
– So, most of this space is taken up by page tables mapped to nothing
– And for memory:
» “K” (kilo) = 2^10 = 1024 ≈ 10^3 (but not quite!): sometimes called “Ki” (kibi)
» “M” (mega) = 2^20 = (1024)^2 = 1,048,576 ≈ 10^6 (but not quite!): sometimes called “Mi” (mebi)
» “G” (giga) = 2^30 = (1024)^3 = 1,073,741,824 ≈ 10^9 (but not quite!): sometimes called “Gi” (gibi)
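A small C check of the page-table-size arithmetic above (assuming 4 KB pages and the 4-byte / 8-byte entry sizes used on this slide):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const unsigned offset_bits = 12;                       /* 4 KB pages */
        uint64_t entries32 = 1ull << (32 - offset_bits);       /* 2^20 entries */
        uint64_t entries64 = 1ull << (64 - offset_bits);       /* 2^52 entries */

        printf("32-bit flat table: %llu entries * 4 B = %llu MiB\n",
               (unsigned long long)entries32,
               (unsigned long long)((entries32 * 4) >> 20));   /* 4 MiB */
        printf("64-bit flat table: %llu entries * 8 B = %llu PiB\n",
               (unsigned long long)entries64,
               (unsigned long long)((entries64 * 8) >> 50));   /* 32 PiB ~ 36x10^15 B */
        return 0;
    }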

Page Table Discussion
• What needs to be switched on a context switch?
– Page table pointer and limit
• What provides protection here?
– Translation (per process) and dual-mode!
– Can’t let process alter its own page table!
• Analysis
– Pros
» Simple memory allocation
» Easy to share
– Con: What if address space is sparse?
» E.g., on UNIX, code starts at 0, stack starts at (2^31 - 1)
» With 1K pages, need 2 million page table entries!
– Con: What if the table is really big?
» Not all pages used all the time ⇒ would be nice to have working set of page table in memory
• Simple Page table is way too big!
– Does it all need to be in memory?
– How about multi-level paging?
– Or combining paging and segmentation?

How to Structure a Page Table
• Page Table is a map (function) from VPN to PPN
(Figure: Virtual Address → Page Table → Physical Address)
• Simple page table corresponds to a very large lookup table
– VPN is index into table, each entry contains PPN
• What other map structures can you think of?
– Trees?
– Hash Tables?

Fix for sparse address space: The two-level page table
(Figure: the virtual address is split into a Virtual P1 index, a Virtual P2 index, and an offset; PageTablePtr locates the first-level table, whose entry locates the second-level table, whose entry supplies the physical page #; physical address = physical page # | offset.)
• Tree of Page Tables
– “Magic” 10b-10b-12b pattern!
• Tables fixed size (1024 entries)
– On context-switch: save single PageTablePtr register (i.e. CR3)
• Valid bits on Page Table Entries
– Don’t need every 2nd-level table
– Even when they exist, 2nd-level tables can reside on disk if not in use
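A hedged C sketch of the 10-10-12 two-level walk; field names and the phys_to_virt helper are illustrative stand-ins, not real hardware or a real kernel API:

    #include <stdint.h>
    #include <stdbool.h>

    #define PTE_VALID 0x1u

    /* One 4 KB table = 1024 four-byte entries at each level. */
    typedef uint32_t pte_t;

    /* phys_to_virt() stands in for however the kernel reaches physical memory. */
    extern void *phys_to_virt(uint32_t paddr);

    bool translate2(uint32_t page_table_ptr, uint32_t vaddr, uint32_t *paddr) {
        uint32_t p1     = (vaddr >> 22) & 0x3FF;   /* top 10 bits  */
        uint32_t p2     = (vaddr >> 12) & 0x3FF;   /* next 10 bits */
        uint32_t offset =  vaddr        & 0xFFF;   /* low 12 bits  */

        pte_t *level1 = phys_to_virt(page_table_ptr);
        if (!(level1[p1] & PTE_VALID)) return false;    /* no 2nd-level table */

        pte_t *level2 = phys_to_virt(level1[p1] & ~0xFFFu);
        if (!(level2[p2] & PTE_VALID)) return false;    /* page not present   */

        *paddr = (level2[p2] & ~0xFFFu) | offset;
        return true;
    }

An unmapped 4 MB region costs only one invalid first-level entry instead of 1024 flat-table entries.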

Example: x86 classic 32-bit address translation
• Intel terminology: Top-level page table called a “Page Directory”
– With “Page Directory Entries”
• CR3 provides physical address of the page directory
– This is what we have called the “PageTablePtr” in previous slides
– Change in CR3 changes the whole translation table!

What is in a Page Table Entry (PTE)?
• What is in a Page Table Entry (or PTE)?
– Pointer to next-level page table or to actual page
– Permission bits: valid, read-only, read-write, write-only
• Example: Intel x86 architecture PTE:
– Address format same as previous slide (10, 10, 12-bit offset)
– Intermediate page tables called “Directories”
Bits 31–12: Page Frame Number; bits 11–9: free for OS use; bits 8–0: flag bits (defined below)
P: Present (same as “valid” bit in other architectures)
W: Writeable
U: User accessible
PWT: Page write transparent: external cache write-through
PCD: Page cache disabled (page cannot be cached)
A: Accessed: page has been accessed recently
D: Dirty (PTE only): page has been modified recently
PS: Page Size: PS=1 ⇒ 4 MB page (directory only). Bottom 22 bits of virtual address serve as offset
Page Frame Number (Physical Page Number)
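These bit positions can be written as C masks; a sketch (the macro names are ours, the bit positions follow the slide):

    #include <stdint.h>

    typedef uint32_t pte_t;

    #define PTE_P    (1u << 0)   /* Present (valid)                      */
    #define PTE_W    (1u << 1)   /* Writeable                            */
    #define PTE_U    (1u << 2)   /* User accessible                      */
    #define PTE_PWT  (1u << 3)   /* Page write transparent               */
    #define PTE_PCD  (1u << 4)   /* Page cache disabled                  */
    #define PTE_A    (1u << 5)   /* Accessed recently                    */
    #define PTE_D    (1u << 6)   /* Dirty: modified recently             */
    #define PTE_PS   (1u << 7)   /* Page size: 1 => 4 MB page (dir only) */
    #define PTE_PFN(pte)  ((pte) & 0xFFFFF000u)   /* bits 31-12: physical page */

    /* Build a user-visible, writeable mapping to physical frame paddr. */
    static inline pte_t make_pte(uint32_t paddr) {
        return (paddr & 0xFFFFF000u) | PTE_P | PTE_W | PTE_U;
    }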

Examples of how to use a PTE
• How do we use the PTE?
– Invalid PTE can imply different things:
» Region of address space is actually invalid or
» Page/directory is just somewhere other than memory
– Validity checked first
» OS can use other (say) 31 bits for location info
• Usage Example: Demand Paging
– Keep only active pages in memory
– Place others on disk and mark their PTEs invalid
• Usage Example: Copy on Write
– UNIX fork gives copy of parent address space to child
» Address spaces disconnected after child created
– How to do this cheaply?
» Make copy of parent’s page tables (point at same memory)
» Mark entries in both sets of page tables as read-only
» Page fault on write creates two copies
• Usage Example: Zero Fill On Demand
– New data pages must carry no information (say, be zeroed)
– Mark PTEs as invalid; page fault on use gets a zeroed page
– Often, OS creates zeroed pages in background
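A hedged C sketch of the copy-on-write write-fault path; helpers such as alloc_frame, phys_to_virt, and flush_tlb_page are placeholders for whatever a real kernel provides, not an actual API:

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define PTE_P  (1u << 0)
    #define PTE_W  (1u << 1)

    extern uint32_t  alloc_frame(void);            /* new physical page          */
    extern void     *phys_to_virt(uint32_t paddr); /* kernel mapping of a frame  */
    extern void      flush_tlb_page(uint32_t vaddr);

    /* Called on a write fault to a page that was marked read-only for COW. */
    void cow_fault(uint32_t *pte, uint32_t vaddr) {
        uint32_t old_frame = *pte & ~0xFFFu;
        uint32_t new_frame = alloc_frame();

        /* Copy the shared page, then point this process at its private copy. */
        memcpy(phys_to_virt(new_frame), phys_to_virt(old_frame), PAGE_SIZE);
        *pte = new_frame | PTE_P | PTE_W;          /* now writeable */
        flush_tlb_page(vaddr);                     /* drop the stale translation */
        /* (A real kernel would also drop a reference count on old_frame.) */
    }

Reads never fault, so pages that are only read after fork are never copied at all.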

Sharing with multilevel page tables
(Figure: virtual address = Virtual P1 index (10 bits) | Virtual P2 index (10 bits) | offset (12 bits); two roots, PageTablePtr and PageTablePtr’, have first-level entries that point to the same second-level table, so both address spaces map the same physical pages.)
• Entire regions of the address space can be efficiently shared

Summary: Two-Level Paging
(Figure: the virtual memory view is translated through a level-1 page table whose non-null entries point to four-entry level-2 page tables (entries 11, 10, 01, 00), which map pages into the physical memory view; the virtual address is split into page1 #, page2 #, and offset.)

Summary: Two-Level Paging
(Figure: the same two-level example; in the physical memory view, the stack pages appear around 1000 0000 (0x80) and 1001 0000 (0x90).)

Multi-level Translation: Segments + Pages
• What about a tree of tables?
– Lowest level page table ⇒ memory still allocated with bitmap
– Higher levels often segmented
• Could have any number of levels. Example (top segment):
(Figure: the virtual address is split into a virtual seg #, a virtual page #, and an offset; the segment map (with V/N bits) selects a page table, whose entry supplies the physical page #; permissions are checked (access error on failure), and physical address = physical page # | offset.)
• What must be saved/restored on context switch?
– Contents of top-level segment registers (for this example)
– Pointer to top-level table (page table)
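A hedged C sketch of this top-segment + page-table scheme; the structure names and bit-field split are illustrative assumptions, not a specific architecture:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool      valid;        /* the V/N bit                        */
        uint32_t *page_table;   /* base of this segment's page table  */
        uint32_t  num_pages;    /* limit, in pages                    */
    } seg_entry_t;

    /* Example split: seg# (2 bits) | page# (18 bits) | offset (12 bits). */
    bool translate_seg_page(seg_entry_t segmap[4], uint32_t vaddr, uint32_t *paddr) {
        uint32_t seg    = (vaddr >> 30) & 0x3;
        uint32_t vpn    = (vaddr >> 12) & 0x3FFFF;
        uint32_t offset =  vaddr        & 0xFFF;

        if (!segmap[seg].valid || vpn >= segmap[seg].num_pages)
            return false;                                /* access error */

        uint32_t pte = segmap[seg].page_table[vpn];
        if (!(pte & 0x1))                                /* valid/permission check */
            return false;                                /* access error */

        *paddr = (pte & ~0xFFFu) | offset;
        return true;
    }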

What about Sharing (Complete Segment)?
(Figure: Process A and Process B each translate virtual seg # | virtual page # through their own segment maps (with V/N bits); one entry in each map points to the same page table, so the entire shared segment appears in both address spaces.)

Multi-level Translation Analysis
• Pros:
– Only need to allocate as many page table entries as we need for application
» In other words, sparse address spaces are easy
– Easy memory allocation
– Easy sharing
» Share at segment or page level (need additional reference counting)
• Cons:
– One pointer per page (typically 4K – 16K pages today)
– Page tables need to be contiguous
» However, the 10b-10b-12b configuration keeps tables to exactly one page in size
– Two (or more, if >2 levels) lookups per reference
» Seems very expensive!

Recall: Dual-Mode Operation
• Can a process modify its own translation tables? NO!
– If it could, could get access to all of physical memory (no protection!)
• To Assist with Protection, Hardware provides at least two modes (Dual-Mode Operation):
– “Kernel” mode (or “supervisor” or “protected”)
– “User” mode (Normal program mode)
– Mode set with bit(s) in control register only accessible in Kernel mode
– Kernel can easily switch to user mode; user program must invoke an exception of some sort to get back to kernel mode (more in a moment)
• Note that x86 model actually has more modes:
– Traditionally, four “rings” representing priority; most OSes use only two:
» Ring 0 ⇒ Kernel mode, Ring 3 ⇒ User mode
» Called “Current Privilege Level” or CPL
– Newer processors have additional mode for hypervisor (“Ring -1”)
• Certain operations restricted to Kernel mode:
– Modifying page table base (CR3 in x86), and segment descriptor tables
» Have to transition into Kernel mode before you can change them!
– Also, all page-table pages must be mapped only in kernel mode

Making it real: X86 Memory model with segmentation (16/32-bit)
• Segment selector comes from the instruction: mov eax, gs(0x0)
• The combined address is a 32-bit “linear” virtual address
• The linear address is translated by a 2-level page table in the 10-10-12 bit pattern
– First level called the “directory”, second level called the “table”

X86 Segment Descriptors (32-bit Protected Mode)
• Segments are implicit in the instruction (e.g. code segments) or part of the instruction
– There are 6 registers: SS, CS, DS, ES, FS, GS
• What is in a segment register?
– A pointer to the actual segment description: segment selector [13 bits] | G/L [1 bit] | RPL [2 bits]
– G/L selects between GDT and LDT tables (global vs. local descriptor tables)
– RPL: Requestor’s Privilege Level (RPL of CS ⇒ Current Privilege Level)
• Two registers, the GDTR and LDTR, hold the locations of the global and local descriptor tables in memory
