
Recall: The two-level page table
[Figure: Virtual Address = Virtual P1 index | Virtual P2 index | Offset; PageTablePtr selects the 1st-level table; result is Physical Address = Physical Page # | Offset]
• Tree of Page Tables
– “Magic” 10b-10b-12b pattern! (see the field-split sketch below)
• Tables fixed size (1024 entries)
– On context-switch: save single PageTablePtr register (i.e. CR3)
• Valid bits on Page Table Entries
– Don’t need every 2nd-level table
– Even when they exist, 2nd-level tables can reside on disk if not in use
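To make the 10b-10b-12b split concrete, here is a minimal field-extraction sketch in C; the macro names are illustrative, not from the lecture:

```c
#include <stdint.h>

/* Split a 32-bit virtual address per the 10b-10b-12b pattern
 * (macro names are illustrative). */
#define P1_INDEX(va)    (((va) >> 22) & 0x3FF)  /* top 10 bits: 1st-level index */
#define P2_INDEX(va)    (((va) >> 12) & 0x3FF)  /* next 10 bits: 2nd-level index */
#define PAGE_OFFSET(va) ((va) & 0xFFF)          /* low 12 bits: offset in 4 KiB page */
```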

How is the Translation Accomplished?
[Figure: the MMU sits between the processor's Virtual Addresses and memory's Physical Addresses]
• The MMU must translate virtual address to physical address on:
– Every instruction fetch
– Every load
– Every store
• What does the MMU need to do to translate an address?
– 1-level Page Table
» Read PTE from memory, check valid, merge address
» Set “accessed” bit in PTE; set “dirty” bit on write
– 2-level Page Table
» Read and check first level
» Read, check, and update PTE
– N-level Page Table …
• MMU does a page-table tree traversal to translate each address (a software sketch of the walk follows)
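A rough software rendering of that traversal for the two-level case — a sketch assuming 4 KiB pages, 32-bit PTEs with a low valid bit, identity-mapped physical memory, and an illustrative pte_to_frame() helper (not a specific ISA):

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_VALID 0x1u

/* Frame address assumed to live in the PTE's upper 20 bits. */
static uint32_t pte_to_frame(uint32_t pte) { return pte & ~0xFFFu; }

bool translate(const uint32_t *l1_table, uint32_t va, uint32_t *pa) {
    uint32_t l1 = l1_table[(va >> 22) & 0x3FF];        /* read + check 1st level */
    if (!(l1 & PTE_VALID)) return false;               /* FAULT */
    const uint32_t *l2_table =
        (const uint32_t *)(uintptr_t)pte_to_frame(l1); /* assumes identity map */
    uint32_t l2 = l2_table[(va >> 12) & 0x3FF];        /* read + check 2nd level */
    if (!(l2 & PTE_VALID)) return false;               /* FAULT */
    *pa = pte_to_frame(l2) | (va & 0xFFF);             /* merge frame + offset */
    return true;                        /* (accessed/dirty-bit updates omitted) */
}
```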

Where and What is the MMU ?
• The processor requests READ <Virtual Address> from the memory system
– Through the MMU to the cache (and on to memory)
• Some time later, the memory system responds with the data stored at the physical address resulting from the virtual → physical translation
– Fast on a cache hit, slow on a miss
• So what is the MMU doing?
• On every reference (I-fetch, Load, Store), it reads (multiple levels of) page table entries to get the physical frame, or FAULTs
– Through the caches to the memory
– Then reads/writes the physical location
[Figure: Processor (core) → MMU → cache → Memory Bus → Physical Memory; the page tables live in Physical Memory, and the MMU's reads of them traverse the same path]

Recall: CS61c Caching Concept
• Cache: a repository for copies that can be accessed more quickly than the original
– Make frequent case fast and infrequent case less dominant
• Caching underlies many techniques used today to make computers fast
– Can cache: memory locations, address translations, pages, file blocks, file names, network routes, etc…
• Only good if:
– Frequent case frequent enough, and
– Infrequent case not too expensive
• Important measure: Average Access Time = (Hit Rate × Hit Time) + (Miss Rate × Miss Time)

Recall: In Machine Structures (e.g., 61C) …
• Caching is the key to memory system performance
[Figure: Processor ↔ Cache (SRAM, ~1 ns access time) ↔ Main Memory (DRAM, ~100 ns access time)]
• Average Memory Access Time (AMAT) = (Hit Rate × Hit Time) + (Miss Rate × Miss Time), where Hit Rate + Miss Rate = 1
– A miss costs the 1 ns cache check plus the 100 ns DRAM access = 101 ns:
– Hit Rate = 90% ⇒ AMAT = (0.9 × 1) + (0.1 × 101) = 11 ns
– Hit Rate = 99% ⇒ AMAT = (0.99 × 1) + (0.01 × 101) = 2.01 ns
• MissTime_L1 includes HitTime_L1 + MissPenalty_L1 ≡ HitTime_L1 + AMAT_L2 (see the closed form below)
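Substituting MissTime = HitTime + AMAT_L2 into the AMAT formula gives a convenient closed form (just a restatement of the slide's numbers):

```latex
\begin{aligned}
\mathrm{AMAT} &= \mathrm{HitRate}\cdot\mathrm{HitTime}
               + \mathrm{MissRate}\cdot(\mathrm{HitTime}+\mathrm{AMAT}_{L2})\\
              &= \mathrm{HitTime} + \mathrm{MissRate}\cdot\mathrm{AMAT}_{L2}
\end{aligned}
```

For the 90% hit-rate case: 1 ns + 0.1 × 100 ns = 11 ns, matching the value above.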

Another Major Reason to Deal with Caching
[Figure: Virtual Address = Virtual Seg # | Virtual Page # | Offset, looked up through page tables with V, R, W permission checks; a failed check raises an Access Error, a successful one yields the Physical Address = Physical Page # | Offset]
• Cannot afford to translate on every access
– At least three DRAM accesses per actual DRAM access
– Or: perhaps I/O if page table partially on disk!
• Even worse: What if we are using caching to make memory access faster than DRAM access?
• Solution? Cache translations!
– Translation Cache: TLB (“Translation Lookaside Buffer”)

Why Does Caching Help? Locality!
[Figure: probability of reference vs. address, concentrated around recently used regions of the address space 0 … 2^n − 1]
• Temporal Locality (Locality in Time):
– Keep recently accessed data items closer to processor
• Spatial Locality (Locality in Space):
– Move contiguous blocks to the upper levels
[Figure: blocks move to/from the processor between Upper Level Memory (closer, faster) and Lower Level Memory]

Recall: Memory Hierarchy
• Caching: Take advantage of the principle of locality to:
– Present the illusion of having as much memory as in the cheapest technology
– Provide average speed similar to that offered by the fastest technology
[Figure: memory hierarchy — Processor → L1 Cache → L2 Cache → L3 Cache (shared) → Main Memory (DRAM) → Secondary Storage (SSD) → Secondary Storage (disk). Speed (ns): 0.3, 1, 3, 10-30, 100, 100,000 (0.1 ms), 10,000,000 (10 ms). Size (bytes): 100Bs, 10kBs, 100kBs, MBs, GBs, and up. Address translation needs to occur between the processor and the caches; the page table lives in main memory (perhaps cached)]

How do we make Address Translation Fast?
• Cache results of recent translations!
– Different from a traditional cache
– Cache Page Table Entries using Virtual Page # as the key
[Figure: Processor (core) with an MMU that caches recent translations (V_Pg M1, V_Pg M2, …, V_Pg Mk → physical frames), sitting in front of the Memory Bus and Physical Memory, which holds the page tables]

Translation Look-Aside Buffer
• Record recent Virtual Page # to Physical Frame # translations
• If present, we have the physical address without reading any of the page tables!
– Even if the translation involved multiple levels
– Caches the end-to-end result
• Invented by Sir Maurice Wilkes, prior to caches
– When you come up with a new concept, you get to name it!
– People realized “if it’s good for page tables, why not the rest of the data in memory?”
• On a TLB miss, the page tables may themselves be cached, so we only go all the way to memory when both miss (a minimal lookup sketch follows)
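A sketch of the lookup the TLB hardware performs, assuming a 64-entry fully associative design, 4 KiB pages, and an illustrative entry layout:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64   /* illustrative size */

struct tlb_entry {
    bool     valid;
    uint32_t vpn;        /* virtual page number: the key */
    uint32_t pfn;        /* physical frame number: the end-to-end result */
};

static struct tlb_entry tlb[TLB_ENTRIES];

bool tlb_lookup(uint32_t va, uint32_t *pa) {
    uint32_t vpn = va >> 12;                    /* 4 KiB pages assumed */
    for (int i = 0; i < TLB_ENTRIES; i++) {     /* hardware compares in parallel */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pa = (tlb[i].pfn << 12) | (va & 0xFFF);
            return true;                        /* hit: zero page-table reads */
        }
    }
    return false;                               /* miss: walk the page tables */
}
```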

Caching Applied to Address Translation
[Figure: Virtual Address → Cached? If yes, use the saved Physical Address directly; if no, Translate (MMU) and Save Result → Data Read or Write proceeds to Physical Memory untranslated thereafter]
• Question is one of page locality: does it exist?
– Instruction accesses spend a lot of time on the same page (since accesses sequential)
– Stack accesses have definite locality of reference
– Data accesses have less page locality, but still some…
• Can we have a TLB hierarchy?
– Sure: multiple levels at different sizes/speeds

What kind of Cache for TLB?
[Figure: cache organization parameters — # of Sets (N), Set Size (k = associativity), line size (L)]
• Remember all those cache design parameters and trade-offs?
– Amount of Data = N × k × L
– Tag is the portion of the address that identifies the line (without the line offset)
– Write Policy (write-through, write-back); Eviction Policy (LRU, …)

How might organization of TLB differ from that of a conventional instruction or data cache?
• Let’s do some review …

A Summary on Sources of Cache Misses
• Compulsory (cold start or process migration, first reference): first access to a block
– “Cold” fact of life: not a whole lot you can do about it
– Note: If you are going to run “billions” of instructions, Compulsory Misses are insignificant
• Capacity:
– Cache cannot contain all blocks accessed by the program
– Solution: increase cache size
• Conflict (collision):
– Multiple memory locations mapped to the same cache location
– Solution 1: increase cache size
– Solution 2: increase associativity
• Coherence (Invalidation): other process (e.g., I/O) updates memory

How is a Block found in a Cache?
[Figure: address divided into Block Address (Tag | Index) and Block Offset; the Index is the Set Select, the Offset is the Data Select]
• Block is minimum quantum of caching
– Data select field used to select data within block
– Many caching applications don’t have data select field
• Index used to look up candidates in the Cache
– Index identifies the set
• Tag used to identify actual copy
– If no candidates match, then declare cache miss

Review: Direct Mapped Cache
• Direct Mapped 2^N byte cache:
– The uppermost (32 − N) bits are always the Cache Tag
– The lowest M bits are the Byte Select (Block Size = 2^M)
• Example: 1 KB Direct Mapped Cache with 32 B Blocks (field split worked out below)
– Index chooses potential block
– Tag checked to verify block
– Byte select chooses byte within block
[Figure: 32-bit address split into Cache Tag (22 bits), Cache Index (5 bits, e.g. 0x01), Byte Select (5 bits, e.g. 0x00); the cache data array holds Byte 0 … Byte 1023 in 32-byte blocks]
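For this 1 KB / 32 B configuration the boundaries work out to 5 byte-select bits, 5 index bits, and 22 tag bits; a small self-contained check (the address value is arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t addr = 0x1234ABCDu;                /* arbitrary example address */
    uint32_t byte_select = addr & 0x1Fu;        /* bits [4:0]:  within 32 B block */
    uint32_t index       = (addr >> 5) & 0x1Fu; /* bits [9:5]:  one of 32 blocks */
    uint32_t tag         = addr >> 10;          /* bits [31:10]: 22-bit tag */
    printf("tag=0x%06x index=0x%02x byte=0x%02x\n", tag, index, byte_select);
    return 0;
}
```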

Review: Set Associative Cache
• N-way set associative: N entries per Cache Index
– N direct mapped caches operate in parallel
• Example: Two-way set associative cache
– Cache Index selects a “set” from the cache
– Two tags in the set are compared to input in parallel
– Data is selected based on the tag result
[Figure: two-way set associative cache — address = Cache Tag | Cache Index | Byte Select; the Cache Index selects a set; two Cache Tag / Cache Data ways (each holding Cache Block 0, …) are compared in parallel, and the matching way supplies the Cache Block]

Review: Fully Associative Cache
• Fully Associative: Every block can hold any line
– Address does not include a cache index
– Compare Cache Tags of all Cache Entries in Parallel
• Example: Block Size=32B blocks
– We need N 27-bit comparators
– Still have byte select to choose from within block
[Figure: address = Cache Tag (27 bits long) | Byte Select (e.g. 0x01); every entry's tag is compared in parallel against the Cache Tag to select the Cache Data]

Where does a Block Get Placed in a Cache?
• Example: Block 12 placed in 8 block cache
[Figure: 32-block address space, blocks numbered 0–31; cache has 8 blocks]
• Direct mapped: block 12 can go only into cache block 4 (12 mod 8)
• Set associative (2-way, 4 sets): block 12 can go anywhere in set 0 (12 mod 4)
• Fully associative: block 12 can go anywhere among the 8 cache blocks (arithmetic spelled out below)
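The placement arithmetic from the example, spelled out as a trivial sketch:

```c
#include <stdio.h>

int main(void) {
    int block = 12;                                   /* the slide's example block */
    printf("direct mapped slot:  %d\n", block % 8);   /* 12 mod 8 = 4 */
    printf("2-way set (of four): %d\n", block % 4);   /* 12 mod 4 = set 0 */
    /* fully associative: block 12 may occupy any of the 8 cache blocks */
    return 0;
}
```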

Which block should be replaced on a miss?
• Easy for Direct Mapped: only one possibility
• Set Associative or Fully Associative:
– LRU (Least Recently Used)
– Random
Miss rates for a workload:

Size     2-way LRU  2-way Random  4-way LRU  4-way Random  8-way LRU  8-way Random
16 KB    5.2%       5.7%          4.7%       5.3%          4.4%       5.0%
64 KB    1.9%       2.0%          1.5%       1.7%          1.4%       1.5%
256 KB   1.15%      1.17%         1.13%      1.13%         1.12%      1.12%

Review: What happens on a write?
• Write through: The information is written to both the block in the cache and to the block in the lower-level memory
• Write back: The information is written only to the block in the cache
– Modified cache block is written to main memory only when it is replaced
– Question: is the block clean or dirty?
• Pros and Cons of each?
– WT:
» PRO: read misses cannot result in writes
» CON: Processor held up on writes unless writes are buffered
– WB:
» PRO: repeated writes not sent to DRAM; processor not held up on writes
» CON: more complex; a read miss may require writeback of dirty data (see the sketch below)
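A write-back bookkeeping sketch — the struct layout and the write_mem callback are illustrative assumptions, not a real cache controller:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 32

struct cache_line {
    bool     valid, dirty;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

/* On a store hit, write-back touches only the cache and marks the line dirty. */
void store_hit(struct cache_line *line, int offset, uint8_t byte) {
    line->data[offset] = byte;
    line->dirty = true;                      /* block now differs from DRAM */
}

/* On replacement, a dirty line must be flushed before it is reused —
 * this is the "read miss may require writeback" case. */
void evict(struct cache_line *line,
           void (*write_mem)(uint32_t tag, const uint8_t *data)) {
    if (line->valid && line->dirty)
        write_mem(line->tag, line->data);    /* write modified block to DRAM */
    line->valid = line->dirty = false;
}
```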

Physically-Indexed vs Virtually-Indexed Caches
• Physically-Indexed Caches
– Address handed to cache after translation
– Page Table holds physical addresses
– Benefits:
» Every piece of data has single place in cache
[Figure: virtual address → TLB → physical address → Cache (physically indexed)]
» Cache can stay unchanged on context switch
– Challenges:
» TLB is in critical path of lookup!
– Pretty Common today (e.g. x86 processors)
• Virtually-Indexed Caches
– Address handed to cache before translation
– Page Table holds virtual addresses (one option)
– Benefits:
» TLB not in critical path of lookup, so can be faster
[Figure: virtual address → Cache (virtually indexed), with translation to the physically addressed memory happening off the critical path]
– Challenges:
» Same data could be mapped in multiple places of cache
» May need to flush cache on context switch
• We will stick with Physically Addressed Caches for now!

What TLB Organization Makes Sense?
• Needs to be really fast
– Critical path of memory access
» In simplest view: before the cache
» Thus, this adds to access time (reducing cache speed)
– Seems to argue for Direct Mapped or Low Associativity
• However, needs to have very few conflicts!
– With a TLB, the Miss Time is extremely high! (page-table traversal)
– Cost of Conflict (Miss Time) is high
– Hit Time – dictated by clock cycle
• Thrashing: continuous conflicts between accesses
– What if use low order bits of virtual page number as index into TLB?
» First page of code, data, stack may map to same entry
» Need 3-way associativity at least?
– What if use high order bits as index?
» TLB mostly unused for small programs

TLB organization: include protection
• How big does TLB actually have to be?
– Usually small: 128-512 entries (larger now)
– Not very big, can support higher associativity
• Small TLBs usually organized as fully-associative cache
– Lookup is by Virtual Address
– Returns Physical Address + other info
• What happens when fully-associative is too slow?
– Put a small (4-16 entry) direct-mapped cache in front
– Called a “TLB Slice”
• Example for MIPS R3000:
Virtual Address  Physical Address  Dirty  Ref  Valid  Access  ASID
0xFA00           0x0003            Y      N    Y      R/W     34
0x0040           0x0010            N      Y    Y      R       0
0x0041           0x0011            N      Y    Y      R       0
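Rendered as a C struct shaped after the table above (field names and widths are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

struct r3000_style_tlb_entry {
    uint32_t vpn;      /* virtual page number: lookup key */
    uint32_t pfn;      /* physical frame number */
    bool     dirty;    /* page modified */
    bool     ref;      /* page referenced */
    bool     valid;    /* entry in use */
    uint8_t  access;   /* permission, e.g. R or R/W */
    uint8_t  asid;     /* address-space ID: which process owns the entry */
};
```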

Example: R3000 pipeline includes TLB “stages”
• MIPS R3000 Pipeline: Inst Fetch (TLB, I-Cache) → RF → Operation / E.A. (TLB) → D-Cache → WB
• TLB: 64 entries, on-chip, fully associative; software TLB fault handler
• TLB lookup key: ASID + V. Page Number
• Virtual Address Space:
– 0xx: User segment (caching based on PT/TLB entry)
– 100: Kernel physical space, cached
– 101: Kernel physical space, uncached
– 11x: Kernel virtual space
• Per-entry ASID allows context switching among 64 user processes without a TLB flush

Reducing translation time for physically-indexed caches
• As described, TLB lookup is in series with cache lookup
– Consequently, speed of TLB can impact speed of access to cache
• Machines with TLBs go one step further: overlap TLB lookup with cache access
[Figure: Virtual Address = V page no. | offset; TLB Lookup on the V page no. returns Access Rights and the P page no.; Physical Address = P page no. | offset]
– Works because offset available early
– Offset in virtual address exactly covers the “cache index” and “byte select”
– Thus can select the cached byte(s) in parallel with performing address translation

Overlapping TLB & Cache Access
• Here is how this might work with a 4K cache:
[Figure: the 20-bit virtual page number feeds the TLB's associative lookup while the low 12 bits (10-bit cache index + 2-bit byte select) index the cache in parallel; comparing the cache tag against the frame number returned by the TLB yields Data Hit / Miss]
• What if cache size is increased to 8KB?
– Overlap not complete
– Need to do something else. See CS152/252
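The constraint behind "overlap not complete" can be written out explicitly (a standard relation, not stated on the slide): the index and byte-select bits must fit within the page offset, so

```latex
\log_2\!\left(\frac{\text{cache size}}{\text{associativity}}\right)
\;\le\; \log_2(\text{page size})
\quad\Longleftrightarrow\quad
\text{cache size} \;\le\; \text{associativity}\times\text{page size}
```

With 4 KiB pages, a direct-mapped cache can be at most 4 KiB; an 8 KiB cache needs at least 2-way associativity for the overlap to remain exact.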
• Another option:Virtual Caches would make this faster
– Tags in cache are virtual addresses
– Translation only happens on cache misses

Current Intel x86 (Skylake, Cascade Lake)

Current Example: Memory Hierarchy
• Caches (all 64 B line size)
– L1 I-Cache: 32 KiB/core, 8-way set assoc.
– L1 D Cache: 32 KiB/core, 8-way set assoc., 4-5 cycles load-to-use,Write-back policy
– L2 Cache: 1 MiB/core, 16-way set assoc., Inclusive,Write-back policy, 14 cycles latency
– L3 Cache: 1.375 MiB/core, 11-way set assoc., shared across cores, Non-inclusive victim cache,Write-back policy, 50-70 cycles latency
– L1 ITLB, 128 entries; 8-way set assoc. for 4 KB pages
» 8 entries per thread; fully associative, for 2 MiB / 4 MiB pages
– L1 DTLB: 64 entries; 4-way set associative for 4 KB pages
» 32 entries; 4-way set associative, for 2 MiB / 4 MiB page translations
» 4 entries; 4-way associative, for 1 GiB page translations
– L2 STLB: 1536 entries; 12-way set assoc., 4 KiB + 2 MiB pages
» 16 entries; 4-way set associative, for 1 GiB page translations

What happens on a Context Switch?
• Need to do something, since TLBs map virtual addresses to physical addresses
– Address Space just changed, so TLB entries no longer valid!
• Options?
– Invalidate TLB: simple but might be expensive
» What if switching frequently between processes?
– Include ProcessID in TLB
» This is an architectural solution: needs hardware
• What if translation tables change?
– For example, to move page from memory to disk or vice versa…
– Must invalidate TLB entry!
» Otherwise, might think that page is still in memory!
– Called “TLB Consistency”
• Aside: with Virtually-Indexed cache, need to flush cache!
– Remember, everyone has their own version of the address “0”! (a context-switch sketch follows)
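The two context-switch options above, side by side as a sketch; the hardware-interface functions (load_page_table_base, set_current_asid, tlb_flush_all) are assumptions, not a real API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed hardware interface (illustrative declarations only): */
void load_page_table_base(uintptr_t root);  /* e.g. write CR3 on x86 */
void set_current_asid(uint8_t asid);        /* if TLB entries are ASID-tagged */
void tlb_flush_all(void);                   /* invalidate every TLB entry */

struct proc { uintptr_t page_table_root; uint8_t asid; };

void switch_address_space(struct proc *next, bool hw_has_asid) {
    load_page_table_base(next->page_table_root);
    if (hw_has_asid)
        set_current_asid(next->asid);  /* architectural solution: no flush */
    else
        tlb_flush_all();               /* simple, but expensive if switching often */
}
```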

Putting Everything Together: Address Translation
[Figure: Virtual Address = Virtual P1 index | Virtual P2 index | Offset. PageTablePtr selects the PageTable (1st level); the P1 index selects an entry pointing to a PageTable (2nd level); the P2 index selects the PTE holding the Physical Page #. Physical Address = Physical Page # | Offset into Physical Memory]

Putting Everything Together: TLB
[Figure: same two-level walk, with a TLB in front — the TLB is looked up on the virtual page number (Virtual P1 index, Virtual P2 index); on a hit it supplies the Physical Page # directly, skipping both page-table accesses]

Putting Everything Together: Cache
[Figure: the Physical Address produced by the TLB or the two-level walk is split into cache tag, index, and byte offset; the index selects a cache entry (tag: block:), the tag is compared, and on a hit the block supplies the data from Physical Memory]

Page Fault
• The Virtual-to-Physical Translation fails
– PTE marked invalid, Privilege Level Violation, Access Violation, or PTE does not exist
– Causes a Fault / Trap
» Not an interrupt because synchronous to instruction execution
– May occur on instruction fetch or data access
– Protection violations typically terminate the instruction
• Other Page Faults engage the operating system to fix the situation and retry the instruction
– Allocate an additional stack page, or
– Make the page resident, e.g. by bringing it in from disk (a handler sketch follows)
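A hypothetical handler flow tying these cases together; every helper here (find_region, allowed, allocate_zeroed_page, read_page_from_disk, …) is an assumed name, not a real kernel API:

```c
#include <stdbool.h>
#include <stdint.h>

struct vm_region;   /* opaque: one mapped range of the address space */

/* Assumed helpers (illustrative declarations only): */
struct vm_region *find_region(uint32_t va);
bool allowed(struct vm_region *r, int access_type);
bool is_stack_region(struct vm_region *r);
void allocate_zeroed_page(uint32_t va);
void read_page_from_disk(struct vm_region *r, uint32_t va);
void update_pte_and_invalidate_tlb(uint32_t va);
void deliver_protection_fault(void);
void retry_faulting_instruction(void);

void handle_page_fault(uint32_t faulting_va, int access_type) {
    struct vm_region *r = find_region(faulting_va);
    if (r == NULL || !allowed(r, access_type)) {
        deliver_protection_fault();          /* protection violations terminate */
        return;
    }
    if (is_stack_region(r))
        allocate_zeroed_page(faulting_va);   /* grow the stack by one page */
    else
        read_page_from_disk(r, faulting_va); /* make the page resident */
    update_pte_and_invalidate_tlb(faulting_va);
    retry_faulting_instruction();            /* restart at the faulting PC */
}
```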
