Operating Systems Lecture 7b
Dr Ronald Grau, School of Engineering and Informatics, Spring term 2020
Previously: Memory management
Addressing and address spaces
Partitioning and segmentation
Today: Memory management
Virtual memory
Paging
Recap: Questions
1. What is a logical address?
2. What are the two main purposes of the memory management unit?
3. Why do we have to bind or translate addresses?
4. When is address-binding during run-time mandatory?
5. Which of the techniques we know performs non-contiguous memory allocation?
6. How is access to a process’ memory partition protected?
7. What is segmentation?
8. What is internal fragmentation and when does it occur?
9. What is external fragmentation and when does it occur?
Virtual memory – Objectives
Hide physical memory
Memory protection
Illusion of unbounded memory
Logical address space
Partitioning/segmentation
Problem: Limited size of processes/segments – overlays required
Solution: Paging – we load processes only partially into memory
Paging – Principles
Physical memory divided into frames of equal size (a power of 2, e.g. 4096 bytes)
Process image divided into pages of the same size
Pages loaded into frames (in physical memory)
Secondary (swap) storage for pages that are not in main memory
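As a rough sketch (assuming the 4096-byte pages mentioned above), a logical address splits into a page number and an offset like this:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u        /* 2^12 bytes per page and per frame */
#define OFFSET_BITS 12u

int main(void) {
    uint32_t logical = 0x00012ABC;                 /* example logical address */
    uint32_t page    = logical >> OFFSET_BITS;     /* upper bits: page number */
    uint32_t offset  = logical & (PAGE_SIZE - 1);  /* lower 12 bits: offset   */
    printf("page %u, offset %u\n", (unsigned)page, (unsigned)offset);
    return 0;
}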
Paging – Properties
Non-contiguous allocation
Process image can be larger than available main memory
Many processes can reside (partially) in memory
Invisible to user (unlike segmentation or overlays)
Data structures (see sketch below):
Page table for each process: current frame number for each page
Free frame list
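A minimal sketch of a free-frame list, kept as a simple stack of free frame numbers; the layout is illustrative, real kernels typically use bitmaps or buddy allocators:

#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 1024u

static uint32_t free_frames[NUM_FRAMES];   /* stack of free frame numbers */
static uint32_t num_free = 0;

static void frame_free(uint32_t frame) {
    free_frames[num_free++] = frame;
}

/* Returns a free frame, or -1 if none is left (a page must then be replaced). */
static int64_t frame_alloc(void) {
    return num_free > 0 ? (int64_t)free_frames[--num_free] : -1;
}

int main(void) {
    for (uint32_t f = 0; f < NUM_FRAMES; f++)   /* initially all frames are free */
        frame_free(f);
    printf("allocated frame %lld\n", (long long)frame_alloc());
    return 0;
}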
Page table
One page table per process
Indexed by page number
Present bit (loaded?)
Frame number (if present)
Modified bit (written since loaded?)
Control and access bits (read-write-execute, kernel/user, etc)
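A sketch of a page-table entry holding the fields listed above; the field widths are illustrative and not taken from any particular architecture:

#include <stdint.h>
#include <stdio.h>

/* Illustrative page-table entry; real layouts are architecture-specific. */
typedef struct {
    uint32_t present  : 1;   /* page currently loaded into a frame?      */
    uint32_t modified : 1;   /* written since it was loaded? (dirty bit) */
    uint32_t rw       : 1;   /* writable?                                */
    uint32_t exec     : 1;   /* executable?                              */
    uint32_t user     : 1;   /* accessible in user mode (vs kernel only) */
    uint32_t frame    : 20;  /* frame number, valid only if present == 1 */
} pte_t;

int main(void) {
    pte_t pte = { .present = 1, .rw = 1, .user = 1, .frame = 0x1234 };
    printf("frame 0x%x, present %u\n", (unsigned)pte.frame, (unsigned)pte.present);
    return 0;
}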
Paging model
Example: 32 bytes of memory, 4-byte page size – pages of the logical address space are mapped to frames of physical memory via the page table
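A toy translation routine for such an example, assuming a hypothetical page table that maps four pages onto frames of the 32-byte physical memory:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4u     /* 4-byte pages, i.e. a 2-bit offset */

/* Hypothetical page table: page number -> frame number. */
static const uint8_t page_table[4] = { 5, 6, 1, 2 };

static uint8_t translate(uint8_t logical) {
    uint8_t page   = logical / PAGE_SIZE;
    uint8_t offset = logical % PAGE_SIZE;
    return (uint8_t)(page_table[page] * PAGE_SIZE + offset);   /* physical address */
}

int main(void) {
    for (uint8_t a = 0; a < 16; a++)   /* 16-byte logical address space */
        printf("logical %2u -> physical %2u\n", (unsigned)a, (unsigned)translate(a));
    return 0;
}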
Page table – Effect of page size
Equal page size: eliminates external fragmentation
Larger page size: larger internal fragmentation
Smaller page size: larger page tables
Example:
4 KB page size, 48-bit logical address space, 8 bytes per table entry
→ 2^36 entries × 8 bytes = 512 GB just for storing one flat page table
Multi-Level Paging
Page table itself is paged
E.g. four levels on many Intel architectures; multi-level tables are also used on ARM
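For illustration, a 48-bit virtual address can be decomposed for a four-level walk (9 index bits per level plus a 12-bit offset, as on x86-64); the actual table walk is performed by the MMU in hardware:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t va = 0x00007f3a12345678ull;   /* example 48-bit virtual address */
    unsigned idx[4];
    for (int level = 0; level < 4; level++)               /* level 0 = top level */
        idx[level] = (va >> (39 - 9 * level)) & 0x1ff;     /* 9 index bits each   */
    unsigned offset = va & 0xfff;                          /* 12-bit page offset  */
    printf("indices %u/%u/%u/%u, offset 0x%x\n",
           idx[0], idx[1], idx[2], idx[3], offset);
    return 0;
}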
Hashed Page Table
Table size ~ number of used frames
E.g. Solaris on SPARC
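A minimal sketch of a hashed page-table lookup, assuming a hypothetical chained hash table keyed by the virtual page number:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define BUCKETS 1024u

/* Hypothetical entry: virtual page number -> frame, chained on collisions. */
struct hpt_entry {
    uint64_t vpn;
    uint64_t frame;
    struct hpt_entry *next;
};

static struct hpt_entry *table[BUCKETS];

/* Returns the frame for vpn, or -1 if the page is not resident (page fault). */
static int64_t hpt_lookup(uint64_t vpn) {
    for (struct hpt_entry *e = table[vpn % BUCKETS]; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return (int64_t)e->frame;
    return -1;
}

int main(void) {
    static struct hpt_entry e = { .vpn = 42, .frame = 7, .next = NULL };
    table[e.vpn % BUCKETS] = &e;
    printf("page 42 -> frame %lld\n", (long long)hpt_lookup(42));
    return 0;
}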
Inverted Page Table
One table for all processes
E.g. PowerPC
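A minimal sketch of an inverted page-table lookup, assuming a hypothetical table with one entry per physical frame that is searched for a matching (pid, page) pair; real implementations accelerate this search with hashing:

#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 4096u

/* Hypothetical inverted table: entry i describes what occupies frame i. */
static struct ipt_entry {
    int      used;
    uint32_t pid;
    uint64_t vpn;
} ipt[NUM_FRAMES];

/* Linear search: the frame number is the index of the matching entry. */
static int64_t ipt_lookup(uint32_t pid, uint64_t vpn) {
    for (uint32_t frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].used && ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return (int64_t)frame;
    return -1;   /* not resident: page fault */
}

int main(void) {
    ipt[17] = (struct ipt_entry){ .used = 1, .pid = 1234, .vpn = 42 };
    printf("(pid 1234, page 42) -> frame %lld\n", (long long)ipt_lookup(1234, 42));
    return 0;
}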
Translation Look-Aside Buffer (TLB)
Page table in main memory
Each address translation requires at least two memory accesses
Translation Look-Aside Buffer:
Cache in the MMU for page table entries
Cache policy, e.g. most recently used
Associative access
Cleared on each context switch
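The cost of the extra memory access can be estimated with the usual effective-access-time calculation; the timings and hit ratio below are assumed values for illustration:

#include <stdio.h>

int main(void) {
    double tlb = 1.0, mem = 100.0;   /* assumed access times in nanoseconds */
    double hit_ratio = 0.98;         /* assumed TLB hit ratio               */
    /* Hit:  TLB lookup + one memory access.
       Miss: TLB lookup + page-table access + actual memory access. */
    double eat = hit_ratio * (tlb + mem) + (1.0 - hit_ratio) * (tlb + 2.0 * mem);
    printf("effective access time: %.1f ns\n", eat);   /* ~103.0 ns */
    return 0;
}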
Cache Memory and Associativity
Page Faults and Thrashing
Resident set: pages of a process that are currently assigned to frames
Page fault: access to a page that is not resident → need to swap in the page
Which page should be replaced? → page replacement
What to do on process start? → demand paging vs. prepaging
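A toy simulation of demand paging with a FIFO replacement policy; the frame count and reference string are made up for illustration, and disk I/O and dirty-page write-back are omitted:

#include <stdio.h>

#define NUM_PAGES  8
#define NUM_FRAMES 3

static int frame_of[NUM_PAGES];   /* page -> frame, -1 if not resident */
static int page_in[NUM_FRAMES];   /* frame -> page, -1 if free         */
static int next_victim = 0;       /* FIFO pointer                      */
static int faults = 0;

static void access_page(int page) {
    if (frame_of[page] >= 0)
        return;                               /* hit: page is resident */
    faults++;                                 /* page fault            */
    int victim = page_in[next_victim];
    if (victim >= 0)
        frame_of[victim] = -1;                /* evict (write back if modified) */
    page_in[next_victim] = page;              /* (swap the page in from disk)   */
    frame_of[page] = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;
}

int main(void) {
    for (int i = 0; i < NUM_PAGES; i++)  frame_of[i] = -1;
    for (int i = 0; i < NUM_FRAMES; i++) page_in[i] = -1;

    int refs[] = { 0, 1, 2, 0, 3, 0, 4, 2, 3, 0 };   /* made-up reference string */
    for (int i = 0; i < 10; i++)
        access_page(refs[i]);
    printf("page faults: %d\n", faults);
    return 0;
}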
Page Faults and Thrashing
Thrashing: performance degradation caused by a high number of page faults, i.e. the resident set is too small
How many pages should be loaded per process? → working set
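A minimal sketch of the working-set idea, taking the distinct pages referenced in the last delta references (the reference string is reused from the simulation above):

#include <stdio.h>

#define NUM_PAGES 8

/* Working set at time t with window delta: distinct pages referenced in (t - delta, t]. */
static int working_set_size(const int *refs, int t, int delta) {
    int seen[NUM_PAGES] = { 0 }, size = 0;
    int start = t - delta + 1 > 0 ? t - delta + 1 : 0;
    for (int i = start; i <= t; i++)
        if (!seen[refs[i]]) { seen[refs[i]] = 1; size++; }
    return size;
}

int main(void) {
    int refs[] = { 0, 1, 2, 0, 3, 0, 4, 2, 3, 0 };
    printf("|WS(t = 9, delta = 4)| = %d\n", working_set_size(refs, 9, 4));   /* pages {4,2,3,0} -> 4 */
    return 0;
}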
Shared pages
Shared memory
Shared libraries
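A minimal user-space sketch of page sharing on Linux/BSD, using mmap to create an anonymous shared mapping visible to both a parent and its forked child (shared libraries are mapped in a similar way by the loader, read-only):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* One page backed by frames that parent and child share. */
    int *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    if (fork() == 0) {          /* child writes into the shared page */
        shared[0] = 42;
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", shared[0]);   /* prints 42: same physical frame */
    munmap(shared, 4096);
    return 0;
}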
Summary: Memory management
Paging
Basis for modern virtual memory systems
Page table implementations
Page tables can be large
Clever implementations are essential
Hardware support in MMU
(e.g. Translation Look-Aside Buffer)
Read
Tanenbaum & Bos, Modern Operating Systems, Chapter 3
Silberschatz et al., Operating System Concepts, Chapter 8
Next Lecture
Introduction
Operating System Architectures
Processes
Threads – Programming
Process Scheduling – Evaluation Process Synchronisation
Deadlocks
Memory Management (continued)
File Systems
Input / Output
Security