Memory Management
See chapter 8 in [SGG 9/E]
1. Preliminaries (systems programming)
2. Contiguous Allocation
Partitioned Allocation:
3. Segmentation (variable sizes)
4. Paging (fixed sizes)
5. Segmentation with paging
1. Preliminaries
Logical versus physical addresses
o A central idea in memory management is to map logical addresses (aka virtual addresses) generated by the CPU to physical addresses (seen by the memory).
[Figure: the CPU issues a logical address; the MMU translates it to the physical address presented to memory.]
Relocatable modules
A relocatable address is relative to the start address of a program (assumed to be some constant, e.g., zero).
Each module stores information about memory references that need relocation.
Modules can be generated before or after linking:
[Figure: source files pass through a compiler or assembler to produce object files; object files and a program library are combined into the final program.]
A relocatable address can be converted to an absolute address by adding a “relocation factor”.
o Note: with an MMU introduced, absolute addresses may differ from physical addresses.
Binding a relocatable address to an absolute address can be done at
o compile or link time (by using compiler directives or setting linker parameters)
o execution time (this requires hardware support, e.g., relocation registers or an MMU).
Dynamic loading
o Here, a program is composed of a number of relocatable modules.
o Linking is done prior to program execution.
o A module is loaded on demand into main memory with the help of a system loader program.
o Note: dynamic loading does not require special support from the OS.
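On a POSIX system, on-demand loading can be mimicked with the dlopen interface. A minimal sketch, assuming a hypothetical shared object libmodule.so that exports a function compute:

/* Minimal sketch of on-demand module loading on a POSIX system.
 * Assumptions: a shared object "libmodule.so" exporting a function
 * "int compute(int)" exists; both names are hypothetical.
 * Build with: gcc demo.c -ldl
 */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Load the module only when it is first needed. */
    void *handle = dlopen("./libmodule.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* Look up the routine inside the freshly loaded module. */
    int (*compute)(int) = (int (*)(int)) dlsym(handle, "compute");
    if (compute == NULL) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("compute(7) = %d\n", compute(7));
    dlclose(handle);            /* unload when no longer needed */
    return EXIT_SUCCESS;
}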
Static versus dynamic (shared) libraries
o Linking to a static library:
• ’gcc file.c /usr/lib/libc.a’
• produces a relatively large executable file (at least 300K bytes)
o Linking to a shared library:
• ’gcc file.c’ (uses ’/usr/lib/libc.so’)
• produces a much smaller executable file
o How does it work?
Dynamic linking and shared libraries
o Basic idea:
• A stub is included in the program image for each reference to a shared library routine
• When executed, the stub checks (with the help of the OS) whether the routine is loaded in memory
– If not, the program loads the routine.
– The stub replaces itself with the address of the routine.
o Note 1: The design allows all processes that use the routine to share and execute a single in-memory copy of it.
o Note 2: The design allows library upgrades without relinking existing programs.
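The stub idea can be illustrated in C with a function pointer that resolves itself on its first call; this is a simplified sketch of lazy binding (the names library_sqrt and resolve_stub are made up), not the actual dynamic-linker machinery:

/* Simplified illustration of a lazy-binding stub: the first call
 * resolves the real routine and overwrites the pointer, so later
 * calls go straight to the resolved routine.
 */
#include <stdio.h>

static double library_sqrt_impl(double x)   /* the "shared" routine */
{
    double g = x > 1.0 ? x : 1.0;            /* Newton's method */
    for (int i = 0; i < 20; i++)
        g = 0.5 * (g + x / g);
    return g;
}

static double resolve_stub(double x);        /* forward declaration */

/* The "stub": initially points at the resolver, later at the routine. */
static double (*library_sqrt)(double) = resolve_stub;

static double resolve_stub(double x)
{
    printf("[stub] resolving routine on first call\n");
    library_sqrt = library_sqrt_impl;         /* replace the stub */
    return library_sqrt(x);                   /* retry the call */
}

int main(void)
{
    printf("%f\n", library_sqrt(2.0));        /* triggers resolution */
    printf("%f\n", library_sqrt(9.0));        /* direct call */
    return 0;
}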
2. Contiguous Allocation
Our main goals here are:
1. Support multiprocessing (kernel + multiple processes)
2. Protect the kernel from the users (and the users from each other)
3. Support swapping of blocked processes from the main memory
• Note: Care should be taken if process P1 starts an I/O transfer using DMA (direct memory access) before being swapped out and a different process P2 then replaces P1 in memory.
4. Support static libraries
Architecture
The limit register holds the largest possible logical address in the process.
The limit and relocation register values are part of the PCB. The OS keeps track of allocated and free partitions.
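A sketch of what the hardware does on every memory reference, assuming illustrative register values:

/* Sketch of contiguous-allocation address translation:
 * the MMU compares the logical address with the limit register,
 * then adds the relocation register.  Values are illustrative.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t limit_reg      = 0x4000;   /* size of the partition   */
static uint32_t relocation_reg = 0x14000;  /* partition start address */

static uint32_t translate(uint32_t logical)
{
    if (logical >= limit_reg) {             /* protection check */
        fprintf(stderr, "trap: addressing error (0x%x)\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_reg;        /* relocation */
}

int main(void)
{
    printf("logical 0x0346 -> physical 0x%x\n", translate(0x0346));
    return 0;
}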
Storage allocation: How to satisfy a request of size n from the set of free partitions?
o First fit
o Best fit
o Worst fit
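As a sketch, first fit scans the free list and takes the first hole that is large enough; the list layout and splitting policy below are simplifying assumptions:

/* Sketch of first-fit partition allocation over a singly linked
 * free list.  Not a production allocator.
 */
#include <stddef.h>

struct hole {                 /* one free partition */
    size_t       start;       /* start address      */
    size_t       size;        /* length in bytes    */
    struct hole *next;
};

/* Return the start address of an allocated block, or (size_t)-1. */
size_t first_fit(struct hole *free_list, size_t n)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {   /* first hole that is big enough */
            size_t addr = h->start;
            h->start += n;    /* shrink the hole in place */
            h->size  -= n;    /* (a zero-size hole should be unlinked) */
            return addr;
        }
    }
    return (size_t)-1;        /* no hole large enough */
}

Best fit would instead scan the entire list and keep the smallest adequate hole; worst fit keeps the largest.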
External Fragmentation: there may be enough free memory to satisfy a request, but the layout is not contiguous.
o Fix: run a memory compaction algorithm to create large free areas
Internal Fragmentation: Unused areas internal to memory blocks (called holes). What to do?
o Keeping track of small free partitions is not worth the associated overhead.
o Avoid creating a small partition by possibly allocating a slightly larger space to some processes.
3. Partitioned Allocation: Segmentation
Now, we would like to
1. Satisfy all previous design goals: multiprocessing, protection, and swapping
2. Allow a process to occupy non-contiguous space
3. Minimize the frequency of running compaction algorithms
4. Allow more sharing of data and code
Architecture
Segmentation supports the view that a program is composed of a collection of logical segments: the main program, user functions, variables, the stack area, etc.
So, why not allow each segment to be loaded in a different place in memory?
A typical segmentation hardware:
o Each logical address is made of a segment number s and an offset (displacement) d within the segment.
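A sketch of the translation, assuming a per-process segment table of (base, limit) pairs with illustrative contents:

/* Sketch of segmentation address translation: the segment number
 * indexes a table of (base, limit) pairs; the offset is checked
 * against the limit and added to the base.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct seg_entry { uint32_t base, limit; };

static struct seg_entry seg_table[] = {
    { 0x1400, 0x1000 },   /* segment 0 */
    { 0x6300, 0x0400 },   /* segment 1 */
    { 0x4300, 0x0800 },   /* segment 2 */
};

static uint32_t translate(uint32_t s, uint32_t d)
{
    if (s >= sizeof seg_table / sizeof seg_table[0] ||
        d >= seg_table[s].limit) {
        fprintf(stderr, "trap: segmentation violation (%u, 0x%x)\n", s, d);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;     /* base + displacement */
}

int main(void)
{
    printf("(2, 0x53) -> 0x%x\n", translate(2, 0x53));
    return 0;
}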
The IBM 360 uses a different architecture (i.e., a logical address is not divided into two parts):
o the system has a segment register
o a program should load the segment register explicitly before using a segment, e.g.:
use segment #i
access locations at displacements ..., ...
use segment #j
access locations at displacements ..., ...
...
For the typical architecture, the segment table (part of the PCB) is an array of limit and base registers.
Segment tables are intended to have few entries. Q: How to store them?
o In main memory: this option results in access time: ta (segmented memory) = 2 ta (memory)
(hence, impractical).
o In a special array of registers: here,
ta (segmented memory) = ta (register) + ta (memory)
this is the best case.
o Note: techniques for handling larger tables will be discussed with paging.
Advantages and Drawbacks
Additional Protection: Each segment can have its own protection flags (e.g., read only)
Data Sharing:
o Possible
o In addition, if data does not contain a physical pointer, then different processes can map the shared data segment into different segment numbers
o Processes access shared data with no additional provisions
Code Sharing:
o Possible for reentrant functions:
o However, there is a subtle point:
• Q: Can different processes map a shared library’s code to different segments?
• No: consider a library function loaded in physical memory.
• Some addresses in the function are bound to absolute values.
• So, such addresses carry a unique segment number that every sharing process must use.
Internal and external fragmentation:
o Continue to exist, but not as severe as in the previous architecture.
4. Partitioned Allocation: Paging
Our refined set of goals is now:
1. support all previous design goals, and
2. eliminate external fragmentation
Architecture
Partition the physical memory into fixed-sized partitions (called frames) (typically, 512 bytes, 1K bytes, 2K bytes, or 4K bytes).
Partition the logical address space into fixed-size partitions (called pages).
Typically, pages and frames are of equal length.
The OS keeps track of used and free frames.
Paging hardware:
o Each logical address is made of a page number p and a page offset d.
o The page table is an array of entries indexed by page numbers.
• How long is the page table for a system with a page size= 4K bytes, and 16-bit long addresses?
• What if the system has 32-bit long addresses?
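As a worked answer: with a 4K-byte page size the offset occupies 12 bits, so 16-bit addresses leave 4 bits for the page number (2^4 = 16 entries), while 32-bit addresses leave 20 bits (2^20 = 1,048,576 entries). A minimal translation sketch for the 16-bit case, with an illustrative page table:

/* Sketch of paging address translation for 16-bit logical addresses
 * and a 4 KB page size: 12 offset bits, 4 page-number bits, hence a
 * 16-entry page table.  The frame numbers below are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12u
#define PAGE_SIZE   (1u << OFFSET_BITS)          /* 4096 */
#define NUM_PAGES   (1u << (16 - OFFSET_BITS))   /* 16   */

static uint32_t page_table[NUM_PAGES] = { 5, 6, 1, 2 /* ... */ };

static uint32_t translate(uint16_t logical)
{
    uint32_t p = logical >> OFFSET_BITS;          /* page number */
    uint32_t d = logical & (PAGE_SIZE - 1);       /* page offset */
    return page_table[p] * PAGE_SIZE + d;         /* frame*size + offset */
}

int main(void)
{
    printf("logical 0x1ABC -> physical 0x%x\n", translate(0x1ABC));
    return 0;
}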
Organization: small page tables (≤ 256 entries)
o Use a high speed associative memory (also called content addressable memory (CAM), or translation look-aside buffers (TLBs)).
o Access time:
ta (paged memory) = ta (CAM) + ta (memory)
o The OS loads the CAM before switching between processes.
Case Study: the MC68851 PMMU
o Provides transparent logical-to-physical address translation
o Some commonly seen status and control bits:
• the used bit: set by the MMU on page access; may be reset by the OS subsequently
• the modified bit: set by the MMU if the page has been modified
• the write protect bit: 1 ≡ “a write attempt causes violation”
• the cache inhibit bit: 1 ≡ “don’t remove from the CAM” (e.g., for memory-mapped I/O devices)
• the lock bit: 1 ≡ “don’t replace”
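A sketch of how such bits might be kept as flags in a page-table entry; the bit positions are illustrative, not the actual MC68851 descriptor layout:

/* Sketch of page-table-entry status/control bits kept as a flag word. */
#include <stdint.h>
#include <stdio.h>

enum {
    PTE_USED          = 1u << 0,   /* set by MMU on any access        */
    PTE_MODIFIED      = 1u << 1,   /* set by MMU on a write           */
    PTE_WRITE_PROTECT = 1u << 2,   /* 1: a write attempt traps        */
    PTE_CACHE_INHIBIT = 1u << 3,   /* 1: keep out of the CAM          */
    PTE_LOCK          = 1u << 4,   /* 1: never replace this page      */
};

struct pte {
    uint32_t frame;    /* physical frame number            */
    uint32_t flags;    /* combination of the bits above    */
};

/* Example check: would a write to this page cause a violation? */
static int write_would_trap(const struct pte *e)
{
    return (e->flags & PTE_WRITE_PROTECT) != 0;
}

int main(void)
{
    struct pte e = { .frame = 42, .flags = PTE_USED | PTE_WRITE_PROTECT };
    printf("write traps: %d\n", write_would_trap(&e));
    return 0;
}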
Organization: large page tables (single level paging)
o Store the page table in main memory (the page table base address, typically kept in a page-table base register, becomes part of the PCB).
o Load the CAM with the needed part of the table.
Performance: the effective access time is
teff = (CAM hit ratio) * (ta(CAM) + ta(memory)) + (1 - CAM hit ratio) * (ta(CAM) + 2 ta(memory))
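For example, with illustrative figures of a 90% CAM hit ratio, ta(CAM) = 20 ns and ta(memory) = 100 ns:
teff = 0.9 * (20 + 100) + 0.1 * (20 + 200) = 108 + 22 = 130 ns,
i.e., about a 30% slowdown relative to a single memory access.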
Organization: very large tables: Multilevel (hierarchical) paging:
o If the page table cannot be stored contiguously in the physical address space (for random access) then we need to break it into smaller parts.
o Two levels are not suitable for 64-bit architectures: e.g., with 4K-byte pages, a two-level scheme still leaves an outer page table with about 2^42 entries.
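A sketch of a two-level lookup for 32-bit addresses and 4K-byte pages (10 outer bits, 10 inner bits, 12 offset bits); the tables here are tiny illustrative stand-ins:

/* Sketch of two-level (hierarchical) paging translation. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 1024u

static uint32_t  inner0[ENTRIES] = { 7, 3, 9 /* frame numbers ... */ };
static uint32_t *outer[ENTRIES]  = { inner0 /* most slots unmapped */ };

static uint32_t translate(uint32_t logical)
{
    uint32_t p1 = (logical >> 22) & 0x3FF;   /* outer index  */
    uint32_t p2 = (logical >> 12) & 0x3FF;   /* inner index  */
    uint32_t d  =  logical        & 0xFFF;   /* page offset  */
    uint32_t *inner = outer[p1];
    if (inner == NULL) {                     /* unmapped region */
        fprintf(stderr, "trap: page fault\n");
        exit(EXIT_FAILURE);
    }
    return inner[p2] * 4096u + d;
}

int main(void)
{
    printf("0x00002ABC -> 0x%x\n", translate(0x00002ABCu));
    return 0;
}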
o Hashed page tables:
• Variation: clustered page tables
– Each entry refers to several pages (e.g., 16) rather than a single page.
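A sketch of the hashed organization: the virtual page number hashes to a bucket whose chain is searched for a matching entry (the table size and hash function are assumptions):

/* Sketch of a hashed page table: the virtual page number (vpn) is
 * hashed to a bucket; the chain at that bucket is searched for an
 * entry whose vpn matches.
 */
#include <stdint.h>
#include <stdio.h>

#define BUCKETS 1024u

struct hpt_entry {
    uint32_t          vpn;     /* virtual page number  */
    uint32_t          frame;   /* physical frame       */
    struct hpt_entry *next;    /* chain of collisions  */
};

static struct hpt_entry *hashed_table[BUCKETS];

/* Return the frame holding vpn, or -1 on a miss (page fault). */
static int64_t lookup(uint32_t vpn)
{
    for (struct hpt_entry *e = hashed_table[vpn % BUCKETS];
         e != NULL; e = e->next)
        if (e->vpn == vpn)
            return e->frame;
    return -1;
}

int main(void)
{
    static struct hpt_entry e = { .vpn = 0x1234, .frame = 88, .next = NULL };
    hashed_table[e.vpn % BUCKETS] = &e;
    printf("vpn 0x1234 -> frame %lld\n", (long long) lookup(0x1234));
    return 0;
}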
Inverted Page Tables
o Useful when page tables are large (e.g., 4M bytes) and sparse, yet the physical memory is relatively small (e.g., 64M bytes).
o One solution is to use a page table with one entry per physical frame (the inverted page table).
o The table is shared by all running processes.
o Each entry has the form (process id, page number).
Questions
o How can one store and search an inverted table?
o If each physical frame is associated with a unique entry, how can two processes share a page?
o Is there any other advantage of consolidating all pages in one table?
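A sketch that addresses the first question in the simplest way, with a linear search standing in for the hash table a real system would use; entry fields follow the (pid, page number) form above:

/* Sketch of an inverted page table: one entry per physical frame,
 * holding the (pid, page-number) pair currently mapped there.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 16384u   /* e.g. 64 MB of memory / 4 KB frames */

struct ipt_entry {
    int32_t  pid;            /* owning process (0 = free in this sketch) */
    uint32_t page;           /* page number within that process          */
};

static struct ipt_entry inverted[NUM_FRAMES];

/* Return the frame number mapping (pid, page), or -1 (page fault). */
static int64_t lookup(int32_t pid, uint32_t page)
{
    for (uint32_t f = 0; f < NUM_FRAMES; f++)
        if (inverted[f].pid == pid && inverted[f].page == page)
            return f;
    return -1;
}

int main(void)
{
    inverted[5] = (struct ipt_entry){ .pid = 42, .page = 7 };
    printf("(pid 42, page 7) -> frame %lld\n",
           (long long) lookup(42, 7));
    return 0;
}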
5. Segmentation with Paging
Refer to the textbook for a description of segmentation with paging in the Intel Pentium architecture.