ECS150 FQ20
Lecture Notes 9
March 27, 2020
Main Memory
• Contiguous Memory Allocation
• Allocation Data Structures
• Bitmaps – Use a single bit per fixed-size allocation unit to mark it allocated or free (sketch below)
• Linked Lists – Use a linked list of allocated and free spaces
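As a rough illustration of the bitmap approach above, the sketch below tracks fixed-size allocation units with one bit each; the unit count, unit size, and the find_units name are assumptions for illustration, not course code.

    #include <stdint.h>

    #define UNIT_SIZE 4096                 /* assumed fixed allocation unit (bytes) */
    #define NUM_UNITS 1024                 /* 4 MB of managed memory */

    static uint8_t bitmap[NUM_UNITS / 8];  /* one bit per unit: 1 = allocated, 0 = free */

    static int  test_bit(int i) { return bitmap[i / 8] &  (1 << (i % 8)); }
    static void set_bit(int i)  {        bitmap[i / 8] |= (1 << (i % 8)); }

    /* Find 'count' consecutive free units, mark them allocated, and
     * return the index of the first one, or -1 if no such run exists. */
    int find_units(int count) {
        int run = 0;
        for (int i = 0; i < NUM_UNITS; i++) {
            run = test_bit(i) ? 0 : run + 1;
            if (run == count) {
                int start = i - count + 1;
                for (int j = start; j <= i; j++)
                    set_bit(j);
                return start;
            }
        }
        return -1;
    }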
• Memory Allocation Algorithms
• First Fit – Allocate with first hole that is large enough
• Best Fit – Allocate with the smallest hole that is large enough
• Worst Fit – Allocate in the largest hole that is large enough
• Next Fit – Start at the last location allocated
• Quick Fit – Keeps track of common sizes (finding neighbors/merging is difficult)
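A minimal first-fit sketch over a linked list of free holes, assuming the hole layout and names below (they are illustrative, not from the notes); best fit and worst fit differ only in scanning the whole list for the smallest or largest qualifying hole, and next fit only in where the scan starts.

    #include <stddef.h>

    struct hole {                 /* one free region in the free list */
        size_t       start;       /* starting address of the hole */
        size_t       length;      /* size of the hole in bytes */
        struct hole *next;
    };

    /* First fit: take the first hole large enough and shrink it in place.
     * Returns the allocated start address, or (size_t)-1 if no hole fits.
     * (A full implementation would unlink holes that shrink to zero.) */
    size_t first_fit(struct hole *free_list, size_t request) {
        for (struct hole *h = free_list; h != NULL; h = h->next) {
            if (h->length >= request) {
                size_t addr = h->start;
                h->start  += request;
                h->length -= request;
                return addr;
            }
        }
        return (size_t)-1;
    }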
• Fragmentation
• Internal Fragmentation – Difference between requested and actual allocation
• External Fragmentation – Small holes are created from allocation scheme
• Compaction – Shift memory locations to get rid of External Fragmentation (not
always possible)
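As an illustrative example (sizes assumed): a 5,000-byte request served from a fixed 8,192-byte partition wastes 8,192 - 5,000 = 3,192 bytes inside the partition (internal fragmentation), while repeatedly allocating and freeing variable-sized regions leaves small holes between allocations that no single request can use (external fragmentation).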
Virtual Memory
• Demand Paging – Pages are loaded on demand
• Lazy Swapper – Swap only what is needed
• Pager – Moves pages, not entire process
• Memory Resident – The information is currently in memory
• Page-Fault Trap – Exception when required page is not memory resident
• Pure Demand Paging – Only bring pages in when they are needed
• Locality of Reference – References typically exist within a local area
• Performance
• Effective Access Time – (1 - p) × memory access time + p × page fault time
• Page Fault Rate – The rate at which page faults occur (p in the formula above)
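A worked example with assumed numbers: if a memory access takes 200 ns, a page fault takes 8 ms (8,000,000 ns) to service, and p = 0.001, then Effective Access Time = 0.999 × 200 + 0.001 × 8,000,000 ≈ 8,200 ns, so one fault per thousand accesses slows memory down by roughly a factor of 40.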
• Copy-on-Write – Pages are duplicated on first write
• Zero-fill-on-demand – Pages are initially zeroed out
• Virtual Memory fork – Child uses the suspended parent's memory space
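A small sketch of the behavior copy-on-write enables: after fork(), parent and child share physical pages until one of them writes, and only then is the written page duplicated. The heap variable here is just a convenient page to write to.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int *value = malloc(sizeof(int));   /* lives in a heap page */
        *value = 42;

        pid_t pid = fork();                 /* pages now shared copy-on-write */
        if (pid == 0) {
            *value = 99;                    /* first write: kernel copies this page */
            printf("child sees %d\n", *value);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent still sees %d\n", *value);   /* parent's page is unchanged */
        free(value);
        return 0;
    }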
• Page Replacement
• Over Allocating – Running enough processes that their combined pages exceed the available physical frames
• Replacement
• Victim Frame – The frame that will be swapped out
• Modify bit (Dirty bit) – Keep track if page has been modified or not
• Frame-Allocation Algorithm – How to allocate frames
• Page-Replacement Algorithm – Decide what pages to replace
• Belady’s anomaly – Page fault rate can increase as the number of frames increases (seen with FIFO)
• FIFO Page Replacement – First in is first to be replaced
• LRU Page Replacement – Page that hasn’t been used for longest time is replaced
• LRU Approximation Page Replacement
• Reference Bit – Bit set by hardware when the page is referenced
• Additional Reference Bits Algorithm – Keep a history byte per page; each period the byte is shifted right and the reference bit is shifted into the high-order bit
• Second-Chance Algorithm – Give a page a second chance if its reference bit is set (see the clock sketch after this list)
• Enhanced Second-Chance Algorithm – Consider the (reference bit, modify bit) pair when choosing a victim
• Counting-Based Page Replacement – Count the number of accesses
• Least Frequently Used – Replace least frequently used page
• Most Frequently Used – Replace the most frequently used page
• Page Buffering Algorithms – Keep a pool of free frames; on a fault, check whether the needed page is still in the pool before reading it from disk
• Applications and Page Replacements – Some applications, such as databases, perform terribly under general virtual memory management and need their own control
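A minimal sketch of the second-chance (clock) algorithm referenced above, assuming one reference bit per frame and the illustrative names frames and hand; in a real kernel the bits are set by hardware in the page table.

    #include <stdbool.h>

    #define NUM_FRAMES 64

    struct frame {
        int  page;          /* page currently held in this frame */
        bool referenced;    /* set on access (by hardware in practice) */
    };

    static struct frame frames[NUM_FRAMES];
    static int hand = 0;    /* clock hand: next frame to inspect */

    /* Choose a victim frame: frames with the reference bit set get a
     * second chance (bit cleared, hand advances); the first frame found
     * with the bit clear is evicted. */
    int choose_victim(void) {
        for (;;) {
            if (frames[hand].referenced) {
                frames[hand].referenced = false;
                hand = (hand + 1) % NUM_FRAMES;
            } else {
                int victim = hand;
                hand = (hand + 1) % NUM_FRAMES;
                return victim;
            }
        }
    }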
• Allocation of Frames
• Minimum Number of Frames – How many frames at a minimum should a process have?
• Equal Allocation – Divide all frames equally among processes
• Proportional Allocation – Allocate based upon the size of each process (worked example after this list)
• Global vs. Local Replacement – Replace from all frames or just the ones the process
owns
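A worked proportional-allocation example with assumed numbers: with 62 free frames and processes of 10 pages and 127 pages, proportional allocation gives about 10/137 × 62 ≈ 4 frames to the first and 127/137 × 62 ≈ 57 to the second, whereas equal allocation would give each process 31.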
• Thrashing – High paging activity where paging exceeds execution
• Local Replacement Algorithm (Priority Replacement Algorithm) – Only allow a process to replace its own frames; don’t take frames from other processes
• Locality Model – Processes typically execute in a small area of the program
• Working-Set Model – The set of pages referenced within the most recent working-set window; approximates the process’s current locality (see the sketch after this list)
• Working-Set Window (Δ) – The fixed number of most recent page references examined
• Page Fault Frequency – The frequency in which page faults are occurring
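A rough sketch of computing the working-set size from a recorded reference string with window Δ (the array contents, sizes, and names are assumptions): the working set is simply the set of distinct pages among the last Δ references.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_PAGES 256   /* assumes page numbers are below this bound */

    /* Count distinct pages among the last 'delta' references ending
     * at position 'end' (exclusive) of the reference string. */
    int working_set_size(const int *refs, int end, int delta) {
        bool seen[MAX_PAGES] = { false };
        int size = 0;
        int start = (end - delta < 0) ? 0 : end - delta;
        for (int i = start; i < end; i++) {
            if (!seen[refs[i]]) {
                seen[refs[i]] = true;
                size++;
            }
        }
        return size;
    }

    int main(void) {
        int refs[] = { 1, 2, 1, 5, 7, 7, 7, 5, 1, 2 };        /* assumed reference string */
        printf("WSS = %d\n", working_set_size(refs, 10, 5));  /* pages {7,5,1,2}: prints 4 */
        return 0;
    }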
• Memory-Mapped Files
• File Mapping – Maps a file into the process’s memory space (see the mmap() example after this list)
• Named Shared-Memory Object – A named region of shared memory that multiple processes can map into their address spaces
• Memory-Mapped I/O – I/O devices are mapped into the memory space
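A small example of file mapping with the POSIX mmap() call (the file name is an assumption): the file’s contents become directly addressable memory, and its pages are brought in on demand by page faults rather than by explicit read() calls.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("notes.txt", O_RDONLY);   /* assumed input file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* Map the whole file read-only; pages are faulted in on first access. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        fwrite(data, 1, st.st_size, stdout);    /* read the file through memory */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }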
• Allocating Kernel Memory
• Power-of-2 Allocator – Allocation sizes are rounded up to the next power of two
• Buddy System – Memory is repeatedly split into halves (buddies) until a block just large enough for the request remains (worked example after this list)
• Coalescing – Two buddies can be combined to form larger space
• Slab Allocation – One or more contiguous pages in memory make up a slab, a cache is
used to allocate within the slab
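A worked buddy-system example with assumed sizes: to satisfy a 21 KB request from a 256 KB segment, the allocator splits 256 KB into two 128 KB buddies, one 128 KB into two 64 KB buddies, and one 64 KB into two 32 KB buddies, then hands out a 32 KB block (the smallest power of two that fits, leaving 11 KB of internal fragmentation); when the block is freed it can coalesce with its 32 KB buddy, and so on back up the tree.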