COMP30023 – Computer Systems
© University of Melbourne 2022
Operating systems: Memory Management
Project 1 is out
• Announcement on LMS
• Spec is available via LMS
• Extra consultation hours
Outline
• Memory hierarchy
• Basic memory management
• Virtual memory
• Paging and page tables
• Page replacement algorithms
Memory manager functionality
• to allocate memory to processes when they require it
• to deallocate when finished
• to protect memory against unauthorized accesses
• to simulate the appearance of a bigger main memory by moving data automatically between main memory and disk
• to keep track of which parts of memory are free and which parts are allocated (and to which process)
Swapping & Fragmentation
(figure: memory fragmentation)
Virtual Memory
• There is a need to run programs that are too large to fit in main memory.
• Observations:
– not all the code and data of a program have to be in main memory while the program is running, and
– this code and data does not have to be stored in contiguous locations.
• Virtual memory systems build on both observations: each program has its own address space, broken up into chunks called pages.
Virtual address spaces
• The virtual address space of a machine is the set of addresses that programs on that machine may generate.
• Each process has its own virtual address space.
• The virtual address space may be bigger than the physical address space.
Paging
• A paged system divides both the virtual and physical address spaces into fixed-size pages.
• A virtual page can be mapped onto any physical page. As the job of a physical page is to hold a virtual page, physical pages are also called page frames.
• During the lifetime of a process, the pages in its virtual address space all start out on disk, and may move between disk and memory any number of times.
• When a virtual page is moved to disk and back again, it may be put in a different page frame than before.
Paging (1–3)
The relation between virtual addresses and physical memory addresses is given by the page table. Every page begins on a multiple of 4096 and ends 4095 addresses higher, so 4K–8K really means 4096–8191 and 8K–12K means 8192–12287.
Memory Management Unit (MMU)
Which physical address does virtual address 8196 correspond to?
Page Mapping
Virtual page 2 is mapped to physical page 6. The mapping is performed by the MMU; all actual memory access is done with the physical address value.
Simple example: mapping to a physical address using MMU
8196 div 4096 = 2 (virtual page number)
8196 mod 4096 = 4 (offset)
Virtual page 2 is mapped to physical page 6, so logical/virtual address 8196 corresponds to physical address 6 × 4096 + 4 = 24580.
(figure: 16 4-KB pages; the page table maps virtual addresses to physical addresses)
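A minimal sketch of this translation in C (the array page_table, the constants, and the filled-in mapping are assumptions used only to reproduce the example above, not part of the slides):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                /* 4-KB pages, as in the example */
#define NUM_PAGES 16u                  /* 16 virtual pages, as in the figure */

/* Illustrative page table: entry i holds the page frame for virtual page i.
 * Only the mapping needed for the example (virtual page 2 -> frame 6) is set. */
static uint32_t page_table[NUM_PAGES] = { [2] = 6 };

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;           /* virtual page number: 8196 / 4096 = 2 */
    uint32_t offset = vaddr % PAGE_SIZE;           /* offset within the page: 8196 % 4096 = 4 */
    return page_table[vpn] * PAGE_SIZE + offset;   /* 6 * 4096 + 4 = 24580 */
}

int main(void) {
    printf("%u\n", (unsigned)translate(8196));     /* prints 24580 */
    return 0;
}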
Operation of paging
• The MMU must check that the selected page table entry has the present bit set to one (true) and that the permissions permit the requested memory access.
• If both conditions are met, it will construct the physical address using the physical page number field of the PTE. (If not: exception: page fault.)
• It will then set the referenced bit, and if the access was a write, it will also set the modified bit.
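A sketch of these checks in C. The PTE layout and flag names (PTE_PRESENT and so on) are assumptions for illustration; real hardware defines its own formats.

#include <stdbool.h>
#include <stdint.h>

#define PTE_PRESENT    (1u << 0)
#define PTE_WRITABLE   (1u << 1)
#define PTE_REFERENCED (1u << 2)
#define PTE_MODIFIED   (1u << 3)
#define PAGE_SHIFT     12              /* 4-KB pages */

typedef struct { uint32_t flags; uint32_t frame; } pte_t;

/* Returns true and fills *paddr on success; false means "raise a page fault". */
bool mmu_translate(pte_t *pte, uint32_t offset, bool is_write, uint32_t *paddr) {
    if (!(pte->flags & PTE_PRESENT))                 /* page not in memory */
        return false;
    if (is_write && !(pte->flags & PTE_WRITABLE))    /* permission check */
        return false;

    *paddr = (pte->frame << PAGE_SHIFT) | offset;    /* physical page number + offset */
    pte->flags |= PTE_REFERENCED;                    /* set the referenced (R) bit */
    if (is_write)
        pte->flags |= PTE_MODIFIED;                  /* set the modified (M) bit */
    return true;
}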
Speeding Up Paging
Major issues faced:
1. The mapping from virtual address to physical address must be fast.
2. If the virtual address space is large, the page table will be large.
Translation Lookaside Buffer (TLB)
• Without optimization, each memory reference by the program in a paging system would require two memory accesses: one to access the PTE and one to access the data.
• To avoid this performance penalty, paged computers have a translation lookaside buffer (TLB) in the MMU.
• The TLB serves as a cache, holding copies of recently accessed page table entries (PTEs).
• The TLB is implemented using associative circuitry that is much faster than main memory.
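A sketch of a TLB lookup in C, assuming a small fully associative TLB and a single-level page table as a stand-in for the full walk (sizes, names, and the round-robin replacement are assumptions; real TLBs compare all entries in parallel in hardware):

#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16                 /* small fully associative TLB (size assumed) */
#define TABLE_PAGES 64                 /* illustrative page table size */

typedef struct { bool valid; uint32_t vpn; uint32_t frame; } tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint32_t page_table[TABLE_PAGES];   /* stand-in for the in-memory page table */
static int next_slot;                      /* trivial round-robin replacement */

uint32_t lookup_frame(uint32_t vpn) {
    for (int i = 0; i < TLB_ENTRIES; i++)            /* TLB hit: no extra memory access */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].frame;

    uint32_t frame = page_table[vpn];                /* TLB miss: consult the PTE in memory */
    tlb[next_slot] = (tlb_entry_t){ true, vpn, frame };  /* cache the translation */
    next_slot = (next_slot + 1) % TLB_ENTRIES;
    return frame;
}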
Translation Lookaside Buffers (figure)
Virtual Pages: Where is it? (figure)
Page fault handling
• If the page fault is caused by the permissions being violated, the page fault handler will usually terminate the process.
• Otherwise, the OS must:
– suspend the process,
– free up a page frame (if there are none available now),
– load the required virtual page from swap space into a free page frame,
– cause the MMU to map the virtual page onto the physical page, and
– restart the process at the same instruction.
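The same steps, sketched as C-style pseudocode. Every helper here (pick_victim_frame, read_page_from_swap, and so on) is a hypothetical placeholder for a kernel service, not a real API:

#include <stdbool.h>

typedef struct { int frame; bool present; } pte_t;
typedef struct process process_t;

/* Hypothetical kernel services assumed by this sketch. */
void suspend_process(process_t *p);
void terminate_process(process_t *p);
bool free_frame_available(void);
int  allocate_frame(void);
int  pick_victim_frame(void);            /* runs a page replacement algorithm */
void evict_frame(int frame);             /* writes the page back if it was modified */
void read_page_from_swap(process_t *p, int vpn, int frame);
void restart_faulting_instruction(process_t *p);

void handle_page_fault(process_t *p, int vpn, pte_t *pte, bool permissions_ok) {
    if (!permissions_ok) {               /* permission violation */
        terminate_process(p);
        return;
    }
    suspend_process(p);                  /* suspend the process */

    int frame;
    if (free_frame_available()) {
        frame = allocate_frame();
    } else {                             /* free up a page frame */
        frame = pick_victim_frame();
        evict_frame(frame);
    }

    read_page_from_swap(p, vpn, frame);  /* load the virtual page from swap space */
    pte->frame = frame;                  /* map the virtual page onto the physical page */
    pte->present = true;
    restart_faulting_instruction(p);     /* restart at the same instruction */
}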
Memory Replacement Algorithms
• Decide what to remove to make space
– Which process’s page to remove? Local vs. global
– Which page to remove?
• Discard the page, if not modified
• Write to disk, if modified
Page Replacement Algorithms
• Optimal algorithm
• Not recently used algorithm
• First-in, first-out (FIFO) algorithm
• Second-chance algorithm
• Clock algorithm
• Least recently used (LRU) algorithm
• Working set algorithm
• WSClock algorithm
Not Recently Used (NRU) Algorithm
• At a page fault, the system inspects all pages.
• Pages are grouped into categories based on the current values of their R and M bits:
– Class 0: not referenced, not modified.
– Class 1: not referenced, modified.
– Class 2: referenced, not modified.
– Class 3: referenced, modified.
• The R and M bits are updated on each memory reference.
• NRU evicts a page (at random) from the lowest-numbered non-empty class.
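A sketch of the NRU eviction choice in C, assuming the R and M bits have been copied into a small per-page structure (NUM_PAGES and the structure itself are illustrative):

#include <stdbool.h>

#define NUM_PAGES 8                      /* illustrative number of resident pages */

typedef struct { bool referenced; bool modified; } page_t;

/* Class = 2*R + M, exactly as in the list above. NRU evicts a page from the
 * lowest-numbered non-empty class; this sketch takes the first such page
 * (a real implementation may choose one at random within the class). */
int nru_pick_victim(const page_t pages[NUM_PAGES]) {
    for (int cls = 0; cls < 4; cls++)
        for (int i = 0; i < NUM_PAGES; i++)
            if (2 * pages[i].referenced + pages[i].modified == cls)
                return i;
    return 0;                            /* unreachable: some class is non-empty */
}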
Second-chance algorithm
Operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are their load times.
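A sketch of second chance in C over a simple FIFO array (the fixed-size array and in-place shifting are simplifications; a linked list or circular buffer is more usual):

#include <stdbool.h>

typedef struct { int page; bool referenced; } entry_t;

/* q[0] is the oldest page. Returns the page to evict and shrinks the queue.
 * If the oldest page's R bit is set, the bit is cleared and the page is moved
 * to the tail as if it had just been loaded; otherwise that page is evicted. */
int second_chance_evict(entry_t q[], int *n) {
    for (;;) {
        entry_t head = q[0];
        for (int i = 1; i < *n; i++)      /* remove the head from the queue */
            q[i - 1] = q[i];

        if (head.referenced) {            /* give it a second chance */
            head.referenced = false;
            q[*n - 1] = head;             /* re-append at the tail */
        } else {
            (*n)--;                       /* evict this page */
            return head.page;
        }
    }
}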
Least Recently Used (LRU)
• Observation: pages heavily used recently will probably be heavily used again soon.
• How to implement in hardware?
– Keep a global instruction counter.
– Store the counter value with each page on every reference.
– Evict the page with the lowest value.
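A sketch of that hardware scheme in C (the 64-bit counter and the arrays are illustrative; in real hardware the counter value would be written into the page table entry on every reference):

#include <stdint.h>

#define NUM_PAGES 8                        /* illustrative number of resident pages */

static uint64_t instruction_counter;       /* global counter, incremented per reference */
static uint64_t last_use[NUM_PAGES];       /* counter value stored with each page */

void lru_on_reference(int page) {
    last_use[page] = ++instruction_counter;   /* record the time of most recent use */
}

int lru_pick_victim(void) {
    int victim = 0;
    for (int i = 1; i < NUM_PAGES; i++)        /* evict the page with the lowest value */
        if (last_use[i] < last_use[victim])
            victim = i;
    return victim;
}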
Simulating LRU in Software (Aging)
• Keep a bit counter (shift register) for each page.
• At each clock tick: shift each counter right by 1 bit; if the page was referenced during that tick, set the leftmost bit to 1.
• Evict the page with the lowest counter value.
TB 3-17. The aging algorithm simulates LRU in software. Shown are six pages for five clock ticks. The five clock ticks are represented by (a) to (e).
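A sketch of the aging algorithm in C. The six pages follow the TB 3-17 setup; the 8-bit counter width is a typical choice, not specified on the slide.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PAGES 6                       /* six pages, as in TB 3-17 */

static uint8_t age[NUM_PAGES];            /* one 8-bit aging counter per page */
static bool    referenced[NUM_PAGES];     /* R bits set by the MMU since the last tick */

/* At each clock tick: shift each counter right by one bit and feed the page's
 * R bit into the leftmost position, then clear the R bits for the next tick. */
void aging_tick(void) {
    for (int i = 0; i < NUM_PAGES; i++) {
        age[i] >>= 1;
        if (referenced[i])
            age[i] |= 0x80;               /* set the leftmost (most significant) bit */
        referenced[i] = false;
    }
}

/* The page with the smallest counter value was least recently used. */
int aging_pick_victim(void) {
    int victim = 0;
    for (int i = 1; i < NUM_PAGES; i++)
        if (age[i] < age[victim])
            victim = i;
    return victim;
}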
Temporal and spatial locality
• Working set: the set of pages currently used by a process.
• If the memory available to a process is too small to hold its working set, thrashing occurs: frequent page faults.
• Locality: data access is not uniform throughout memory; accesses tend to cluster in certain areas over certain periods of time:
– consecutive locations of code
– within a data structure
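A sketch of a working-set membership test in C, assuming per-page "time of last use" values maintained from the R bits and a window TAU measured in clock ticks (the window size, the field names, and the structure are assumptions; this is only the membership test, not a full working set replacement algorithm):

#include <stdbool.h>
#include <stdint.h>

#define TAU 100                            /* working-set window, in clock ticks (assumed) */

typedef struct {
    uint64_t time_of_last_use;             /* updated from the R bit at each clock tick */
} page_info_t;

/* A page belongs to the working set if it was used within the last TAU ticks. */
bool in_working_set(const page_info_t *p, uint64_t current_virtual_time) {
    return current_virtual_time - p->time_of_last_use <= TAU;
}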
Summary
• Memory hierarchy
• Memory allocation
• Virtual memory
• Page replacement algorithms
Acknowledgement
• The slides were prepared by .
• Some material is borrowed from slides developed by , , , , , and .
• Some of the images included in the notes were supplied as part of the teaching resources accompanying the text books listed in lecture 1.
• Reference: TB 3.3-3.3.3, 3.4 (excl. 3.4.5, 3.4.9)