12/04/2022, 09:27 Exam/Memory Management and Virtual Memory – COMP3231/COMP9201/COMP3891/COMP9283
Memory Management and Virtual Memory
1. Memory Management and Virtual Memory
1. Describe the difference between external and internal fragmentation. Indicate which of the two are most likely to be an issue on a) a simple memory management machine using base limit registers and static partitioning, and b) a similar machine using dynamic partitioning.
2. List and describe the four memory allocation algorithms covered in lectures. Which two of the four are more commonly used in practice?
3. Base-limit MMUs can support swapping. What is swapping? Can swapping permit an application requiring 16M memory to run on a machine with 8M of RAM?
4. Describe page-based virtual memory. You should consider pages, frames, page tables, and Memory Management Units in your answer.
5. Give some advantages of a system with page-based virtual memory compared to a simple system with base-limit registers that implements swapping.
6. Describe segmentation-based virtual memory. You should consider the components of a memory address, the segment table and its contents, and how the final physical address is formed in your answer.
7. What is a translation look-aside buffer? What does each entry contain?
8. Some TLBs support address space identifiers (ASIDs). Why?
9. Describe a two-level page table. How does it compare to a simple page table array?
10. What is an inverted page table? How does it compare to a two-level page table?
11. What are temporal locality and spatial locality?
12. What is the working set of a process?
13. How does the page size of a particular architecture affect working set size?
14. What is thrashing? How might it be detected? How might one recover from it once detected?
15. Enumerate some pros and cons for increasing the page size.
16. Describe two virtual memory page fetch policies. Which is less common in practice?
17. What operating system event might we observe and use as input to an algorithm that decides how many frames an application receives (i.e. an algorithm that determines the application’s resident set size)?
18. Name and describe four page replacement algorithms. Critically compare them with each other.
Describe the difference between external and internal fragmentation. Indicate which of the two are most likely to be an issue on a) a simple memory management machine using base limit registers and static partitioning, and b) a similar machine using dynamic partitioning.
Internal fragmentation: the space wasted internal to the allocated region. Allocated memory might be larger than requested memory.
External fragmentation: the space wasted external to the allocated region. Memory exists to satisfy a request but it’s unusable as it’s not contiguous.
Static partitioning is more likely to suffer from internal fragmentation because more memory is allocated to each process than needed and the desired partition size may not be divisible by the minimum unit of allocation.
Dynamic partitioning is more likely to suffer from external fragmentation because dynamic partitioning gives each process exactly as much memory as it needs from a larger chunk of free memory, leaving behind a fragment of memory that is often too small to use. Dynamically partitioned machines may use compaction to reduce external fragmentation.
List and describe the four memory allocation algorithms covered in lectures. Which two of the four are more commonly used in practice?
First fit – allocate into the first available gap found of adequate size, starting from the beginning of memory.
Next fit – allocate into the first available gap found, resuming the search from the last allocation.
Best fit – search all of memory to find the smallest valid gap and allocate into it, on the basis that it’ll leave the smallest unusable gap.
Worst fit – search and place into the largest area, on the basis that it is likely to leave a gap big enough for something else to use.
First and next fit are faster than best fit and worst fit because they don’t search through the entire list. As a result, they are more commonly used in practice.
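The four algorithms above can be sketched over a simple free list. The (start, size) hole representation below is an illustrative assumption, not from the lectures:

```python
def first_fit(holes, size):
    # Scan from the beginning; take the first hole large enough.
    for start, hole_size in holes:
        if hole_size >= size:
            return start
    return None

def next_fit(holes, size, last_index):
    # Like first fit, but resume the scan from the last allocation point.
    n = len(holes)
    for i in range(n):
        start, hole_size = holes[(last_index + i) % n]
        if hole_size >= size:
            return start
    return None

def best_fit(holes, size):
    # Search every hole; take the smallest one that still fits.
    candidates = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # Search every hole; take the largest one.
    candidates = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return max(candidates)[1] if candidates else None

holes = [(0, 8), (100, 32), (300, 16)]
print(first_fit(holes, 16))      # 100 (first adequate hole)
print(next_fit(holes, 16, 2))    # 300 (search resumes at the last hole)
print(best_fit(holes, 16))       # 300 (smallest adequate hole)
print(worst_fit(holes, 16))      # 100 (largest hole)
```

Note how best fit and worst fit must scan the whole list on every request, while first and next fit stop at the first match.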
Base-limit MMUs can support swapping. What is swapping? Can swapping permit an application requiring 16M memory to run on a machine with 8M of RAM?
Swapping is the act of bringing a whole process into main memory to run for a time, then writing it back out to disk, and vice versa. It is used when the system does not have enough main memory to hold all the currently active processes.
Assuming that an application is one program and hence a single process, swapping will not permit an application (process) requiring 16M of memory to run on a machine with 8M RAM (main memory) as the whole process is too large to fit into main memory.
Describe page-based virtual memory. You should consider pages, frames, page tables, and Memory Management Units in your answer.
Main memory is divided up into equal-sized physical frames.
The address space of a process is divided up into equal-sized virtual pages. Each process is given the illusion of running in its own address space.
Pages and frames are the same size. Pages are mapped to frames using a page table. The memory management unit (MMU) is the piece of hardware that converts virtual addresses to physical addresses using the page table. The translation lookaside buffer (TLB) is a common MMU implementation that caches recently used page table entries in associative memory.
When main memory is full, pages that haven’t been recently used can be written to disk. The frame that they were previously using can be used by a new page. The pages on disk can be loaded back into main memory as needed.
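A minimal sketch of the translation step, assuming 4K pages and a toy page table whose mappings are made up:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 9}

def translate(vaddr):
    """Split a virtual address into page number and offset, then map the
    page to its frame via the page table (what the MMU does in hardware)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault: page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 -> frame 3: 0x3234
```

A reference to an unmapped page raises here, standing in for the page fault that would let the OS load the page from disk.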
Give some advantages of a system with page-based virtual memory compared to a simple system with base-limit registers that implements swapping.
Abstracts organisation of physical memory.
Supports sharing of pages between processes.
Supports partially loading a process into memory, improving performance by allowing more processes to be active.
No external fragmentation.
A process does not need to be contiguous in memory in paging.
Avoids expensively loading an entire process from disk when swapping, making it faster to context switch.
Supports virtual address spaces larger than physical memory.
Describe segmentation-based virtual memory. You should consider the components of a memory address, the segment table and its contents, and how the final physical address is formed in your answer.
Segmentation supports the user’s view of memory. Each program is a collection of segments, where a segment may represent a function, an array, etc. Segments can be easily shared between processes.
The programmer must be aware of segmentation.
A memory address is composed of a segment number and an offset.
Segments are loaded into contiguous regions of physical memory. The segment table maps each segment number to a base and a limit. The base is the starting physical address of the segment and the limit is the length of the segment. Segments also record information on permissions.
The segment table base register (STBR) stores the starting location of the segment table for the current address space. The segment table length register (STLR) stores the number of segments in the current address space.
Converting a virtual address to a physical address in segmentation:
1. Check that the segment number is less than the segment table length register. If it isn’t, then there is an addressing error.
2. Look up the segment number in the segment table.
3. Check that the offset in the virtual address is less than the limit of the segment. If it isn’t, then there is an addressing error.
4. The physical address is the base of the segment plus the offset in the virtual address.
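The four steps can be sketched as follows; the segment table contents are illustrative assumptions:

```python
# Toy segment table: each entry is (base, limit).
segment_table = [
    (0x1000, 0x400),  # segment 0
    (0x8000, 0x100),  # segment 1
]
STLR = len(segment_table)  # segment table length register

def seg_translate(segment, offset):
    if segment >= STLR:                   # step 1: segment number in range?
        raise ValueError("addressing error: bad segment number")
    base, limit = segment_table[segment]  # step 2: look up the segment
    if offset >= limit:                   # step 3: offset within the segment?
        raise ValueError("addressing error: offset beyond limit")
    return base + offset                  # step 4: physical = base + offset

print(hex(seg_translate(1, 0x20)))  # 0x8020
```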
What is a translation look-aside buffer? What does each entry contain?
A translation look-aside buffer (TLB) is a high-speed associative hardware cache of recently used page table entries. It is a popular implementation of a memory management unit (MMU). A minimal TLB maps virtual page numbers to physical frame numbers. A TLB may also contain address space IDs (ASIDs) and bits that indicate whether an entry is valid, cached, referenced or global.
Some TLBs support address space identifiers (ASIDs). Why?
TLBs that support ASIDs are known as tagged TLBs. In TLBs that don’t support ASIDs, when you switch to another address space you need to flush the TLB (invalidate all entries), because you don’t know whether a TLB entry belongs to the current address space – the TLB is shared between all processes, but each page table entry is valid for only one process. This takes a long time. In tagged TLBs, you can simply change the current address space ID and instead check whether the address space of a TLB entry matches the current address space ID on each reference. This reduces the time it takes to perform a context switch. In addition, it allows multiple processes to have hot TLB entries, preventing expensive TLB misses after a context switch.
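A minimal sketch of a tagged-TLB lookup with made-up entries, showing why no flush is needed on a context switch:

```python
# Entries are keyed by (ASID, virtual page number), so two processes can
# cache the same page number simultaneously without conflict.
tlb = {
    (1, 0x10): 0x7,  # process 1, page 0x10 -> frame 0x7
    (2, 0x10): 0x3,  # process 2, same page number, different frame
}

def tlb_lookup(current_asid, vpn):
    # Hit only when both the ASID and the page number match;
    # None models a TLB miss.
    return tlb.get((current_asid, vpn))

print(tlb_lookup(1, 0x10))  # 7
print(tlb_lookup(2, 0x10))  # 3 (entry stayed hot across the switch)
```

Switching address spaces only changes the `current_asid` argument; no entry is invalidated.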
Describe a two-level page table. How does it compare to a simple page table array?
A two-level page table maps virtual page numbers to physical frame numbers. It differs from a simple page table array in that not every entry in the page table has memory allocated for it. Two-level page tables only allocate memory for a second level table if it contains a resident virtual page. As a result, two-level page tables use considerably less memory than a simple page table on average. However, two-level page tables require 2 memory references to look up a mapping instead of 1, so they save memory at the expense of performance.
For a 32-bit (4GB) virtual address space with 4K pages/frames: the top-level page table is an array of 1024 pointers to second-level arrays of page table entries (frame numbers and permissions). The most significant bits of the virtual address (VADDR[31:22]) index the top-level array, and the next bits (VADDR[21:12]) index the selected second-level table.
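The bit slicing described above can be sketched directly (the example address is arbitrary):

```python
def split_vaddr(vaddr):
    """Split a 32-bit virtual address for a two-level page table with
    4 KiB pages: 10-bit top index, 10-bit second index, 12-bit offset."""
    top    = (vaddr >> 22) & 0x3FF   # VADDR[31:22]: top-level index
    second = (vaddr >> 12) & 0x3FF   # VADDR[21:12]: second-level index
    offset = vaddr & 0xFFF           # VADDR[11:0]:  offset within the page
    return top, second, offset

print(split_vaddr(0xDEADBEEF))  # (890, 731, 3823)
```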
What is an inverted page table? How does it compare to a two-level page table?
An inverted page table has one entry per physical frame, indexed by frame number. To translate, the virtual page number is hashed to select an entry; the entry’s stored page number is compared against the one being looked up (with a linked-list structure to handle hash collisions). The index of the matching entry is the frame number.
Page table size is proportional to virtual address space size. This is particularly important on 64-bit machines, where the virtual address space is 2^64 bytes (16 exabytes). Even a 32-bit address space with 4K pages needs a substantial simple page table array:
Memory for page table array = 2^20 PTEs * 4 bytes each
= 2^22 bytes (4MB) per process
A multi-level page table covering a full 64-bit address space would require too many levels of indirection, making lookups slow, and would still require a lot of space to store.
An inverted page table is an array of page numbers sorted (indexed) by frame number. The virtual address is hashed to locate the entry in the frame table. The most important advantage is that its size is related to physical memory size, not virtual address space size. Inverted page tables are good for systems with a large address space but significantly less physical memory (e.g. 64-bit addressing with 2GB of RAM).
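A rough sketch of the hashed lookup; the table contents are illustrative, and linear probing stands in for the linked-list collision handling a real design would use:

```python
NUM_FRAMES = 8
# One slot per physical frame; each slot holds (asid, vpn) or None.
# The slot index IS the frame number.
frame_table = [None] * NUM_FRAMES
frame_table[5] = (1, 0x42)  # process 1's page 0x42 lives in frame 5

def ipt_lookup(asid, vpn):
    # Hash to a candidate frame, then probe until the tags match.
    start = hash((asid, vpn)) % NUM_FRAMES
    for i in range(NUM_FRAMES):
        frame = (start + i) % NUM_FRAMES
        if frame_table[frame] == (asid, vpn):
            return frame
    return None  # not resident: page fault

print(ipt_lookup(1, 0x42))  # 5
```

The table has NUM_FRAMES entries regardless of how large the virtual address space is, which is the whole point.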
What are temporal locality and spatial locality?
Temporal locality: items that were recently used are likely to be used again in the near future.
Spatial locality: items that are close to each other in space are likely to be used close to each other in time.
What is the working set of a process?
The working set of a process consists of all the pages used by the process over some period of time. This approximates the locality of the process.
How does page size of a particular architecture affect working set size?
If the page size is small, the working set size is smaller (in terms of memory used, not the absolute number of pages), as the pages more accurately reflect current memory usage (i.e. there is less unrelated data within each page).
If the page size is large, working set also increases as more unrelated data within the larger pages are included in the current working set.
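A toy illustration of the effect (the reference string and page sizes are made up): the same references span fewer, larger pages as the page size grows, but the working set occupies more memory overall.

```python
def working_set(refs, k, page_size):
    """Set of pages touched by the last k memory references."""
    return {addr // page_size for addr in refs[-k:]}

refs = [0x0000, 0x0010, 0x1004, 0x1008, 0x2000]
ws_4k = working_set(refs, 5, 4096)
ws_8k = working_set(refs, 5, 8192)
print(len(ws_4k) * 4096)  # 12288 bytes resident with 4 KiB pages (3 pages)
print(len(ws_8k) * 8192)  # 16384 bytes resident with 8 KiB pages (2 pages)
```

Fewer pages, but more total memory: the larger pages drag in unrelated data around each reference.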
What is thrashing? How might it be detected? How might one recover from it once detected?
Thrashing occurs when the total working set size of all processes exceeds the size of physical memory. As a result, a process page faults, replaces a page and blocks waiting on IO. Then, the next process also page faults, replaces a page and also blocks to wait for the faulting page to be loaded from disk. This repeats with other processes, leading to an increase in page faults and decrease in CPU usage, which can be used to detect thrashing.
To recover from thrashing, suspend a few processes so that their resident sets move to the backing store. This frees up physical memory for the remaining processes so that they can load in missing pages from their working sets. Thrashing can also be avoided by using more RAM.
Enumerate some pros and cons for increasing the page size.
Pros:
Reduces page table size.
Increases TLB coverage.
Increases swapping I/O throughput, as small disk transfers are dominated by seek and rotational delays.
Cons:
Increases page fault latency, as swap response time is slower due to more data being transferred per fault.
Increases internal fragmentation of pages: more of a page can potentially be wasted on a piece of data that is too small to fill it.
Describe two virtual memory page fetch policies. Which is less common in practice? Why?
1. Demand paging – relevant pages are loaded as page faults occur.
2. Prepaging – try to load pages for a process before they’re accessed.
Prepaging is less common in practice:
Wastes I/O bandwidth if pages are loaded unnecessarily.
Difficult to implement.
Can replace a page in the working set with an unused page.
What operating system event might we observe and use as input to an algorithm that decides how many frames an application receives (i.e. an algorithm that determines the application’s resident set size)?
Monitor page fault frequency of an application and compare it to an ideal page fault frequency. If an application’s page fault frequency is too high, give it an extra frame and vice versa.
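A sketch of this page-fault-frequency idea; the threshold values are illustrative assumptions:

```python
def adjust_resident_set(frames, fault_rate, high=0.05, low=0.01):
    """Grow or shrink an application's frame allocation based on its
    observed page fault rate (faults per memory reference)."""
    if fault_rate > high:
        return frames + 1   # faulting too often: grant an extra frame
    if fault_rate < low and frames > 1:
        return frames - 1   # faulting rarely: reclaim a frame for others
    return frames           # within the ideal band: leave it alone

print(adjust_resident_set(10, 0.08))   # 11
print(adjust_resident_set(10, 0.001))  # 9
print(adjust_resident_set(10, 0.02))   # 10
```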
Name and describe four page replacement algorithms. Critically compare them with each other.
From best to worst (lecture notes):
1) Optimal – use time travel to find the page that won’t be used for the longest time and boot it. Impossible to implement – used only as a theoretical reference point for comparing other algorithms.
2) Least Recently Used – evict the page that hasn’t been used for the longest time. Impossible to implement efficiently in practice – requires a timestamp on every memory reference.
3) Clock page replacement – set the ‘referenced’ bit to 1 when a page is used. When looking for an entry to boot, set these bits to 0 if they’re 1, and evict the first page found with a 0 bit, resuming each search from the last one booted. An efficient (implementable) approximation of LRU – used in practice.
4) FIFO – evict the page that has been resident longest. Does not consider actual memory usage in its decision: it may evict a heavily used page simply because it is the oldest.
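The clock algorithm in 3) can be sketched as follows (the access sequence is illustrative):

```python
class Clock:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames      # page held by each frame
        self.referenced = [False] * num_frames # per-frame referenced bit
        self.hand = 0

    def access(self, page):
        if page in self.frames:                # hit: set the referenced bit
            self.referenced[self.frames.index(page)] = True
            return None
        # Miss: sweep the hand, giving referenced pages a second chance.
        while self.referenced[self.hand]:
            self.referenced[self.hand] = False
            self.hand = (self.hand + 1) % len(self.frames)
        victim = self.frames[self.hand]
        self.frames[self.hand] = page
        self.referenced[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
        return victim  # evicted page (None if the frame was free)

clock = Clock(3)
for p in ["A", "B", "C", "D", "B", "E"]:
    clock.access(p)
print(clock.frames)  # ['D', 'B', 'E']: B's referenced bit saved it; C was evicted
```

Loading "D" evicts "A" after one full sweep clears every bit; the later hit on "B" re-sets its bit, so the next miss ("E") skips "B" and evicts "C" instead.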
Exam/Memory Management and Virtual Memory (last edited 2019-05-11 21:35:26 by KevinElphinstone)