
UNIVERSITY of WISCONSIN-MADISON Computer Sciences Department
CS 537
Introduction to Operating Systems
Andrea C. Arpaci-Dusseau Remzi H. Arpaci-Dusseau
Exam 1: Review
Questions answered in this lecture:
What are some useful things to remember about virtualization?

Announcements
P2: Graded in handin and Canvas; major problems contact TAs
• Lessons: Problems with instructional machines at the last minute
P3:
• Future assignments may be due at 5pm instead of midnight
• Turn in a reasonable version before the last minute
• Small changes less problematic vs. no code at all
• Handin: Use "cp -p" (or "scp -p") to keep modification time, access time
P4: Available later today: xv6 multi-level scheduling
• Choose your own project partner
• Will post form on Canvas if you want us to assign (binding)

Midterm 1
Two hours: 7:30 - 9:30 pm, Thursday 10/10
• Room Ingraham 19: If you are enrolled in Discussion Section 301 (Wed 11-11:50am)
• Room Ingraham B10: All other discussion sections
Bring #2 pencils and student id (need id number)
All multiple choice, fill in scantron bubbles
Covers everything so far in course:
• Lectures including last Thursday (threads) + Reading + Quizzes + P1
• Chapters 1 – 26, excluding 10 (Multiprocessor Scheduling), 17 (Free-Space Management), and 23 (VAX/VMS Virtual Memory System)
• Look over sample exams

Review: Easy Piece 1 (Virtualization)
CPU: Process, Context Switch, Schedulers
Memory: Address Space, Dynamic Relocation, Segmentation, Paging, TLBs, Multilevel, Swapping

What questions did you ask?

How are System Calls Performed?
[Diagram: Process P and the OS, both resident in RAM]
P can only see its own memory because of user mode (other areas, including the kernel, are hidden)
P wants to call read() but has no way to call it directly
OS is not part of P's address space
System Call
[Diagram: Process P and the OS in RAM]
read() in Process P is implemented as: movl $6, %eax; int $64
• $6 is the syscall-table index; $64 is the trap-table index
• int $64 traps into the OS through the 64th entry of the trap table

System Call
movl $6, %eax; int $64
After the trap through the 64th trap-table entry, the CPU runs OS code in kernel mode: we can do anything!

System Call
Follow entries to correct system call code:
• int $64 -> 64th entry of the trap table -> OS syscall handler
• %eax = 6 -> 6th entry of the syscall table -> sys_read

System Call
Kernel can access user memory to fill in the user buffer (buf)
return-from-trap at end to return to Process P
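Putting the pieces together, a minimal sketch of what a user-level read() stub could look like under the slides' conventions (syscall number 6, trap vector 64). The argument-passing registers here are invented for illustration; real systems such as Linux or xv6 use different numbers and conventions.

    /* Hypothetical user-level read() stub following the slides' numbers:
     * %eax = 6 selects sys_read in the syscall table, int $64 traps into
     * the OS. Argument registers are an assumption for this sketch. */
    static inline long my_read(int fd, void *buf, long count)
    {
        long ret;
        asm volatile(
            "movl $6, %%eax\n\t"   /* syscall-table index (sys_read)          */
            "int  $64\n"           /* trap-table index: enter kernel mode     */
            : "=a"(ret)            /* kernel's return value comes back in %eax */
            : "D"(fd), "S"(buf), "d"(count)
            : "memory");
        return ret;
    }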

HW, OS, or User Process?

Process API: HW in Book
Write a program using fork(). The child process should print "hello"; the parent process should print "goodbye". You should try to ensure that the child process always prints first; can you do this without calling wait() in the parent?
• waitpid(), sleep, or other synchronization primitives such as condition variables and semaphores (next topic!)
Is it possible for a child process to wait for a parent, or does it always have to be the other way around?
• wait() and waitpid() apply to child processes
The typical workflow of creating a new process is to call exec in the child after forking. Would there ever be a reason to create a child and call exec in the parent instead?
• No good reason I can think of
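For the first question above, one sketch of an answer that avoids wait() in the parent: the parent blocks on a pipe read until the child has printed and closed its end.

    /* Sketch: child prints first without wait() in the parent, using a pipe. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char byte;

        if (pipe(fds) < 0) { perror("pipe"); exit(1); }

        if (fork() == 0) {                 /* child */
            close(fds[0]);                 /* child only writes              */
            printf("hello\n");
            fflush(stdout);
            close(fds[1]);                 /* signals parent: EOF on its read */
            exit(0);
        }

        /* parent */
        close(fds[1]);                     /* parent only reads               */
        read(fds[0], &byte, 1);            /* returns 0 (EOF) once child closes */
        printf("goodbye\n");
        return 0;
    }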

Process API
If a parent and a child can access the same file descriptor, why does closing a file descriptor in a child not affect the parent process? Is it just because the file descriptor table is unique for each, but each entry references the same file?
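A small sketch illustrating the behavior in question: after fork() each process has its own descriptor table (whose entries initially refer to the same underlying open file), so close() in the child does not invalidate the parent's descriptor. The file path below is just a placeholder.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);   /* placeholder path */
        char c;

        if (fork() == 0) {        /* child */
            close(fd);            /* removes only the child's table entry */
            _exit(0);
        }
        wait(NULL);               /* child has already closed its copy     */

        /* parent's descriptor is still valid: read succeeds */
        printf("read returned %zd\n", read(fd, &c, 1));
        return 0;
    }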

Multi-level Feedback Queue (MLFQ) Rules
Rule 1: If priority(A) > priority(B), A runs
Rule 2: If priority(A) == priority(B), A & B run in RR
More rules:
Rule 3: Processes start at top priority
Rule 4: If job uses whole slice, demote process (longer time slices at lower priorities)
Example queues: Q3: A | Q2: B | Q1: (empty) | Q0: C, D

Job that performs I/O Periodically
[Figure: queues Q3-Q0 over time 0 to 20]
Stays in the Q1 queue as long as it doesn't use the entire Q1 timeslice

Prevent Gaming
Problem: High-priority job could trick the scheduler and get more CPU by performing I/O right before its time-slice ends
Fix: Account for the process's total run time at its priority level; downgrade when it exceeds a threshold
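A sketch of that accounting fix (names and time values are invented for illustration; this is not xv6's or any production scheduler's code): charge each tick the job runs at its current level and demote once the level's allotment is used, even if the job did I/O before each slice expired.

    /* Hypothetical MLFQ bookkeeping sketch. */
    #define LEVELS 4

    struct job {
        int level;              /* current queue: LEVELS-1 is highest priority */
        int used_ms;            /* CPU time consumed at this level             */
    };

    /* Longer time slices / allotments at lower priorities (index 0 = lowest). */
    static const int allotment_ms[LEVELS] = { 80, 40, 20, 10 };

    /* Called once per millisecond tick while j is running. */
    void charge_tick(struct job *j)
    {
        j->used_ms++;
        /* Gaming fix: demote based on total time used at this level, even if
         * the job performed I/O right before each time slice expired. */
        if (j->used_ms >= allotment_ms[j->level] && j->level > 0) {
            j->level--;
            j->used_ms = 0;
        }
    }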

How are Virtual Addresses Generated?
What do addresses look like from the program's perspective? (from the user process's perspective)
Generated by the compiler and the contents of registers

Virtual Memory Accesses
Initial %rip = 0x10, %rbp = 0x200
Code:
  0x10: movl 0x8(%rbp), %edi
  0x13: addl $0x3, %edi
  0x19: movl %edi, 0x8(%rbp)
%rbp is the base pointer: points to the base of the current stack frame
%rip is the instruction pointer (or program counter, PC)
Memory accesses to what virtual addresses? Include instruction fetches!
1) Fetch instruction at addr 0x10; Exec: 2) load from addr 0x208
3) Fetch instruction at addr 0x13; Exec: no memory access
4) Fetch instruction at addr 0x19; Exec: 5) store to addr 0x208

High-level approaches for virtualizing memory?
Match each description with the name of the approach:
1. one process uses RAM at a time
2. rewrite code and addresses before running
3. add per-process starting location to virt addr to obtain phys addr
4. dynamic approach that verifies address is in valid range
5. several base+bound pairs per process
Candidates: Time Sharing, Static Relocation, Base, Base+Bounds, Segmentation
Which need hardware support?

Review: 4) Base and Bounds Advantages
Provides protection (both read and write) across address spaces
Supports dynamic relocation
• Can place process at different locations initially and also move address spaces
Simple, inexpensive implementation: Few registers, little logic in MMU
Fast: Add and compare in parallel
Translation (MMU registers: base, 32 bits; bounds, 32 bits; mode, 1 bit):
• mode = user? -> is logical address < bounds?
• no -> error; yes -> physical address = logical address + base
• Subtraction for stack segment (not on exam)

Paging and Segmentation
73) One advantage of adding segmentation to paging is that it potentially reduces the size of the page table.
True; only need page table entries for valid pages in each separate segment.
74) One advantage of adding paging to segmentation is that it reduces the amount of internal fragmentation.
False; paging has internal fragmentation.

Quiz: Given known page size, how many bits are needed in the address to specify the offset in the page? (Assuming byte-addressable architecture)
• 16 bytes -> 4 offset bits
• 1 KB -> 10
• 1 MB -> 20
• 512 bytes -> 9
• 4 KB -> 12

Quiz: Given the number of bits in the virtual address and the bits for the offset, how many bits for the virtual page number (VPN)?
• 16-byte pages (4 offset bits), 10-bit virtual address -> 6 VPN bits
• 1 KB (10), 20-bit -> 10
• 1 MB (20), 32-bit -> 12
• 512 bytes (9), 16-bit -> 7
• 4 KB (12), 32-bit -> 20

Quiz: Given the number of bits for the VPN, how many virtual pages can there be in an address space?
• 6 VPN bits -> 64 virtual pages
• 10 -> 1K
• 12 -> 4K
• 7 -> 128
• 20 -> 1M
Tells you how many entries are needed in page tables!
Number of bits in the virtual address format does not need to equal the number of bits in the physical address format
VIRTUAL => PHYSICAL Page Mapping
[Diagram: virtual address (VPN | offset) -> Addr Mapper -> physical address (PPN | offset); the offset bits pass through unchanged]
How should OS translate VPN to PPN?
For segmentation, OS used a formula (e.g., phys addr = virt_offset + base_reg)
For paging, OS needs a more general mapping mechanism
What data structure is good? Big array: the page table
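A minimal sketch of that "big array" idea, using made-up sizes (16-bit virtual addresses, 4 KB pages) rather than any particular machine's format:

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up layout for illustration: 16-bit virtual address, 4 KB pages,
     * so 4 bits of VPN and 12 bits of offset. Not a real architecture. */
    #define OFFSET_BITS 12
    #define VPN_BITS    4
    #define NUM_PAGES   (1u << VPN_BITS)

    /* Linear page table: indexed directly by VPN; each entry holds a PPN. */
    static uint32_t page_table[NUM_PAGES];

    uint32_t translate(uint16_t vaddr)
    {
        uint32_t vpn    = vaddr >> OFFSET_BITS;
        uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);
        uint32_t ppn    = page_table[vpn];        /* the "big array" lookup */
        return (ppn << OFFSET_BITS) | offset;     /* offset passes through  */
    }

    int main(void)
    {
        page_table[3] = 0x7;                      /* map VPN 3 -> PPN 7 */
        printf("0x%x\n", translate(0x3abc));      /* prints 0x7abc      */
        return 0;
    }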

Where Are Page Tables Stored?
How big is a typical page table?
– assume 32-bit address space
– assume 4 KB pages
– assume 4-byte page table entries (PTEs)
Page table size = Num entries * size of each entry
• Num entries = num virtual pages = 2^(bits for VPN)
• Bits for VPN = 32 – number of bits for page offset = 32 – lg(4KB) = 32 – 12 = 20
• Num entries = 2^20 = 1M
Page table size = Num entries * 4 bytes = 4 MB
Final answer: 2^(32 – lg(4KB)) * 4 = 4 MB
Implication: Store each page table in memory
Hardware finds the page table base with a register (e.g., CR3 on x86)
What happens on a context switch?
• Save old page table base register in PCB of descheduled process
• Change contents of page table base register to that of the newly scheduled process
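A sketch of that context-switch step; the PCB field and register-access helpers below are invented names for illustration (on x86 the register itself is CR3, and xv6 has its own helpers for loading it).

    /* Hypothetical PCB and address-space switch sketch; not xv6 or Linux code. */
    struct pcb {
        unsigned long pgtbl_base;    /* physical address of this process's page table */
        /* ... saved registers, state, etc. ... */
    };

    /* Assumed helpers that read/write the hardware page-table base register
     * (CR3 on x86). */
    extern unsigned long read_pgtbl_base_reg(void);
    extern void write_pgtbl_base_reg(unsigned long pa);

    void switch_address_space(struct pcb *prev, struct pcb *next)
    {
        prev->pgtbl_base = read_pgtbl_base_reg();   /* save base of descheduled process */
        write_pgtbl_base_reg(next->pgtbl_base);     /* point MMU at new process's table */
    }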

How Big Are Page Tables?
How big is each page table?
1. PTEs are 2 bytes, and there are 32 possible virtual page numbers
   32 * 2 bytes = 64 bytes
2. PTEs are 2 bytes, virtual addrs are 24 bits, pages are 16 bytes
   2 bytes * 2^(24 – lg 16) = 2^21 bytes (2 MB)
3. PTEs are 4 bytes, virtual addrs are 32 bits, and pages are 4 KB
   4 bytes * 2^(32 – lg 4K) = 2^22 bytes (4 MB)
4. PTEs are 4 bytes, virtual addrs are 64 bits, and pages are 4 KB
   4 bytes * 2^(64 – lg 4K) = 2^54 bytes
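All four follow the same formula (entries = 2^(virtual address bits − offset bits), size = entries × PTE bytes); a quick sketch to check the arithmetic:

    #include <stdio.h>

    /* Page table size = 2^(virtual address bits - offset bits) * PTE size. */
    static unsigned long long pt_size(int vaddr_bits, int offset_bits, int pte_bytes)
    {
        return (1ULL << (vaddr_bits - offset_bits)) * pte_bytes;
    }

    int main(void)
    {
        printf("1) %llu bytes\n", 32ULL * 2);          /* 32 VPNs given directly: 64 */
        printf("2) %llu bytes\n", pt_size(24, 4, 2));  /* 2^21 = 2 MB */
        printf("3) %llu bytes\n", pt_size(32, 12, 4)); /* 2^22 = 4 MB */
        printf("4) %llu bytes\n", pt_size(64, 12, 4)); /* 2^54        */
        return 0;
    }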

3) Multilevel Page Tables
Goal: Allow each page table to be allocated non-contiguously
Idea: Page the page tables
• Creates multiple levels of page tables; outer level is the "page directory"
• Only allocate page tables for pages in use
• Used in x86 architectures (hardware can walk the known structure)
30-bit address: outer page (8 bits) | inner page (10 bits) | page offset (12 bits)
A register holds the base of the page directory

Simulations: Multi-Level PageTables
Each page is 32 bytes
The virtual address space for the process in question (assume only one) is 1024 pages. Physical memory consists of 128 pages.
Thus, a virtual address needs 15 bits (5 for the offset, 10 for the VPN). A physical address requires 12 bits (5 offset, 7 for the PFN).
Each PTE is 1 byte.
The system assumes a multi-level page table. Thus, the upper five bits of a virtual address are used to index into a page directory; the page directory entry (PDE), if valid, points to a page of the page table. Each page table page holds 32 page-table entries (PTEs). Each PTE, if valid, holds the desired translation (physical frame number, or PFN) of the virtual page in question.
The format of a PTE is thus: VALID | PFN6 … PFN0
and is thus 8 bits or 1 byte.
The format of a PDE is essentially identical:
VALID | PT6 … PT0

Simulations: Multi-level Pagetables
Virtual address 0x611c -> contents?
0x611c = 0110 0001 0001 1100
PDBR: 108 (decimal) [page directory is held in this page]
page 108: 83 fe e0 da 7f d4 7f eb be 9e d5 ad e4 ac 90 d6 92 d8 c1 f8 9f e1 ed e9 a1 e8 c7 c2 a9 d1 db ff
Which entry of the page directory? PDE index: 16+8 = 24 -> 0xa1 -> 1010 0001 -> valid, page table at page 33
page 33: 7f 7f 7f 7f 7f 7f 7f 7f b5 7f 9d 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f 7f f6 b1 7f 7f 7f 7f
Which entry of the page table? PTE index: 8 -> 0xb5 -> 1011 0101 -> valid, page 32+16+4+1 = 53
page 53: 0f 0c 18 09 0e 12 1c 0f 08 17 13 07 1c 1e 19 1b 09 16 1b 15 0e 03 0d 12 1c 1d 0e 1a 08 18 11 00
Which offset on the page? offset: 16+8+4 = 28
Final data value: 08!
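A sketch of the same lookup in code, assuming the simulator's format described above (32-byte pages, 5-bit page-directory index, 5-bit page-table index, 5-bit offset, 1-byte entries of the form VALID | PFN); the function and parameter names are invented:

    #include <stdint.h>

    #define PAGE_SIZE  32
    #define VALID_BIT  0x80
    #define PFN_MASK   0x7f

    /* mem[] models physical memory as 128 pages * 32 bytes; pdbr is the page
     * directory's page number (e.g., 108 in the example above). Returns 0 and
     * fills *out on success, -1 if a PDE or PTE is invalid. */
    int lookup(const uint8_t *mem, int pdbr, uint16_t vaddr, uint8_t *out)
    {
        int pd_index = (vaddr >> 10) & 0x1f;
        int pt_index = (vaddr >> 5)  & 0x1f;
        int offset   =  vaddr        & 0x1f;

        uint8_t pde = mem[pdbr * PAGE_SIZE + pd_index];
        if (!(pde & VALID_BIT)) return -1;                    /* invalid PDE */

        uint8_t pte = mem[(pde & PFN_MASK) * PAGE_SIZE + pt_index];
        if (!(pte & VALID_BIT)) return -1;                    /* invalid PTE */

        *out = mem[(pte & PFN_MASK) * PAGE_SIZE + offset];
        return 0;
    }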

TLBs: HW and OS Roles
Who handles a TLB hit? H/W
Who handles a TLB miss? H/W or OS
If H/W:
– H/W must know where page tables are stored in memory (CR3 register on x86)
– Page table structure fixed and agreed upon between HW and OS
– HW "walks" the known page table structure and fills the TLB

HW AND OS ROLES
Who handles a TLB miss? H/W or OS?
If OS ("software-managed TLB"):
– CPU traps into OS upon TLB miss
– OS interprets page tables as it chooses; any data structure possible
– Modifying TLB entries is a privileged instruction

Present vs Valid Bit
Virtual memory: a page may not be allocated in physical memory (RAM) but may instead live on disk
Why is a present bit needed? Why not just use the valid bit?

Virtual Address Space Mechanisms
Each page in the virtual address space maps to one of three:
• Nothing (error): Free
• Physical main memory: Small, fast, expensive
• Disk (persistent storage): Large, slow, cheap
Extend page table entries with an extra present bit (alongside permissions (r/w) and valid):
• Page is not allocated or mapped (not valid): Segmentation fault
• Page in memory: present bit set in PTE, PTE holds PPN
• Page on disk: present bit cleared, PTE points to block address on disk
  – Causes trap into OS when the page is referenced; Trap: page fault
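A sketch of those three cases, with an invented PTE layout (not any real architecture's bit format) and hypothetical handler names:

    struct pte {
        unsigned valid   : 1;   /* is this virtual page allocated/mapped at all? */
        unsigned present : 1;   /* if valid: is the page currently in RAM?       */
        unsigned rw      : 1;   /* permissions                                   */
        unsigned ppn     : 20;  /* if present: physical page number; otherwise
                                   the entry could instead hold a disk block     */
    };

    extern void raise_segfault(void);            /* hypothetical */
    extern void handle_page_fault(struct pte *); /* hypothetical: read page from
                                                    disk, set present, retry     */

    void on_reference(struct pte *p)
    {
        if (!p->valid)
            raise_segfault();        /* not allocated or mapped              */
        else if (!p->present)
            handle_page_fault(p);    /* valid but on disk: trap = page fault */
        /* else: valid and present -> translate using p->ppn */
    }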

Belady's Anomaly for FIFO
Page reference string: A B C D A B E A B C D E
Three or four pages of physical memory
Metric: miss count
[Figure: FIFO frame contents over time for each case]
• With 3 physical pages: 9 misses
• With 4 physical pages: 10 misses
More memory, yet FIFO takes more misses on this reference string
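A quick way to check those counts is to simulate FIFO directly; this small sketch reproduces 9 misses with 3 frames and 10 with 4 for the reference string above.

    #include <stdio.h>
    #include <string.h>

    /* Count FIFO misses for a reference string of single-character page names. */
    int fifo_misses(const char *refs, int nframes)
    {
        char frames[8];
        int used = 0, next = 0, misses = 0;

        for (const char *p = refs; *p; p++) {
            if (memchr(frames, *p, used) == NULL) {   /* not resident: miss   */
                misses++;
                if (used < nframes) {
                    frames[used++] = *p;              /* fill an empty frame  */
                } else {
                    frames[next] = *p;                /* evict oldest (FIFO)  */
                    next = (next + 1) % nframes;
                }
            }
        }
        return misses;
    }

    int main(void)
    {
        const char *refs = "ABCDABEABCDE";
        printf("3 frames: %d misses\n", fifo_misses(refs, 3));   /* 9  */
        printf("4 frames: %d misses\n", fifo_misses(refs, 4));   /* 10 */
        return 0;
    }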

Good luck!
Remember ID and pencils
Two hours: 7:30 - 9:30 pm, Thursday 10/10
• Room Ingraham 19: If you are enrolled in Discussion Section 301 (Wed 11-11:50am)
• Room Ingraham B10: All other discussion sections