
Threads and Computer System Review
1

Reading Assignment
• Dinosaur Chapter 4
• Comet Chapters 26, 27
2

Thread Design Space
3

Scheduling Threads
• We are no longer scheduling just processes, but threads
• The kernel scheduler used to pick among PCBs; now what?
• We have basically two options:
  – The kernel explicitly selects among threads
  – Hide threads from the kernel, and run a user-level scheduler inside each multi-threaded process
• Why do we care?
  – Think about the overhead of switching between threads
  – Who decides which thread in a process should go first?
  – What about blocking system calls?
4

An Analogy: Family Car Rental
• Scenario (a day is 9am-5pm):
  – Avis rents one car to two families, round-robin daily
  – Each family has 4 members, round-robin every 2 hours
• Two ways of doing it:
  – Local scheduling: Avis schedules a family for each day; each family schedules among its own members
  – Global scheduling: Avis schedules among all 8 members directly
5

Thread Implementations
• User-level thread implementation
• Kernel-level thread implementation
6

User-Level Thread Implementation
• User-level threads are managed entirely by a run-time system (a.k.a. a user-level thread library)
  – Often cooperative scheduling
  – No kernel intervention (the kernel sees a single entity)
  – Invisible to the kernel
• A thread is represented inside the process by a PC, registers, a stack, and a small thread control block (TCB)
• Creating a new thread, switching between threads, and synchronizing threads are all done via user-level procedure calls
  – User-level thread operations are ~100x faster than kernel thread operations
  – What operations? (see the sketch below)
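A minimal sketch of what "user-level procedure call" means here, using the POSIX <ucontext.h> API (getcontext/makecontext/swapcontext are real calls; the names thread_func and STACK_SIZE are illustrative). A real library would keep a TCB and a run queue per thread; this just switches between main and one thread, with no kernel involvement in the switch itself:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, thread_ctx;

static void thread_func(void) {
    printf("in user-level thread\n");
    swapcontext(&thread_ctx, &main_ctx);   /* "yield": save my registers, resume main */
}

int main(void) {
    getcontext(&thread_ctx);                         /* initialize the context (TCB-like) */
    thread_ctx.uc_stack.ss_sp = malloc(STACK_SIZE);  /* each thread has its own stack */
    thread_ctx.uc_stack.ss_size = STACK_SIZE;
    thread_ctx.uc_link = &main_ctx;                  /* resumed if thread_func returns */
    makecontext(&thread_ctx, thread_func, 0);        /* "create": set PC to thread_func */

    swapcontext(&main_ctx, &thread_ctx);             /* "switch": plain user-level calls */
    printf("back in main\n");
    return 0;
}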
7

The big picture
• The kernel only sees one scheduling entity (which has many user threads inside)
[State diagram: Create a process → Ready; Ready → Running on scheduler dispatch; Running → Ready on Yield (calls the user-level scheduler); Running → Blocked on blocking for a resource (calls the user-level scheduler); Blocked → Ready when the resource becomes available (moved to the ready queue); Running → Terminate (calls the user-level scheduler)]
8

User-Level Threads Mapping to Kernel Threads
• Definition: a kernel thread is the kernel's scheduling unit
• In a user-level thread implementation, all user threads of the same process are effectively mapped to one kernel thread
• Example (see the sketch below):
  – pthreads: PTHREAD_SCOPE_PROCESS
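For the pthreads example, contention scope is requested through a thread attribute. A hedged sketch (worker is an illustrative name; note that Linux NPTL supports only PTHREAD_SCOPE_SYSTEM, so the setscope call below fails with ENOTSUP there, matching Linux's strictly 1:1 implementation):

#include <stdio.h>
#include <string.h>
#include <pthread.h>

static void *worker(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Ask for process scope: compete only against threads in this process,
     * i.e., user threads multiplexed on one kernel scheduling entity. */
    int rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
    if (rc != 0)   /* e.g., ENOTSUP on Linux/NPTL */
        fprintf(stderr, "process scope unavailable: %s\n", strerror(rc));

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}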
9

Context switching threads
• Context switching between two user-level threads:
  – If they belong to the same process:
    § Handled by the dispatcher in the thread library
    § Only need to store/load the TCB information
    § The OS does not do anything
  – If they belong to different processes:
    § Like an ordinary context switch between two processes
    § Handled by the OS (drop into/out of the kernel)
    § The OS needs to load/store the PCB information as well as the TCB information
10

User-Level Thread Limitations
• What happens if a thread invokes a syscall?
  – A blocking syscall blocks the whole process!
• User-level threads are invisible to the OS
  – They are not well integrated with the OS
  – The OS may be incapable of making proper decisions:
    § Scheduling a process even though all its threads are idle
    § Blocking a process whose thread initiated an I/O, even though the process has other threads that can execute
    § Unscheduling a process while one of its threads is holding a lock
• Solution: scheduler activations
11

Kernel-Level Thread Implementation
• The OS now manages threads as well as processes
  – All thread operations are implemented in the kernel
  – The kernel performs thread creation, scheduling, and management (each thread is a scheduling entity)
  – The OS schedules all threads in the system
• OS-managed threads are called kernel-level threads or lightweight processes
  – The scheduler deals in threads
  – If a thread blocks, another thread in the same process can run
• Recall: Linux only knows “tasks”
12

Kernel Thread Implementation
• Each user thread maps to a kernel thread
• Examples: the Windows family, Linux
  – Slower to create and manipulate
  + Well integrated with the OS (e.g., a blocking syscall will not block the whole process; see the sketch below)
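A minimal sketch of that advantage (assuming a 1:1 pthreads implementation such as Linux NPTL): one thread sitting in a blocking call does not stop its siblings.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static void *blocker(void *arg) {
    (void)arg;
    sleep(3);              /* blocking syscall: only THIS kernel thread blocks */
    printf("blocker woke up\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, blocker, NULL);   /* 1:1 -> a new kernel thread */
    for (int i = 0; i < 3; i++) {                /* main thread keeps running */
        printf("main still making progress\n");
        sleep(1);
    }
    pthread_join(tid, NULL);
    return 0;
}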
13

Three multithreading models
• Many-to-One
• One-to-One
• Many-to-Many
14

Many-to-One
• Many user-level threads mapped to a single kernel entity (kernel thread)
• Used in the user-level thread implementation
• Drawback: a blocking system call blocks the whole process
15

One-to-One
• Each user thread maps to a kernel thread
• Used in the kernel-level thread implementation
• May lead to too many kernel threads
16

Many-to-Many model
17

Many-to-Many model
• Allows many user threads to be mapped to many kernel threads
• Allows the OS to create a sufficient number of kernel threads to run in parallel
• When one blocks, another user thread can be scheduled (see the sketch below)
• Examples:
  – Go goroutines (check them out, very cool)
  – Solaris 2
  – Windows NT/2000 with the ThreadFiber package
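A rough C illustration of the M:N idea (everything here is invented for illustration, not Solaris or Go code): many lightweight tasks are multiplexed over a small, fixed pool of kernel threads, so the OS schedules 4 entities while the program expresses 16 units of work.

#include <stdio.h>
#include <pthread.h>

#define NUM_TASKS    16    /* "user-level" units of work */
#define NUM_KTHREADS  4    /* kernel threads actually scheduled by the OS */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;

/* Each kernel thread repeatedly grabs the next unstarted task:
 * many tasks over few kernel threads, i.e., an M:N mapping. */
static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int t = next_task < NUM_TASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (t < 0)
            return NULL;
        printf("kernel thread %ld runs task %d\n", id, t);
    }
}

int main(void) {
    pthread_t pool[NUM_KTHREADS];
    for (long i = 0; i < NUM_KTHREADS; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_KTHREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}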
18

Web browser example
• Why do some modern web browsers use separate processes (e.g., for different tabs and extensions) instead of threads?
  – Security isolation
  – Reliability isolation
19

Refresher of memory hardware

“Things related to memory” — you have learned/heard so far
• What is a processor?
• What are registers?
• What is memory?
• How’s memory organized?
• Hardware memory
• What’s a cache and how’s it organized?
• What’s a heap?
• What’s a stack?
• Globals, locals, etc.
• PC, SP
• All of the above deal with logical memory!
• Physical memory is a whole new can of worms!

What does a Processor Do?
while (1) {
    fetch    (get the instruction)
    decode   (understand the instruction)
    execute
}
Execute: load, store, test, math, branch
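The loop above is only pseudocode; as a hedged, toy illustration, here is a runnable C interpreter for a made-up accumulator machine (the opcodes, the tiny program, and the memory contents are all invented):

#include <stdio.h>

enum { HALT, LOAD, ADD, STORE };           /* made-up opcodes */
struct inst { int op, arg; };

int main(void) {
    int mem[8] = {5, 7};                   /* data memory: mem[0]=5, mem[1]=7 */
    struct inst prog[] = {                 /* program memory */
        {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0}
    };
    int pc = 0, acc = 0;

    while (1) {
        struct inst i = prog[pc++];        /* fetch: get instruction, bump PC */
        switch (i.op) {                    /* decode: understand instruction */
        case LOAD:  acc = mem[i.arg];  break;   /* execute */
        case ADD:   acc += mem[i.arg]; break;
        case STORE: mem[i.arg] = acc;  break;
        case HALT:  printf("mem[2] = %d\n", mem[2]); return 0;
        }
    }
}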

Logical Organization
Stages: Fetch → Decode → Execute
Logically, instructions run one after another: F1 D1 E1, then F2 D2 E2

Processor Operations
Logically: F1 D1 E1, then F2 D2 E2 (one instruction at a time)
Pipelined: the stages of consecutive instructions overlap:
  F1 D1 E1
     F2 D2 E2
        F3 D3 E3
What is the condition for smooth pipelining?
What can happen to the E part of a load instruction, LD R0, _Y?

What Is Memory (Address Space)
• “Slots” that hold values
• Slots are “numbered”; the numbers are called addresses
• Two operations: read or write
  – e.g., LD R1, _X
• What can you put in memory?
  – Anything. No “intrinsic” semantics (see the sketch below)
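A small sketch of the “no intrinsic semantics” point: the same slots (bytes) can be read back under different interpretations. The array name slots is illustrative; the byte order in the output depends on the machine.

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char slots[8] = {0};          /* memory: numbered, byte-sized slots */

    int x = 0x41424344;
    memcpy(&slots[0], &x, sizeof x);       /* "write" starting at slot 0 */

    int y;
    memcpy(&y, &slots[0], sizeof y);       /* "read" the same slots back */
    printf("as an int: 0x%x\n", y);
    printf("as bytes:  %c %c %c %c\n",     /* same bits, different meaning */
           slots[0], slots[1], slots[2], slots[3]);
    printf("slot 0 lives at address %p\n", (void *)&slots[0]);
    return 0;
}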

What is Cache?
• Another kind of “memory”, physically
• Closer to the processor
• More expensive, smaller, faster
• Operation is logically transparent (but see the locality sketch below)
• No naming/access by:
  – the program, compiler, linker, loader
  – the OS?
  – the CPU?
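The cache is transparent to correctness but not to speed. A hedged sketch (the array size N is arbitrary and the timings are machine-dependent): summing the same array row-by-row walks consecutive addresses and reuses each cache line, while column-by-column jumps 16 KB per access and misses far more often.

#include <stdio.h>
#include <time.h>

#define N 4096
static int a[N][N];                        /* ~64 MB: far bigger than any cache */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    long sum = 0;
    double t0 = seconds();
    for (int i = 0; i < N; i++)            /* row-major: consecutive addresses, */
        for (int j = 0; j < N; j++)        /* each cache line fully reused */
            sum += a[i][j];
    double t1 = seconds();
    for (int j = 0; j < N; j++)            /* column-major: 16 KB strides, */
        for (int i = 0; i < N; i++)        /* roughly one miss per access */
            sum += a[i][j];
    double t2 = seconds();
    printf("row-major %.3fs, column-major %.3fs (sum=%ld)\n",
           t1 - t0, t2 - t1, sum);
    return 0;
}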

(Hardware) Memory Hierarchy
CPU → TLB / L1 → L2 → Main Memory
Where do we hope requests get satisfied? e.g., LD R1, _X
28

What Are Registers?
• Places to hold information
• Built into the processor
• “Named” specially
• Why? We need a place to put operands / temporary values
  – e.g., e = (a+b) * (c+d) * (a-f)
• The highest level of the memory hierarchy

What Is a Program?
• Code
  – main, subroutines (library functions)
• Program-accessed data
  – Static, global variables
  – Dynamically allocated data (e.g., malloc())
  – Parameters, local variables

#include <stdlib.h>

int *totalPtr;                 /* global variable */

void Init(void)
{
    /* dynamically allocated data: lives on the heap */
    totalPtr = calloc(1, sizeof(int));
}

void AddToTotal(int y)         /* y: parameter */
{
    int i;                     /* local variable (unused here) */
    *totalPtr += y;
}

• So, what is a process?

Everything Becomes Memory
• Various ranges of memory (the address space) are used for different purposes:
  – Text/Code (program instructions)
  – Data (global variables)
  – Stack (local variables, parameters, etc.)
  – Heap (dynamically allocated memory)
(A sketch printing an address from each range follows below.)
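A hedged sketch that prints a representative address from each range; the exact values and their relative ordering vary with the OS, compiler, and address-space randomization.

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                          /* data segment */

int main(void) {
    int local_var = 7;                        /* stack */
    int *heap_var = malloc(sizeof *heap_var); /* heap */

    printf("code:  %p (main)\n",   (void *)main);
    printf("data:  %p (global)\n", (void *)&global_var);
    printf("heap:  %p (malloc)\n", (void *)heap_var);
    printf("stack: %p (local)\n",  (void *)&local_var);

    free(heap_var);
    return 0;
}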

Procedure calls
• Incoming parameters come from the caller
  – The callee doesn’t even know who the caller is
• Local variables survive only while the call is active
• Temporary values, e.g., (a+b) * (c+d)

void Loop(int N) {
    int a, b, c, d, e, f, g;
    …
    g = (a+b) * (c+d);   /* (a+b) and (c+d) are temporaries */
}

Stack Frames
• A frame = the info for one procedure call:
  – Incoming parameters
  – Return address of the caller
  – New local variables
  – New temporary variables
• Size of the frame? (see the sketch below)
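A hedged sketch that makes frames visible: each recursive call pushes a new frame, so the address of a local variable moves by one frame size per call (the growth direction and frame size are platform-dependent).

#include <stdio.h>

static void recurse(int depth) {
    int local;                              /* lives in THIS call's frame */
    printf("depth %d: frame local at %p\n", depth, (void *)&local);
    if (depth < 3)
        recurse(depth + 1);                 /* pushes another frame */
}

int main(void) {
    recurse(0);   /* addresses typically decrease: stacks grow downward */
    return 0;
}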

Stack Is Just Memory
• Defined and used by convention
  – An agreement among the OS, the compiler, and the programmer
• How does the OS manage the stack (and heap)?
  – Allocate a chunk of memory
  – Keep a pointer into the chunk
• Problems?
  – Must we know the maximum size of the stack?
  – How do we stay efficient despite the uncertainty?

What Does Memory Look Like?
• Logical memory
  – Code + data, stack, heap
  – Which ones grow?
  – How do you give them the most flexibility?
• Physical memory?
  – Another can of worms, entirely
• We will move on to Memory Management

A gap among Architecture, Compiler and OS courses
main.c, math.c → (compiler) → main.o, math.o → (linker?) → a.out → (loader) → memory
• Compiler course: translating main.c and math.c into main.o and math.o
• OS course (memory management): the loader loads a.out into memory; the OS manages memory for the process
• Architecture course: instruction execution