COMP 3430 Operating systems – Chapter 26, 27 reading notes
Winter 2022
About these reading notes
Chapter 26: Concurrency: An Introduction
  Why use threads?
  An Example: Thread Creation
  Why it gets worse: Shared data
  The heart of the problem: uncontrolled scheduling
  The wish for atomicity
  One more problem: Waiting for another
  Summary: Why in OS class?
Chapter 27: Thread API
  Thread Creation
  Thread Completion
  Locks
  Condition variables
  Compiling and Running
  Summary
About these reading notes
These are my own personal reading notes that I took (me, Franklin) as I read the textbook. I’m providing these to you as an additional resource for you to use while you’re reading chapters from the textbook. These notes do not stand on their own — you might be able to get the idea of a chapter while reading these, but you’re definitely not going to get the chapter by reading these alone.
These notes are inconsistently all of the following:
• Me summarizing parts of the text.
• Me commenting on parts of the text.
• Me asking questions to myself about the text.
– …and sometimes answering those questions.
The way that I would expect you to read or use these notes is to effectively permit me to be your inner monologue while you’re reading the textbook. As you’re reading chapters and sections within chapters, you can take a look at what I’ve written here to get an idea of how I’m thinking about this content.
Chapter 26: Concurrency: An Introduction
The first little paragraph here summarizes last week’s readings on processes (cool!).
Now we’ve got a new abstraction for running a single process: a “thread”. I don’t like the way that they’re describing this concept (“for running a single process”).
Threads actually require a context switch, just like a process.
Process state goes into a process control block, thread state goes into a thread control block (TCB).
Remember: PC is referring to “Program Counter” the register, not “Personal Computer” or “Progressive Conservative”.
Processes require context switches that involve more than just registers: at least as far as we know so far, a context switch for a process implies that the address space is also switched out (they’re not shared, right?). With that in mind, what kind of differences do you think there are between context switching for a process and context switching for a thread? This is hinted at in the text (“there is no need to switch which page table we are using”). Page tables are something that we’ll look at later in the course.
Where do you think a thread control block is stored? How is it related to a PCB?

Why use threads?
Remember how we could fork a new process and do work in it, then wait for that child to end? Why are we bothering with looking at this new structure?
An Example: Thread Creation
Oh boy, pthreads!
The authors are describing two kinds of workloads here to justify using threads: one is to improve the speed of programs that are embarrassingly parallel (they call this parallelization), and the other is to let some parts of a program continue operating while another is blocked waiting on, for example, I/O. Can you think of any example problems or existing software that you currently use that uses threading in each of these ways?
Considering the code listed in figure 26.2, how would you write this same code using fork and wait? How different is that code from what’s listed here with pthreads?
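If you want something concrete to compare against, here’s a minimal sketch (my own code, not the textbook’s) of what a fork-and-wait version of figure 26.2 might look like. The structure is nearly identical; the big difference to notice is that the children get copies of the parent’s memory instead of sharing it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void work(const char *name) {
        printf("%s\n", name);
    }

    int main(void) {
        printf("main: begin\n");

        pid_t p1 = fork();
        if (p1 == 0) {              /* first child does "A"'s work */
            work("A");
            exit(0);
        }

        pid_t p2 = fork();
        if (p2 == 0) {              /* second child does "B"'s work */
            work("B");
            exit(0);
        }

        /* like pthread_join, but for processes: wait for both children */
        waitpid(p1, NULL, 0);
        waitpid(p2, NULL, 0);

        printf("main: end\n");
        return 0;
    }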
Make sure that you can convince yourself of what the authors are saying immediately below figure 26.2: “Overall, three threads were employed during this run: the main thread, T1, and T2.”
Take the time to compare figures 26.3 and 4.4! How are they different? How are they the same? How are the authors’ descriptions of these figures similar and different, specifically with respect to non-determinism?
Why it gets worse: Shared data
You were actually introduced to this in chapter 2! (pages 7–8).
Try entering and running the code in figure 26.6 on your own (or, uh, just download it from here: https://github.com/remzi-arpacidusseau/ostep-code/tree/master/threads-intro).
Thinking back to chapter 2, can you briefly 1) explain why we don’t get the correct result with multiple threads, and 2) try to explain why this might actually happen (keeping in mind that two threads are running “at the same time” and that one thread might be interrupted between instructions)? This is described in gory detail in section 26.4 and figure 26.6, but try to be able to explain it to yourselves verbally, and at a high level.
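If you’d rather not download the book’s code, here’s a stripped-down sketch in the same spirit as figure 26.6 (my own code, not the authors’): two threads each increment a shared counter a million times, but counter = counter + 1 is really a load, an add, and a store, and those can interleave between threads, so the final value usually comes out short.

    #include <stdio.h>
    #include <pthread.h>

    static volatile int counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter = counter + 1;   /* not atomic: load, add, store */
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, worker, NULL);
        pthread_create(&p2, NULL, worker, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        /* we "expect" 2000000, but usually see something smaller */
        printf("counter = %d\n", counter);
        return 0;
    }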
The heart of the problem: uncontrolled scheduling
Can data races or race conditions happen with processes? Why or why not?
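My own two cents on that question, as a sketch (assuming Linux/POSIX; this is my code, not the book’s): processes can absolutely race, but only on memory that they’ve explicitly arranged to share, for example through a MAP_SHARED anonymous mapping. With ordinary fork()ed memory, each process gets its own copy, so there’s nothing to race on.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* one int that the parent and child genuinely share */
        int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (counter == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        *counter = 0;

        pid_t child = fork();
        for (int i = 0; i < 1000000; i++)
            *counter = *counter + 1;   /* both processes run this loop */
        if (child == 0)
            exit(0);

        waitpid(child, NULL, 0);
        printf("counter = %d (we'd hope for 2000000)\n", *counter);
        return 0;
    }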
The wish for atomicity
It sure would be nice if we could just add a bunch of atomic instructions to our instruction set.
One more problem: Waiting for another
Summary: Why in OS class?
This is a fair question, and one that I think we often ignore because the exercises that we give to students use threads from the user side rather than from the OS side.
Let’s try and relate what’s happening in OS to other courses (CS or not). Where else have you seen this concept of atomicity? Where else have you seen this concept of a “transaction”? Where else have you seen this concept of “all or nothing”?
That memory-add instruction seems like a fine idea, but the authors do a pretty good job convincing us that adding instructions for all operations that need to be atomic isn’t reasonable. But why not add instructions for things that are reasonably universal, like memory-add? This is outside the scope of the course, but: are there any instruction set architectures that have atomic primitive instructions like memory-add?
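For what it’s worth, here’s a sketch using C11 atomics (my code, nothing from the book): atomic_fetch_add is, on most modern machines, compiled down to a single atomic read-modify-write instruction (x86 has a lock-prefixed add, for example), which is pretty much the “memory-add” being wished for here.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <pthread.h>

    static atomic_int counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);   /* the whole add is indivisible */
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, worker, NULL);
        pthread_create(&p2, NULL, worker, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        printf("counter = %d\n", atomic_load(&counter));   /* reliably 2000000 */
        return 0;
    }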
The authors here describe a problem for threads about “waiting around” for another thread to complete some action. Can you think of any reasons why they didn’t describe this problem when talking about the wait system call? It seems awfully similar.
Don’t worry too much yet about the critical sections described by the book in terms of what the OS has to worry about. Instead, think about it this way: if your process (as a user process) or thread could be interrupted during execution, the OS probably has similar internal issues. So let’s think about the OS side of things for a second: Thinking specifically of the system calls that you know about so far, which of the system calls might have a critical section, and what might it need to protect?
Chapter 27: Thread API
This is an introduction to the pthreads threading API.
Thread Creation
Creating a thread with pthread_create.
Note that the authors switch back and forth between using pthread_* and Pthread_* (lower- vs upper-case p). The lower-case p pthread_* functions are the ones that come from pthread.h; the upper-case p Pthread_* functions are wrappers that the authors have written in their support code for the book, and those wrappers check the return code of the underlying call (e.g., pthread_create).
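If you’re curious, the wrapper is roughly this (a sketch based on their description; check the book’s support code for the real thing):

    #include <assert.h>
    #include <pthread.h>

    void Pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                        void *(*start_routine)(void *), void *arg) {
        int rc = pthread_create(thread, attr, start_routine, arg);
        assert(rc == 0);   /* pthread functions return 0 on success */
    }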
The discussion of the function pointer stuff is kind of confusing, specifically about how changing the types changes the signature of pthread_create. To be clear: it doesn’t change the signature of pthread_create; it changes how you call it. In fact, you must pass a function with the signature void *func(void *) (a function that takes a void pointer and returns a void pointer). Look at figure 27.1 for a more concrete explanation of how to create a thread; note the signature of the function mythread and how mythread unpacks its arguments manually.
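Here’s a small sketch of that calling convention (my own code, in the same spirit as figure 27.1): pack your real arguments into a struct, pass its address as the single void pointer, and unpack it inside the thread function.

    #include <stdio.h>
    #include <pthread.h>

    typedef struct {
        int a;
        int b;
    } myarg_t;

    void *mythread(void *arg) {
        myarg_t *args = (myarg_t *) arg;   /* unpack the arguments manually */
        printf("%d %d\n", args->a, args->b);
        return NULL;
    }

    int main(void) {
        pthread_t p;
        myarg_t args = { 10, 20 };
        pthread_create(&p, NULL, mythread, &args);   /* one void * in */
        pthread_join(p, NULL);
        return 0;
    }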
Thread Completion
pthread_join and wait are pretty similar in terms of what they do for threads and processes, but they do have one pretty significant difference. What is it? (It’s something that pthread_join can do and wait can’t.) Why can’t wait do that thing?
In the code listing in figure 27.2, the main function doesn’t call malloc to allocate memory for rvals, but this code doesn’t crash. Why does this work? Where is malloc called? Why can this work with threads specifically?
The authors ask a good question on pg 4, summed up as “Why shouldn’t you try to return a stack-allocated variable from a thread?” To add to their question: is this any different from returning a stack-allocated struct from a function? Why or why not?
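Here’s a quick sketch (my code) of both the safe way and the broken way to hand a value back through pthread_join:

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    void *mythread(void *arg) {
        int *result = malloc(sizeof(int));   /* heap memory outlives the thread */
        *result = 42;
        return result;
        /* BAD alternative:
         *     int local = 42;
         *     return &local;   // the thread's stack frame is gone after return
         */
    }

    int main(void) {
        pthread_t p;
        void *rval;
        pthread_create(&p, NULL, mythread, NULL);
        pthread_join(p, &rval);              /* rval receives the thread's return value */
        printf("thread returned %d\n", *(int *) rval);
        free(rval);
        return 0;
    }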
Locks

Remember that mutual exclusion stuff from last chapter? Yeah, here’s how we’re going to do it.
In terms of design, why do you think the pthread library has two ways to initialize a mutex (i.e., PTHREAD_MUTEX_INITIALIZER and pthread_mutex_init)?
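A quick sketch of the two styles (my code), to make the contrast concrete:

    #include <assert.h>
    #include <pthread.h>

    /* static initialization: fine for a lock that exists for the whole program */
    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;

    /* dynamic initialization: needed when the mutex is created at run time
     * (say, inside a malloc'd structure), and it can report errors */
    pthread_mutex_t lock_b;

    void setup(void) {
        int rc = pthread_mutex_init(&lock_b, NULL);   /* NULL means default attributes */
        assert(rc == 0);
    }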
The code listed on pg 6 giving an example of how you might use a pthread_mutex_t is kind of misleading in the way it’s written. Specifically, it’s implying that lock is a stack-allocated variable (so it’s in thread-local storage). Do you think that locking like this would work if each thread has its own ‘lock’ object? Where should such a lock go?
What are the main differences between the three variants of acquiring a lock: lock, trylock, and timedlock? Check out the man pages for each (you may need to look on aviary itself, or, you know, refer to our friend Google and ask “man page pthread_mutex_lock”).
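To make the contrast concrete, here’s a sketch (my code) using all three; note that lock is one global mutex shared by every thread, because a per-thread lock wouldn’t protect anything:

    #include <time.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int balance = 0;

    void *worker(void *arg) {
        /* lock: block until the mutex is ours */
        pthread_mutex_lock(&lock);
        balance++;
        pthread_mutex_unlock(&lock);

        /* trylock: return immediately (non-zero) if someone else holds it */
        if (pthread_mutex_trylock(&lock) == 0) {
            balance++;
            pthread_mutex_unlock(&lock);
        }

        /* timedlock: give up if we can't acquire the lock by a deadline (~1s from now) */
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;
        if (pthread_mutex_timedlock(&lock, &deadline) == 0) {
            balance++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }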
Condition variables
This looks awfully similar to an idea that we briefly saw before in terms of processes: the idea of “signaling”. How is this different? Is this different beyond different functions and threads vs processes?
On pg 8, in the discussion about pthread_cond_wait, the authors describe a lot of stuff going on behind the scenes. Convince yourself about what’s happening here: which locks are being released and acquired, and when that happens relative to pthread_cond_wait being called and returning.
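Here’s the usual pattern, as a sketch (my code), with the lock hand-off called out in comments:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    void *waiter(void *arg) {
        pthread_mutex_lock(&lock);
        while (ready == 0)                     /* re-check the condition after every wakeup */
            pthread_cond_wait(&cond, &lock);   /* releases lock while asleep,
                                                  holds it again when it returns */
        /* ... at this point ready is 1 and we hold the lock ... */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    void *signaler(void *arg) {
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);            /* wake up one waiting thread */
        pthread_mutex_unlock(&lock);
        return NULL;
    }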
Compiling and Running
Weirdly, this is a really important part of using pthreads: being able to compile code that contains pthreads. Write a simple pthread program (even if it’s just the first example the authors give in this chapter) and compile it on aviary.
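The one thing you need beyond a normal compile is the -pthread flag, so something along the lines of (thread.c here is just whatever you named your file):

    gcc -Wall -o thread thread.c -pthread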
Take special note of the man page reference here: they’re passing the -k option, giving you the power to search through man pages by topic. Try running man man to get an idea of what kind of options man has.