Programming Paradigms
• Course overview
• Introduction to programming
• Review: The object-oriented paradigm in Java
• Imperative and concurrent programming paradigm: Go
• Logic paradigm: Prolog
• Functional paradigm: Scheme
Announcement
• Office hours for the comprehensive assignment (Assignment 1)
• More office hours; check Brightspace for information
• Assignment 1: late submission ends Feb 9th
Announcement
• Previous exams posted.
• No labs next week (Jan 31 – Feb 4); office hours will be offered instead for Go questions and assignment questions.
• Quiz 1 is posted, due Thursday Feb 3rd.
Acknowledgment
The slides posted through the term are based on the slides offered by:
Prof. Jochen Lang
Demo code: https://www.site.uottawa.ca/~jlang/CSI2120/demoCode.html
System Programming: Go
Concurrency and Parallelism Goroutines
Concurrent Programming
• An application is a process running on a machine
– A process is an independently executing entity that runs in its own address space.
• A process is composed of one or more operating system threads (lightweight processes)
– Threads are concurrently executing entities that share the same address space.
(Figure: an application composed of multiple processes.)
Execution Thread
• An execution thread is a sequence of executable statements, and it may or may not interact with other threads
– Threads often share some variables (or resources) with other threads
– Threads often (but not necessarily) have a limited lifetime within a program execution
• A thread can be blocked:
– Accessing a shared variable or resource currently used by another thread
– Waiting for the result or completion of another thread
• An application program can often be divided into multiple processes and potentially many threads
Parallel Programming vs. Concurrent Programming
• Concurrent programming means expressing program structure so that it is organized into independently executing actions
– Concurrent programming can also target a single processor
• The processes or threads then run in turn over time according to a schedule
• If processes or threads are running on different processors or cores simultaneously, we have a parallel program
• If a program is executed on multiple machines with loose interaction, it is commonly called distributed programming
• If a program is executed on a graphics card or a tightly integrated cluster, it is often called massively parallel programming.
Concurrent Programming Languages
• A concurrent programming language must support:
– the creation and execution of processes and threads
– data communication between threads and processes
• may use inter-process communication mechanisms defined by the operating system
Concurrent Programming Languages
• A concurrent programming language must also support synchronization operations
– Cooperative synchronization: A process waits for the execution of another before continuing its own execution.
• deterministic concurrency
– Competitive synchronization: Multiple processes use the same resource with some form of locking mechanism for mutual exclusion.
• non-deterministic concurrency
Figure 4.4, under CC BY-SA 3.0; converted from GitHub
Reminder: Threads in Java
• Option 1: implementing interface Runnable

public class HelloRunnable implements Runnable {
    public void run() {
        System.out.println("Running in a thread");
    }
    public static void main(String args[]) {
        (new Thread(new HelloRunnable())).start();
    }
}

• Option 2: subclassing Thread

public class HelloThread extends Thread {
    public void run() {
        System.out.println("Running in a thread!");
    }
    public static void main(String args[]) {
        (new HelloThread()).start();
    }
}
Level of Concurrency
• At the (set of) statement level:
– Sets of statements are executed concurrently while the main process suspends its execution. All threads share the same data (OpenMP)
• At the sub-program level:
– New process is created for running a subroutine. Once the new process starts, the calling process continues its execution. Requires a synchronization mechanism.
• At the object level:
– Each object instance of a class competes for resources and methods on different objects run concurrently. Class variables (attributes) are not shared.
• At the program level:
– A parent process runs one or more child processes. The child process id must be known by the parent. Data may be shared.
Type of Concurrency
• Physical
– Multiple processes/threads run on different processors or cores
• Logical
– Multiple processes/threads share execution time on a single processor.
• Distributed
– Processes/threads of an application share several machines over a network
Concurrency in Go
• The big problem in concurrency is data sharing
• Don't communicate by sharing data; instead, share data by communicating!
• Communication is the key to good synchronization
• CSP paradigm
– Communicating Sequential Processes
• message exchange
• Go also contains a sync package including common types and functions for concurrent programming, e.g. Mutex, WaitGroup
Concurrency in Go: Non-deterministic
Two mechanisms are provided
1. Non-deterministic concurrency
– More traditional threads with low-level synchronization
– Mutex in the sync package
– Use is discouraged in Go
Gopher image by , licensed under Creative Commons 3.0 Attributions license.
Concurrency in Go: Deterministic
2. Deterministic concurrency
– Communicating Sequential Processes (CSP)
– Based on message passing between threads
– goroutines and channels
– Recommended approach
Concurrency in Go
– Based on the idea that avoiding data sharing will avoid the biggest problem in concurrent programming
– Threads in CSP do not communicate by sharing data
– Rather they share data by communicating
– Timing of threads is based on messaging between threads
Gopher image by , licensed under Creative Commons 3.0 Attributions license.
Goroutines
• Parts of a Go program that run concurrently are organized in goroutines
– goroutines can run in several threads
– several goroutines can run in one thread
• no 1:1 correspondence to OS threads
• goroutines run in the same address space
– shared memory access can be used but is discouraged
• goroutines are designed to be light-weight
– inexpensive to create
– automatic (segmented) stack management
• A goroutine can be implemented as a function or method
• A goroutine is invoked using the keyword go
Calling a Goroutine
• Goroutines are functions or methods called with go
– Historically the default number of OS threads was one for all goroutines; since Go 1.5 it defaults to the number of available cores
– Can be changed with the environment variable GOMAXPROCS
– Can also be changed with the runtime package
import "runtime"

func main() {
    // change max number of OS threads to 3
    runtime.GOMAXPROCS(3)
}
Example: Calling goroutines
func main() {
    runtime.GOMAXPROCS(3)
    sTime := time.Now() // time it
    fmt.Println("Starting")
    go letters() // goroutine A
    go numbers() // goroutine B
    fmt.Println("Waiting ...")
    time.Sleep(2 * time.Second)
    fmt.Println("\nDone\n")
    eTime := time.Now()
    fmt.Printf("Run time: %s", eTime.Sub(sTime))
}
Example: goroutines
func numbers() {
    for number := 1; number < 27; number++ {
        // pause before every print
        time.Sleep(10 * time.Millisecond)
        fmt.Printf("%d ", number)
    }
}

func letters() {
    for char := 'a'; char < 'a'+26; char++ {
        time.Sleep(10) // note: 10 nanoseconds
        fmt.Printf("%c ", char)
    }
}
Example Execution
Starting
Waiting ...
a b c d e f g h i j 1 k l m n o p q r s t 2 u v w x y z 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
Done
Run time: 2s

• Running it without goroutines (sequential):
Starting
a b c d e f g h i j k l m n o p q r s t u v w x y z 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
Waiting ...
Done
Run time: 2.286s
Example Execution
• A goroutine exits when its code is complete
• When the main goroutine is complete, all other goroutines exit
• A goroutine may not complete its execution if main completes early
Communication between goroutines
• goroutines are designed to communicate effectively by message passing for
– the exchange of information
– synchronizing the execution
• Implemented in Go through channels
– channels are similar to pipes
Concept of channels
• Data is passed around through channels
– only one goroutine has access to a channel element at any given time
– communication is synchronized by design
• no race conditions can occur
• Ownership of the data is passed around as well
• A channel is a data queue (FIFO)
• Channels are typed (only one data type can be passed around on a channel)
• Reading and writing on a channel are blocking statements
• Execution will not be blocked as long as the channel's capacity is not exceeded
Channel Declaration
• Declaring a channel only creates a reference
– Use make to allocate space for the channel
– Example: channel for strings

var ch chan string
ch = make(chan string)

– Or shorter with an initializer declaration

ch := make(chan string)

– By default, a channel is unbuffered: a send blocks until a receiver is ready
– Channels are first-class objects
– To send or receive, use the arrow operator

ch <- str  // send
str = <-ch // receive
Buffered Channels
• Channels can be buffered
– A buffered channel will not block a sender until the buffer is full
– A receiver will always block until there is some data to receive
Channel Declaration
• By default, a channel is unbuffered; a send on such a channel blocks until a receiver takes the element
• A buffered channel is declared as follows:

ch = make(chan string, 2) // allows the insertion of 2
                          // elements before blocking
Example: Communicating goroutines through channels
func main() {
    ch := make(chan string)
    go sendString(ch)
    go receiveString(ch)
    time.Sleep(1 * time.Second)
}

func sendString(ch chan string) {
    ch <- "Ottawa"
    ch <- "Toronto"
    ch <- "Gatineau"
    ch <- "Casselman"
}

func receiveString(ch chan string) {
    var str string
    for {
        str = <-ch
        fmt.Printf("%s ", str)
    }
}
Range Loop applied to a Channel
• We can loop over the incoming elements
• Loops over the channel until it is closed
• Example uses a lambda as goroutine

func receiveString(ch chan string) {
    go func() {
        for str := range ch {
            fmt.Printf("%s ", str)
        }
    }()
}
Closing channels
• Channels may be explicitly closed

func sendString(strArr []string) chan string {
    ch := make(chan string)
    go func() { // start a lambda in a goroutine
        for _, s := range strArr {
            ch <- s
        }
        close(ch) // close the channel once all strings are sent
    }()
    return ch
}

• We can test if the channel has been closed

for {
    str, ok := <-ch
    if !ok {
        break
    }
    fmt.Printf("%s ", str)
}
Synchronization across Channels
• We can use select for i/o over multiple channels. It is similar to a switch, but each case depends on which channel is ready to receive or send an element

Forever:
for {
    select {
    case str, ok := <-ch1:
        if !ok {
            break Forever
        }
        fmt.Printf("%s \n", str)
    case str, ok := <-ch2:
        if !ok {
            break Forever
        }
        fmt.Printf("%s \n", str)
    }
}
Timers in Go
• A timer in Go has a channel
• We can check its status or read from its channel

outOfHere := time.NewTimer(time.Second * 2)
Forever:
for {
    select {
    // ... other cases omitted
    case <-outOfHere.C:
        fmt.Printf("Time is up\n")
        break Forever
    }
}
Parallel loop
Synchronization with a WaitGroup
– The semaphore pattern is great for limiting the maximum number of goroutines, but it is not suitable for detecting when all goroutines are done
• Package sync provides WaitGroup, which has
– WaitGroup.Add(int): counting up the number of things to wait for
– WaitGroup.Done(): one less thing to wait for
– WaitGroup.Wait(): wait until all things are done
– Things can be distributed over multiple processes, useful for process synchronization
• See sync/waitgroup.go
Synchronization with a WaitGroup
– Parallel loop on a slice of data
• A semaphore is a mechanism that allows synchronization and resource sharing between processes
• Go semaphores can be designed using channels
– E.g. limiting the number of goroutines at any one point by
• writing to a global channel when a new routine is started
• a full channel will block
• reading from a global channel when a routine finishes
Synchronization with a Semaphore
package main

import (
    "fmt"
    "time"
)

func worker(done chan bool) {
    fmt.Print("treatment in progress...")
    time.Sleep(time.Second)
    fmt.Println("finished")
    done <- true // end signal
}

func main() {
    done := make(chan bool, 1) // synchronization channel
    // launch of the goroutine
    go worker(done)
    // here we continue the main routine
    <-done // synchronization point (rendezvous)
}
Mutex and lock
Quick Sort
func qsort_pass(arr []int, done chan int) []int {
    if len(arr) < 2 {
        done <- len(arr)
        return arr
    }
    pivot := arr[0]
    i, j := 1, len(arr)-1
    for i != j {
        for arr[i] < pivot && i != j {
            i++
        }
        for arr[j] >= pivot && i != j {
            j--
        }
        if arr[i] > arr[j] {
            arr[i], arr[j] = arr[j], arr[i]
        }
    }
    if arr[j] >= pivot {
        j--
    }
    arr[0], arr[j] = arr[j], arr[0] // place the pivot
    done <- 1
    go qsort_pass(arr[:j], done)
    go qsort_pass(arr[j+1:], done)
    return arr
}

func qsort(arr []int) []int {
    done := make(chan int)
    defer func() {
        close(done)
    }()
    go qsort_pass(arr[:], done)
    rslt := len(arr)
    for rslt > 0 {
        rslt -= <-done
    }
    return arr
}
Concurrent Binary tree
func Sum(t *Tree, result chan int) {
    if t == nil {
        result <- 0
        return
    }
    ch := make(chan int, 2) // collects the two subtree sums
    go Sum(t.Left, ch)
    go Sum(t.Right, ch)
    result <- t.Value + (<-ch) + (<-ch)
}
func main() {
    t := NewTree(5)
    t.Insert(7)
    t.Insert(9)
    t.Insert(2)
    ch := make(chan int, 2)
    Sum(t, ch)
    fmt.Println(<-ch)
}
Concurrent programming
• Parallel data
– share the data to be processed
• Parallel control
– share the tasks
• Parallel flow
– assembly chain (pipeline)

Example: Calculating a polynomial a + bx + cx² + dx³ for N values
• Parallel data: each of the goroutines does the math for a subset of the data
• Parallel flow: a goroutine calculates r1 = dx + c, a goroutine calculates r2 = r1·x + b, a goroutine calculates r2·x + a
• Parallel control: a goroutine calculates a + bx, a goroutine calculates cx², a goroutine calculates dx³
• Concurrency and Parallelism
• Goroutines
• Channels
• Channel Synchronization