Systems, Networks & Concurrency 2019
Introduction to Concurrency
Uwe R. Zimmer – The Australian National University
References for this chapter
[Ben-Ari06] M. Ben-Ari: Principles of Concurrent and Distributed Programming; 2006, second edition, Prentice-Hall, ISBN 0-13-711821-X
Introduction to Concurrency
Forms of concurrency
What is concurrency?
Working definitions:
• Literally ‘concurrent’ means:
Adj.: Running together in space, as parallel lines; going on side by side, as proceedings; occurring together, as events or circumstances; existing or arising together; conjoint, associated. [Oxford English Dictionary]
• Technically ‘concurrent’ is usually defined negatively as:
If there is no observer who can identify two events as being in strict temporal sequence (i.e. one event has fully terminated before the other one started), then these two events are considered concurrent.
Introduction to Concurrency
Forms of concurrency
Why do we need/have concurrency?
• Physics, engineering, electronics, biology, … basically every real-world system is concurrent!
• Sequential processing is suggested by most core computer architectures … yet (almost) all current processor architectures have concurrent elements, and most computer systems are part of a concurrent network.
• Strict sequential processing is suggested by widely used programming languages. Sequential programming delivers some fundamental components for concurrent programming, but we need to add a number of further crucial concepts.
Introduction to Concurrency
Forms of concurrency
Why would a computer scientist consider concurrency?
… to be able to connect computer systems with the real world
… to be able to employ / design concurrent parts of computer architectures
… to construct complex software packages (operating systems, compilers, databases, …)
… to understand when sequential and/or concurrent programming is required
… or: to understand when sequential or concurrent programming can be chosen freely
… to enhance the reactivity of a system
… to enhance the performance of a system
… to be able to design embedded systems
…
Introduction to Concurrency
Forms of concurrency
A computer scientist’s view on concurrency
• Overlapped I/O and computation
Employ interrupt programming to handle I/O.
• Multi-programming
Allow multiple independent programs to be executed on one CPU.
• Multi-tasking
Allow multiple interacting processes to be executed on one CPU.
• Multi-processor systems
Add physical/real concurrency.
• Parallel machines & distributed operating systems
Add (non-deterministic) communication channels.
• General network architectures
Allow for any form of communicating, distributed entities.
Introduction to Concurrency
Forms of concurrency
A computer scientist’s view on concurrency
Terminology for physically concurrent machine architectures:
• SISD [single instruction, single data]: sequential processors
• SIMD [single instruction, multiple data]: vector processors
• MISD [multiple instruction, single data]: pipelined processors
• MIMD [multiple instruction, multiple data]: multi-processors or computer networks
Introduction to Concurrency
Forms of concurrency
An engineer’s view on concurrency
Multiple physical, coupled, dynamical systems form the actual environment and/or the task at hand.
In order to model and control such a system, its inherent concurrency needs to be considered.
Multiple less powerful processors are often preferred over a single high-performance CPU.
The system design is usually strictly based on the structure of the given physical system.
Introduction to Concurrency
Forms of concurrency
Does concurrency lead to chaos?
Concurrency often leads to the following features / issues / problems:
• non-deterministic phenomena
• non-observable system states
• results may depend on more than just the input parameters and states at start time (timing, throughput, load, available resources, signals … throughout the execution)
• non-reproducible debugging?
Meaningful employment of concurrent system features:
• non-determinism employed where the underlying system is non-deterministic
• non-determinism employed where the actual execution sequence is meaningless
• synchronization employed where adequate … but only there
→ Control & monitor where required (and do it right), but not more …
Introduction to Concurrency
Models and Terminology
Concurrency on different abstraction levels/perspectives
• Networks
• Large scale, high-bandwidth interconnected nodes (“supercomputers”)
• Networked computing nodes
• Standalone computing nodes – including local buses & interface sub-systems
• Operating systems (& distributed operating systems)
• Individual concurrent units inside one CPU
• Individual electronic circuits
• …
Across these levels, concurrency appears as explicit concurrent programming (message passing and synchronization), as assembler-level concurrent programming, and as implicit concurrency.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
1. What appears sequential on a higher abstraction level is usually concurrent at a lower abstraction level:
e.g. concurrent operating system or hardware components, which might not be visible at a higher programming level.
2. What appears concurrent on a higher abstraction level might be sequential at a lower abstraction level:
e.g. multi-processing systems, which are executed on a single, sequential computing node.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
• ‘concurrent’ is technically defined negatively as:
If there is no observer who can identify two events as being in strict temporal sequence (i.e. one event has fully terminated before the other one starts up), then these two events are considered concurrent.
• ‘concurrent’ in the context of programming and logic:
“Concurrent programming abstraction is the study of interleaved execution sequences of the atomic instructions of sequential processes.” (Ben-Ari)
e.g. two sequential processes of two atomic instructions each (p1; p2 and q1; q2) already allow for six different interleavings.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Concurrent program ::=
Multiple sequential programs (processes or threads) which are executed concurrently.
P.S. It is generally assumed that concurrent execution means that there is one execution unit (processor) per sequential program.
Even though this is usually not technically correct, it is still an often valid, conservative assumption in the context of concurrent programming.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
No interaction between concurrent system parts means that we can analyze them individually as pure sequential programs [end of course].
Interaction occurs in the form of:
• Contention (implicit interaction):
Multiple concurrent execution units compete for one shared resource.
• Communication (explicit interaction):
Explicit passing of information and/or explicit synchronization.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Time-line or sequence?
• Consider time (durations) explicitly: real-time systems → join the appropriate courses.
• Consider the sequence of interaction points only: non-real-time systems → stay in your seat.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Correctness of concurrent non-real-time systems [logical correctness]:
• does not depend on clock speeds / execution times / delays
• does not depend on the actual interleaving of concurrent processes
→ holds true for all possible sequences of interaction points (interleavings)
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Correctness vs. testing in concurrent systems:
Slight changes in external triggers may (and usually do) result in completely different schedules (interleavings):
→ Concurrent programs which depend in any way on external influences cannot be tested without modelling and embedding those influences into the test process.
→ Designs which are provably correct with respect to the specification and independent of the actual timing behavior are essential.
P.S. Some timing restrictions for the scheduling still persist in non-real-time systems, e.g. ‘fairness’.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Atomic operations:
Correctness proofs / designs in concurrent systems rely on the assumption of ‘atomic operations’ [detailed discussion later]:
• Complex and powerful atomic operations ease the correctness proofs, but may limit flexibility in the design.
• Simple atomic operations are theoretically sufficient, but may lead to complex systems whose correctness cannot be proven in practice.
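To make this concrete: the following is a minimal sketch (not from the slides) in C with POSIX threads, in which two threads increment a shared counter without any synchronization. The increment is a load-add-store sequence, not an atomic operation, so interleavings lose updates.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, deliberately unprotected */

static void *increment (void *arg)
{
   for (int i = 0; i < 1000000; i++) {
      counter++;                         /* load, add, store: not atomic */
   }
   return NULL;
}

int main (void)
{
   pthread_t t1, t2;
   pthread_create (&t1, NULL, increment, NULL);
   pthread_create (&t2, NULL, increment, NULL);
   pthread_join (t1, NULL);
   pthread_join (t2, NULL);
   printf ("counter = %ld\n", counter);  /* almost always less than 2000000 */
   return 0;
}

The printed value differs from run to run, illustrating both the non-reproducibility noted earlier and why correctness arguments must hold for all possible interleavings.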
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Standard concepts of correctness:
• Partial correctness:
(P(I) ∧ terminates(Program(I, O))) ⇒ Q(I, O)
• Total correctness:
P(I) ⇒ (terminates(Program(I, O)) ∧ Q(I, O))
where I, O are input and output sets, P is a property on the input set, and Q is a relation between input and output sets.
→ Do these concepts apply to, and are they sufficient for, concurrent systems?
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Extended concepts of correctness in concurrent systems:
→ Termination is often not intended, or is even considered a failure.
• Safety properties:
(P(I) ∧ Processes(I, S)) ⇒ □Q(I, S)
where □Q means that Q always holds
• Liveness properties:
(P(I) ∧ Processes(I, S)) ⇒ ◇Q(I, S)
where ◇Q means that Q eventually holds (and will then stay true)
and S is the current state of the concurrent system.
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Safety properties:
(P(I) ∧ Processes(I, S)) ⇒ □Q(I, S), where □Q means that Q always holds.
Examples:
• Mutual exclusion (no resource collisions)
• Absence of deadlocks (and other forms of ‘silent death’ and ‘freeze’ conditions)
• Specified responsiveness or free capabilities (typical in real-time / embedded systems or server applications)
Introduction to Concurrency
Models and Terminology
The concurrent programming abstraction
Liveness properties:
(P(I) ∧ Processes(I, S)) ⇒ ◇Q(I, S), where ◇Q means that Q eventually holds (and will then stay true).
Examples:
• Requests need to complete eventually
• The state of the system needs to be displayed eventually
• No part of the system is to be delayed forever (fairness)
→ Interesting liveness properties can be very hard to prove.
Introduction to Concurrency
Introduction to processes and threads
1 CPU per control-flow
[Figure: one CPU (with its own stack and code) per control-flow; address spaces 1…n, some with multiple CPUs and shared memory]
Specific configurations only, e.g.:
• Distributed μ-controllers.
• Physical process control systems: 1 CPU per task, connected via a bus-system.
→ Process management (scheduling) not required.
→ Shared memory access needs to be coordinated.
Introduction to Concurrency
Introduction to processes and threads
1 CPU for all control-flows
[Figure: one CPU multiplexed over all control-flows; address spaces 1…n, each with stacks, code and shared memory]
OS: emulate one CPU for every control-flow:
→ Multi-tasking operating system.
→ Support for memory protection essential.
→ Process management (scheduling) required.
→ Shared memory access needs to be coordinated.
Introduction to Concurrency
Introduction to processes and threads
Processes
[Figure: processes 1…n, each an address space with stacks, code and shared memory]
Process ::= address space + control flow(s)
→ The kernel has full knowledge about all processes, as well as their states, requirements and currently held resources.
Introduction to Concurrency
Introduction to processes and threads
Threads
[Figure: processes 1…n, each containing multiple threads with individual stacks over shared code and memory]
Threads (individual control-flows) can be handled:
• Inside the OS: kernel scheduling.
→ Threads can easily be connected to external events (I/O).
• Outside the OS: user-level scheduling.
→ Threads may need to go through their parent process to access I/O.
Introduction to Concurrency
Introduction to processes and threads
Symmetric Multiprocessing (SMP)
[Figure: multiple CPUs over one physical address space; processes 1…n with threads, stacks and shared memory]
All CPUs share the same physical address space (and access to resources).
→ Any process / thread can be executed on any available CPU.
Introduction to Concurrency
Introduction to processes and threads
Processes ⇔ Threads
Processes can also share memory, and the specific definition of threads differs between operating systems and contexts:
• Threads can be regarded as a group of processes which share some resources (→ process hierarchy).
• Due to the overlap in resources, fewer attributes are attached to threads than to ‘first-class-citizen’ processes.
• Thread switching and inter-thread communication can be more efficient than switching on the process level.
Scheduling of threads depends on the actual thread implementation:
• e.g. user-level control-flows, which the kernel has no knowledge about at all.
• e.g. kernel-level control-flows, which are handled as processes with some restrictions.
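The address-space distinction can be demonstrated with a small C sketch (an illustration, not from the slides): a child process created with fork operates on a copy of a variable, while a thread created with pthread_create operates on the shared original.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

static void *thread_body (void *arg)
{
   value = 1;                    /* writes to the shared address space */
   return NULL;
}

int main (void)
{
   if (fork () == 0) {
      value = 42;                /* writes only to the child's private copy */
      exit (0);
   }
   wait (NULL);
   printf ("after child process: %d\n", value);   /* prints 0 */

   pthread_t t;
   pthread_create (&t, NULL, thread_body, NULL);
   pthread_join (t, NULL);
   printf ("after thread:        %d\n", value);   /* prints 1 */
   return 0;
}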
Introduction to Concurrency
Introduction to processes and threads
Process Control Blocks
Process Control Blocks (PCBs) typically hold:
• Process Id
• Process state: {created, ready, executing, blocked, suspended, bored …}
• Scheduling attributes: priorities, deadlines, consumed CPU-time, …
• CPU state: saved/restored information on context switches (incl. the program counter, stack pointer, …; the complete set of registers)
• Memory attributes / privileges: memory base, limits, shared areas, …
• Allocated resources / privileges: open and requested devices and files, …
→ PCBs (or links thereof) are commonly enqueued at a certain state or condition (awaiting access or a change in state).
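As an illustration only, a PCB along the lines of the attributes above might be declared in C as follows. All names are invented for this sketch; real kernels (e.g. Linux’s struct task_struct) differ considerably in detail.

#include <stddef.h>
#include <stdint.h>

enum process_state { CREATED, READY, EXECUTING, BLOCKED, SUSPENDED };

struct pcb {
   int                pid;              /* process id                            */
   enum process_state state;            /* current process state                 */
   int                priority;         /* scheduling attributes                 */
   uint64_t           cpu_time;         /* consumed CPU-time                     */
   void              *program_counter;  /* CPU state, saved/restored on          */
   void              *stack_pointer;    /* context switches                      */
   uintptr_t          registers[16];    /* complete register set (arch-specific) */
   void              *memory_base;      /* memory attributes / privileges        */
   size_t             memory_limit;
   int                open_files[16];   /* allocated resources / privileges      */
   struct pcb        *next;             /* queue link: PCBs are enqueued         */
};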
Introduction to Concurrency
Introduction to processes and threads
Process states
[Figure: process state diagram; suspended states reside in secondary memory, all other states in main memory]
• created: the task is ready to run, but not yet considered by any dispatcher → waiting for admission
• ready: ready to run → waiting for a free CPU
• running: holds a CPU and executes
• blocked: not ready to run → waiting for a resource
• suspended states (ready, susp. / blocked, susp.): swapped out of main memory (non-time-critical processes) → waiting for main memory space (and other resources)
Transitions: admit (created → ready), dispatch (ready → running), timeout (running → ready), block (running → blocked), release (blocked → ready), finish (running → terminated), suspend / swap-out (ready → ready, susp.; blocked → blocked, susp.), reload / swap-in (back into main memory).
→ With suspension in place, dispatching and suspending can now be independent modules.
Introduction to Concurrency
Process states
[Figure: full process state diagram]
creation → admitted (batch) → ready → dispatch → executing (CPU) → termination
executing → pre-emption or cycle done → ready
executing → block or synchronize → blocked → unblock → ready
ready → suspend (swap-out) → ready, suspended → swap-in → ready
blocked → suspend (swap-out) → blocked, suspended → unblock → ready, suspended
Introduction to Concurrency
UNIX processes
In UNIX systems tasks are created by ‘cloning’:
pid = fork ();
resulting in a duplication of the current process:
… returning ‘0’ to the newly created process (the ‘child’ process)
… returning the process id of the child process to the creating process (the ‘parent’ process)
… or returning ‘-1’ as C-style indication of a failure (in the absence of actual exception handling)
Frequent usage:
if (fork () == 0) {
      … the child’s task …  /* often implemented as: */
      exec ("absolute path to executable file", "args");
      exit (0); /* terminate child process */
   } else {
      … the parent’s task …
      pid = wait (); /* wait for the termination of one child process */
   }
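Filled out as a complete, compilable C program (the echoed command and its arguments are placeholders chosen for this sketch):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main (void)
{
   pid_t pid = fork ();
   if (pid == -1) {
      perror ("fork");                  /* C-style failure indication */
      exit (1);
   }
   if (pid == 0) {
      /* the child's task: replace this process image with another program */
      execl ("/bin/echo", "echo", "hello from the child", (char *) NULL);
      perror ("execl");                 /* only reached if execl failed */
      exit (1);
   } else {
      /* the parent's task: wait for the termination of one child process */
      int status;
      pid_t done = wait (&status);
      printf ("child %d terminated\n", (int) done);
   }
   return 0;
}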
Introduction to Concurrency
UNIX processes
Communication between UNIX tasks (‘pipes’)

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main (void)
{
   int data_pipe [2], c, rc;

   if (pipe (data_pipe) == -1) {
      perror ("no pipe"); exit (1);
   }
   if (fork () == 0) {
      /* child: read characters from the pipe and print them */
      close (data_pipe [1]);
      while ((rc = read (data_pipe [0], &c, 1)) > 0) {
         putchar (c);   /* note: read places one byte into the int c */
      }
      if (rc == -1) {
         perror ("pipe broken"); close (data_pipe [0]); exit (1);
      }
      close (data_pipe [0]); exit (0);
   } else {
      /* parent: read characters from stdin and send them down the pipe */
      close (data_pipe [0]);
      while ((c = getchar ()) > 0) {
         if (write (data_pipe [1], &c, 1) == -1) {
            perror ("pipe broken"); close (data_pipe [1]); exit (1);
         }
      }
      close (data_pipe [1]);
      wait (NULL); /* wait for the termination of the child process */
   }
   return 0;
}
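To try the example above: compile it with a C compiler (e.g. cc pipe.c, with a file name of your choosing), run it, type a few lines and end the input with Ctrl-D. Each character travels from the parent through the pipe to the child, which echoes it. Both processes close the pipe ends they do not use, so the child’s read can detect end-of-file once the parent closes the write end; note also that the parent’s loop stops on a NUL character as well as on end-of-file, since its termination test is > 0.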
Introduction to Concurrency
Concurrent programming languages
Requirements
Frequently requested essential elements:
• Concept of tasks, threads or other potentially concurrent entities
• Support for management of concurrent entities (create, terminate, …)
• Support for contention management (mutual exclusion, …)
• Support for synchronization (semaphores, monitors, …)
• Support for communication (message passing, shared memory, RPC, …)
• Support for protection (tasks, memory, devices, …)
Introduction to Concurrency
Concurrent programming languages
Language candidates
Explicit concurrency:
• Ada, C++, Rust
• Chill
• Erlang
• Go
• Smalltalk, Squeak
• Chapel, X10
• Occam, CSP
• Algol 68, Modula-2, Modula-3
• …
Implicit (potential) concurrency:
• Lisp, Haskell, Caml, Miranda, and any other functional language
• Prolog
• Esterel, Lustre, Signal
• …
Libraries & interfaces (outside language definitions):
• All .net languages
• POSIX
• Java, Scala, Clojure
• MPI (Message Passing Interface)
Wannabe concurrency:
• Ruby, Python [mostly broken due to global interpreter locks]
No support:
• Eiffel, Pascal
• C
• Fortran, Cobol, Basic, …
Introduction to Concurrency
Languages with implicit concurrency: e.g. functional programming
Implicit concurrency in some programming schemes
Quicksort in a functional language (here: Haskell):

qsort [] = []
qsort (x:xs) = qsort [y | y <- xs, y < x] ++ [x] ++ qsort [y | y <- xs, y >= x]

→ Pure functional programming is side-effect free.
→ Parameters can be evaluated independently, so they could run concurrently.
Some functional languages allow for lazy evaluation, i.e. sub-expressions are not necessarily evaluated completely:

borderline = (n /= 0) && (g (n) > h (n))

If n equals zero, then the evaluation of g (n) and h (n) can be stopped (or not even be started).
→ Concurrent program parts should be interruptible in this case.
Short-circuit evaluations in imperative languages assume explicit sequential execution:

if Pointer /= nil and then Pointer.next = nil then …
Introduction to Concurrency
Summary
Concurrency – The Basic Concepts
• Forms of concurrency
• Models and terminology
   • Abstractions and perspectives: computer science, physics & engineering
   • Observations: non-determinism, atomicity, interaction, interleaving
   • Correctness in concurrent systems
• Processes and threads
   • Basic concepts and notions
   • Process states
• Concurrent programming languages:
   • Explicit concurrency: e.g. Ada, Chapel
   • Implicit concurrency: functional programming – e.g. Haskell, Caml