COMP8551 Multithreading and multiprocessing I
COMP 8551
Advanced Games Programming Techniques
Realtime Issues and Multithreading I
Borna Noureddin, Ph.D.
British Columbia Institute of Technology
Overview
• Overview of multithreading
• Basic definitions
• Multithreading challenges
• Race conditions
• Mutexes
What is multithreading?
• A technique that allows an application to perform multiple tasks "simultaneously"
• A thread is a stream of instructions within a process
• Each thread has its own instruction pointer, set of registers, and stack memory
• The virtual address space is common to all threads within a process
• Data on the heap can be accessed by all threads
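For example (a minimal C++11 sketch, not from the slides; the container and function names are illustrative), two threads in one process can write directly to the same heap-allocated vector, while each keeps its own stack and registers:

#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> sharedData(100, 0);  // heap storage, visible to all threads in the process

    // Each thread writes to a disjoint half of the shared buffer (no race here).
    auto fillRange = [&sharedData](std::size_t begin, std::size_t end, int value) {
        for (std::size_t i = begin; i < end; ++i)
            sharedData[i] = value;
    };

    std::thread t1(fillRange, 0, 50, 1);    // first half
    std::thread t2(fillRange, 50, 100, 2);  // second half

    t1.join();
    t2.join();

    std::cout << sharedData[0] << " " << sharedData[99] << "\n";  // prints "1 2"
    return 0;
}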
What is multithreading?
• Not new, but only in the past decade has it become practical on PCs, especially with multi-core processors (before that: parallel processing systems; Concurrent Pascal, Ada, etc.)
• Why now?
• the emergence of SMPs (symmetric multiprocessors) in particular
What is multithreading?
• What is an SMP?
• Multiple CPUs in a single box sharing all resources such as memory and I/O
• Is an SMP more cost effective than two uniprocessor boxes?
• Yes: a dual-processor SMP typically costs only about 20% more than a single uniprocessor box
• Even a modest speedup for an application on a dual-processor SMP makes it worthwhile
Applications
• Multimedia
• GUIs
• Games
• Process-intensive (e.g., scientific calculation or visualization)
• High-end rendering
Multi-threading vs. -processing
• Threads: shared memory; lightweight
• Processes: separate memory; more overhead
• The OS usually assigns threads to different processors
Multi-threading vs. -processing
• Multi-threading: application with multiple threads running within a single process
• Multi-processing: application organized across multiple OS-level processes
Multi-threading vs. -processing
• Threads: a more "lightweight" form of concurrency
• context per thread
• lifetime, context-switching, and synchronization costs are lower
• Processes: insulated from each other by the OS
• an error in one cannot bring down another
• individual processes may run as different users and have different permissions
[Diagram: The Multi-Tasking Concept – the operating system assigns processor time to each task (Task A, Task B, Task C) on a single processor.]
[Diagram: The Multi-Threading Concept – a threading library creates threads T0, T1, T2 within Task A and assigns processor time to each thread.]
[Diagram: Multi-Threading in Multi-Processors – threads T0, T1, T2 of Task A run on separate processors (Processor 1 to Processor 4).]
Definitions
Physical CPU: An actual CPU/processor on the motherboard. On a single-core processor, the number of physical and logical CPUs is the same.
Logical CPU: A separate CPU pipeline that the OS can schedule work onto. A Hyper-Threaded processor exposes two logical CPUs per core; a multi-core processor exposes one logical CPU per core per processor.
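As a quick check (a small sketch, assuming standard C++11; not from the slides), the number of logical CPUs the OS exposes can be queried with std::thread::hardware_concurrency():

#include <iostream>
#include <thread>

int main()
{
    // Returns the number of logical CPUs (e.g., 8 on a quad-core Hyper-Threaded
    // processor), or 0 if the value cannot be determined.
    unsigned int logicalCpus = std::thread::hardware_concurrency();
    std::cout << "Logical CPUs: " << logicalCpus << "\n";
    return 0;
}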
Definitions
Atomic operation: An operation in code that must be executed by one thread at a time, typically to maintain data integrity:

int intThreadIterations = 0;
while (intSharedVariable < intMaximumIterations &&
       intThreadIterations <= 5) {
    // Do some work
    intSharedVariable++;
    intThreadIterations++;
}

This will not work correctly if other threads try to update intSharedVariable: this block of code must be an atomic operation.
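A hedged sketch of one direction (using C++11 std::atomic; the types and initial values are assumptions): fetch_add makes each single increment atomic, but the test-and-update loop as a whole is still not atomic and would need a lock (see the mutex slides):

#include <atomic>

std::atomic<int> intSharedVariable{0};     // assumed shared counter
const int intMaximumIterations = 100;      // assumed limit

void work()
{
    int intThreadIterations = 0;
    while (intSharedVariable.load() < intMaximumIterations &&
           intThreadIterations <= 5)
    {
        // Do some work
        intSharedVariable.fetch_add(1);    // this single update is atomic
        intThreadIterations++;
        // ...but another thread may change intSharedVariable between the test
        // above and this increment, so the block as a whole is not atomic.
    }
}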
Definitions
Block: A thread (or process) blocks when it must wait for another thread to finish before it can continue its work. E.g., any thread trying to access intSharedVariable will block until the atomic operation is completed.
Lock: A mechanism for restricting other threads' access to a resource (the other threads will block until the lock is released).
Main challenge
Shared resources
• Trade-off between locking data (for safety) and performance
• Choosing the size of atomic operations
• Testing/debugging is much harder
Race conditions
• The behaviour of the code depends on the interleaving of multiple threads – the fundamental problem of multi-threaded programming
• Single-threaded code: we only have to think about the lines of code right in front of us – we can assume data will not "magically" change between statements
• Multi-threaded code: non-local data can change unexpectedly due to the actions of another thread
Race conditions
• Can result in high-level logical faults in your program
• May even pierce C++'s statement-level abstraction:
• we cannot assume that a single C++ statement executes atomically (it may compile to multiple assembly instructions)
• we cannot guarantee the outcome of foo += 1; if foo is non-local and may be accessed from multiple threads
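A minimal demonstration of the foo += 1 problem (a sketch, not from the slides; formally this unsynchronized access is a data race and undefined behaviour, but in practice updates are simply lost):

#include <iostream>
#include <thread>

int foo = 0;  // non-local, shared between threads, no synchronization

void increment()
{
    for (int i = 0; i < 100000; ++i)
        foo += 1;  // compiles to load, add, store: not a single atomic step
}

int main()
{
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << foo << "\n";  // typically prints less than 200000
    return 0;
}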
Race conditions
int sharedCounter = 50;

void* workerThread(void*)
{
    while (sharedCounter > 0)
    {
        doSomeWork();
        --sharedCounter;   // decrement is not synchronized with the test above
    }
    return nullptr;
}
Race conditions
• Start a number of threads, all executing workerThread()
• With just one thread: doSomeWork() will be executed the correct number of times (whatever sharedCounter starts out at)
• With multiple threads: doSomeWork() will most likely be executed too many times
• we do not test and update sharedCounter as an atomic operation!
Race conditions
• Solution: use a mutex to synchronize threads with respect to the test and update
• That is, we need to define a "critical section" in which we both test and update sharedCounter
Mutexes
• A locking primitive used to ensure that only one thread at a time has access to a resource
• An OS-level synchronization primitive that can be used to ensure a section of code is executed by only one thread at a time
Mutexes
• Two states: locked and unlocked
• Locked: any further attempt to lock it will block (the calling thread is suspended)
• Unlocked: if there are threads waiting, one of them is resumed and locks the mutex
• A mutex may only be unlocked by the thread that locked it
Mutexes
• If we have a resource that needs to be shared between threads:
• associate a mutex with it
• use the mutex to synchronize access to the resource
• ensure our code locks the mutex before using the resource and unlocks it when finished
• This prevents race conditions caused by multiple threads accessing the resource simultaneously
Mutexes
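A minimal sketch of how the earlier workerThread example could be fixed with a mutex (using C++11 std::mutex and std::lock_guard; this reconstruction is an assumption, not the original slide's code). The test and the update of sharedCounter form one critical section, and doSomeWork() runs outside the lock:

#include <mutex>

int sharedCounter = 50;
std::mutex counterMutex;     // mutex associated with sharedCounter

void doSomeWork();           // assumed to be defined elsewhere

void* workerThread(void*)
{
    while (true)
    {
        {
            // Critical section: test and update together, under the lock.
            std::lock_guard<std::mutex> lock(counterMutex);
            if (sharedCounter <= 0)
                break;           // counter exhausted
            --sharedCounter;
        }                        // lock released here
        doSomeWork();            // done outside the lock to keep the critical section short
    }
    return nullptr;
}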
Additional Reading
http://randu.org/tutorials/threads/
http://www.codeproject.com/Articles/14746/Multithreading-Tutorial
http://www.computersciencelab.com/MultithreadingTut1.htm
https://katyscode.wordpress.com/2013/05/17/introduction-to-multi-threaded-multi-core-and-parallel-programming-concepts/
https://scalibq.wordpress.com/2012/06/01/multi-core-and-multi-threading/
Review
• Overview of multithreading
• Basic definitions
• Multithreading challenges
• Race conditions
• Mutexes
END