
Parallel Programming: Speedups and Amdahl’s law

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Computer Graphics


speedups.and.amdahls.law.pptx
mjb – March 21, 2021

Definition of Speedup

If you are using $n$ processors, your $Speedup_n$ is:

$$Speedup_n = \frac{T_1}{T_n}$$

where $T_1$ is the execution time on one core and $T_n$ is the execution time on $n$ cores. Note that $Speedup_n$ should be > 1.

And your Speedup $Efficiency_n$ is:

$$Efficiency_n = \frac{Speedup_n}{n}$$

which could be as high as 1.0, but probably never will be.
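The two definitions above can be sketched in a few lines of Python (the function names and timing numbers are hypothetical, just for illustration):

```python
def speedup(t1, tn):
    """Speedup_n = T1 / Tn: one-core time divided by n-core time."""
    return t1 / tn

def efficiency(t1, tn, n):
    """Efficiency_n = Speedup_n / n: fraction of a perfect n-times speedup."""
    return speedup(t1, tn) / n

# Hypothetical timings: 100 s on one core, 30 s on four cores.
print(speedup(100.0, 30.0))        # speedup of about 3.33x on 4 cores
print(efficiency(100.0, 30.0, 4))  # efficiency of about 0.83
```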

However, Multicore is not a Free Lunch: Amdahl’s Law
If you put in n processors, you should get n times Speedup (and 100% Speedup Efficiency), right? Wrong!
There is always some fraction of the total operation that is inherently sequential and cannot be parallelized no matter what you do. This includes reading data, setting up calculations, control logic, storing results, etc.
If you think of all the operations that a program needs to do as being divided between a fraction that is parallelizable and a fraction that isn’t (i.e., is stuck at being sequential), then Amdahl’s Law says:
Speedup  T  n T
1  Fparallel  F
n sequential
Fparallel  (1 F )
n parallel
This fraction can be reduced by deploying multiple processors.
Computer Graphics
This fraction can’t.
mjb – March 21, 2021
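Amdahl's Law is easy to evaluate directly; here is a minimal Python sketch (the function name is my own):

```python
def amdahl_speedup(f_parallel, n):
    """Predicted speedup on n processors when a fraction f_parallel
    of the work can be parallelized (Amdahl's Law)."""
    return 1.0 / ((1.0 - f_parallel) + f_parallel / n)

# Even with 90% of the work parallelizable, 8 processors give
# well under an 8x speedup:
print(amdahl_speedup(0.90, 8))   # about 4.7x
```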

A Visual Explanation of Amdahl’s Law
The Sequential Portion doesn’t go away, and it also doesn’t get any smaller. It just gets more and more dominant.
[Figure: stacked Execution Time bars for 1, 2, and 4 cores. The Parallel Portion shrinks as cores are added; the Sequential Portion stays the same size.]

SpeedUp as a Function of Number of Processors and Fparallel

SpeedUp as a Function of Fparallel and Number of Processors
[Figure: SpeedUp vs. Parallel Fraction, one curve per number of processors.]

SpeedUp Efficiency ($S_N / N$) as a Function of Number of Processors and $F_{parallel}$

[Figure: SpeedUp Efficiency (0.00–1.00) vs. # of Processors (1–26) at $F_{parallel}$ = 90%.]

SpeedUp Efficiency ($S_N / N$) as a Function of $F_{parallel}$ and Number of Processors

[Figure: SpeedUp Efficiency (0.00–1.00) vs. $F_p$, one curve per number of processors.]

You can also solve for $F_{parallel}$ using Amdahl's Law if you know your speedup and the number of processors.

Amdahl's law says:

$$S = \frac{T_1}{T_n} = \frac{1}{(1 - F) + \frac{F}{n}}$$

Solving for $F$:

Use this if you know the timing:

$$F = \frac{n\,(T_1 - T_n)}{(n - 1)\,T_1}$$

Use this if you know the speedup:

$$F = \frac{1 - \frac{1}{S}}{1 - \frac{1}{n}}$$

If you've got several $(n, S)$ values, you can take the average (which is actually a least squares fit):

$$F = \frac{1}{N - 1} \sum_{i=2}^{N} \frac{n_i\,(T_1 - T_{n_i})}{(n_i - 1)\,T_1}$$

(note that when $i = 1$, the term is the indeterminate $0/0$, so the average starts at $i = 2$).
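A sketch of both forms, plus the averaging trick, assuming the timings follow Amdahl's model (the function names are my own):

```python
def f_from_timing(t1, tn, n):
    """Parallel fraction from a one-core time and an n-core time (n >= 2)."""
    return n * (t1 - tn) / ((n - 1) * t1)

def f_from_speedup(s, n):
    """Parallel fraction from a measured speedup s on n cores (n >= 2)."""
    return (1.0 - 1.0 / s) / (1.0 - 1.0 / n)

def f_average(t1, samples):
    """Average the timing-based estimates over (n_i, T_ni) pairs, i = 2..N."""
    return sum(f_from_timing(t1, tn, n) for n, tn in samples) / len(samples)

# Synthetic timings generated with F = 0.9: both estimators recover ~0.9.
t1 = 100.0
times = [(n, t1 * ((1 - 0.9) + 0.9 / n)) for n in (2, 4, 8)]
print(f_average(t1, times))                 # ~0.9
print(f_from_speedup(t1 / times[1][1], 4))  # ~0.9
```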

Amdahl’s Law can also give us the Maximum Possible SpeedUp
Note that these fractions put an upper bound on how much benefit you will get from adding more processors:
$$\max Speedup = \lim_{n \to \infty} Speedup_n = \frac{1}{F_{sequential}} = \frac{1}{1 - F_{parallel}}$$
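Taking that limit in code form (a minimal sketch; the function name is mine):

```python
def max_speedup(f_parallel):
    """Upper bound on speedup as n -> infinity: 1 / F_sequential."""
    return 1.0 / (1.0 - f_parallel)

# 90% parallelizable code can never run more than about 10x faster,
# no matter how many processors you add:
print(max_speedup(0.90))   # ~10x
```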

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

Gustafson observed that as you increase the number of processors, you have a tendency to attack larger and larger versions of the problem. He also observed that when you use the same parallel program on larger datasets, the parallel fraction, $F_p$, increases.

Let $P$ be the amount of time spent on the parallel portion of an original task and $S$ the time spent on the serial portion. Then:

$$F_p = \frac{P}{P + S}$$

Without loss of generality, we can set $P = 1$ so that, really, $S$ is now a fraction of $P$. We now have:

$$F_p = \frac{1}{1 + S}$$

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

We know that if we multiply the amount of data to process by $N$, then the amount of parallel work becomes $NP$. Surely the serial work must increase too, but we don't know by how much. Let's say it doesn't increase at all, so that we know we are getting an upper-bound answer.

In that case, the new parallel fraction is:

$$F_p' = \frac{P'}{P' + S} = \frac{NP}{NP + S}$$

And substituting for $P$ (= 1) and for $S$ (= $\frac{1 - F_p}{F_p}$), we have:

$$F_p' = \frac{N}{N + \frac{1 - F_p}{F_p}} = \frac{N F_p}{N F_p + 1 - F_p}$$
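A sketch of that scaled fraction, under the stated assumption that the serial work stays fixed (the function name is mine):

```python
def scaled_parallel_fraction(fp, n):
    """Gustafson-style upper bound: new parallel fraction F_p' when the
    dataset grows by a factor n and the serial work does not grow at all."""
    return (n * fp) / (n * fp + 1.0 - fp)

# A program that is 90% parallel becomes ~99% parallel
# on a 10x larger dataset:
print(scaled_parallel_fraction(0.90, 10))   # ~0.989
```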

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

If we tabulate this, we get a table of $F_p'$ values:

[Table: $F_p'$ for various values of $F_p$ and $N$.]

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

Or, graphing it:

[Figure: $F_p'$ vs. $N$, one curve per $F_p$.]

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

We can also turn $F_p'$ into a Maximum Speedup:

$$\max Speedup' = \frac{1}{1 - F_p'} = \frac{N F_p + 1 - F_p}{1 - F_p}$$
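Combining the two formulas, $1/(1 - F_p')$ with the scaled fraction substituted in, gives a short sketch (the function name is mine):

```python
def gustafson_max_speedup(fp, n):
    """Upper-bound speedup 1/(1 - F_p') when the dataset grows by a
    factor n and only the parallel work grows with it."""
    fp_scaled = (n * fp) / (n * fp + 1.0 - fp)
    return 1.0 / (1.0 - fp_scaled)

# With F_p = 0.90 and a 10x larger dataset, the bound jumps
# from ~10x (plain Amdahl) to about 91x:
print(gustafson_max_speedup(0.90, 10))   # ~91
```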
