
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Computer Graphics
Parallel Programming: Speedups and Amdahl’s law


speedups.and.amdahls.law.pptx
mjb – March 21, 2021
Definition of Speedup
If you are using n processors, your Speedup_n is:

    Speedup_n = T_1 / T_n

And your Speedup Efficiency_n is:

    Efficiency_n = Speedup_n / n

where T_1 is the execution time on one core and T_n is the execution time on n cores. Note that Speedup_n should be > 1, and Efficiency_n could be as high as 1.0, but probably never will be.
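For example, here is a minimal C++/OpenMP sketch of how you could measure Speedup_n and Efficiency_n yourself. The array-summing workload, the array size, the thread count n = 4, and the build line in the comment are illustrative assumptions, not part of these notes:

// speedup_demo.cpp -- measure T1 and Tn, then report Speedup_n and Efficiency_n.
// Build (assumed): g++ -O2 -fopenmp speedup_demo.cpp -o speedup_demo
#include <cstdio>
#include <vector>
#include <omp.h>

// A made-up, compute-bound kernel just so there is something to time:
double Work( const std::vector<double>& a )
{
    double sum = 0.;
    #pragma omp parallel for reduction(+:sum)
    for( int i = 0; i < (int)a.size(); i++ )
        sum += a[i] * a[i];
    return sum;
}

int main( )
{
    std::vector<double> a( 10000000, 1.5 );

    omp_set_num_threads( 1 );                    // T1: one core
    double t0 = omp_get_wtime( );
    volatile double s1 = Work( a );              // volatile: keep the call from being optimized away
    double T1 = omp_get_wtime( ) - t0;

    const int n = 4;                             // Tn: n cores (assumed n = 4)
    omp_set_num_threads( n );
    t0 = omp_get_wtime( );
    volatile double sn = Work( a );
    double Tn = omp_get_wtime( ) - t0;

    double speedup    = T1 / Tn;                 // Speedup_n    = T1 / Tn
    double efficiency = speedup / (double)n;     // Efficiency_n = Speedup_n / n
    printf( "T1 = %.4f s, T%d = %.4f s, Speedup = %.2f, Efficiency = %.2f\n",
            T1, n, Tn, speedup, efficiency );
    return 0;
}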

However, Multicore is not a Free Lunch: Amdahl’s Law
If you put in n processors, you should get n times Speedup (and 100% Speedup Efficiency), right? Wrong!
There is always some fraction of the total operation that is inherently sequential and cannot be parallelized no matter what you do. This includes reading data, setting up calculations, control logic, storing results, etc.
If you think of all the operations that a program needs to do as being divided between a fraction that is parallelizable and a fraction that isn’t (i.e., is stuck at being sequential), then Amdahl’s Law says:
    Speedup_n = T_1 / T_n = 1 / ( F_parallel / n  +  F_sequential ) = 1 / ( F_parallel / n  +  (1 - F_parallel) )

The F_parallel / n term can be reduced by deploying multiple processors. The F_sequential = (1 - F_parallel) term can't.
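To make the formula concrete, here is a small C++ sketch that evaluates Amdahl's predicted speedup. The function name AmdahlSpeedup and the sample values (F_parallel = 0.90 on 8 and 64 processors) are our own choices for illustration:

// amdahl.cpp -- evaluate Amdahl's Law:  Speedup_n = 1 / ( Fp/n + (1 - Fp) )
#include <cstdio>

double AmdahlSpeedup( double fParallel, int n )
{
    return 1. / ( fParallel / (double)n  +  ( 1. - fParallel ) );
}

int main( )
{
    // 90% of the work is parallelizable (an assumed value):
    printf( "Fp = 0.90, n =  8: predicted Speedup = %.2f\n", AmdahlSpeedup( 0.90,  8 ) );
    printf( "Fp = 0.90, n = 64: predicted Speedup = %.2f\n", AmdahlSpeedup( 0.90, 64 ) );
    return 0;
}

Notice how little the jump from 8 to 64 processors buys you when 10% of the work stays sequential.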
A Visual Explanation of Amdahl’s Law
The Sequential Portion doesn’t go away, and it also doesn’t get any smaller. It just gets more and more dominant.
[Figure: bar chart of Execution Time vs. # of Cores (1, 2, 4). Each bar is split into a Sequential Portion and a Parallel Portion; the Parallel Portion shrinks as cores are added, while the Sequential Portion stays the same size.]
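A few lines of C++ reproduce the same breakdown numerically. The one-second single-core time and the 80% parallel fraction are assumed values, picked only to show the trend:

// amdahl_time.cpp -- sequential/parallel time breakdown for 1, 2, and 4 cores.
// T1 = 1.0 s and Fp = 0.80 are illustrative assumptions.
#include <cstdio>

int main( )
{
    const double T1 = 1.0;      // total one-core time (assumed)
    const double Fp = 0.80;     // parallel fraction  (assumed)

    for( int n : { 1, 2, 4 } )
    {
        double sequential = T1 * ( 1. - Fp );       // never shrinks
        double parallel   = T1 * Fp / (double)n;    // shrinks as 1/n
        printf( "%d core(s): sequential = %.2f s, parallel = %.2f s, total = %.2f s\n",
                n, sequential, parallel, sequential + parallel );
    }
    return 0;
}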

SpeedUp as a Function of Number of Processors and F_parallel

[Figure: SpeedUp vs. # of Processors, plotted for several values of F_parallel.]
SpeedUp as a Function of F_parallel and Number of Processors

[Figure: SpeedUp vs. Parallel Fraction, plotted for several numbers of processors.]

SpeedUp Efficiency (Speedup_n / n) as a Function of Number of Processors and F_parallel

[Figure: SpeedUp Efficiency (0.00 to 1.00) vs. # of Processors (1 to 26), for F_parallel = 90%.]
SpeedUp Efficiency (Speedup_n / n) as a Function of F_parallel and Number of Processors

[Figure: SpeedUp Efficiency (0.00 to 1.00) vs. F_p (0.6 to 0.9), plotted for several numbers of processors.]
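Both of these curves come straight from the formulas above. Here is a sketch that tabulates SpeedUp Efficiency each way; the processor counts 1..26, F_parallel = 0.90, the F_p range 0.60..0.90, and the fixed count of 8 processors are assumptions chosen to roughly match the plot axes:

// efficiency_tables.cpp -- tabulate Efficiency_n = Speedup_n / n two ways.
#include <cstdio>

static double AmdahlSpeedup( double fp, int n )
{
    return 1. / ( fp / (double)n  +  ( 1. - fp ) );
}

int main( )
{
    // Efficiency vs. # of processors, at Fp = 0.90 (assumed):
    printf( "Fp = 0.90:\n" );
    for( int n = 1; n <= 26; n += 5 )
        printf( "  n = %2d: efficiency = %.2f\n", n, AmdahlSpeedup( 0.90, n ) / (double)n );

    // Efficiency vs. Fp, at a fixed 8 processors (assumed):
    printf( "n = 8:\n" );
    for( double fp = 0.60; fp <= 0.90 + 1.e-9; fp += 0.10 )
        printf( "  Fp = %.2f: efficiency = %.2f\n", fp, AmdahlSpeedup( fp, 8 ) / 8. );

    return 0;
}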

You can also solve for F_parallel using Amdahl's Law if you know your speedup and the number of processors.
Amdahl's Law says:

    T_n = T_1 * ( F/n + (1 - F) )

Solving for F (writing Speedup_n = T_1 / T_n):

    T_n / T_1 = 1 / Speedup_n = F/n + (1 - F) = 1 - F * (n - 1)/n

so that:

    F = ( n / (n - 1) ) * ( T_1 - T_n ) / T_1          (use this if you know the timing)

    F = ( n / (n - 1) ) * ( 1 - 1/Speedup_n )          (use this if you know the speedup)

If you've got several (n, Speedup_n) values, you can take the average (which is actually a least squares fit):

    F ≈ ( 1 / (N - 1) ) * Σ over i = 2..N of ( n_i / (n_i - 1) ) * ( 1 - 1/Speedup_{n_i} )

Note that when i = 1, n_i = 1 and T_{n_i} = T_1, so that term is 0/0 and is left out of the average.
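Here is a sketch of that calculation; the (n, Speedup_n) measurements in it are made up for illustration, and it applies the speedup form of the formula to each run with n > 1, then averages:

// estimate_fp.cpp -- estimate Fparallel from measured (n, Speedup_n) pairs using
//     F = ( n / (n-1) ) * ( 1 - 1/Speedup_n ),  averaged over the n > 1 runs.
#include <cstdio>
#include <vector>
#include <utility>

int main( )
{
    // Made-up measurements (n, Speedup_n) -- replace with your own:
    std::vector< std::pair<int,double> > runs = { {1, 1.00}, {2, 1.82}, {4, 3.10}, {8, 4.70} };

    double sum   = 0.;
    int    count = 0;
    for( const auto& r : runs )
    {
        int    n = r.first;
        double s = r.second;
        if( n <= 1 )        // the n = 1 term is 0/0 -- skip it
            continue;
        sum += ( (double)n / (double)( n - 1 ) ) * ( 1. - 1. / s );
        count++;
    }

    if( count > 0 )
        printf( "Estimated Fparallel = %.3f\n", sum / (double)count );
    return 0;
}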
Amdahl’s Law can also give us the Maximum Possible SpeedUp
Note that these fractions put an upper bound on how much benefit you will get from adding more processors:
    max Speedup = lim as n → ∞ of Speedup_n = 1 / F_sequential = 1 / (1 - F_parallel)

For example, if F_parallel = 0.90, the maximum possible speedup is 10.0, no matter how many processors you use.

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation
Gustafson observed that as you increase the number of processors, you have a tendency to attack larger and larger versions of the problem. He also observed that when you use the same parallel program on larger datasets, the parallel fraction, Fp, increases.
Let P be the amount of time spent on the parallel portion of an original task and S spent on the serial portion. Then
Fp P or SPPFp PS Fp
Parallel Serial Time Time
Without loss of generality, we can set P=1 so that, really, S is now a fraction of P. We now have: 1 Fp
We know that if we multiply the amount of data to process by N, then the amount of parallel work becomes NP. Surely the serial work must increase too, but we don’t know how much. Let’s say it doesn’t increase at all, so that we know we are getting an upper bound answer.
In that case, the new parallel fraction is:

    F_p' = P' / (P' + S) = N*P / (N*P + S)

And substituting for P (= 1) and for S, we have:

    F_p' = N / ( N + (1 - F_p)/F_p )

If we tabulate this, we get a table of Fp’ values:
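Here is a sketch that generates such a table of F_p' values; the particular F_p rows and N columns are assumptions:

// gustafson_table.cpp -- tabulate the scaled parallel fraction
//     Fp' = N / ( N + (1 - Fp)/Fp )
// for an assumed grid of Fp and N values.
#include <cstdio>

int main( )
{
    const double fps[ ] = { 0.50, 0.70, 0.90, 0.99 };
    const int    Ns [ ] = { 1, 2, 4, 8, 16, 32 };

    printf( " Fp\\N  " );
    for( int N : Ns )
        printf( "%8d", N );
    printf( "\n" );

    for( double fp : fps )
    {
        printf( "%6.2f ", fp );
        for( int N : Ns )
        {
            double s     = ( 1. - fp ) / fp;                 // serial time, with P = 1
            double fpNew = (double)N / ( (double)N + s );    // Fp'
            printf( "%8.3f", fpNew );
        }
        printf( "\n" );
    }
    return 0;
}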
Or, graphing it:
[Figure: F_p' as a function of N, for several values of F_p.]

We can also turn F_p' into a Maximum Speedup, using the same formula as before:

    max Speedup = 1 / (1 - F_p')
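A sketch of that last step, re-using the earlier maximum-speedup formula with F_p' in place of F_p (F_p = 0.90 and the N values are assumptions):

// gustafson_maxspeedup.cpp -- max Speedup = 1 / (1 - Fp'),  with
//     Fp' = N / ( N + (1 - Fp)/Fp ).
#include <cstdio>

int main( )
{
    const double fp = 0.90;                                 // assumed parallel fraction
    for( int N : { 1, 2, 4, 8, 16, 32 } )
    {
        double s          = ( 1. - fp ) / fp;               // serial time, with P = 1
        double fpNew      = (double)N / ( (double)N + s );  // Fp'
        double maxSpeedup = 1. / ( 1. - fpNew );
        printf( "N = %2d: Fp' = %.3f, max Speedup = %.1f\n", N, fpNew, maxSpeedup );
    }
    return 0;
}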
