Parallel Programming: Speedups and Amdahl's Law
speedups.and.amdahls.law.pptx
mjb – March 21, 2021

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Definition of Speedup
If you are using n processors, your Speedup_n is:

$$ \text{Speedup}_n = \frac{T_1}{T_n} $$

And your Speedup Efficiency_n is:

$$ \text{Efficiency}_n = \frac{\text{Speedup}_n}{n} $$

where $T_1$ is the execution time on one core and $T_n$ is the execution time on $n$ cores. Note that Speedup_n should be > 1. The efficiency could be as high as 1.0, but probably never will be.
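As a concrete illustration (not from the original slides), here is a minimal C sketch that computes these two quantities from measured wall-clock times; the variable names t1 and tn and the timing values are assumptions made for the example.

```c
#include <stdio.h>

/* Speedup and efficiency from measured execution times:
   t1 = time on one core, tn = time on n cores (both assumed > 0). */
static double speedup(double t1, double tn)
{
    return t1 / tn;
}

static double efficiency(double t1, double tn, int n)
{
    return speedup(t1, tn) / (double)n;
}

int main(void)
{
    double t1 = 10.0;   /* hypothetical: 10.0 seconds on 1 core  */
    double tn =  3.0;   /* hypothetical:  3.0 seconds on 4 cores */
    int    n  =  4;

    printf("Speedup_%d    = %.2f\n", n, speedup(t1, tn));        /* 3.33 */
    printf("Efficiency_%d = %.2f\n", n, efficiency(t1, tn, n));  /* 0.83 */
    return 0;
}
```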
However, Multicore is not a Free Lunch: Amdahl’s Law
If you put in n processors, you should get n times Speedup (and 100% Speedup Efficiency), right? Wrong!
There is always some fraction of the total operation that is inherently sequential and cannot be parallelized no matter what you do. This includes reading data, setting up the calculations, control logic, storing results, etc.
If you think of all the operations that a program needs to do as being divided between a fraction that is parallelizable and a fraction that isn’t (i.e., is stuck at being sequential), then Amdahl’s Law says:
$$ \text{Speedup}_n = \frac{T_1}{T_n} = \frac{1}{F_{sequential} + \dfrac{F_{parallel}}{n}} = \frac{1}{(1 - F_{parallel}) + \dfrac{F_{parallel}}{n}} $$

The $F_{parallel}/n$ term is the fraction that can be reduced by deploying multiple processors; the $F_{sequential} = (1 - F_{parallel})$ term is the fraction that can't.
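To see the formula in code, here is a small C sketch of Amdahl's Law as written above; the function name amdahl_speedup and the example value Fparallel = 0.90 are my own illustrative choices, not from the slides.

```c
#include <stdio.h>

/* Amdahl's Law: predicted speedup on n processors for a program whose
   parallelizable fraction is fparallel (0.0 .. 1.0). */
static double amdahl_speedup(double fparallel, int n)
{
    double fsequential = 1.0 - fparallel;
    return 1.0 / (fsequential + fparallel / (double)n);
}

int main(void)
{
    double fp = 0.90;   /* hypothetical: 90% of the work parallelizes */
    for (int n = 1; n <= 16; n *= 2)
        printf("n = %2d  speedup = %5.2f\n", n, amdahl_speedup(fp, n));
    return 0;
}
```

For Fparallel = 0.90, this prints a speedup of about 1.8 on 2 cores but only 6.4 on 16, which is exactly the effect the next slides illustrate.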
A Visual Explanation of Amdahl’s Law
The Sequential Portion doesn’t go away, and it also doesn’t get any smaller. It just gets more and more dominant.
[Figure: Execution Time vs. # of Cores (1, 2, 4); the Parallel Portion shrinks as cores are added while the Sequential Portion stays the same size.]

SpeedUp as a Function of Number of Processors and Fparallel

[Figure: SpeedUp vs. # of Processors, one curve per value of Fparallel.]
SpeedUp as a Function of Fparallel and Number of Processors

[Figure: SpeedUp vs. Parallel Fraction, one curve per number of processors.]
SpeedUp Efficiency ($S_N / N$) as a Function of Number of Processors and Fparallel

[Figure: SpeedUp Efficiency (0.00–1.00) vs. # of Processors (1–26) for Fparallel = 90%.]
SpeedUp Efficiency ($S_N / N$) as a Function of Fparallel and Number of Processors

[Figure: SpeedUp Efficiency (0.00–1.00) vs. Fp (0.6–0.9), one curve per number of processors.]
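Curves like the ones in these efficiency plots can be regenerated directly from Amdahl's Law; the following C sketch does that for the 1–26 processor range shown in the charts, though the particular Fparallel values in the loop are illustrative assumptions.

```c
#include <stdio.h>

/* SpeedUp Efficiency predicted by Amdahl's Law:
   Efficiency_n = Speedup_n / n, with Speedup_n = 1 / ((1-Fp) + Fp/n). */
static double efficiency(double fp, int n)
{
    double speedup = 1.0 / ((1.0 - fp) + fp / (double)n);
    return speedup / (double)n;
}

int main(void)
{
    double fps[] = { 0.60, 0.80, 0.90, 0.99 };   /* illustrative Fparallel values */
    for (int i = 0; i < 4; i++)
    {
        printf("Fparallel = %.2f:", fps[i]);
        for (int n = 1; n <= 26; n += 5)
            printf("  E(%2d) = %.2f", n, efficiency(fps[i], n));
        printf("\n");
    }
    return 0;
}
```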

You can also solve for Fparallel using Amdahl's Law if you know your speedup and the number of processors.

Amdahl's Law says:

$$ S = \frac{T_1}{T_n} = \frac{1}{\dfrac{F}{n} + (1 - F)} \quad\Rightarrow\quad \frac{1}{S} = \frac{F}{n} + (1 - F) = 1 + F\left(\frac{1}{n} - 1\right) $$

Solving for F:

$$ F = \frac{1 - \dfrac{1}{S}}{1 - \dfrac{1}{n}} = \frac{1 - \dfrac{T_n}{T_1}}{1 - \dfrac{1}{n}} = \frac{\dfrac{T_1 - T_n}{T_1}}{\dfrac{n - 1}{n}} = \frac{n\,(T_1 - T_n)}{T_1\,(n - 1)} $$

Use this if you know the speedup:

$$ F = \frac{n}{n - 1}\left(1 - \frac{1}{\text{Speedup}}\right) $$

Use this if you know the timing:

$$ F = \frac{n\,(T_1 - T_n)}{(n - 1)\,T_1} $$

If you've got several $(n_i, T_{n_i})$ values, you can take the average (which is actually a least squares fit):

$$ F = \operatorname*{avg}_{i}\; \frac{n_i\,(T_1 - T_{n_i})}{(n_i - 1)\,T_1}, \qquad i = 2 \ldots N $$

(note that when $i = 1$, $T_{n_i} = T_1$)
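Here is a C sketch of the "use this if you know the timing" form, averaged over several runs as described above; the measured times in the example are made up for illustration, and the function name estimate_fparallel is my own.

```c
#include <stdio.h>

/* Estimate Fparallel from measured timings using
   F = n*(T1 - Tn) / ((n - 1)*T1), averaged over all runs with n > 1. */
static double estimate_fparallel(const int n[], const double t[], int numRuns)
{
    double t1 = t[0];          /* assumes the first entry is the 1-core run */
    double sum = 0.;
    int    count = 0;
    for (int i = 0; i < numRuns; i++)
    {
        if (n[i] <= 1)
            continue;          /* when n = 1, Tn = T1 and the formula is 0/0 */
        sum += (double)n[i] * (t1 - t[i]) / ((double)(n[i] - 1) * t1);
        count++;
    }
    return sum / (double)count;
}

int main(void)
{
    /* hypothetical measurements: (cores, seconds) */
    int    n[] = { 1,    2,    4,    8    };
    double t[] = { 10.0, 5.50, 3.25, 2.10 };

    printf("Estimated Fparallel = %.3f\n", estimate_fparallel(n, t, 4));
    return 0;
}
```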
Amdahl's Law can also give us the Maximum Possible SpeedUp

Note that these fractions put an upper bound on how much benefit you will get from adding more processors:

$$ \text{maxSpeedup} = \lim_{n \to \infty} \text{Speedup}_n = \frac{1}{F_{sequential}} = \frac{1}{1 - F_{parallel}} $$

For example, if Fparallel = 0.99, the maximum possible speedup is 1 / 0.01 = 100, no matter how many processors you add.
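If it is useful to tabulate this bound, a minimal C sketch follows; the three Fparallel values in the array are just illustrative choices.

```c
#include <stdio.h>

/* Upper bound on speedup as the number of processors goes to infinity. */
static double max_speedup(double fparallel)
{
    return 1.0 / (1.0 - fparallel);
}

int main(void)
{
    double fp[] = { 0.50, 0.90, 0.99 };   /* illustrative parallel fractions */
    for (int i = 0; i < 3; i++)
        printf("Fparallel = %.2f  maxSpeedup = %.0f\n", fp[i], max_speedup(fp[i]));
    return 0;
}
```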
A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

Gustafson observed that as you increase the number of processors, you have a tendency to attack larger and larger versions of the problem. He also observed that when you use the same parallel program on larger datasets, the parallel fraction, Fp, increases.

Let P be the amount of time spent on the parallel portion of an original task and S be the amount of time spent on the serial portion.

[Figure: a task's execution time split into a Parallel Time portion (P) and a Serial Time portion (S).]

Then:

$$ F_p = \frac{P}{P + S} \quad\text{or}\quad P + S = \frac{P}{F_p} \quad\Rightarrow\quad S = \frac{P}{F_p} - P = P\,\frac{1 - F_p}{F_p} $$

Without loss of generality, we can set P = 1 so that, really, S is now a fraction of P. We now have:

$$ S = \frac{1 - F_p}{F_p} $$
A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

We know that if we multiply the amount of data to process by N, then the amount of parallel work becomes NP. Surely the serial work must increase too, but we don't know by how much. Let's say it doesn't increase at all, so that we know we are getting an upper-bound answer.

In that case, the new parallel fraction is:

$$ F_p' = \frac{P'}{P' + S} = \frac{NP}{NP + S} $$

And substituting for P (= 1) and for S, we have:

$$ F_p' = \frac{N}{N + S} = \frac{N}{N + \dfrac{1 - F_p}{F_p}} = \frac{N F_p}{N F_p + (1 - F_p)} $$

A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

If we tabulate this, we get a table of Fp' values:
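A table like that is easy to generate from the Fp' formula above; the following C sketch does so for a few Fp values and data-size multipliers N (the specific values are illustrative assumptions, not the ones from the original table).

```c
#include <stdio.h>

/* Gustafson-style scaled parallel fraction:
   Fp' = N*Fp / (N*Fp + (1 - Fp)), assuming the serial work does not grow. */
static double scaled_fp(double fp, double N)
{
    return N * fp / (N * fp + (1.0 - fp));
}

int main(void)
{
    double fps[] = { 0.50, 0.80, 0.90, 0.99 };   /* illustrative original Fp values   */
    double Ns[]  = { 1., 2., 4., 8., 16. };      /* illustrative data-size multipliers */

    printf("  Fp \\ N ");
    for (int j = 0; j < 5; j++)
        printf("%8.0f", Ns[j]);
    printf("\n");

    for (int i = 0; i < 4; i++)
    {
        printf("  %.2f   ", fps[i]);
        for (int j = 0; j < 5; j++)
            printf("%8.3f", scaled_fp(fps[i], Ns[j]));
        printf("\n");
    }
    return 0;
}
```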
A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

Or, graphing it:

[Figure: Fp' vs. N, one curve per value of Fp.]
A More Optimistic Take on Amdahl's Law: The Gustafson-Barsis Observation

We can also turn Fp' into a Maximum Speedup:

$$ \text{maxSpeedup} = \frac{1}{1 - F_p'} = \frac{N F_p + (1 - F_p)}{1 - F_p} $$
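Finally, a short C sketch of this bound, computing Fp' and then 1 / (1 - Fp'); the choice of Fp = 0.90 and the range of N are illustrative assumptions.

```c
#include <stdio.h>

/* Maximum speedup implied by the Gustafson-scaled parallel fraction Fp'. */
static double scaled_max_speedup(double fp, double N)
{
    double fpScaled = N * fp / (N * fp + (1.0 - fp));   /* Fp'                          */
    return 1.0 / (1.0 - fpScaled);                      /* = (N*fp + 1 - fp) / (1 - fp) */
}

int main(void)
{
    double fp = 0.90;                       /* illustrative original parallel fraction */
    for (double N = 1.; N <= 16.; N *= 2.)
        printf("N = %4.0f  maxSpeedup = %6.1f\n", N, scaled_max_speedup(fp, N));
    return 0;
}
```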
