Algorithms & Data Structures (Winter 2022)
Algorithm Paradigms – Dynamic Programming
Announcements
• Complete Search
• Divide and Conquer.
• Dynamic Programming.
• Introduction.
• Examples.
Dynamic Programming– What is it?
• What’s the following equal to?
1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1 = 21
• Now append one more “+ 1”. The new total is 22: we reuse the already-computed answer 21 (the subproblem) instead of recounting from scratch.
“DP is just a fancy way of remembering stuff to save time later!”
DP breaks a problem into smaller overlapping sub-problems and avoids the computation of the same results more than once.
Famous saying: “Those who cannot remember the past are condemned to repeat it.”
For algorithms: “Algorithms who cannot remember the past are condemned to recompute it.”
Dynamic Programming– What is it?
fib(n)
    if n <= 1
        return n
    else
        return fib(n - 1) + fib(n - 2)
Dynamic Programming– What is it?
FibArray = [0, 0, ..., 0]      (memo table of size n+1, initially all 0)

fib(n)
    if n <= 1
        return n
    else
        if FibArray[n-1] == 0
            FibArray[n-1] = fib(n-1)
        if FibArray[n-2] == 0
            FibArray[n-2] = fib(n-2)
        FibArray[n] = FibArray[n-2] + FibArray[n-1]
        return FibArray[n]
Dynamic Programming– What is it?
This recursive DP (the memoized version above) looks through all possible sub-problems and never re-computes the solution to any sub-problem. This implies both correctness and efficiency, which we cannot say of most techniques. This alone makes DP special.
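To make the overlap concrete, here is a small Python sketch (not from the slides; the call counter and function names are illustrative only) comparing the plain recursion with the memoized version above:

# Plain recursion vs. top-down DP (memoization) for Fibonacci.
calls = 0

def fib_naive(n):
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    global calls
    if memo is None:
        memo = {}
    calls += 1
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

calls = 0
print(fib_naive(20), calls)   # 6765 computed with 21891 calls
calls = 0
print(fib_memo(20), calls)    # 6765 computed with only 39 calls

The naive version recomputes the same sub-problems exponentially often; the memo dictionary plays the role of FibArray[] in the pseudocode above.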
Dynamic Programming– When to use it?
Key ingredients to make DP work:
1. This problem has optimal sub-structure.
• The solution for a sub-problem is part of the solution of the original problem.
2. This problem has overlapping sub-problems.
Dynamic Programming– How to solve it?
Steps for Solving DP Problems
1. Define subproblems.
fib(n-1) and fib(n-2) are subproblems of fib(n)
2. Write down the recurrence that relates subproblems.
fib(n) = fib(n-1) + fib(n-2)
3. Recognize and solve the base cases.
fib(0) = 0; fib(1) = 1
4. Implement a solving methodology.
• Memoization: Top down approach => FibArray[]
• Tabulation: Bottom up approach
Top-Down VS Bottom-Up
• Top-Down (Memoization).
• I will be an expert in dynamic programming. How? I will work hard like crazy. How? I’ll practice more and try to improve. How? I’ll actively participate in David’s classes. How? I will attend all David’s classes. Then, I’m going to learn dynamic programming.
• Bottom-Up (Tabulation).
• I’m going to learn dynamic programming. Then, I will attend all David’s classes. Then, I will actively participate in David’s classes. Then, I’ll practice even more and try to improve. After working hard like crazy, I will be an expert in dynamic programming.
DP – Top-Down
1. Initialize a DP ‘memo’ table with dummy values, e.g. ‘-1’.
• The dimensions of the DP table must cover all distinct sub-problems (one entry per state).
2. At the start of the recursive function, check whether the current state has been computed before.
(a) If it has, simply return the value from the DP memo table, O(1).
(b) If it has not, compute it as per normal (just once) and then store the computed value in the DP memo table so that further calls to this sub-problem are fast.
DP – Bottom-Up
1. Identify the Complete Search recurrence.
2. Initialize some parts of the DP table with known initial values.
3. Determine how to fill the rest of the DP table based on the Complete Search recurrence, usually involving one or more nested loops to do so.
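As an illustration of these three steps (a sketch only, not taken from the slides), here is a bottom-up Fibonacci in Python; the two-variable variant at the end is one way to realise the “on-the-fly” space saving mentioned in the comparison below:

# Bottom-up (tabulation) Fibonacci.
def fib_tab(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)           # known initial values (step 2)
    table[1] = 1                    # base cases: fib(0) = 0, fib(1) = 1
    for i in range(2, n + 1):       # fill the rest of the table (step 3)
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

def fib_two_vars(n):
    # Only the last two table entries are ever needed, so the O(n) table
    # can be replaced by two variables ("on-the-fly" space saving).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_tab(10), fib_two_vars(10))   # 55 55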
DP – Top-Down VS Bottom-Up
Top-Down (Memoization)
Pros:
1. It is a natural transformation from the normal complete-search recursion.
2. It computes sub-problems only when necessary.
Cons:
1. Slower if many sub-problems are revisited, due to the overhead of recursive calls.
2. If there are M states, it needs an O(M)-size memo table, which can lead to memory problems.

Bottom-Up (Tabulation)
Pros:
1. Faster if many sub-problems are revisited, as there is no recursive-call overhead.
2. Can save memory space with the “on-the-fly” DP table technique.
Cons:
1. For programmers who are inclined towards recursion, this may be less intuitive.
2. If there are M states, bottom-up DP visits and fills the values of all M states.
DP – Take home picture
DP – Examples
• 1-dimensional DP
• Weighted interval scheduling
DP – 1-dimensional
Problem: Given n, find the number of different ways to write n as the sum of the numbers 1, 3, 4.
Example: for n = 5, the answer is 6:
5 = 1+1+1+1+1
  = 1+1+3
  = 1+3+1
  = 3+1+1
  = 1+4
  = 4+1
DP – 1-dimensional
Step 1: Identify the sub-problems (in words). Step 1.1: Identify the possible sub-problems.
Let Dn be the number of ways to write n as the sum of 1, 3, 4; the sub-problems are the same counts Dn-i for smaller values n-i.
DP – 1-dimensional
Step 2: Find the recurrence.
Step 2.1: What decision do I make at every step?
• Consider one possible solution n = x1 + x2 + ... + xm.
• If xm = 1, the rest of the terms must sum to n − 1. Thus, the number of sums that end with xm = 1 is equal to Dn−1.
• The same argument applies to xm = 3 and xm = 4, giving the recurrence Dn = Dn−1 + Dn−3 + Dn−4.
Example for n = 5, grouped by the last term:
• last term 1: 1+1+1+1+1, 1+3+1, 3+1+1, 4+1 (the D4 = 4 ways of writing 4, each followed by +1)
• last term 3: 1+1+3 (D2 = 1)
• last term 4: 1+4 (D1 = 1)
DP – 1-dimensional
Key ingredients to make DP work
1. This problem has optimal sub-structure.
• The solution for the sub-problem is part of the solution of the original problem.
2. This problem has overlapping sub-problems.
DP – 1-dimensional
Step 3: Recognize and solve the base cases.
• D0 = 1, and Dn = 0 for all n < 0.
• Alternatively, we can set: D0 = D1 = D2 = 1, and D3 = 2.
Step 4: Implement a solving methodology.
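For step 4, a hedged Python sketch of the bottom-up solution (the function name is mine), using the recurrence Dn = Dn−1 + Dn−3 + Dn−4 with D0 = 1 and Dn = 0 for n < 0:

# Number of ways to write n as an ordered sum of the numbers 1, 3 and 4.
def count_ways(n):
    D = [0] * (n + 1)
    D[0] = 1                         # base case: the empty sum
    for i in range(1, n + 1):
        for k in (1, 3, 4):          # possible last term of the sum
            if i - k >= 0:
                D[i] += D[i - k]     # D[i] = D[i-1] + D[i-3] + D[i-4]
    return D[n]

print(count_ways(5))                 # 6, matching the example above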
DP – 1-dimensional – weighted interval scheduling
• Input: Set S of n activities, a1, a2, ..., an.
– si = start time of activity i.
– fi = finish time of activity i.
– wi= weight of activity i
• Output: find maximum weight subset of mutually compatible activities.
– 2 activities are compatible, if their intervals do not overlap.
weighted interval scheduling – Data Structure
Notation: all activities are sorted by finishing time f1 ≤ f2 ≤ ... ≤ fn.
Definition: p(j) = largest index i < j such that activity/job i is compatible with activity/job j.
Examples: p(6) = 4, p(5) = 2, p(4) = 2, p(2) = 0.
weighted interval scheduling
Step 1: Identify the sub-problems (in words). Step 1.1: Identify the possible sub-problems.
Let OPT(j) be the maximum total weight of compatible activities 1 to j (i.e., value of the optimal solution to the problem including activities 1 to j).
(Figure: activities a1–a7, each drawn with its start time s and finish time f on a time axis from 0 to 10.)
weighted interval scheduling
Step 2: Find the recurrence.
Step 2.1: What decision do I make at every step?
Case 1: OPT selects activity j.
• Add weight wj.
• Cannot use incompatible activities.
• Must include the optimal solution on the remaining compatible activities { 1, 2, ..., p(j) }.
Case 2: OPT does not select activity j.
• Must include the optimal solution on the other activities { 1, 2, ..., j−1 }.
Optimal substructure property
weighted interval scheduling
Step 2: Find the recurrence.
• Case 1: OPT selects activity j: add weight wj, and include the optimal solution on the remaining compatible activities { 1, 2, ..., p(j) }.
• Case 2: OPT does not select activity j: include the optimal solution on the other activities { 1, 2, ..., j−1 }.

OPT(j) = 0                                       if j = 0
OPT(j) = max( wj + OPT(p(j)), OPT(j−1) )         otherwise
weighted interval scheduling
OPT(j) = 0                                       if j = 0
OPT(j) = max( wj + OPT(p(j)), OPT(j−1) )         otherwise
Input: n, s[1..n], f[1..n], v[1..n]
Sort jobs by finish time so that f[1] ≤ f[2] ≤ ... ≤ f[n].
Compute p[1], p[2], ..., p[n].

Compute-Opt(j)
    if j = 0 return 0
    else return max(v[j] + Compute-Opt(p[j]), Compute-Opt(j-1))
weighted interval scheduling – recursion tree
(Recursion tree for Compute-Opt on 6 activities: branches such as w6 + OPT(4) and OPT(5) expand into OPT(3), OPT(2), OPT(1), ..., and the same sub-problems appear on many branches.)
Observation: OPT(j) is calculated multiple times ...
weighted interval scheduling – top down
Memoization: Cache results of each subproblem; lookup as needed.
Input: n, s[1..n], f[1..n], v[1..n]
Sort jobs by finish time so that f[1] ≤ f[2] ≤ ... ≤ f[n].
Compute p[1], p[2], ..., p[n].
M[0] ← 0
for j = 1 to n
    M[j] ← empty

M-Compute-Opt(j)
    if M[j] is empty
        M[j] ← max(v[j] + M-Compute-Opt(p[j]), M-Compute-Opt(j-1))
    return M[j]
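A minimal Python sketch of the same memoized recursion. It uses the 5-activity instance from the reconstruction table a few slides below; the weights v were inferred from that table, so treat them as an illustrative assumption:

# Top-down (memoized) weighted interval scheduling, mirroring M-Compute-Opt.
v = [0, 2, 3, 1, 6, 4]     # v[j] = weight of activity j (index 0 is a dummy)
p = [0, 0, 0, 2, 2, 3]     # p[j] = largest i < j compatible with activity j
n = 5

M = [None] * (n + 1)
M[0] = 0                    # base case: no activities, total weight 0

def m_compute_opt(j):
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j]), m_compute_opt(j - 1))
    return M[j]

print(m_compute_opt(n))     # 9, the optimal total weight for this instance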
weighted interval scheduling – bottom-up
Observation: When we compute M[j], we only need the values M[k] for k < j, so M[] can also be filled iteratively, with M[0] = 0 and M[j] = max(v[j] + M[p[j]], M[j-1]) for j = 1, ..., n.

Find-Solution(j)
    if j = 0 return ∅
    else if v[j] + M[p[j]] ≥ M[j-1] return { j } ∪ Find-Solution(p[j])
    else return Find-Solution(j-1)

Analysis: # of recursive calls ≤ n ⇒ O(n).
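The same instance filled bottom-up, followed by the reconstruction walk; again a hedged Python sketch using the inferred v and p arrays, not code from the slides:

# Bottom-up weighted interval scheduling plus reconstruction (Find-Solution idea).
v = [0, 2, 3, 1, 6, 4]
p = [0, 0, 0, 2, 2, 3]
n = 5

M = [0] * (n + 1)           # M[j] only depends on entries with smaller index
for j in range(1, n + 1):
    M[j] = max(v[j] + M[p[j]], M[j - 1])

def find_solution(j):
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:     # activity j is in an optimal solution
        return find_solution(p[j]) + [j]
    return find_solution(j - 1)

print(M)                    # [0, 2, 3, 4, 9, 9], matching the table below
print(find_solution(n))     # [2, 4]: activities 2 and 4, total weight 9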
weighted interval scheduling – reconstruction

activity j          1  2  3  4  5
predecessor p(j)    0  0  2  2  3
best weight M[j]    2  3  4  9  9
vj + M[p(j)]        2  3  4  9  8
M[j−1]              0  2  3  4  9

(1) Activities sorted by finishing time. (2) Weight equal to the length of the activity.
(Figure: activities a1–a5 on a time axis from 0 to 10.)
weighted interval scheduling – running time
• Sorting the activities by finish time: O(n log n).
• Computing every p(j) by binary search over the sorted finish times: O(n log n) in total.
• Filling the table M[] (top-down or bottom-up): O(n).
• Overall running time: O(n log n).
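The only step that needs care in the O(n log n) bound is computing p(j). A hedged Python sketch using the standard library's bisect (the helper name and the example intervals are made up for illustration):

import bisect

# p[j] = largest i < j with f[i] <= s[j] (0 if none), for activities already
# sorted by finish time; arrays are 1-indexed with a dummy entry at index 0.
def compute_p(s, f, n):
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        # f[1..j-1] is sorted, so binary search for the start time s[j]
        idx = bisect.bisect_right(f, s[j], 1, j)
        p[j] = idx - 1 if idx > 1 else 0
    return p

s = [0, 0, 1, 3, 4, 6]      # start times (index 0 unused)
f = [0, 2, 4, 5, 8, 9]      # finish times, sorted
print(compute_p(s, f, 5))   # [0, 0, 0, 1, 2, 3]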
• Complete Search
• Divide and Conquer.
• Dynamic Programming.
• Introduction.
• Examples.