
COMP90038 Algorithms and Complexity
Dynamic Programming
Olya Ohrimenko
(Slides from Harald Søndergaard)
Lecture 18
Semester 2, 2021
Algorithms and Complexity (Sem 2, 2021) Dynamic Programming © University of Melbourne 1 / 20

Announcements
Assignment 2 is out on LMS.
Olya’s consultation sessions (Zoom links via LMS):
Tuesday October 5, 2:30 – 3:30 pm
Tuesday October 12, 2:30 – 3:30 pm

Dynamic Programming
Dynamic programming is an algorithm design technique that is applicable when a problem can be solved through a recurrence relation whose recursion involves overlapping sub-instances.
In Lecture 16 we achieved a spectacular performance improvement in the calculation of Fibonacci numbers by switching from a naive top-down algorithm to one that solved, and tabulated, smaller sub-problems.
The bottom-up approach used the tabulated results, rather than solving overlapping sub-problems repeatedly.
That was a particularly simple example of dynamic programming.
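The bottom-up Fibonacci idea can be sketched in Python. This is an illustrative sketch, not code from Lecture 16; the function name is mine:

```python
def fib_bottom_up(n):
    """Compute F(n) bottom-up: each sub-problem is solved exactly once,
    so the running time is linear in n (the naive top-down recursion
    re-solves overlapping sub-problems and is exponential)."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0) and F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # slide the window forward
    return curr
```

Since each step only needs the two most recent values, the table shrinks to two variables here.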

Dynamic Programming and Optimization
Optimization problems sometimes allow for clever dynamic programming solutions.
A prerequisite for the application here is a certain preservation of optimality: An optimal solution to an instance can be obtained by combining optimal solutions to the sub-instances.

Example 1: The Coin-Row Problem
Given a row of coins, pick up the largest possible sum, subject to this constraint: No two adjacent coins can be picked.
Think of the problem recursively.
Let the values of the coins be v1, v2, …, vn.
Let S(i) be the sum that can be obtained by picking optimally from the first i coins.
Either the ith coin (with value vi) is part of the solution or it is not.
If we choose to pick it up, we cannot also pick its neighbour on the left, so the best we can achieve is S(i − 2) + vi.
Otherwise we leave it, and the best we can achieve is S(i − 1).

Example 1: The Coin-Row Problem
Here is the same thing said formally, as a recurrence relation:
S(n) = max(S(n − 1), S(n − 2) + vn)
This holds for n > 1.
We need two base cases: S(0) = 0 and S(1) = v1.
You can code this algorithm directly like that in your favourite programming language.
However, the solution suffers the same problem as the naive Fibonacci program: lots of repetition of identical sub-computations.

Example 1: The Coin-Row Problem
Since all values S(1) to S(n) need to be found anyway, we may as well proceed from the bottom up, storing intermediate results in an array S as we go.
Given an array C that holds the coin values, the recurrence relation tells us what to do:
function CoinRow(C[·], n)
    S[0] ← 0
    S[1] ← C[1]
    for i ← 2 to n do
        S[i] ← max(S[i − 1], S[i − 2] + C[i])
    return S[n]
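For readers who want to run it, here is a direct transcription into Python. This is a sketch of the same algorithm; the function name is mine, and a 0-indexed list replaces the slides' 1-indexed array C:

```python
def coin_row(coins):
    """Largest sum pickable from a row of coins with no two adjacent.
    coins is a 0-indexed list; S[i] is the best sum using the first i coins."""
    S = [0] * (len(coins) + 1)
    if coins:
        S[1] = coins[0]
    for i in range(2, len(coins) + 1):
        # Either skip coin i, or take it together with the best of the first i-2.
        S[i] = max(S[i - 1], S[i - 2] + coins[i - 1])
    return S[len(coins)]
```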

Example 1: The Coin-Row Problem
Let us run the algorithm on the seven coins 20, 10, 20, 50, 20, 10, 20:
index:  0   1   2   3    4    5    6      7
C:          20  10  20   50   20   10     20
S:      0   20  20  40   70   70   80     90
Using:      1   1   1,3  1,4  1,4  1,4,6  1,4,7
Keeping track of the (indices of the) coins used in an optimal solution is an easy extension to the algorithm.
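One way to sketch that extension in Python (a hypothetical helper, assuming all coin values are positive): instead of storing index sets alongside each table entry, recover them by walking back through the finished table S:

```python
def coin_row_with_picks(coins):
    """Coin-row DP that also recovers the 1-based indices of the coins used."""
    n = len(coins)
    S = [0] * (n + 1)
    if n >= 1:
        S[1] = coins[0]
    for i in range(2, n + 1):
        S[i] = max(S[i - 1], S[i - 2] + coins[i - 1])
    # Walk back through S to see which choice the max took at each step.
    picks, i = [], n
    while i >= 1:
        if S[i] == S[i - 1]:   # coin i was not needed for the optimum
            i -= 1
        else:                  # coin i is in the optimal selection
            picks.append(i)
            i -= 2
    return S[n], sorted(picks)
```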

Example 2: The Knapsack Problem
In Lecture 5 we looked at the knapsack problem. Given n items with
weights: w1, w2, …, wn
values: v1, v2, …, vn
a knapsack of capacity W
find the most valuable selection of items that will fit in the knapsack.
We assume that all entities involved are positive integers.

Example 2: Knapsack
An instance with knapsack capacity W = 10 and four items:
item 1: w1 = 7, v1 = 42
item 2: w2 = 3, v2 = 12
item 3: w3 = 4, v3 = 40
item 4: w4 = 5, v4 = 25

Example 2: The Knapsack Problem
In Lecture 5 we devised a brute-force algorithm, but dynamic programming may give us a better solution.
The critical step is to find a good answer to the question “what is the sub-problem?”
In this case, the trick is to formulate the recurrence relation over two parameters, namely the sequence 1, 2, …, i of items considered so far, and the remaining capacity w ≤ W.
Let K(i,w) be the value of the best choice of items amongst the first i using knapsack capacity w.
Then we are after K(n,W).

Example 2: The Knapsack Problem
The reason why we focus on K(i,w) is that we can express a solution to that recursively.
Amongst the first i items we either pick item i or we don’t.
For a solution that excludes item i, the value of an optimal subset is simply K(i − 1, w).
For a solution that includes item i, an optimal solution contains, apart from that item, an optimal subset of the first i − 1 items that will fit into a bag of capacity w − wi. The value of such a subset is K(i − 1, w − wi) + vi
…provided item i fits, that is, provided w − wi ≥ 0.

Example 2: The Knapsack Problem
Now it is easy to express the solution recursively:
K(i, w) = 0, if i = 0 or w = 0.
Otherwise:
K(i, w) = max(K(i − 1, w), K(i − 1, w − wi) + vi), if w ≥ wi
K(i, w) = K(i − 1, w), if w < wi
That gives a correct algorithm for the problem, albeit an inefficient one, if we take it literally.
For a bottom-up solution we need to write code that systematically fills a two-dimensional table. The table will have n + 1 rows and W + 1 columns.

Example 2: The Knapsack Problem
First fill the leftmost column and the top row, then proceed row by row:
for i ← 0 to n do K[i, 0] ← 0
for j ← 1 to W do K[0, j] ← 0
for i ← 1 to n do
    for j ← 1 to W do
        if j < w[i] then
            K[i, j] ← K[i − 1, j]
        else
            K[i, j] ← max(K[i − 1, j], K[i − 1, j − w[i]] + v[i])
return K[n, W]
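The bottom-up fill can be sketched in Python as a cross-check of the recurrence. This is an illustrative sketch; the function name is mine, and 0-indexed lists replace the slides' 1-indexed w and v:

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack. K[i][j] is the best value achievable
    using the first i items with capacity j; the answer is K[n][W]."""
    n = len(weights)
    K = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j < wi:
                K[i][j] = K[i - 1][j]          # item i does not fit
            else:
                K[i][j] = max(K[i - 1][j],     # leave item i out
                              K[i - 1][j - wi] + vi)  # or take it
    return K[n][W]
```

On the slide's instance (W = 10, weights 7, 3, 4, 5 and values 42, 12, 40, 25), the optimum takes items 3 and 4 for value 65.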